Practical techniques for coding with AI assistants like Claude Code, Codex CLI, Cursor, and GitHub Copilot. Helps developers and operations teams streamline workflows, improve code quality, and integrate AI tools into their development processes.
```bash
git clone https://github.com/inmve/awesome-ai-coding-techniques.git
```
1. **Select Your AI Tool**: Choose the right AI coding assistant for your workflow (e.g., Cursor for IDE integration, Claude Code for CLI-based tasks, or Copilot for GitHub-native workflows).
2. **Define the Task**: Clearly specify the programming language, project context, and desired outcome. Use the prompt template to structure your request with placeholders like [AI_TOOL_NAME], [TASK_DESCRIPTION], and [PROJECT_NAME].
3. **Apply Techniques**: Use the AI tool's specific features (e.g., Cursor's inline suggestions, Copilot's chat interface, or Claude Code's file editing) to implement the technique. Focus on one technique at a time (e.g., async conversion, schema generation, or test writing).
4. **Validate and Iterate**: Run tests, review AI-generated code, and iterate. Use tools like pytest, Jest, or your IDE's linter to ensure quality. For complex changes, break the task into smaller chunks and validate each step.
5. **Document and Share**: Use the AI tool's documentation features (e.g., Cursor's 'Explain' mode or Copilot's inline comments) to document the changes for your team. Share the refactored code and test results in pull requests or team meetings.

Tips:
- For large refactors, use the AI tool's 'Generate' or 'Edit' features to create a draft, then review and refine it manually.
- Leverage the AI's ability to generate boilerplate code (e.g., CRUD endpoints, database models) to save time; a sketch of what that boilerplate can look like follows this list.
- Use the AI to suggest performance optimizations (e.g., caching strategies, query optimizations) based on your project's data patterns.
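As a rough illustration of the boilerplate tip above, here is a minimal sketch of the kind of CRUD endpoint an assistant can draft in one pass. FastAPI and the in-memory `ITEMS` store are assumptions chosen for the example, not part of the checklist:

```python
# Hypothetical sketch of AI-generated CRUD boilerplate.
# FastAPI and the in-memory ITEMS store are illustrative assumptions.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()
ITEMS: dict[int, dict] = {}  # stand-in for a real database

class Item(BaseModel):
    name: str

@app.post("/items/{item_id}")
def create_item(item_id: int, item: Item):
    if item_id in ITEMS:
        raise HTTPException(status_code=409, detail="Item already exists")
    ITEMS[item_id] = item.model_dump()
    return ITEMS[item_id]

@app.get("/items/{item_id}")
def read_item(item_id: int):
    if item_id not in ITEMS:
        raise HTTPException(status_code=404, detail="Item not found")
    return ITEMS[item_id]
```

Treat output like this as a draft: review it, swap the stand-in store for your real persistence layer, and add the validation your project needs.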
1. Copy the install command above and run it in your terminal.
2. Launch Claude Code, Cursor, or your preferred AI coding agent.
3. Use the prompt template or examples below to test the skill.
4. Adapt the skill to your specific use case and workflow.
Use [AI_TOOL_NAME] to [TASK_DESCRIPTION] in [PROGRAMMING_LANGUAGE] for [PROJECT_NAME]. Follow these best practices:
1) Use [SPECIFIC_TECHNIQUE] to [ACHIEVE_SPECIFIC_GOAL].
2) Generate [TYPE_OF_OUTPUT] with [SPECIFIC_QUALITY_CRITERIA].
3) Include [SPECIFIC_TESTING_OR_VALIDATION_METHOD].

Example: 'Use Cursor to refactor a Python Flask API for a task management app. Apply the 'AI Pair Programming' technique to convert synchronous endpoints to async/await. Generate a Pydantic model for request/response schemas with strict type validation. Include unit tests using pytest to verify 90%+ coverage.'
Here’s how I refactored the `TaskManager` API using Cursor with AI Pair Programming techniques:
1. **Async Conversion**: The original Flask endpoints blocked on synchronous database calls. I used Cursor's inline AI suggestions to migrate them to async/await FastAPI endpoints, leveraging SQLAlchemy 2.0's async support. The `get_task` endpoint now looks like this:
```python
from fastapi import Depends, HTTPException
from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession

@app.get("/tasks/{task_id}")
async def get_task(task_id: int, db: AsyncSession = Depends(get_db)):
    result = await db.execute(select(Task).where(Task.id == task_id))
    task = result.scalars().first()
    if not task:
        raise HTTPException(status_code=404, detail="Task not found")
    return task
```
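The `get_db` dependency isn't shown in the snippet above; a minimal sketch of what it could look like with SQLAlchemy 2.0's async session factory (the connection URL is a placeholder, not our real config):

```python
# Hypothetical get_db dependency assumed by the endpoint above.
from collections.abc import AsyncIterator
from sqlalchemy.ext.asyncio import AsyncSession, async_sessionmaker, create_async_engine

# Placeholder connection URL; swap in your own database credentials.
engine = create_async_engine("postgresql+asyncpg://user:pass@localhost/tasks")
SessionLocal = async_sessionmaker(engine, expire_on_commit=False)

async def get_db() -> AsyncIterator[AsyncSession]:
    # One session per request, closed automatically when the request ends.
    async with SessionLocal() as session:
        yield session
```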
2. **Pydantic Models**: Generated strict request/response schemas with validation:
```python
from datetime import datetime
from pydantic import BaseModel, Field

class TaskCreate(BaseModel):
    title: str = Field(..., min_length=3, max_length=100)
    description: str | None = Field(None, max_length=500)
    priority: int = Field(1, ge=1, le=5)

class TaskResponse(BaseModel):
    id: int
    title: str
    status: str
    priority: int
    created_at: datetime
```
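To show how these schemas plug into the endpoints, here's a minimal sketch of a create endpoint; the `Task` ORM model, its server-side defaults, and `get_db` are assumptions carried over from the refactor, not shown in the original snippet:

```python
# Sketch: wiring the schemas above into a create endpoint.
# Assumes the Task ORM model (with defaults for status/created_at)
# and the get_db dependency from the refactor.
@app.post("/tasks", response_model=TaskResponse, status_code=201)
async def create_task(payload: TaskCreate, db: AsyncSession = Depends(get_db)):
    task = Task(**payload.model_dump())  # validated input -> ORM instance
    db.add(task)
    await db.commit()
    await db.refresh(task)  # load server-generated fields (id, created_at)
    return TaskResponse(
        id=task.id,
        title=task.title,
        status=task.status,
        priority=task.priority,
        created_at=task.created_at,
    )
```

Building the `TaskResponse` explicitly keeps the sketch independent of Pydantic's ORM-mode configuration; with `model_config = ConfigDict(from_attributes=True)` on the model, returning `task` directly also works.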
3. **Testing**: Added pytest coverage for the new async endpoints:
```python
import pytest

@pytest.mark.asyncio
async def test_get_task(client, mock_db):
    # `client` and `mock_db` are fixtures: an async HTTP client and a test DB session.
    test_task = Task(id=1, title="Test", status="pending", priority=3)
    mock_db.add(test_task)
    await mock_db.commit()
    response = await client.get("/tasks/1")
    assert response.status_code == 200
    assert response.json()["title"] == "Test"
```
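The `client` fixture isn't defined in the snippet; one common way to build it (an assumption here, not shown in the original) is httpx's `AsyncClient` over an ASGI transport, with pytest-asyncio installed for the async test support:

```python
# Hypothetical client fixture, assuming httpx and pytest-asyncio are installed.
import pytest_asyncio
from httpx import ASGITransport, AsyncClient

@pytest_asyncio.fixture
async def client():
    # Route requests straight to the FastAPI app without opening a socket.
    transport = ASGITransport(app=app)
    async with AsyncClient(transport=transport, base_url="http://test") as c:
        yield c
```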
Key improvements:
- Reduced endpoint latency by 60% (from 450ms to 180ms) in load tests
- Added 92% test coverage for the refactored code
- Generated OpenAPI docs automatically via FastAPI’s integration with Pydantic
- The AI suggested using `asyncpg` for connection pooling, which we implemented; a sketch of that setup follows below.
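A minimal sketch of that pooling setup, assuming SQLAlchemy's async engine over asyncpg; the URL and pool sizes here are illustrative, not the values from our load test:

```python
# Illustrative pool configuration; tune the sizes to your own workload.
from sqlalchemy.ext.asyncio import create_async_engine

engine = create_async_engine(
    "postgresql+asyncpg://user:pass@localhost/tasks",  # placeholder URL
    pool_size=10,      # persistent connections kept open
    max_overflow=20,   # extra connections allowed under burst load
    pool_timeout=30,   # seconds to wait for a free connection
)
```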
Next steps: Use Cursor's 'Explain' feature to document the async patterns for the team, then set up CI/CD to run these tests on every PR.