Automatically generate high-quality markdown context files for AI coding agents like Claude, Cursor, and Gemini. The tool analyzes public GitHub repositories and produces documentation those agents can use, connecting to GitHub and to your AI coding agent.
git clone https://github.com/saharmor/sidekick-dev-web.git
Copy the install command above and run it in your terminal.
Launch Claude Code, Cursor, or your preferred AI coding agent.
Use the prompt template or examples below to test the skill.
Adapt the skill to your specific use case and workflow.
Generate a high-quality markdown context file for a [PROGRAMMING_LANGUAGE] project focused on [SPECIFIC_DOMAIN]. Include key concepts, common patterns, and best practices. Ensure the content is accurate, concise, and actionable for an AI coding agent like Claude or Cursor.
```markdown
# Python Web Scraping Context
## Key Concepts
- **HTTP Requests**: Python's `requests` library for making HTTP requests.
- **HTML Parsing**: Using `BeautifulSoup` for parsing HTML content.
- **Data Extraction**: Techniques for extracting data from HTML elements.
- **Error Handling**: Best practices for handling network errors and parsing issues.
## Common Patterns
1. **Making a Request**:
```python
import requests
response = requests.get('https://example.com')
response.raise_for_status()
```
2. **Parsing HTML**:
```python
from bs4 import BeautifulSoup
soup = BeautifulSoup(response.text, 'html.parser')
```
3. **Extracting Data**:
```python
titles = soup.find_all('h2', class_='post-title')
for title in titles:
    print(title.text)
```
## Best Practices
- **Rate Limiting**: Implement delays between requests to avoid overwhelming servers.
- **User-Agent Headers**: Use appropriate user-agent headers to mimic browser behavior.
- **Error Handling**: Always handle potential errors gracefully.
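
These practices can be combined into a single request helper; a minimal sketch (the `polite_get` name, the delay value, and the User-Agent string are illustrative choices, not library APIs):

```python
import time

import requests

# Illustrative defaults -- tune per target site.
DELAY_SECONDS = 1.0
HEADERS = {"User-Agent": "Mozilla/5.0 (compatible; example-scraper/1.0)"}

def polite_get(url, delay=DELAY_SECONDS):
    """GET a URL with a delay (rate limiting), a User-Agent header,
    and graceful error handling."""
    time.sleep(delay)  # pause before each request to avoid hammering the server
    try:
        response = requests.get(url, headers=HEADERS, timeout=10)
        response.raise_for_status()  # raise on 4xx/5xx status codes
        return response
    except requests.RequestException as exc:
        # Covers connection errors, timeouts, and bad status codes.
        print(f"Request to {url} failed: {exc}")
        return None
```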
## Example Use Case
```python
# Scrape article titles from a news website
import requests
from bs4 import BeautifulSoup
response = requests.get('https://news.example.com')
response.raise_for_status()
soup = BeautifulSoup(response.text, 'html.parser')
titles = soup.find_all('h2', class_='post-title')
for title in titles:
    print(title.text)
```
```
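
For flaky networks, the error-handling advice in the example above can be extended with automatic retries by mounting an adapter on a `requests.Session`; a minimal sketch using `urllib3`'s `Retry` (the retry count, backoff factor, and status list are illustrative):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Retry transient failures (including 429/5xx) with exponential backoff.
retry = Retry(
    total=3,
    backoff_factor=0.5,
    status_forcelist=[429, 500, 502, 503, 504],
)
adapter = HTTPAdapter(max_retries=retry)

session = requests.Session()
session.mount("https://", adapter)
session.mount("http://", adapter)

# session.get(...) now retries automatically before raising.
```

Reusing one `Session` also keeps connections alive between requests, which is faster and gentler on the target server than opening a new connection per request.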