Hand-picked Python libraries and frameworks for automation. Organized by category, this skill helps operations teams streamline workflows. Connects to Jupyter, pandas, and other data tools.
git clone https://github.com/dylanhogg/awesome-python.git
https://www.awesomepython.org/
["Identify the automation task you need (e.g., web scraping, file processing, API interactions). Replace [LIBRARY_FRAMEWORK] with a relevant library (e.g., `requests`, `selenium`, `pandas`, `schedule`).","Customize [TASK_DESCRIPTION] to match your specific workflow. For example, 'automate weekly report generation' or 'sync data between two databases'.","Run the script in a Python environment (Jupyter Notebook, VS Code, or terminal). Use `pip install` to add required libraries if missing.","Test the script with a small dataset first. Gradually scale up and monitor logs for errors.","Integrate the script into your workflow using cron jobs (Linux/macOS) or Task Scheduler (Windows) for recurring tasks."]
Quickly find and integrate libraries for data manipulation and analysis in your Python projects.
Discover the latest tools for machine learning and deep learning to enhance your AI applications.
Utilize libraries for web scraping and data collection to automate data gathering tasks.
Implement testing and debugging tools to improve code quality and reliability in your software.
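As a small illustration of the data-manipulation category above, here is a minimal pandas sketch. The dataset and column names are made up for the example; the point is only to show the kind of one-line aggregation pandas enables.

```python
import pandas as pd

# Hypothetical sales records; column names are illustrative only.
df = pd.DataFrame({
    'region': ['north', 'south', 'north', 'south'],
    'revenue': [100, 250, 150, 300],
})

# Total revenue per region in a single groupby call.
totals = df.groupby('region')['revenue'].sum()
print(totals.to_dict())  # {'north': 250, 'south': 550}
```

The same pattern (group, select a column, aggregate) extends to means, counts, and custom functions via `.agg()`.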
git clone https://github.com/dylanhogg/awesome-python.git
Copy the install command above and run it in your terminal. For further setup details, check the GitHub repository.
Launch Claude Code, Cursor, or your preferred AI coding agent.
Use the prompt template or examples below to test the skill.
Adapt the skill to your specific use case and workflow.
Generate a Python automation script using [LIBRARY_FRAMEWORK] to [TASK_DESCRIPTION]. Include error handling, logging, and comments for clarity. Example: 'Generate a Python automation script using the `requests` library to scrape product prices from an e-commerce site, save the data to a CSV file, and log any errors to a file named scrapelog.txt.'
```python
# Product Price Scraper using requests and pandas
import requests
import pandas as pd
import logging
from datetime import datetime

# Configure logging
logging.basicConfig(filename='scrapelog.txt', level=logging.ERROR,
                    format='%(asctime)s - %(levelname)s - %(message)s')

def scrape_product_prices(urls, output_file='product_prices.csv'):
    """
    Scrapes product prices from a list of URLs and saves to CSV.
    Handles errors gracefully and logs issues.
    """
    scraped_data = []
    for url in urls:
        try:
            response = requests.get(url, timeout=10)
            response.raise_for_status()  # Raises HTTPError for bad responses

            # Extract product name and price (example parsing logic)
            product_name = response.text.split('<h1 class="product-title">')[1].split('</h1>')[0]
            price = float(response.text.split('class="price">$')[1].split('<')[0])

            scraped_data.append({
                'url': url,
                'product_name': product_name,
                'price': price,
                'timestamp': datetime.now().strftime('%Y-%m-%d %H:%M:%S')
            })
        except requests.exceptions.RequestException as e:
            logging.error(f"Failed to scrape {url}: {str(e)}")
            continue
        except (IndexError, ValueError) as e:
            logging.error(f"Data parsing error for {url}: {str(e)}")
            continue

    # Save to CSV
    if scraped_data:
        df = pd.DataFrame(scraped_data)
        df.to_csv(output_file, index=False)
        print(f"Successfully saved {len(df)} products to {output_file}")
    else:
        print("No data was scraped. Check scrapelog.txt for errors.")

# Example usage
if __name__ == "__main__":
    target_urls = [
        'https://example-store.com/product1',
        'https://example-store.com/product2',
        'https://example-store.com/product3'
    ]
    scrape_product_prices(target_urls)
```
**Output:**
```
Successfully saved 3 products to product_prices.csv
```
**CSV Output (product_prices.csv):**
```
url,product_name,price,timestamp
https://example-store.com/product1,Wireless Headphones,99.99,2023-11-15 14:30:22
https://example-store.com/product2,Smart Watch,199.99,2023-11-15 14:30:22
https://example-store.com/product3,Bluetooth Speaker,49.99,2023-11-15 14:30:22
```
**scrapelog.txt:**
```
2023-11-15 14:30:22 - ERROR - Failed to scrape https://example-store.com/product4: 404 Client Error
```
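The example script parses HTML with string splitting, which breaks as soon as the markup changes. A more robust approach is a real HTML parser. Below is a stdlib-only sketch using `html.parser`, assuming the same hypothetical markup (`h1.product-title` and a `price` element) as the example above; in practice a library like BeautifulSoup is the more common choice.

```python
from html.parser import HTMLParser

class PriceParser(HTMLParser):
    """Extracts a product title and price from markup like the example's."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.in_price = False
        self.product_name = None
        self.price = None

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get('class', '')
        if tag == 'h1' and 'product-title' in classes.split():
            self.in_title = True
        elif 'price' in classes.split():
            self.in_price = True

    def handle_data(self, data):
        # Capture only the first title/price encountered.
        if self.in_title and self.product_name is None:
            self.product_name = data.strip()
        elif self.in_price and self.price is None:
            self.price = float(data.strip().lstrip('$'))

    def handle_endtag(self, tag):
        self.in_title = False
        self.in_price = False

# Hypothetical page fragment matching the example's assumed markup
html = '<h1 class="product-title">Smart Watch</h1><span class="price">$199.99</span>'
parser = PriceParser()
parser.feed(html)
print(parser.product_name, parser.price)  # Smart Watch 199.99
```

Swapping this parser into `scrape_product_prices` in place of the `split()` calls would make the script tolerant of attribute reordering and extra whitespace.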