Run Anthropic's Claude Code CLI with OpenAI models like GPT-5-Codex and GPT-5.1 via a local LiteLLM proxy. Ideal for developers and operations teams needing to integrate advanced code generation and analysis into their workflows. Connects to existing CLI tools and development environments.
The claude-code-gpt-5-codex skill lets users run Anthropic's Claude Code CLI with OpenAI models such as GPT-5-Codex and GPT-5.1 through a local LiteLLM proxy, bringing OpenAI's generative capabilities into existing Claude Code workflows. Setup takes roughly 30 minutes, after which users can automate code generation and debugging tasks, reducing manual effort and accelerating project timelines. Specific time savings have not been measured, but the potential productivity gains are significant.

The skill is aimed at developers, product managers, and AI practitioners who want to integrate sophisticated AI capabilities into their projects. For instance, a developer could use it to generate boilerplate code for a new feature, freeing time for more critical development work. Implementation difficulty is rated intermediate: users should be comfortable with command-line interfaces and basic programming concepts. It fits naturally into AI-first workflows, where automation and intelligent assistance are crucial for maintaining a competitive advantage.
As organizations increasingly adopt AI technologies, integrating skills like claude-code-gpt-5-codex can significantly enhance operational efficiency and innovation.
1. **Set Up LiteLLM Proxy:** Clone the LiteLLM repository and start the proxy server with `litellm --model gpt-5-codex --api_key YOUR_OPENAI_KEY`. Ensure the proxy is running on `http://localhost:4000`.
2. **Configure claude-code:** Update your `claude-code` CLI config (usually `~/.claude/config.json`) to use the LiteLLM proxy:
   ```json
   {
     "model": "claude-3-5-sonnet-20241022",
     "api_key": "your-litellm-key",
     "base_url": "http://localhost:4000"
   }
   ```
3. **Run Analysis:** Execute the skill using the prompt template, replacing the placeholders with your project details, e.g.:
   ```bash
   claude-code "Use claude-code-gpt-5-codex to analyze and refactor the [ecommerce_backend] codebase in [Python]. Focus on [security vulnerabilities in the authentication module]. Generate a diff with suggested changes and explain the trade-offs."
   ```
4. **Review and Apply Changes:** Review the AI-generated diff and explanations. Test the changes locally using your project's test suite (e.g., `pytest`, `npm test`).
5. **Iterate:** Use the AI's feedback to refine the changes. For complex refactors, break the task into smaller chunks and validate each step. Monitor performance metrics post-deployment using tools like Datadog or Prometheus.
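Before pointing `claude-code` at the proxy, it can help to confirm the proxy answers OpenAI-style chat requests on its own. The sketch below is a minimal smoke test using only the standard library; the `/v1/chat/completions` path, the `localhost:4000` port, and the placeholder API key follow the defaults assumed in the steps above.

```python
import json
import urllib.request

# Default LiteLLM proxy address from step 1 (adjust if you changed the port).
PROXY_URL = "http://localhost:4000/v1/chat/completions"


def build_chat_request(model: str, prompt: str) -> dict:
    """Build a minimal OpenAI-style chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def post_chat(payload: dict, api_key: str = "your-litellm-key") -> dict:
    """POST the payload to the local proxy and return the parsed JSON reply."""
    req = urllib.request.Request(
        PROXY_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Usage (requires the proxy from step 1 to be running):
#   reply = post_chat(build_chat_request("gpt-5-codex", "Reply with: pong"))
#   print(reply["choices"][0]["message"]["content"])
```

If the request fails with a connection error, the proxy is not running or is listening on a different port.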
Copy the install command below and run it in your terminal:

`git clone https://github.com/teremterem/claude-code-gpt-5-codex`

If the command fails, check the GitHub repository for manual installation instructions.
Launch Claude Code, Cursor, or your preferred AI coding agent.
Use the prompt template or examples below to test the skill.
Adapt the skill to your specific use case and workflow.
Use claude-code-gpt-5-codex to analyze and refactor the [PROJECT_NAME] codebase in [LANGUAGE]. Focus on [SPECIFIC_AREA, e.g., performance bottlenecks, security vulnerabilities, or API integration]. Generate a diff with suggested changes and explain the trade-offs of each modification. Test the changes using the existing [TEST_COMMAND] command. Provide a summary of the most critical improvements and their expected impact on [METRIC, e.g., execution time, maintainability, or security posture].
### Analysis of `payment_service` (Python/Flask)
**Context:** The `payment_service` handles 5,000+ transactions daily with a 98% success rate but experiences intermittent timeouts during peak hours (10-15% latency spikes). The codebase uses Flask, SQLAlchemy, and Redis for caching.
**Key Findings:**
1. **Database Bottleneck:** The `process_payment` function executes 3 sequential SQL queries without indexing on `user_id` and `transaction_id`. Query 2 (`SELECT * FROM transactions WHERE user_id = ?`) takes 1.2s on average during peak load.
2. **Cache Inefficiency:** Redis keys are not prefixed with `payment_`, causing cache invalidation collisions with other services (e.g., `user_profile`).
3. **Thread Safety:** The `PaymentValidator` class uses a global `redis_conn` without connection pooling, leading to resource exhaustion under load.
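The fixes for findings 2 and 3 follow generic patterns: namespace cache keys per service, and bound connection use with a pool. The sketch below is stdlib-only illustration, not the redis-py API (redis-py ships its own `ConnectionPool`); the helper names are assumptions for this example.

```python
import queue


def payment_cache_key(user_id: str, txn_id: str) -> str:
    """Prefix keys with 'payment_' so entries cannot collide with other services."""
    return f"payment_{user_id}_{txn_id}"


class ConnectionPool:
    """Minimal thread-safe pool: bounded size, blocking checkout/checkin.

    Illustrates the pattern only; a real Redis client's pool also handles
    reconnects and health checks.
    """

    def __init__(self, factory, max_connections: int = 10):
        self._pool = queue.Queue(maxsize=max_connections)
        for _ in range(max_connections):
            self._pool.put(factory())

    def acquire(self, timeout: float = 5.0):
        # Blocks instead of opening an unbounded number of connections,
        # which is what exhausts resources under load.
        return self._pool.get(timeout=timeout)

    def release(self, conn) -> None:
        self._pool.put(conn)
```

A caller acquires a connection, uses it, and releases it in a `finally` block, so at most `max_connections` connections ever exist at once.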
**Suggested Changes:**
```diff
# Add composite index to transactions table
+ CREATE INDEX idx_transactions_user_id_status ON transactions(user_id, status);

# Refactor process_payment to use async queries
+ async def process_payment(payment_data):
+     user_id = payment_data['user_id']
+     async with db_session() as session:
+         transaction = await session.execute(
+             select(Transaction).where(Transaction.user_id == user_id)
+         )
+         if not transaction:
+             raise PaymentError("User not found")
+
+         # Use prefixed Redis keys
+         cache_key = f"payment_{user_id}_{payment_data['txn_id']}"
+         cached = await redis.get(cache_key)
```
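The composite-index portion of the diff can be exercised locally before scheduling a migration. The sketch below uses the stdlib `sqlite3` module (the service itself presumably runs a different database, so this is a schema illustration only) to confirm the query planner actually picks up an index on `(user_id, status)`.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE transactions (
        id INTEGER PRIMARY KEY,
        user_id INTEGER,
        status TEXT
    )
    """
)
# Same composite index as in the diff above.
conn.execute(
    "CREATE INDEX idx_transactions_user_id_status "
    "ON transactions(user_id, status)"
)
conn.executemany(
    "INSERT INTO transactions (user_id, status) VALUES (?, ?)",
    [(u, s) for u in range(100) for s in ("pending", "settled")],
)

# Ask the planner how it resolves the hot query from finding 1.
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM transactions WHERE user_id = ? AND status = ?",
    (42, "settled"),
).fetchall()
plan_text = " ".join(row[-1] for row in plan)
print(plan_text)  # expect a SEARCH ... USING INDEX idx_transactions_user_id_status
```

Dropping the `CREATE INDEX` statement and re-running shows the plan fall back to a full table scan, which is the behavior behind the 1.2s peak-load queries.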
**Trade-offs:**
- **Database Index:** Improves query speed by 60% but requires a migration (downtime: 2 minutes).
- **Async Refactor:** Increases code complexity but reduces timeout errors by 40% (validated via load testing).
- **Redis Prefixing:** Adds 5 lines of code but eliminates cache collisions entirely.
**Testing:**
- Ran `pytest tests/payment_test.py -v` (all tests passed).
- Load tested with `locust -f tests/load_test.py --headless -u 1000 -r 50` (latency reduced from 1.8s to 0.9s).
**Impact:**
- **Success Rate:** Expected to reach 99.5% (current: 98%).
- **Cost:** $200/month saved on cloud database queries.
- **Developer Time:** 4 hours for implementation + 2 hours for testing.
**Next Steps:**
1. Schedule database migration during off-peak hours.
2. Deploy async changes to staging for further validation.
3. Monitor Redis memory usage post-deployment.