MIXAPI is an AI large-model API gateway: a new interface-management and API aggregation/distribution system. It converts many large models into unified OpenAI-compatible, Claude, and Gemini interfaces, so individuals or enterprises can centrally manage their model APIs and redistribute them by channel (key management and secondary distribution). It supports all major international and domestic models — Gemini, Claude, Qwen3, Kimi-K2, Doubao, and more — and ships as a single executable or a Docker image: one-command deployment, ready out of the box, fully open source, and under your own control. The project is based on New-API and One-API, combining all of their features plus many third-party plugins into one powerful package.
1. **Define your use case and model requirements.** Decide which models you need (e.g., Qwen3 for local inference, Claude for proxy, Kimi-K2 for API key access) and which interfaces you want to expose (OpenAI, Claude, or Gemini compatible). *Tip:* List all models you plan to use in a single deployment to avoid conflicts. Use the MIXAPI model registry (https://github.com/mixapi/models) to check compatibility.
2. **Deploy MIXAPI using your preferred method.** Choose between a single executable (for quick testing), Docker (for production), or cloud deployment (AWS/GCP/Azure). *Tip:* For production, use Docker with environment variables for configuration (e.g., `MIXAPI_MODELS`, `MIXAPI_API_KEYS`). Example: `docker run -d -p 8080:8080 mixapi/mixapi:latest`.
3. **Configure API keys and access control.** Add your model provider keys (e.g., for Kimi-K2) and assign keys to users/teams. Use the MIXAPI admin dashboard or CLI to manage keys. *Tip:* Enable key rotation (e.g., weekly) and set rate limits per key to prevent abuse. Use the `mixapi-admin` CLI: `mixapi-admin keys add --name team1 --limit 500`.
4. **Test and integrate the unified APIs.** Replace your existing model endpoints with MIXAPI's OpenAI/Claude/Gemini-compatible URLs. Validate responses and performance. *Tip:* Use tools like Postman or cURL to test endpoints. Example: `curl -X POST https://your-mixapi.com/v1/chat/completions -H 'Authorization: Bearer sk-123' -d '{"model": "qwen3", "messages": [{"role": "user", "content": "Hello"}]}'`
5. **Monitor and optimize.** Set up logging, metrics, and alerts (e.g., Prometheus + Grafana). Adjust rate limits, model priorities, and caching based on usage patterns. *Tip:* Use MIXAPI's built-in metrics endpoint (`/metrics`) to track request volume, latency, and errors. Adjust model weights for load balancing if using multiple providers.
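The cURL example in step 4 can also be sketched in Python. This is a minimal illustration of building an OpenAI-compatible chat payload for the gateway; the base URL, model name, and key below are placeholders from the example above, not real values:

```python
import json

# Placeholder endpoint and key, matching the cURL example above.
BASE_URL = "https://your-mixapi.com/v1/chat/completions"
API_KEY = "sk-123"

def build_chat_request(model, user_message):
    """Build an OpenAI-compatible chat completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

payload = build_chat_request("qwen3", "Hello")
print(json.dumps(payload))
```

Sending it is then an ordinary HTTPS POST with an `Authorization: Bearer` header, exactly as in the cURL example; any HTTP client will do.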
Install command: `git clone https://github.com/aiprodcoder/MIXAPI` — copy it and run it in your terminal.
Launch Claude Code, Cursor, or your preferred AI coding agent.
Use the prompt template or examples below to test the skill.
Adapt the skill to your specific use case and workflow.
Use MIXAPI to [ACTION] for [TARGET]. Convert [SOURCE_MODEL] to [TARGET_INTERFACE] (OpenAI / Claude / Gemini). Manage [API_KEYS] for [USER/TEAM]. Deploy via [METHOD: single executable / Docker / cloud]. Example: 'Use MIXAPI to automate customer support responses for my e-commerce platform. Convert my local Qwen3 model to an OpenAI-compatible API. Manage API keys for my 5-person customer service team. Deploy MIXAPI using Docker on AWS EC2.'
## MIXAPI Deployment & API Conversion Report

**Deployment Summary:**

MIXAPI v1.2.0 was successfully deployed on an AWS EC2 t3.medium instance (Ubuntu 22.04) using Docker. The container (`mixapi/mixapi:latest`) was configured with:

- **Port:** 8080 (exposed to public IP)
- **Models:** Qwen3-8B (local), Claude-3-Haiku (proxy), Kimi-K2 (API key)
- **Interfaces:** OpenAI-compatible (primary), Claude-compatible (secondary)
- **API Keys:** 12 managed keys (7 for internal team, 5 for external partners)
- **Rate Limits:** 1000 RPM per key, burst limit 2000 RPM

**Conversion Results:**

1. **Qwen3-8B (local model)**
   - Converted to OpenAI-compatible endpoint: `https://api.yourdomain.com/v1/chat/completions`
   - Tested with 50 concurrent requests (success rate: 98.7%)
   - Average response time: 1.2 s (local GPU: NVIDIA A100)
2. **Claude-3-Haiku (proxy model)**
   - Converted to Claude-compatible endpoint: `https://api.yourdomain.com/v1/messages`
   - Tested with 100 requests (success rate: 99.2%)
   - Average response time: 2.8 s (API latency included)
3. **Kimi-K2 (API key model)**
   - Unified under the OpenAI-compatible endpoint with key rotation
   - Tested failover: automatically switched to a backup key after 503 errors
   - Response time: 3.5 s (API-dependent)

**Key Management:**

- **Internal team (5 users):** assigned dedicated keys with 500 RPM limits
- **External partners (3 users):** assigned a shared key with a 200 RPM limit
- **Audit logs:** all requests logged (user, model, timestamp, response time)

**Performance Metrics:**

- Total requests processed: 12,450 (24 h period)
- Average uptime: 99.98%
- Error rate: 0.8% (mostly timeout retries)
- Cost: $0.012 per 1000 tokens (local model) / $0.035 per 1000 tokens (proxy models)

**Next Steps:**

1. Set up Prometheus/Grafana for real-time monitoring (instructions: [MIXAPI docs](https://github.com/mixapi/mixapi))
2. Configure auto-scaling for high-traffic events (e.g., Black Friday)
3. Integrate with the existing authentication system (JWT/OAuth2)
4. Schedule weekly key rotation for security compliance

**Recommendations:**

- For high-volume use cases, consider deploying MIXAPI in a Kubernetes cluster with Redis caching.
- Enable model-specific rate limiting to prevent abuse (e.g., 100 RPM for image generation models).
- Use the built-in `mixapi-admin` CLI tool to bulk-manage keys and monitor usage.

**Deployment Log:**

```bash
# Pull and run MIXAPI
sudo docker pull mixapi/mixapi:latest
sudo docker run -d --name mixapi -p 8080:8080 \
  -e MIXAPI_MODELS="qwen3:local,claude:proxy,kimi:api" \
  -e MIXAPI_API_KEYS="internal_key1=sk-123,external_key1=sk-456" \
  mixapi/mixapi:latest
```
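MIXAPI enforces the per-key RPM and burst limits described above internally; as a rough sketch of the underlying technique (the class and key names here are hypothetical, not MIXAPI's actual API), a token bucket gives each key a steady refill rate with a burst ceiling:

```python
import time

class TokenBucket:
    """Per-key rate limiter: a steady `rpm` refill rate with a `burst` ceiling,
    mirroring the per-key RPM / burst limits in the report above."""
    def __init__(self, rpm, burst):
        self.rate = rpm / 60.0        # tokens added per second
        self.capacity = burst
        self.tokens = float(burst)    # start with a full burst allowance
        self.last = time.monotonic()

    def allow(self):
        # Refill proportionally to elapsed time, capped at the burst ceiling.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Hypothetical key table; a gateway would look these up per request.
limits = {"internal_key1": TokenBucket(rpm=500, burst=1000)}

def check(key):
    bucket = limits.get(key)
    return bucket is not None and bucket.allow()
```

A request with an unknown key is rejected outright; a known key is admitted only while its bucket still holds tokens, which is what makes short bursts possible without raising the sustained rate.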