CursorLens is an open-source dashboard designed for the Cursor.sh IDE, enabling users to log AI code generations, track usage, and manage AI models seamlessly. Whether running locally or through an upcoming hosted version, it enhances coding efficiency and analytics.
```shell
claude install HamedMP/CursorLens
```

Full installation instructions: https://www.cursorlens.com/docs/getting-started/installation
["Install CursorLens in Cursor.sh IDE: Open the Cursor.sh IDE, navigate to the Extensions Marketplace, search for 'CursorLens', and install the extension. Restart the IDE to activate it.","Configure Tracking Metrics: Go to CursorLens settings (File > Preferences > CursorLens) and define the metrics to track. Enable logging for [SPECIFIC_METRICS] such as model usage, generation frequency, or error rates. Save the configuration.","Run AI Code Generations: Use Cursor.sh’s AI assistant (e.g., inline chat or code generation) to generate code for your project. Ensure you’re using the models you want to compare (e.g., GPT-4 Turbo and Claude 3 Opus).","Analyze the Dashboard: Open the CursorLens dashboard (View > CursorLens Dashboard) to review real-time data. Filter by time period (e.g., last 7 days) and model to compare performance. Export the data as a CSV for further analysis.","Take Action on Insights: Use the dashboard insights to adjust your AI model usage. For example, if one model consistently underperforms in error rate, reduce its usage for critical tasks. Set up alerts for anomalies (e.g., error rate spikes) to proactively address issues."]
- Log and analyze AI code generation metrics to improve development efficiency.
- Manage and configure local AI models for tailored coding assistance.
- Track and visualize usage patterns to identify areas for optimization.
- Integrate seamlessly with the Cursor.sh IDE for enhanced AI-assisted coding.
```shell
claude install HamedMP/CursorLens
```

Or clone the repository directly:

```shell
git clone https://github.com/HamedMP/CursorLens
```

Copy the install command above and run it in your terminal.
1. Launch Claude Code, Cursor, or your preferred AI coding agent.
2. Use the prompt template or examples below to test the skill.
3. Adapt the skill to your specific use case and workflow.
Set up CursorLens in [CURSOR.SH IDE] to monitor AI code generation activity for [YOUR_PROJECT_NAME]. Configure the dashboard to track [SPECIFIC_METRICS: e.g., model usage, generation frequency, or error rates]. Compare the performance of [MODEL_A] vs [MODEL_B] during [TIME_PERIOD]. Export the insights as a CSV report titled '[REPORT_NAME]' and highlight any anomalies or trends in code quality or efficiency.
### CursorLens Dashboard Report: Project "E-Commerce API" (Generated: 2024-05-20)

#### Overview

- **Total AI Generations**: 1,247
- **Active Models**: 3 (Claude 3 Opus, GPT-4 Turbo, Llama 3 70B)
- **Top Generating Model**: GPT-4 Turbo (45% of total, 561 generations)
- **Average Generation Time**: 4.2 seconds
- **Error Rate**: 3.1% (39 failures)

#### Model Performance Comparison (Last 7 Days)

| Model | Generations | Avg. Time | Error Rate | Cost (USD) |
|------------------|-------------|-----------|------------|------------|
| GPT-4 Turbo | 561 | 3.8s | 2.1% | $124.50 |
| Claude 3 Opus | 412 | 4.5s | 3.4% | $98.75 |
| Llama 3 70B | 274 | 4.9s | 4.7% | $18.20 |

#### Key Insights

1. **GPT-4 Turbo** outperformed the others in speed and accuracy, with the lowest error rate (2.1%) and the fastest average generation time (3.8s). It was used most frequently, likely due to its balance of cost and performance.
2. **Claude 3 Opus** had a higher error rate (3.4%) but was still preferred for complex logic tasks, accounting for 33% of high-complexity code generations.
3. **Llama 3 70B** was the most cost-effective but suffered from a higher error rate (4.7%), particularly in API integrations. Its usage dropped by 15% after the first 3 days of testing.

#### Anomalies Detected

- **High Error Spike**: On May 16th, Llama 3 70B generated 18 errors in 2 hours, all related to SQL query syntax. This coincided with the deployment of a new database schema.
- **Performance Degradation**: GPT-4 Turbo's generation time increased by 22% between May 18 and 20, likely due to API rate limiting during peak hours.

#### Recommendations

- **For Critical Paths**: Continue using GPT-4 Turbo for high-stakes features (e.g., payment processing) due to its reliability.
- **For Prototyping**: Use Llama 3 70B to reduce costs, but add validation steps for generated code.
- **For Complex Logic**: Favor Claude 3 Opus, but pair it with manual review for error-prone tasks like database queries.

**CSV Report Generated**: `ecommerce_api_cursorlens_report_20240520.csv`

**Next Steps**: Schedule a team review to adjust model assignments based on these insights and set up automated alerts for error rate thresholds (e.g., >5% for any model).
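The "Next Steps" alerting idea can be prototyped as a post-export check before wiring up real notifications. Below is a sketch in Python with assumed column names (`model`, `error`, where `error` is 0 or 1 per generation); the 5% cutoff mirrors the threshold suggested above.

```python
import csv
import io

ERROR_RATE_THRESHOLD = 0.05  # alert when a model exceeds 5% errors

# Stand-in rows for ecommerce_api_cursorlens_report_20240520.csv.
# The column names are assumptions, not the real export schema.
SAMPLE_CSV = """model,error
GPT-4 Turbo,0
GPT-4 Turbo,0
Llama 3 70B,1
Llama 3 70B,0
Llama 3 70B,1
"""

def models_over_threshold(csv_text, threshold=ERROR_RATE_THRESHOLD):
    """Return models whose per-generation error rate exceeds the threshold."""
    counts = {}  # model -> (generations, errors)
    for row in csv.DictReader(io.StringIO(csv_text)):
        n, e = counts.get(row["model"], (0, 0))
        counts[row["model"]] = (n + 1, e + int(row["error"]))
    return sorted(
        model for model, (n, e) in counts.items() if e / n > threshold
    )

print(models_over_threshold(SAMPLE_CSV))
```

Running this on a scheduled export (e.g., a daily cron job) gives a simple anomaly alert without depending on dashboard internals.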