Analyzes Claude Code session logs to provide insights on coding patterns, token usage, productivity, and prompt quality. Helps developers improve efficiency and effectiveness in their coding workflows.
1. Export your Claude Code session logs as a `.json` file. In Claude, use the command `/claude code export` to generate the file.
2. Run the prompt template above, replacing [DEVELOPER_NAME], [START_DATE], [END_DATE], [PROGRAMMING_LANGUAGE], [TOKEN_USAGE_METRIC], and [SPECIFIC_ISSUE] with your specific details.
3. Review the analysis and focus on the top 3 actionable improvements. Use the recommendations to adjust your coding workflow or IDE settings.
4. For deeper insights, compare multiple time periods (e.g., before/after a tool update) to measure progress. Share the analysis with your team to align on best practices.

Tip: Use this skill weekly to track trends in token usage or prompt quality. Adjust the [SPECIFIC_ISSUE] placeholder to target recurring problems, like debugging time or library adoption.
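If you want to pre-compute the metrics the prompt asks about before handing the log to the model, a small script can do it. The exact export schema may differ; the `tokens` and `libraries` field names below are assumptions to adjust against your actual `.json` file:

```python
import json
from collections import Counter

def summarize_sessions(path):
    """Summarize an exported session log: session count, average token
    usage, and the three most-used libraries. The `tokens` and
    `libraries` keys are assumed field names -- rename them to match
    your export schema."""
    with open(path) as f:
        sessions = json.load(f)
    if not sessions:
        return {"sessions": 0, "avg_tokens": 0, "top_libraries": []}
    avg_tokens = sum(s.get("tokens", 0) for s in sessions) / len(sessions)
    libs = Counter(lib for s in sessions for lib in s.get("libraries", []))
    return {
        "sessions": len(sessions),
        "avg_tokens": round(avg_tokens),
        "top_libraries": libs.most_common(3),
    }
```

Running this weekly and pasting the summary into the prompt keeps the context small, which is itself one of the optimizations the skill recommends.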
`git clone https://github.com/hancengiz/claude-code-prompt-coach-skill`

Copy the install command above and run it in your terminal.
Launch Claude Code, Cursor, or your preferred AI coding agent.
Use the prompt template or examples below to test the skill.
Adapt the skill to your specific use case and workflow.
Analyze the Claude Code session logs for [DEVELOPER_NAME] from [START_DATE] to [END_DATE]. Identify their most frequently used [PROGRAMMING_LANGUAGE] libraries/frameworks and the average [TOKEN_USAGE_METRIC] per session. Highlight any repetitive or inefficient coding patterns, and suggest 3 concrete improvements to optimize their workflow. Focus on reducing [SPECIFIC_ISSUE] like redundant code or excessive context in prompts.
Here's a detailed analysis of your Claude Code sessions from June 1-15, 2024. Over this period, you completed 42 coding sessions, with an average token usage of 12,450 tokens per session (a 15% increase from the previous month). Your most frequently used libraries were React (38% of sessions), Node.js (25%), and PostgreSQL (18%).

**Key Findings:**

1. **Repetitive Patterns:** In 68% of your sessions, you reused the same `useEffect` hook template without modification, even when the dependencies changed. This added an average of 120 tokens per session.
2. **Prompt Efficiency:** Your prompts often included redundant context, such as repeating the project name or file structure in every query. This inflated token usage by ~20%.
3. **Error Handling:** You spent 18% of your sessions debugging the same `null reference` errors, suggesting a need for better input validation.

**Recommended Improvements:**

- **Template Optimization:** Use a pre-configured `useEffect` snippet in your IDE to reduce manual input. This could save ~15 minutes per week.
- **Prompt Refinement:** Structure prompts to include only the relevant context. For example, instead of saying, 'In the Acme project, in the `src/components/UserCard.js` file...', try 'In `UserCard.js`, implement...'.
- **Proactive Debugging:** Add input validation checks in your codebase to catch `null` values early, reducing debugging time by ~25%.

Would you like me to generate a custom VS Code snippet for your `useEffect` hook or draft a prompt template for your next project?
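As an illustration of the snippet suggestion in the example above, a reusable `useEffect` snippet with an explicit dependency list could be added to a VS Code user snippets file (e.g., `javascript.json`); the prefix and body here are just one possible shape:

```json
{
  "React useEffect with cleanup": {
    "prefix": "ueff",
    "body": [
      "useEffect(() => {",
      "\t$1",
      "\treturn () => {",
      "\t\t$2",
      "\t};",
      "}, [$3]);"
    ],
    "description": "useEffect template with cleanup and an explicit dependency list"
  }
}
```

The `$1`/`$2`/`$3` tab stops force you to revisit the body, cleanup, and dependency array each time, addressing the finding that the same template was reused even when dependencies changed.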