Annotate and review AI coding agent plans visually, share with your team, and send feedback with one click. Integrates with Claude Code and Obsidian for streamlined collaboration and plan refinement.
git clone https://github.com/backnotprop/plannotator.git

Website: https://plannotator.ai
1. **Prepare Your Plan**: Ensure you have a clear, detailed AI coding agent plan ready for review. This could be a text-based plan or a visual flowchart.
2. **Access plannotator**: Open the plannotator tool and import your AI coding agent plan, either by uploading a file or pasting the plan directly into the tool.
3. **Annotate and Review**: Use the visual annotation tools to highlight areas of concern, suggest improvements, and provide feedback. Different colors and markers can categorize your annotations.
4. **Share with Your Team**: Once you have finished annotating the plan, share it with your team by generating a shareable link or exporting the annotated plan to a file.
5. **Send Feedback**: Use the one-click feedback feature to send your annotations to your team. This notifies them and lets them review the feedback in real time.
Annotate coding agent plans to clarify requirements and expectations before implementation.
Review git diffs with inline annotations to provide precise feedback on code changes.
Share annotated plans with team members for collaborative decision-making.
Attach images and use annotation tools to visually communicate complex ideas.
Copy the install command below and run it in your terminal:

git clone https://github.com/backnotprop/plannotator

Check the GitHub repository for further installation instructions.
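The clone step above can be sketched as a short shell session. Only the `git clone` command comes from this page; the `cd` step and the README check are generic conventions, and any build or run commands depend on the project's actual layout, so consult the repository README before going further.

```shell
# Clone the plannotator repository (command from this page).
git clone https://github.com/backnotprop/plannotator

# Enter the project directory (assumed to match the repo name).
cd plannotator

# The actual build/run steps are not documented here --
# read the project's README for the real instructions.
ls README*
```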
Launch Claude Code, Cursor, or your preferred AI coding agent.
Use the prompt template or examples below to test the skill.
Adapt the skill to your specific use case and workflow.
Review the following AI coding agent plan and annotate it with visual markers for [SPECIFIC_CODEBASE]. Highlight areas of concern, suggest improvements, and provide feedback for the team. Integrate this with our Obsidian workspace under the [PROJECT_NAME] folder. Example: 'The agent's plan to implement user authentication is missing rate limiting. Add this as a critical annotation and suggest using Redis for rate limiting.'
After analyzing the AI coding agent's plan for the 'E-Commerce Checkout' project, several key annotations and feedback points were identified:

1. **Security Concern**: The plan lacks rate limiting for the payment processing API. Annotated with a red flag; suggested using Redis for rate limiting to prevent brute-force attacks.
2. **Performance Optimization**: The database query for fetching user cart items can be optimized. Annotated with a yellow flag; suggested adding an index on the 'user_id' column in the cart table.
3. **Code Quality**: The plan includes a monolithic function for handling payment processing. Annotated with a blue flag; suggested breaking it down into smaller, single-responsibility functions.
4. **Testing**: The plan does not include automated testing for the payment processing logic. Annotated with a green flag; suggested adding unit tests with Jest and integration tests with Cypress.

All annotations and feedback have been shared with the team in the Obsidian workspace under the 'E-Commerce Checkout' folder. The team can now review the annotations, discuss the feedback, and refine the plan accordingly.