Autonomous agent substrate for Claude Code CLI. Research→Plan→Implement workflows with quality gates, TDD enforcement, and multi-agent coordination. 4.8-5.5x faster development. Built on Anthropic's engineering research.
```
git clone https://github.com/VAMFI/claude-user-memory.git
```
1. **Initialize the Agent**: Run `claude code --skill claude-user-memory` in your terminal to start the autonomous agent. Specify the project root directory when prompted.
2. **Define Scope**: Use the prompt template to specify your project (e.g., `[PROJECT_NAME]`, `[TECH_STACK]`) and sources (e.g., `[SOURCE_1]`, `[SOURCE_2]`). Include any constraints such as deadlines or team roles (e.g., `[TEAM_MEMBER_ROLE]`).
3. **Monitor Execution**: The agent outputs progress in real time. Check the `research/` and `plans/` directories for detailed artifacts. Use `claude code --review` to pause and review the plan before implementation.
4. **Quality Gates**: The agent enforces TDD and halts execution if tests fail. Use `claude code --test` to rerun tests or `claude code --debug` to inspect failures.
5. **Iterate**: After each phase, review the deliverables (e.g., `docs/`, `scripts/`) and adjust the plan if needed. Use `claude code --update` to modify the project scope dynamically.
```
git clone https://github.com/VAMFI/claude-user-memory
```

Copy the install command above and run it in your terminal.
- Launch Claude Code, Cursor, or your preferred AI coding agent.
- Use the prompt template or examples below to test the skill.
- Adapt the skill to your specific use case and workflow.
Use the claude-user-memory skill to autonomously develop a [PROJECT_NAME] feature for [TECH_STACK]. Start with a research phase to gather requirements from [SOURCE_1] and [SOURCE_2]. Then create a detailed implementation plan with test cases, edge cases, and performance benchmarks. Execute the plan using TDD, ensuring all tests pass before proceeding to the next phase. Coordinate with [TEAM_MEMBER_ROLE] for code reviews and quality gates. Deliver a production-ready solution with documentation and deployment scripts.
### Research Phase

Analyzed the GitHub repository for the `nextjs-ecommerce` project and identified 3 critical gaps in the checkout flow: (1) missing address validation, (2) no real-time inventory checks, and (3) slow payment processing under high load. Cross-referenced with Stripe API documentation and Shopify's performance benchmarks to confirm these are industry-standard pain points. Extracted 12 user stories from the product backlog and mapped them to technical tasks with estimated effort (2-5 story points each).

### Implementation Plan

**Phase 1: Address Validation**
- Task 1: Integrate Google Address Validation API with a 99.8% accuracy requirement.
- Task 2: Add client-side validation with debounced API calls to reduce latency.
- Test Suite: 15 unit tests (Jest), 8 integration tests (Cypress), and 3 load tests (k6).
- Performance Gate: Response time <200ms at the 95th percentile.

**Phase 2: Inventory Sync**
- Task 1: Implement a WebSocket connection to Shopify's GraphQL API for real-time updates.
- Task 2: Add optimistic UI updates with rollback on failure.
- Test Suite: 10 unit tests (Jest), 6 E2E tests (Playwright), and 2 chaos tests (Gremlin).
- Quality Gate: Zero data inconsistencies in 1000+ checkout simulations.

**Phase 3: Payment Optimization**
- Task 1: Replace Stripe's default checkout with a custom form using the Elements SDK.
- Task 2: Implement server-side tokenization and a client-side 3D Secure fallback.
- Test Suite: 12 unit tests (Jest), 5 performance tests (k6), and 4 security tests (OWASP ZAP).
- Performance Gate: 99.9% success rate under 1000 concurrent users.

### Execution & Coordination

Executed Phase 1 in 4 hours with 100% test coverage. Deployed to staging with a canary release (5% traffic) and monitored for 24 hours. Addressed 3 bugs in validation edge cases (e.g., international addresses with non-Latin characters). Coordinated with the DevOps team to set up monitoring dashboards in Grafana. Phase 2 is now 60% complete with 8/10 tests passing.
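Phase 1's second task relies on debouncing to collapse rapid keystrokes into a single validation request. A minimal TypeScript sketch of that pattern follows; `validateAddress` here is a hypothetical stand-in for a real address-validation client, not part of the skill itself:

```typescript
// Debounce helper: collapses bursts of calls into one invocation
// after `waitMs` of inactivity, so each keystroke does not trigger
// a separate network request.
function debounce<T extends unknown[]>(
  fn: (...args: T) => void,
  waitMs: number,
): (...args: T) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: T) => {
    if (timer !== undefined) clearTimeout(timer); // cancel the pending call
    timer = setTimeout(() => fn(...args), waitMs); // reschedule with latest args
  };
}

// Hypothetical validation call; a real client would hit an address API.
let calls = 0;
const validateAddress = (query: string) => {
  calls += 1;
  console.log(`validating: ${query}`);
};

const debouncedValidate = debounce(validateAddress, 200);

// Simulate three rapid keystrokes: only the final value is validated.
debouncedValidate("123 M");
debouncedValidate("123 Main");
debouncedValidate("123 Main St"); // → validating: 123 Main St (after 200ms)
```

Only one request leaves the client per burst of typing, which is what keeps the <200ms 95th-percentile gate achievable under load.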
Next steps: Finalize inventory sync and begin Phase 3 performance tuning.
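Phase 2's "optimistic UI updates with rollback on failure" can be sketched as: apply the change locally first, then revert to a snapshot if the remote sync rejects it. The TypeScript below is an illustrative sketch; `fakeSync` and the `Cart` shape are assumptions, not the skill's actual API:

```typescript
// Optimistic update with rollback: mutate the local cart immediately,
// keep a snapshot, and restore it if the remote sync fails.
type Cart = { itemId: string; quantity: number }[];

async function updateQuantity(
  cart: Cart,
  itemId: string,
  quantity: number,
  sync: (itemId: string, quantity: number) => Promise<boolean>,
): Promise<Cart> {
  const previous = cart.map((line) => ({ ...line })); // snapshot for rollback
  const optimistic = cart.map((line) =>
    line.itemId === itemId ? { ...line, quantity } : line, // apply immediately
  );
  const ok = await sync(itemId, quantity); // remote confirmation
  return ok ? optimistic : previous;       // roll back on failure
}

// Simulated sync that rejects quantities the backend has no stock for.
const fakeSync = async (_id: string, qty: number) => qty <= 3;

(async () => {
  const cart: Cart = [{ itemId: "sku-1", quantity: 1 }];
  const accepted = await updateQuantity(cart, "sku-1", 2, fakeSync);
  const rolledBack = await updateQuantity(cart, "sku-1", 10, fakeSync);
  console.log(accepted[0].quantity);   // 2 (sync accepted)
  console.log(rolledBack[0].quantity); // 1 (rolled back)
})();
```

Returning the previous snapshot rather than re-fetching keeps the rollback synchronous, which is one way to satisfy the "zero data inconsistencies" quality gate in checkout simulations.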