Turn AI into a persistent, memory-powered collaborator. Universal MCP Server (supports HTTP, STDIO, and WebSocket) enabling cross-platform AI memory, multi-agent coordination, and context sharing. Built with MARM protocol for structured reasoning that evolves with your work.
git clone https://github.com/Lyellr88/MARM-Systems.git
**Setup Steps**

1. **Install and configure MARM-Systems.** Download the Universal MCP Server from [official repository]. Run the server with your preferred protocol (e.g., `./marm-server --protocol websocket --port 8080`) and initialize the MARM protocol with your desired reasoning structure (e.g., `marm init --reasoning hed`).
   *Tip:* Use Docker for cross-platform consistency: `docker run -p 8080:8080 marm-systems/mcp-server:latest --protocol websocket`
2. **Define memory policies and context schema.** Create a memory configuration file (e.g., `memory.yaml`) specifying retention periods, priority flags, and a schema for your data types, then load it with `marm config --file memory.yaml`. For multi-agent systems, define shared context keys (e.g., `project_objective`, `team_roles`).
   *Tip:* Start with a 7-day retention window for development and extend to 30+ days for production. Use YAML anchors to avoid repetition in schema definitions.
3. **Integrate with your AI workflows.** Connect your AI agents and tools to the MCP server. For Python agents, use the `marm-client` library: `from marm import Client; client = Client('ws://localhost:8080'); client.store_context(key='current_task', value='Optimize neural network inference')`. For CLI tools, pipe inputs via STDIO: `echo '{"task":"debug GPU memory leak"}' | ./marm-server --protocol stdio`.
   *Tip:* Add a `context_id` parameter to all operations to maintain traceability across sessions, e.g. `client.query(task='debug', context_id='session_20240610')`.
4. **Validate persistence and coordination.** Test memory persistence by restarting the server and querying for previously stored context. For multi-agent systems, verify context sharing by having one agent store data and another retrieve it. Use the `marm audit` command to review memory operations.
   *Tip:* Simulate agent handoffs by running separate MCP server instances on different ports (e.g., 8080 for Agent A, 8081 for Agent B) and configure them to sync via a shared memory backend (e.g., Redis).
5. **Monitor and refine.** Use the MARM dashboard (at `http://localhost:8080/dashboard`) to monitor memory usage, reasoning traces, and agent interactions. Adjust retention policies or schema based on observed patterns. For debugging, enable verbose logging with `marm server --log-level debug`.
   *Tip:* Set up alerts for memory limits (e.g., 80% capacity) to prevent data loss, and use `marm export` to back up critical context before major updates.
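As a concrete starting point for step 2, a `memory.yaml` along these lines could express the retention and schema settings described above. The key names here are illustrative assumptions, not a documented MARM schema:

```yaml
# Hypothetical memory.yaml sketch -- key names are illustrative, not official MARM config.
retention:
  default_days: 7          # development window; raise to 30+ for production
  priority_flagged_days: 90
schema:
  context_types:
    - name: user_input
      priority: normal
    - name: reasoning_step
      priority: high
shared_context_keys:       # keys visible to all agents in a multi-agent setup
  - project_objective
  - team_roles
```

YAML anchors (`&name` / `*name`) can deduplicate repeated `context_types` entries, as suggested in the tip above.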
`git clone https://github.com/Lyellr88/MARM-Systems`

1. Copy the install command above and run it in your terminal.
2. Launch Claude Code, Cursor, or your preferred AI coding agent.
3. Use the prompt template or examples below to test the skill.
4. Adapt the skill to your specific use case and workflow.
Set up a persistent memory system for [PROJECT_NAME] using MARM-Systems. Configure the Universal MCP Server to support [PROTOCOL: HTTP/STDIO/WebSocket] for [USE_CASE: cross-platform collaboration/multi-agent coordination/context sharing]. Define memory retention policies for [DATA_TYPES: user inputs, tool outputs, reasoning steps, context snapshots]. Initialize the MARM protocol with [REASONING_STRUCTURE: step-by-step reasoning, hypothesis tracking, or decision logs] for [TEAM_SIZE: solo/team] workflows. Test the system by processing [SAMPLE_INPUT: a sample task or query] and verify memory persistence across [SESSIONS: restarts, device changes, or agent handoffs].
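The retention and persistence behavior the template asks for can be sketched in plain Python. This is a stdlib-only illustration of the concept (a rolling retention window with priority flags, surviving a simulated restart via a JSON file), not the MARM implementation; all names here are hypothetical:

```python
import json
import os
import tempfile
import time

class ContextStore:
    """Illustrative context store: rolling retention window with priority flags."""

    def __init__(self, path, retention_days=7):
        self.path = path
        self.retention_secs = retention_days * 86400
        self.entries = self._load()

    def _load(self):
        if os.path.exists(self.path):
            with open(self.path) as f:
                return json.load(f)
        return {}

    def store(self, key, value, priority=False):
        self.entries[key] = {"value": value, "ts": time.time(), "priority": priority}
        self._flush()

    def get(self, key):
        entry = self.entries.get(key)
        return entry["value"] if entry else None

    def expire(self, now=None):
        """Drop non-priority entries older than the retention window."""
        now = now or time.time()
        self.entries = {
            k: e for k, e in self.entries.items()
            if e["priority"] or now - e["ts"] <= self.retention_secs
        }
        self._flush()

    def _flush(self):
        with open(self.path, "w") as f:
            json.dump(self.entries, f)

# Simulate a restart: a second store instance reloads the same file.
path = os.path.join(tempfile.mkdtemp(), "context.json")
store = ContextStore(path, retention_days=7)
store.store("current_task", "Optimize neural network inference")
restarted = ContextStore(path)
print(restarted.get("current_task"))  # the context survives the "restart"
```

The priority flag mirrors the "priority flagging for critical context" idea in the report below: flagged entries skip expiry entirely, while ordinary entries age out of the rolling window.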
### MARM-Systems Setup Report for Project 'NeuralBridge'
**Configuration Summary:**
- Protocol: WebSocket (primary) + STDIO (fallback)
- Memory Retention: 30-day rolling window with priority flagging for critical context
- Reasoning Structure: Hypothesis → Evidence → Decision (HED) framework
- Team Size: 5 agents (Research, Analysis, Integration, Validation, Reporting)
**Memory Snapshot (Post-Setup):**
```json
{
  "project_context": {
    "name": "NeuralBridge",
    "objective": "Develop a cross-platform neural network inference engine",
    "milestones": [
      {"id": "M1", "status": "completed", "date": "2024-05-15", "output": "Initial architecture validated"},
      {"id": "M2", "status": "in_progress", "date": "2024-06-01", "output": "GPU acceleration module in testing"}
    ]
  },
  "agent_states": {
    "Research": {"active_hypotheses": ["Quantum-inspired optimization may reduce inference latency"], "last_activity": "2024-06-10T14:32:00Z"},
    "Integration": {"pending_tasks": ["Resolve CUDA compatibility issues with PyTorch 2.1"], "last_activity": "2024-06-10T15:15:00Z"}
  },
  "shared_context": {
    "user_preferences": {"output_format": "markdown", "verbosity": "concise"},
    "tool_versions": {"python": "3.11", "torch": "2.1.0", "cuda": "12.1"}
  }
}
```
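The `agent_states`/`shared_context` split above can be modeled simply. The sketch below (plain Python, hypothetical names, standing in for the MCP server's shared memory) shows how one agent's hypothesis could propagate to another without a manual handoff:

```python
class SharedContext:
    """Minimal shared key-value context; stands in for the MCP server's memory."""

    def __init__(self):
        self._data = {}

    def publish(self, agent, key, value):
        # Record the value along with which agent produced it.
        self._data[key] = {"value": value, "source": agent}

    def read(self, key):
        entry = self._data.get(key)
        return entry["value"] if entry else None

ctx = SharedContext()

# The Research agent publishes a hypothesis...
ctx.publish("Research", "active_hypothesis",
            "Quantum-inspired optimization may reduce inference latency")

# ...and the Integration agent picks it up without any manual copy step.
hypothesis = ctx.read("active_hypothesis")
print(hypothesis)
```

In the real system the server mediates this exchange over WebSocket or STDIO; the point here is only that coordination reduces to publishing and reading agreed-upon shared keys.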
**Test Results:**
1. **Cross-Session Persistence:** Successfully restored context after server restart (verified via WebSocket reconnection).
2. **Multi-Agent Coordination:** Research agent's hypothesis propagated to Integration agent without manual input.
3. **Reasoning Trace:** HED framework captured 12 reasoning steps for the GPU acceleration module, including rejected alternatives (e.g., "FPGA acceleration ruled out due to development time constraints").
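A Hypothesis → Evidence → Decision trace like the one described can be represented as a simple append-only log. This stdlib sketch is illustrative (not MARM's actual trace format) and captures accepted and rejected alternatives alike:

```python
from dataclasses import dataclass, field

@dataclass
class HEDStep:
    hypothesis: str
    evidence: list
    decision: str  # e.g. "accepted" or "rejected"
    rationale: str = ""

@dataclass
class ReasoningTrace:
    steps: list = field(default_factory=list)

    def record(self, hypothesis, evidence, decision, rationale=""):
        self.steps.append(HEDStep(hypothesis, evidence, decision, rationale))

    def rejected(self):
        # Rejected alternatives stay in the trace for later review.
        return [s for s in self.steps if s.decision == "rejected"]

trace = ReasoningTrace()
trace.record("GPU acceleration via CUDA kernels",
             ["PyTorch 2.1 with CUDA 12.1 available"], "accepted")
trace.record("FPGA acceleration",
             ["long toolchain lead time"], "rejected",
             rationale="ruled out due to development time constraints")

print(len(trace.steps), len(trace.rejected()))  # 2 1
```

Keeping rejected branches in the log is what lets the audit step above reconstruct why an alternative was ruled out, not just what was chosen.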
**Next Steps:**
- Configure priority memory for the upcoming client demo (due 2024-06-15).
- Enable real-time context sharing between the Validation and Reporting agents for automated report generation.