Self-hosted orchestrator for AI autonomous agents. Run Claude Code & Open Code in isolated Linux workspaces. Manage skills, configs, and encrypted secrets with a git repo. Ideal for developers and operations teams.
git clone https://github.com/Th0rgal/openagent.git
1. **Install OpenAgent and initialize a new agent workspace.** Run `openagent init --name DataPipelineProcessor --workspace /path/to/workspace` in your terminal. This creates a git-managed directory with default configurations and encrypted secret storage. *Tip:* Use `--template dev` for development environments or `--template prod` for production workloads with stricter resource limits.
2. **Configure skills and secrets.** Edit the `skills.yaml` file to specify which skills to load (e.g., file_operations, data_validation). Store secrets in `secrets.env` and encrypt them with `openagent encrypt --file secrets.env --key your-encryption-key`. *Tip:* Commit the encrypted secrets file to git, but never commit the encryption key. Use environment variables for the key in CI/CD pipelines.
3. **Define the agent's task and run it.** Create a task file (e.g., `tasks/process_sales_data.yaml`) with your instructions. Execute with `openagent run --task tasks/process_sales_data.yaml --workspace /path/to/workspace`. *Tip:* Use the `--dry-run` flag to preview the agent's plan before execution. Add `--log-level debug` for detailed output during troubleshooting.
4. **Monitor and review results.** Check the workspace directory for output files and logs. Use `openagent logs --workspace /path/to/workspace` to stream real-time logs. Verify results against your success criteria. *Tip:* Set up a cron job or CI/CD trigger to run the agent periodically (e.g., daily at 2 AM). Use `openagent status` to check agent health and resource usage.
5. **Update and iterate.** Modify the task or skills configuration based on results. Commit changes to git for version control. Use `openagent update --workspace /path/to/workspace` to pull the latest skill versions. *Tip:* Create a `CHANGELOG.md` in your workspace to track agent updates and their impact on tasks.
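As a rough sketch of what a task file like `tasks/process_sales_data.yaml` might contain: the field names below (`name`, `skills`, `instructions`, `limits`) are illustrative assumptions, not OpenAgent's documented schema, so check the repository for the real format.

```yaml
# Hypothetical task-file sketch -- keys are assumptions for illustration,
# not OpenAgent's documented schema.
name: process_sales_data
skills:
  - file_operations
  - data_validation
instructions: |
  Download the daily sales CSVs, validate them against
  schema/sales_schema.json, and write a summary report
  to /workspace/outputs/.
limits:
  memory: 4GB     # keeps the agent within the workspace's resource cap
  timeout: 30m
```

Keeping task definitions in small declarative files like this makes them easy to diff and version in the workspace's git repo.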
git clone https://github.com/Th0rgal/openagent

Copy the install command above and run it in your terminal.
1. Launch Claude Code, Cursor, or your preferred AI coding agent.
2. Use the prompt template or examples below to test the skill.
3. Adapt the skill to your specific use case and workflow.
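Adapting the skill usually means editing the workspace's `skills.yaml`. A minimal sketch, assuming a simple load-list format (the keys and the `secrets_file` entry are illustrative assumptions, not OpenAgent's documented schema):

```yaml
# Hypothetical skills.yaml sketch -- structure is an assumption for
# illustration; consult the repository for the actual format.
skills:
  - name: file_operations
    version: "1.2.0"
  - name: data_validation
    version: "2.1.3"
secrets_file: secrets.env.enc   # encrypted file is committed; the key never is
```

Pinning skill versions here keeps runs reproducible until you deliberately pull updates with `openagent update`.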
Set up and run an autonomous AI agent using OpenAgent to [TASK]. Use the [SKILL] configuration from [REPO_PATH]. Ensure the agent operates in an isolated Linux workspace with access to [REQUIRED_TOOLS]. Log all actions to [LOG_FILE_PATH] and encrypt secrets using the provided [ENCRYPTION_KEY].
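As a hypothetical instantiation of the template above, using the sales-data scenario: the repo path, tool names, and log path below are illustrative placeholders, not values the project prescribes.

```
Set up and run an autonomous AI agent using OpenAgent to process daily
sales data from S3. Use the data_validation configuration from
github.com/yourorg/agent-configs. Ensure the agent operates in an
isolated Linux workspace with access to aws-cli and psql. Log all
actions to /workspace/logs/agent.log and encrypt secrets using the
provided encryption key.
```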
```
# OpenAgent Autonomous Agent Setup Report

## Configuration Summary

- **Agent Name**: DataPipelineProcessor
- **Workspace**: Isolated Ubuntu 22.04 container (ID: `agent-20240515-142211`)
- **Skills Loaded**:
  - openagent::file_operations (v1.2.0)
  - openagent::data_validation (v2.1.3)
  - openagent::automated_reports (v1.5.0)
- **Secrets Encrypted**: AWS_ACCESS_KEY, DATABASE_PASSWORD, API_TOKEN
- **Git Repo**: https://github.com/yourorg/agent-configs (branch: main)

## Execution Log

### Phase 1: Environment Initialization (05/15/2024 14:22:15)

- Created isolated workspace with 4GB RAM, 2 vCPUs
- Installed dependencies: python3.11, pip, docker-cli, git
- Mounted host directory `/data/pipeline-inputs` as `/workspace/inputs`
- Verified network access to AWS S3, PostgreSQL, and internal API endpoints

### Phase 2: Skill Execution (05/15/2024 14:25:33)

- **Task**: Process daily sales data from S3 bucket `sales-reports-2024`
- **Steps Completed**:
  1. Downloaded 12 CSV files (total size: 1.8GB)
  2. Validated schema against `schema/sales_schema.json` (98.7% match)
  3. Identified 47 records with missing customer IDs
  4. Generated correction queries for data team
  5. Created summary report in `/workspace/outputs/daily_summary_2024-05-15.md`

### Phase 3: Result Verification (05/15/2024 14:38:44)

- Output files:
  - `/workspace/outputs/cleaned_sales_data.csv` (1,247,892 records)
  - `/workspace/outputs/validation_errors.json` (47 entries)
- Memory usage: 2.1GB peak (within 4GB limit)
- Execution time: 13m 29s
- Status: ✅ Completed successfully

## Next Steps Recommended

1. Review validation errors in `/workspace/outputs/validation_errors.json`
2. Push cleaned data to `s3://sales-processed-2024/`
3. Update monitoring dashboard with new metrics
```