Agent Shell is an Emacs package that provides a shell buffer for interacting with LLM agents over ACP (the Agent Client Protocol). It supports agents such as Claude Code and Gemini CLI and plugs into Emacs workflows for code generation, debugging, and automation.
`git clone https://github.com/xenodium/agent-shell.git`

Project page: https://xenodium.com/agent-shell
1. Open the Agent Shell in Emacs with `M-x agent-shell-open`.
2. Paste or type your task into the shell buffer, including file paths, line numbers, or specific commands (e.g., "Run tests with pytest and debug the failing test at [TEST_FILE::LINE]").
3. Use `C-c C-c` to execute the command. The Agent Shell will stream responses and allow you to iterate interactively.
4. For code generation or debugging, use `C-c C-r` to insert the agent's suggested changes directly into your buffer.
5. Save the buffer (`C-x C-s`) to persist changes, or use `C-c C-k` to discard them.
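The steps above assume agent-shell is already on your load path. A minimal `init.el` sketch for a manual checkout follows; the checkout path is an assumption, and the entry-point command name should be checked against the package README:

```elisp
;; Sketch of an init.el fragment for a manual checkout.
;; The path below is an assumption -- point it at wherever you cloned the repo.
(add-to-list 'load-path "~/src/agent-shell")
(require 'agent-shell)

;; After loading, an interactive entry point such as `M-x agent-shell-open`
;; (see the README for the exact command) starts an agent session
;; in a shell buffer.
```

If you install via a package manager instead, the `load-path` line is unnecessary; only the `require` (or autoloads) matters.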
Automate code generation tasks by interacting with Claude Code directly from Emacs.
Use Gemini CLI for real-time coding assistance while writing scripts.
Manage multiple LLM agents in a tabulated view for efficient workflow.
Integrate various AI models to enhance code review processes within Emacs.
To install manually, clone the repository:

`git clone https://github.com/xenodium/agent-shell`

Copy the command above and run it in your terminal, then check the GitHub repository for further setup instructions.
Launch Claude Code, Cursor, or your preferred AI coding agent.
Use the prompt template or examples below to test the skill.
Adapt the skill to your specific use case and workflow.
Use the Agent Shell in Emacs to [TASK]. For example: 'Debug the memory leak in the Python script at [FILE_PATH] using the Agent Shell. Run tests with pytest and suggest fixes for the issue at [LINE_NUMBER].'
I analyzed the memory leak in `app.py` at line 42, where the `data_processor` function was retaining references to large datasets. Using the Agent Shell, I ran `pytest --profile-memory` and confirmed the leak was growing by ~50MB per iteration. The issue stemmed from unclosed file handles in the `load_data()` function. I suggested adding a `with open()` context manager and verified the fix by running the test suite again, which now shows stable memory usage at ~120MB (down from 600MB). The Agent Shell also flagged a related inefficiency in the `batch_process()` function, recommending a generator pattern to stream data instead of loading it all at once. All changes were applied via `C-c C-c` in the shell buffer.