AI-powered terminal assistant that autonomously implements features, fixes bugs, refactors code, writes tests, and manages files, all from a single unified terminal interface.

Traditional AI coding assistants work as separate tools: you switch contexts between your editor, terminal, and chat window. Commandor brings the AI directly into your terminal as a first-class citizen, capable of:

- Autonomous execution: describe a task, watch it get done
- Full tool access: read, write, edit, search, run commands
- Persistent sessions: conversations survive restarts
- Smart context: auto-summarizes long conversations, tracks token usage
- Safety first: dangerous operations are flagged, with a human-in-the-loop mode
- Beautiful TUI: modern Textual interface with real-time streaming

Built on LangGraph for robust agent orchestration and Textual for a rich terminal experience.
Commandor offers four distinct interaction modes, each optimized for different workflows:
| Mode | Command | Description |
|---|---|---|
| Autonomous | `/agent <task>` | The AI works independently, using all tools freely until the task is complete. Best for straightforward tasks. |
| Assist | `/assist <task>` | Human-in-the-loop: the AI proposes actions and asks for your confirmation before executing each tool. Perfect for learning or high-stakes changes. |
| Plan | `/plan <task>` | Two-phase: the AI first generates a numbered plan for your review. You can accept, edit, or reject it before execution begins. Great for complex refactors. |
| Chat | `/chat <message>` or `/ask` | Pure conversation with no tool access. Use for questions, explanations, code reviews, or brainstorming. |
Inline file contents directly into your prompts without manual copying:

```
/agent refactor this function: @src/utils.py
/agent write tests for @app/models.py
/chat explain @Dockerfile
```

Supports glob patterns too:

```
/agent fix all type errors in @**/*.py
```

- Auto-save: every conversation is automatically saved with a unique thread ID
- Name & organize: use `/sessions save <name>` to name important sessions
- Resume later: `/sessions resume <name>` continues where you left off
- Multiple sessions: keep separate contexts for different projects or tasks
- Checkpoint storage: uses SQLite (`~/.commandor/checkpoints.db`) for durability
Watch token usage and performance as you work:
- Context window usage (with visual progress bar)
- Input/output token counts
- Context condensation events (when history is summarized to save space)
- Model name and execution time
Commandor works with the leading AI providers. Configure one or all:
| Provider | Models | Setup |
|---|---|---|
| Google Gemini | `gemini-2.5-flash`, `gemini-2.5-pro`, `gemini-1.5-pro`, `gemini-1.5-flash` | `GEMINI_API_KEY` |
| Anthropic Claude | `claude-3-5-sonnet-20241022`, `claude-3-7-sonnet-20250219`, `claude-3-opus-20240229`, `claude-3-5-haiku-20241022` | `ANTHROPIC_API_KEY` |
| OpenAI GPT | `gpt-4o`, `gpt-4o-mini`, `gpt-4-turbo`, `o1`, `o3-mini` | `OPENAI_API_KEY` |
| OpenRouter | 100+ models (including `anthropic/claude-3.5-sonnet`, `google/gemini-2.5-pro`) | `OPENROUTER_API_KEY` |
Switch providers on the fly with /provider <name> and models with /model <id>.
The agent has access to a comprehensive toolkit for software development:
- `read_file_tool`: read files with optional line ranges
- `write_file_tool`: create or overwrite files (with diff preview)
- `edit_file_tool`: surgical string replacement (preserves formatting)
- `patch_file_tool`: apply unified diffs (uses the `patch` command or a pure-Python fallback)
- `glob_tool`: find files by pattern (e.g., `**/*.py`, `*.ts`)
- `grep_tool`: search file contents with regex
- `list_directory_tool`: explore directory structure
- `get_project_files_tool`: list all source files by extension
- `run_command_tool`: execute shell commands (with timeout protection)
- `cd_tool`: change working directory (native support, updates prompt)
- `get_directory_tool`: get the current working directory
- `get_git_info_tool`: git status, branch, recent commits
- `get_environment_tool`: OS, Python version, shell, user info
- Session management via `/sessions` commands (see below)
All tools include rich diff displays when modifying files, so you always see exactly what changed.
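The diff previews can be approximated with the standard library. This is a sketch (not the actual `diff_display` implementation) of what gets rendered before a write:

```python
import difflib

def preview_diff(path: str, old: str, new: str) -> str:
    """Render a unified diff of a proposed file change."""
    lines = difflib.unified_diff(
        old.splitlines(keepends=True),
        new.splitlines(keepends=True),
        fromfile=f"a/{path}",
        tofile=f"b/{path}",
    )
    return "".join(lines)
```

In the real tool the output is additionally colorized by Rich before display.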
- Python 3.9 or higher
- API key from at least one supported provider (Gemini, Anthropic, OpenAI, or OpenRouter)
```
pip install commandor-ai
```

Or install from source:

```
git clone https://github.com/ravin-d-27/Commandor.git
cd Commandor
pip install -e .
```

For an enhanced experience, install additional packages:

```
pip install commandor-ai[dev]  # Testing & linting tools
```

On Windows, `pyreadline3` is automatically installed for better command-line editing.
Launch Commandor and run the setup wizard:

```
commandor
/setup
```

The wizard will:

- List available providers
- Prompt for API keys (or skip if you'll use environment variables)
- Let you choose a default provider
- Save everything to `~/.commandor/config`
Edit the config file directly:
```
# Create the config directory
mkdir -p ~/.commandor

# Edit config
nano ~/.commandor/config
```

Example config:
```yaml
default_provider: openrouter

providers:
  gemini:
    enabled: true
    api_key: null  # Will fall back to GEMINI_API_KEY env var
    default_model: gemini-2.5-flash
  anthropic:
    enabled: true
    api_key: "your-key-here"  # Can store directly (file protected at 600)
    default_model: claude-3-5-sonnet-20241022
  openai:
    enabled: true
    api_key: null
    default_model: gpt-4o
  openrouter:
    enabled: true
    api_key: null
    default_model: anthropic/claude-3.5-sonnet

agent:
  max_iterations: 50
  max_tokens_per_response: 4096
  confirm_destructive: true
  auto_scroll: true

ui:
  color_scheme: auto
  show_thinking: true
  verbose: true
```

You can also set API keys via environment variables (these take precedence over the config file):
```
export GEMINI_API_KEY="your-key"
export ANTHROPIC_API_KEY="your-key"
export OPENAI_API_KEY="your-key"
export OPENROUTER_API_KEY="your-key"
```

Launch the full Textual UI:
```
commandor
```

Features:

- Single-pane terminal: shell commands and AI chat coexist
- Real-time streaming of AI responses and tool outputs
- Command history (↑/↓)
- Tab completion for slash commands
- Rich syntax highlighting and markdown rendering
Quick start:

```
# Run a shell command
ls -la

# Ask a question
/chat what's the difference between async/await and threads?

# Autonomous task
/agent refactor this codebase to use type hints

# Use file references
/agent write tests for @src/main.py
```

Run a single task without entering the TUI:
```
# Autonomous agent
commandor -a "fix the bug in main.py"
commandor --agent "add comprehensive tests for the auth module"

# Assist mode (with confirmations)
commandor --assist "migrate from SQLite to PostgreSQL"

# Plan mode
commandor --plan "design a new authentication system"

# Chat mode
commandor --chat "explain how LangGraph works"
```

Options:

- `-p, --provider <name>`: override the default provider
- `-m, --model <id>`: use a specific model
- `--setup`: run the interactive configuration wizard
- `--version`: show version information

Examples with provider/model selection:

```
commandor -a "review this PR" -p anthropic -m claude-3-7-sonnet-20250219
commandor --chat "explain quantum computing" -p openai -m o1
```

| Command | Mode | Description |
|---|---|---|
| `/agent <task>` | Autonomous | Execute the task independently using all tools |
| `/assist <task>` | Assist | Execute with confirmation before each tool call |
| `/plan <task>` | Plan | Generate a plan, review, then execute |
| `/chat <message>` | Chat | Conversational Q&A (no tools) |
| `/ask <question>` | Chat | Alias for `/chat` |
| `/retry` | Any | Re-run the last AI command |
| `/reset` | Any | Clear conversation memory, start a fresh session |
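The dispatch logic behind these commands is simple to picture; a minimal sketch (the function name and return shape are assumptions, not the actual parser):

```python
def parse_command(line: str):
    """Split input into (command, argument).

    Lines starting with / are slash commands; everything else
    is passed through to the shell.
    """
    if not line.startswith("/"):
        return ("shell", line)
    name, _, arg = line[1:].partition(" ")
    return (name, arg.strip())
```

For example, `parse_command("/agent fix the bug")` yields the `agent` mode with the task text, while `git status` falls through as a shell command.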
| Command | Description |
|---|---|
| `/providers` | List all providers with configuration status |
| `/provider <name>` | Switch the active provider (`gemini`, `anthropic`, `openai`, `openrouter`) |
| `/model <id>` | Set the model for the current provider (e.g., `claude-3-7-sonnet-20250219`) |
| Command | Description |
|---|---|
| `/sessions` | List all saved sessions with metadata |
| `/sessions save <name>` | Name the current session |
| `/sessions new <name>` | Create a fresh named session and switch to it |
| `/sessions resume <name>` | Switch to a saved session (loads its conversation history) |
| `/sessions rename <old> <new>` | Rename a session |
| `/sessions delete <name>` | Delete a session and its checkpoints |
| Command | Description |
|---|---|
| `/setup` | Interactive API key configuration wizard |
| `/setup <provider>` | Configure a specific provider (e.g., `/setup anthropic`) |
| `/help` | Show comprehensive help with all commands |
| `/clear` or Ctrl+L | Clear the terminal screen |
| `/exit` or Ctrl+Q | Exit Commandor |
| Command | Description |
|---|---|
| `/export [filename]` | Save the conversation as Markdown (default: `commandor-YYYYMMDD-HHMMSS.md`) |
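Export is essentially serialization of (role, text) turns plus a timestamped default filename. A sketch under those assumptions (not the actual export code):

```python
from datetime import datetime

def export_markdown(history, filename=None):
    """Serialize (role, text) conversation turns to a Markdown transcript.

    Returns (filename, markdown_body); the caller writes it to disk.
    """
    if filename is None:
        filename = datetime.now().strftime("commandor-%Y%m%d-%H%M%S.md")
    lines = [f"**{role}:**\n\n{text}\n" for role, text in history]
    return filename, "\n".join(lines)
```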
Any input that doesn't start with `/` is executed as a shell command in the current working directory.

Special handling:

- `cd <path>`: changes the working directory natively (no subprocess). Supports `~`, relative paths, and environment variable expansion.
- All other commands run in your default shell (`$SHELL` or `/bin/bash`)
Examples:

```
# Navigate
cd ~/projects/myapp

# Git operations
git status
git diff HEAD~1

# Package managers
npm install
pip install -r requirements.txt

# Build & test
pytest tests/
python -m pytest --cov

# Project exploration
find . -name "*.py" | head -20
ls -R | grep ".js"
```

- Theme: Batman-inspired dark mode (black background, gold accents)
- Layout: Single-pane terminal with input at bottom
- Streaming: Real-time token-by-token response rendering
- Panels:
- Status bar (top): provider, model, context usage, session name
- Log area (center): conversation history, tool outputs, errors
- Stream preview (bottom, temporary): live "thinking" indicator
| Key | Action |
|---|---|
| ↑ / ↓ | Navigate command history |
| Tab | Auto-complete slash commands |
| Ctrl+L | Clear screen |
| Ctrl+Q | Quit |
The status bar displays context usage with a visual progress bar:
`█░░░░░░░░░ 12.3k/128k (9%)` → 12.3k tokens used of the 128k limit
- Automatically detects model context limits
- Shows percentage of context window used
- Updates in real-time as conversation grows
When usage exceeds 80% of the context window, Commandor automatically summarizes the conversation history to free up space (you'll see a small `♻ context condensed` indicator).
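The trigger condition is cheap to compute with an approximate token count (roughly four characters per token is a common heuristic). A sketch of the check, with assumed names:

```python
def approx_tokens(messages) -> int:
    """Rough token estimate: ~4 characters per token."""
    return sum(len(m) for m in messages) // 4

def should_condense(messages, context_limit: int, threshold: float = 0.8) -> bool:
    """True once estimated usage crosses the condensation threshold."""
    return approx_tokens(messages) >= context_limit * threshold
```

With a 128k-token limit, condensation would kick in once the estimate passes roughly 102k tokens.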
- Linux/macOS: `~/.commandor/config`
- Windows: `%USERPROFILE%\.commandor\config`
```yaml
# Default provider (gemini, anthropic, openai, openrouter)
default_provider: openrouter

# Provider-specific settings
providers:
  gemini:
    enabled: true               # Enable/disable this provider
    api_key: null               # null = use GEMINI_API_KEY env var
    default_model: gemini-2.5-flash
  anthropic:
    enabled: true
    api_key: null               # or set directly (file permissions: 600)
    default_model: claude-3-5-sonnet-20241022
  openai:
    enabled: true
    api_key: null
    default_model: gpt-4o
  openrouter:
    enabled: true
    api_key: null
    default_model: anthropic/claude-3.5-sonnet

# Agent behavior
agent:
  max_iterations: 50            # Maximum tool calls per task
  max_tokens_per_response: 4096 # Max tokens per LLM response
  confirm_destructive: true     # Always ask before rm, drop_db, etc.
  auto_scroll: true             # Auto-scroll log during streaming

# UI settings
ui:
  color_scheme: auto            # auto/dark/light (Textual theme)
  show_thinking: true           # Show AI reasoning blocks
  verbose: true                 # Show detailed tool output
```

API keys are resolved in this order:

1. Config file (`api_key` field): used if set and non-null
2. Environment variable: `GEMINI_API_KEY`, `ANTHROPIC_API_KEY`, etc.
3. `.env` file: legacy support for `~/.commandor/.env`
4. None: the provider is marked as unconfigured
```
docker pull ravind2704/commandor:latest
```

```
docker run -it ravind2704/commandor
```

With API keys:

```
docker run -it \
  -e GEMINI_API_KEY=your_key \
  -e ANTHROPIC_API_KEY=your_key \
  -e OPENAI_API_KEY=your_key \
  -e OPENROUTER_API_KEY=your_key \
  ravind2704/commandor
```

```
# Mount current directory into container
docker run -it \
  -v $(pwd):/workspace \
  -w /workspace \
  -e OPENAI_API_KEY=your_key \
  ravind2704/commandor
```

```
docker build -t commandor .
docker run -it commandor
```

```
Commandor/
├── commandor/                  # Main package
│   ├── __init__.py             # Package metadata
│   ├── __main__.py             # CLI entry point (argparse, TUI launcher)
│   ├── main.py                 # Legacy terminal entry (kept for compatibility)
│   ├── textual_app.py          # Textual TUI application
│   ├── agent_bridge.py         # Streaming event bridge (TUI ↔ LangGraph)
│   ├── config.py               # ConfigManager, setup wizard, API key resolution
│   ├── api_manager.py          # (deprecated: functionality moved to config.py)
│   ├── session_manager.py      # Named session persistence (JSON registry)
│   │
│   ├── agents/                 # Note: directory is `agent/` (singular)
│   │   ├── __init__.py
│   │   ├── executor.py         # run_agent(), _run_* mode runners, metrics
│   │   ├── lc_graph.py         # LangGraph factory (build_agent_graph, etc.)
│   │   ├── lc_models.py        # build_model(): provider model factory
│   │   ├── lc_tools.py         # All @tool-decorated functions
│   │   ├── modes.py            # Mode descriptions
│   │   └── prompts.py          # (if exists) Additional prompt templates
│   │
│   ├── providers/              # AI provider integrations
│   │   ├── __init__.py
│   │   ├── base.py             # AgentResult dataclass, provider base
│   │   ├── factory.py          # Provider factory (if exists)
│   │   ├── gemini.py           # Gemini-specific logic
│   │   ├── anthropic.py        # Anthropic-specific logic
│   │   ├── openai.py           # OpenAI-specific logic
│   │   └── openrouter.py       # OpenRouter-specific logic
│   │
│   ├── utils/                  # Utility modules
│   │   ├── __init__.py
│   │   ├── file_ops.py         # Low-level file read/write/edit/patch
│   │   ├── shell.py            # Shell execution, cd, git, env info
│   │   └── diff_display.py     # Rich diff rendering for file changes
│   │
│   └── widgets/                # Textual UI components
│       ├── __init__.py
│       ├── terminal_widget.py  # Main unified terminal (shell + AI)
│       └── chat_panel.py       # (if exists) Alternative chat UI
│
├── tests/                      # Test suite (if exists)
├── pyproject.toml              # Modern Python packaging
├── setup.py                    # Legacy setuptools (still used for install)
├── requirements.txt            # Dev dependencies (optional)
├── Dockerfile                  # Container image definition
├── LICENSE                     # MIT License
└── README.md                   # This file
```
LangGraph Integration:

- Uses `create_react_agent()` from LangGraph for the ReAct pattern
- Checkpointer: `SqliteSaver` at `~/.commandor/checkpoints.db` for persistence
- Thread IDs scoped by mode: `{mode}_{uuid}` separates chat/agent/plan histories
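Thread-ID scoping is a one-liner; a sketch of the stated `{mode}_{uuid}` scheme (the helper name is an assumption):

```python
import uuid

def new_thread_id(mode: str) -> str:
    """Scope checkpoint threads by mode so chat, agent, and plan
    histories never share a conversation thread."""
    return f"{mode}_{uuid.uuid4()}"
```

The checkpointer keys all saved messages by this ID, so resuming a session simply means reusing its thread ID.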
Streaming Pipeline:

1. `terminal_widget.py` `_run_ai()` spawns a worker
2. The worker calls `agent_bridge.stream_agent_events()`
3. `stream_agent_events()` builds the LLM, constructs the graph, and calls `_iter_graph()`
4. Events (`TokenEvent`, `ToolCallEvent`, etc.) are yielded back to the UI
5. `TerminalWidget._on_ai_event()` renders each event type
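The event-bridge idea can be shown with a toy generator. This is an illustrative sketch, not the real `agent_bridge` (which consumes a LangGraph stream rather than a plain list):

```python
from dataclasses import dataclass, field

@dataclass
class TokenEvent:
    text: str

@dataclass
class ToolCallEvent:
    tool: str
    args: dict = field(default_factory=dict)

def stream_events(chunks):
    """Turn raw stream chunks into typed events the UI can render.

    Dict chunks with a "tool" key become tool-call events;
    everything else is treated as streamed text.
    """
    for chunk in chunks:
        if isinstance(chunk, dict) and "tool" in chunk:
            yield ToolCallEvent(chunk["tool"], chunk.get("args", {}))
        else:
            yield TokenEvent(str(chunk))
```

The UI side then pattern-matches on the event type to decide how to render it (token append vs. tool panel).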
Context Summarization:

- The hook `_make_summarize_hook()` runs before each LLM call
- It checks `_approx_tokens(messages)` against the threshold (80% of the context window)
- If exceeded, old messages are summarized into a single `HumanMessage` containing the summary
- This prevents context overflow while preserving key information
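Structurally, condensation collapses the older turns into one summary entry while keeping recent turns verbatim. A sketch of that shape — in the real hook the summary text is produced by the LLM, not by truncation as here:

```python
def condense_history(messages, keep_last: int = 4):
    """Collapse older messages into a single summary entry,
    keeping the most recent turns verbatim."""
    if len(messages) <= keep_last:
        return messages
    old, recent = messages[:-keep_last], messages[-keep_last:]
    # Placeholder summary; the real implementation asks the model
    # to summarize `old` instead of truncating it.
    summary = "Summary of earlier conversation: " + " | ".join(m[:40] for m in old)
    return [summary] + recent
```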
```
# Read a file
/chat show me the contents of @commandor/config.py

# Create a new module
/agent create a new module @utils/helpers.py with functions for validation

# Edit a file
/agent in @app/main.py, replace the print statement with proper logging

# Apply a patch
/agent apply this diff to @src/components/Button.tsx:
# --- a/src/components/Button.tsx
# +++ b/src/components/Button.tsx
# @@ -10,7 +10,7 @@
# - return <button className="btn">{children}</button>
# + return <button className="btn primary">{children}</button>
```

```
# Find all test files
/agent find all test files in the project

# Search for a function
/chat where is the authenticate_user function defined?

# Understand architecture
/plan analyze the project structure and document the main components
```

```
# Check git status first
git status

# Then ask AI to fix conflicts
/agent resolve the git conflicts in @src/auth.py

# Run tests, then fix failures
pytest tests/ -v
/agent fix the failing tests and re-run
```

```
/plan refactor the user authentication system to use JWT tokens

# AI will output a plan like:
# 1. Read current auth implementation (read_file_tool)
# 2. Identify user model and login flow
# 3. Design JWT integration strategy
# 4. Add JWT secret to config
# 5. Implement token generation in login endpoint
# ...
# You review, edit if needed, then approve. AI executes step by step.
```

```
# Start a task
/agent implement OAuth2 login

# Name it for later
/sessions save oauth-login

# Later, resume
/sessions resume oauth-login

# List all sessions
/sessions

# Delete old session
/sessions delete old-project
```

Cause: The provider isn't configured with an API key.
Solutions:
- Run `/setup` inside Commandor and enter your key
- Set the environment variable (`export GEMINI_API_KEY=...`)
- Edit `~/.commandor/config` and add the key under the provider
- Check status with `/providers` (shows ✓/✗ for each)
Cause: Scripts directory not in PATH or virtual environment not activated.
Solutions:
```
# Check if installed
pip show commandor-ai

# If in venv, activate it
source venv/bin/activate  # Linux/macOS
venv\Scripts\activate     # Windows

# Or use full path
python -m commandor
```

Cause: Very long conversations can hit model token limits.
Solutions:
Commandor auto-summarizes at 80% capacity, but you can also:

- Use `/reset` to start fresh
- Start a new session: `/sessions new <name>`
- Use a model with a larger context window (e.g., `gemini-2.5-pro`)

Cause: SQLite database corruption (rare, usually from abrupt termination).
Solutions:
- Delete `~/.commandor/checkpoints.db`; Commandor will recreate it
- Sessions themselves (in `~/.commandor/sessions.json`) are safe to keep
Tips:
- Always use `cd_tool` to navigate to the correct project directory first
- Check that file paths are relative to the CWD (shown in the prompt)
- Some tools require files to exist; use `list_directory_tool` to verify
- For shell commands, ensure you have execute permissions
Solutions:
- Update Textual: `pip install -U textual`
- Try a different color scheme: edit `~/.commandor/config` and set `ui.color_scheme: dark`
- Reduce output detail: set `ui.verbose: false`
- Run with `TERM=xterm-256color` if colors are broken
- Gemini: Ensure API key has Generative Language API enabled
- Anthropic: Check you're using an API key from console.anthropic.com
- OpenAI: Verify organization and billing are set up
- OpenRouter: Some models require credits; check your balance
```
# Install dev dependencies
pip install -e ".[dev]"

# Run tests
pytest tests/ -v

# With coverage
pytest --cov=commandor
```

```
# Quick interactive test
commandor
/setup   # configure a provider
/agent create a simple Python hello world script
```

```python
# From Python
from commandor.agent.executor import test_providers

results = test_providers()
print(results)
# Output: {'gemini': {'status': 'ok'}, 'anthropic': {'status': 'no_api_key'}, ...}
```

```
# Clone
git clone https://github.com/ravin-d-27/Commandor.git
cd Commandor

# Create venv
python -m venv venv
source venv/bin/activate  # or venv\Scripts\activate on Windows

# Install in editable mode with dev deps
pip install -e ".[dev]"

# Run locally
commandor --version
```

- Python: follow PEP 8, use `ruff` for linting
- Imports: sort with `ruff` or `isort`
- Types: use type hints (checked with `mypy`)
- Commits: Conventional Commits recommended (`feat:`, `fix:`, `docs:`, etc.)

```
pip install pre-commit
pre-commit install
# Runs ruff, mypy, etc. on staged files
```

```
# Build distribution
pip install build
python -m build

# Check
twine check dist/*

# Upload to PyPI (maintainers only)
twine upload dist/*
```

Contributions are welcome and appreciated!
- Report bugs: Open an issue with steps to reproduce, expected vs actual behavior, environment details
- Request features: Describe the use case and proposed solution
- Submit PRs:
  1. Fork the repo
  2. Create a feature branch (`git checkout -b feat/amazing-feature`)
  3. Make changes, add tests if applicable
  4. Ensure tests pass (`pytest`)
  5. Open a PR with a clear description
- Support for more providers (Groq, Together, etc.)
- Enhanced diff viewer (side-by-side, syntax highlighting)
- Export formats (JSON, HTML, PDF)
- Plugin system for custom tools
- Windows-specific improvements
- Performance optimizations for large repos
- Better error recovery and retry logic
Please be respectful and constructive. Harassment or toxic behavior will not be tolerated.
MIT License β see LICENSE file for full text.
Short version: Use this software for any purpose, modify it, distribute it. Just include the original license and copyright notice.
Built with these amazing open-source projects:
- LangGraph β Agent orchestration
- Textual β TUI framework
- Rich β Terminal formatting
- LangChain β LLM abstractions
- OpenAI / Anthropic / Google β Model providers
Author: Ravin D
- GitHub: https://github.com/ravin-d-27
- Email: ravin.d3107@outlook.com
- Issues: https://github.com/ravin-d-27/Commandor/issues
- Current Version: 0.2.0
- Status: Actively maintained
- Last Major Update: Recent commits include session autosave, metrics monitoring, and classifier improvements
- Roadmap: See GitHub Projects for upcoming features
If you find Commandor useful, please consider:
- Starring the repository on GitHub
- Reporting bugs and suggesting improvements
- Sharing with your network
- Contributing code or documentation
Your support helps keep the project alive!
