The AFM Python Interpreter is the shared runtime for Python-based Agent-Flavored Markdown (AFM) implementations. It provides a modular, plugin-based architecture that supports multiple execution backends, with LangChain provided as the reference backend.
- Pluggable execution backends - support for multiple LLM frameworks via the `AgentRunner` protocol (LangChain included as the reference implementation)
- Support for all interface types:
- Console chat (interactive CLI)
- Web chat (HTTP API + optional UI)
- Webhook (WebSub-based event handling)
- Multi-interface agents - run multiple interfaces simultaneously
- Tool integration via the Model Context Protocol (MCP)
- Validation - dry-run mode to validate AFM definitions
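To illustrate the pluggable-backend idea, here is a minimal sketch of what an `AgentRunner`-style protocol could look like. The names `run`, `EchoRunner`, and `dispatch` are hypothetical; the actual protocol in `afm-core` may define a different interface.

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class AgentRunner(Protocol):
    """Execution-backend contract: turn a user message into an agent reply."""

    def run(self, message: str) -> str: ...


class EchoRunner:
    """Toy backend standing in for a real one (e.g. the LangChain backend)."""

    def run(self, message: str) -> str:
        return f"echo: {message}"


def dispatch(runner: AgentRunner, message: str) -> str:
    # Interfaces (console, web, webhook) depend only on the protocol,
    # so any conforming backend can be swapped in.
    return runner.run(message)


print(dispatch(EchoRunner(), "hello"))  # → echo: hello
```

Because the protocol is structural, a backend package only needs to provide a conforming class; it does not have to import anything from the core package.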
Install from PyPI:

```shell
pip install afm-cli
```

Run an agent:

```shell
export OPENAI_API_KEY="your-api-key-here"
afm run path/to/agent.afm.md
```

Or run from source with uv:

```shell
export OPENAI_API_KEY="your-api-key-here"
uv run afm path/to/agent.afm.md
```

Configuration via environment variables or CLI options:

- `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, etc. (required depending on the provider)
- HTTP port can be set via `-p` or `--port` (default: 8085)
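As a sketch of how the port option resolves, assuming conventional argparse-style parsing (the real `afm` CLI may use a different framework; only the flags and the 8085 default come from the text above):

```python
import argparse

# Mirror the documented option: -p/--port with a default of 8085.
parser = argparse.ArgumentParser(prog="afm")
parser.add_argument("-p", "--port", type=int, default=8085,
                    help="HTTP port for web chat / webhook interfaces")

print(parser.parse_args([]).port)              # → 8085 (default)
print(parser.parse_args(["-p", "9000"]).port)  # → 9000 (explicit flag)
```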
The Docker image bundles the LangChain execution backend.
```shell
# Using the pre-built image
docker run -v $(pwd)/path/to/agent.afm.md:/app/agent.afm.md \
  -e OPENAI_API_KEY=$OPENAI_API_KEY \
  -p 8085:8085 \
  ghcr.io/wso2/afm-langchain-interpreter:latest run /app/agent.afm.md

# Or build locally
docker build -t afm-langchain-interpreter .
```

Run the test suite:

```shell
uv run pytest
```

This is a uv workspace with three packages:
```
python-interpreter/
├── packages/
│   ├── afm-core/              # Core: parser, CLI, models, interfaces, protocols
│   │   ├── src/afm/
│   │   └── tests/
│   ├── afm-langchain/         # LangChain execution backend
│   │   ├── src/afm_langchain/
│   │   └── tests/
│   └── afm-cli/               # User-facing metapackage
├── Dockerfile                 # Container build
├── pyproject.toml             # Workspace configuration
└── uv.lock                    # Dependency lock file
```
Apache-2.0