# Adora

Agentic Dataflow-Oriented Robotic Architecture -- a 100% Rust framework for building real-time robotics and AI applications.
Built and maintained with agentic engineering -- code generation, reviews, refactoring, testing, and commits are driven by autonomous AI agents.
- Features
- Installation
- Quick Start
- CLI Commands
- Dataflow Configuration
- Architecture
- Language Support
- Examples
- Development
- Contributing
- License
## Features

- 10-17x faster than ROS2 Python -- 100% Rust internals with zero-copy shared memory IPC for messages >4KB, flat latency from 4KB to 4MB payloads
- Apache Arrow native -- columnar memory format end-to-end with zero serialization overhead; shared across all language bindings
- Single CLI, full lifecycle -- `adora run` for local dev, `adora up`/`start` for distributed prod, plus build, logs, monitoring, and record/replay all from one tool
- Declarative YAML dataflows -- define pipelines as directed graphs, connect nodes through typed inputs/outputs, optional type annotations with static validation, override with environment variables
- Multi-language nodes -- write nodes in Rust, Python, C, or C++ with native APIs (not wrappers); mix languages freely in one dataflow
- Reusable modules -- compose sub-graphs as standalone YAML files with typed inputs/outputs, parameters, optional ports, and nested composition (compile-time expansion, zero runtime overhead)
- Hot reload -- live-reload Python operators without restarting the dataflow
- Programmatic builder -- construct dataflows in Python code as an alternative to YAML
- Fault tolerance -- per-node restart policies (never/on-failure/always), exponential backoff, health monitoring, circuit breakers with configurable input timeouts
- Distributed by default -- local shared memory between co-located nodes, automatic Zenoh pub-sub for cross-machine communication, SSH-based cluster management with label scheduling, rolling upgrades, and auto-recovery
- Coordinator persistence -- optional redb-backed state store survives coordinator crashes and restarts
- OpenTelemetry -- built-in structured logging with rotation/routing, metrics, distributed tracing, and zero-setup trace viewing via CLI
- Record/replay -- capture dataflow messages to `.adorec` files, replay offline at any speed with node substitution for regression testing
- Topic inspection -- `topic echo` to print live data, `topic hz` TUI for frequency analysis, `topic info` for schema and bandwidth
- Resource monitoring -- `adora top` TUI showing per-node CPU, memory, queue depth, network I/O, restart count, and health status across all machines; `--once` flag for scriptable JSON snapshots
- Trace inspection -- `trace list` and `trace view` for viewing coordinator spans without external infrastructure
- Dataflow visualization -- generate interactive HTML or Mermaid graphs from YAML descriptors
- Communication patterns -- built-in service (request/reply), action (goal/feedback/result), and streaming (session/segment/chunk) patterns via well-known metadata keys; no daemon or YAML changes required
- ROS2 bridge -- bidirectional interop with ROS2 topics, services, and actions; QoS mapping; Arrow-native type conversion
- Pre-packaged nodes -- node hub with ready-made nodes for cameras, YOLO, LLMs, TTS, and more
- In-process operators -- lightweight functions that run inside a shared runtime, avoiding per-node process overhead for simple transformations
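The built-in service pattern correlates a reply with its request through metadata (the `service-example` below uses a `request_id` key for exactly this). As a minimal, framework-free sketch of that correlation logic -- the helper names here are hypothetical, and the exact metadata API of the node bindings is an assumption:

```python
import uuid

def new_request_metadata():
    """Mint metadata for an outgoing request; a well-behaved server
    echoes the same request_id back on its reply (hypothetical helper)."""
    return {"request_id": str(uuid.uuid4())}

def is_reply_to(request_meta, reply_meta):
    """Match a reply event's metadata against a pending request."""
    return reply_meta.get("request_id") == request_meta["request_id"]

# Correlation works independently of transport:
req = new_request_metadata()
reply = dict(req)  # server copies the key onto its reply metadata
print(is_reply_to(req, reply))
```

The same idea extends to the action pattern, where feedback and result messages all carry the originating goal's identifier.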
## Installation

```shell
cargo install adora-cli    # CLI (adora command)
pip install adora-rs       # Python node/operator API
```

From source:

```shell
git clone https://github.com/dora-rs/adora.git
cd adora
cargo build --release -p adora-cli
PATH=$PATH:$(pwd)/target/release

# Python API (requires maturin >= 1.8: pip install maturin)
# Must run from the package directory for dependency resolution
cd apis/python/node && maturin develop --uv && cd ../../..
```

macOS / Linux:

```shell
curl --proto '=https' --tlsv1.2 -LsSf \
  https://github.com/dora-rs/adora/releases/latest/download/adora-cli-installer.sh | sh
```

Windows:

```shell
powershell -ExecutionPolicy ByPass -c "irm https://github.com/dora-rs/adora/releases/latest/download/adora-cli-installer.ps1 | iex"
```

| Feature | Description | Default |
|---|---|---|
| `tracing` | OpenTelemetry tracing support | Yes |
| `metrics` | OpenTelemetry metrics collection | Yes |
| `python` | Python operator support (PyO3) | Yes |
| `redb-backend` | Persistent coordinator state (redb) | No |
| `prometheus` | Prometheus `/metrics` endpoint on coordinator | No |
```shell
cargo install adora-cli --features redb-backend
```

Important: The PyPI package is `adora-rs`, not `adora`. The import name is `adora` (`from adora import Node`), but `pip install adora` installs an unrelated package.
## Quick Start

```shell
cargo install adora-cli    # or use the install script above
pip install adora-rs
git clone https://github.com/dora-rs/adora.git && cd adora
adora run examples/python-dataflow/dataflow.yml
```

This runs a sender -> transformer -> receiver pipeline. Here's what the Python node code looks like:
```python
# sender.py -- sends 100 messages
from adora import Node
import pyarrow as pa

node = Node()
for i in range(100):
    node.send_output("message", pa.array([i]))
```

```python
# receiver.py -- receives and prints messages
from adora import Node

node = Node()
for event in node:
    if event["type"] == "INPUT":
        print(f"Got {event['id']}: {event['value'].to_pylist()}")
    elif event["type"] == "STOP":
        break
```

See the Python Getting Started Guide for a full tutorial, or the Python API Reference for complete API docs.
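The transformer sitting between sender and receiver is not shown here. Using the same API as the two nodes above, it might look like this sketch -- the file name and the `transformed` output id are assumptions, not the actual example code:

```python
# transformer.py -- doubles each incoming value (illustrative sketch)

def double(values):
    """Pure transformation, kept separate from the I/O loop."""
    return [v * 2 for v in values]

def run():
    """Node entry point; call run() from __main__ when deployed as a
    dataflow node (requires the adora runtime and pyarrow)."""
    from adora import Node  # same event-loop API as the receiver above
    import pyarrow as pa

    node = Node()
    for event in node:
        if event["type"] == "INPUT":
            out = double(event["value"].to_pylist())
            node.send_output("transformed", pa.array(out))
        elif event["type"] == "STOP":
            break
```

Keeping the transformation pure makes it unit-testable without a running dataflow.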
Run the Rust example:

```shell
cd examples/rust-dataflow
adora run dataflow.yml
```

Local coordinator/daemon workflow:

```shell
# Terminal 1: start coordinator + daemon
adora up

# Terminal 2: start a dataflow (--debug enables topic inspection)
adora start dataflow.yml --attach --debug

# Terminal 3: monitor
adora list
adora logs <dataflow-id>
adora top

# Stop or restart
adora stop <dataflow-id>
adora restart --name <name>
adora down
```

Distributed mode:

```shell
# Bring up a multi-machine cluster from a config file
adora cluster up cluster.yml

# Start a dataflow across the cluster
adora start dataflow.yml --name my-app --attach

# Check cluster health
adora cluster status

# Tear down
adora cluster down
```

See the Distributed Deployment Guide for cluster.yml configuration, label scheduling, systemd services, rolling upgrades, and operational runbooks.
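For orientation only, a cluster.yml plausibly lists the machines to manage over SSH together with scheduling labels. Every key name below is hypothetical, invented for illustration; the authoritative schema lives in the Distributed Deployment Guide:

```yaml
# Hypothetical sketch -- key names are illustrative, NOT the real schema
machines:
  - name: robot-1
    host: 192.168.1.10    # reached via SSH by the coordinator
    labels: [camera, gpu] # used for label-based node scheduling
  - name: robot-2
    host: 192.168.1.11
    labels: [camera]
```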
## CLI Commands

| Command | Description |
|---|---|
| `adora run <PATH>` | Run a dataflow locally (no coordinator/daemon needed) |
| `adora up` | Start coordinator and daemon in local mode |
| `adora down` | Tear down coordinator and daemon |
| `adora build <PATH>` | Run build commands from a dataflow descriptor |
| `adora start <PATH>` | Start a dataflow on a running coordinator |
| `adora stop <ID>` | Stop a running dataflow |
| `adora restart <ID>` | Restart a running dataflow (stop + re-start) |
| Command | Description |
|---|---|
| `adora list` | List running dataflows (alias: `ps`) |
| `adora logs <ID>` | Show logs for a dataflow or node |
| `adora top` | Real-time resource monitor (TUI); also `adora inspect top` |
| `adora topic list` | List topics in a dataflow |
| `adora topic hz <TOPIC>` | Measure topic publish frequency (TUI) |
| `adora topic echo <TOPIC>` | Print topic messages to stdout |
| `adora topic info <TOPIC>` | Show topic type and metadata |
| `adora topic pub <TOPIC> <DATA>` | Publish JSON data to a topic |
| `adora node list` | List nodes in a dataflow |
| `adora node info <NODE>` | Show detailed node status, inputs, outputs, and metrics |
| `adora node restart <NODE>` | Restart a single node within a running dataflow |
| `adora node stop <NODE>` | Stop a single node within a running dataflow |
| `adora param list <NODE>` | List runtime parameters for a node |
| `adora param get <NODE> <KEY>` | Get a runtime parameter value |
| `adora param set <NODE> <KEY> <VALUE>` | Set a runtime parameter (JSON value) |
| `adora param delete <NODE> <KEY>` | Delete a runtime parameter |
| `adora trace list` | List recent traces captured by the coordinator |
| `adora trace view <ID>` | View spans for a specific trace (supports prefix matching) |
| `adora record <PATH>` | Record dataflow messages to a `.adorec` file |
| `adora replay <FILE>` | Replay recorded messages from a `.adorec` file |
| Command | Description |
|---|---|
| `adora cluster up <PATH>` | Bring up a cluster from a cluster.yml file |
| `adora cluster status` | Show connected daemons and active dataflows |
| `adora cluster down` | Tear down the cluster |
| `adora cluster install <PATH>` | Install daemons as systemd services |
| `adora cluster uninstall <PATH>` | Remove systemd services |
| `adora cluster upgrade <PATH>` | Rolling upgrade: SCP binary + restart per machine |
| `adora cluster restart <NAME>` | Restart a dataflow by name or UUID |
| Command | Description |
|---|---|
| `adora doctor` | Diagnose environment, connectivity, and dataflow health |
| `adora status` | Check system health (alias: `check`) |
| `adora new` | Generate a new project or node |
| `adora graph <PATH>` | Visualize a dataflow (Mermaid or HTML) |
| `adora expand <PATH>` | Expand module references and print flat YAML |
| `adora validate <PATH>` | Validate dataflow YAML and check type annotations |
| `adora system` | System management (daemon/coordinator control) |
| `adora completion <SHELL>` | Generate shell completions |
| `adora self update` | Update the adora CLI |
For full CLI documentation, see docs/cli.md. For distributed deployment, see docs/distributed-deployment.md.
## Dataflow Configuration

Dataflows are defined in YAML. Each node declares its binary/script, inputs, and outputs:
```yaml
nodes:
  - id: camera
    build: pip install opencv-video-capture
    path: opencv-video-capture
    inputs:
      tick: adora/timer/millis/20
    outputs:
      - image
    env:
      CAPTURE_PATH: 0
      IMAGE_WIDTH: 640
      IMAGE_HEIGHT: 480

  - id: object-detection
    build: pip install adora-yolo
    path: adora-yolo
    inputs:
      image: camera/image
    outputs:
      - bbox

  - id: plot
    build: pip install adora-rerun
    path: adora-rerun
    inputs:
      image: camera/image
      boxes2d: object-detection/bbox
```

Built-in timer nodes: `adora/timer/millis/<N>` and `adora/timer/hz/<N>`.
Input format: `<node-id>/<output-name>` to subscribe to another node's output. The long form supports `queue_size`, `queue_policy` (`drop_oldest` or `backpressure`), and `input_timeout`. See the YAML Specification for details.
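As a sketch, the long form replaces the shorthand string with a mapping. The `source` key name and the value formats below are assumptions for illustration; only `queue_size`, `queue_policy`, and `input_timeout` come from the spec above:

```yaml
inputs:
  image:
    source: camera/image       # assumed key name for the upstream output
    queue_size: 10
    queue_policy: drop_oldest  # or: backpressure
    input_timeout: 500ms       # feeds the per-node circuit breaker
```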
Type annotations: Optionally annotate ports with type URNs for static and runtime validation. See the Type Annotations Guide for the full type library.
```yaml
nodes:
  - id: camera
    path: camera.py
    outputs:
      - image
    output_types:
      image: std/media/v1/Image
```

```shell
adora validate dataflow.yml                           # static check (warnings)
adora validate --strict-types dataflow.yml            # fail on warnings (CI)
adora build dataflow.yml --strict-types               # type check during build
ADORA_RUNTIME_TYPE_CHECK=warn adora run dataflow.yml  # runtime check
```

Modules: Extract reusable sub-graphs into separate files with `module:` instead of `path:`. See the Modules Guide for details.
```yaml
nodes:
  - id: nav_stack
    module: modules/navigation.module.yml
    inputs:
      goal_pose: localization/goal
```

## Architecture

```
CLI --> Coordinator  -->  Daemon(s)   --> Nodes / Operators
        (orchestration)   (per machine)   (user code)
```
| Layer | Protocol | Purpose |
|---|---|---|
| CLI <-> Coordinator | WebSocket (port 6013) | Build, run, stop commands |
| Coordinator <-> Daemon | WebSocket | Node spawning, dataflow lifecycle |
| Daemon <-> Daemon | Zenoh | Distributed cross-machine communication |
| Daemon <-> Node | Shared memory / TCP | Zero-copy IPC for data >4KB, TCP for small messages |
- Coordinator -- orchestrates dataflow lifecycle across daemons. Supports in-memory or persistent (redb) state store.
- Daemon -- spawns and manages nodes on a single machine. Handles shared memory allocation and message routing.
- Runtime -- in-process operator execution engine. Operators run inside the runtime process, avoiding per-operator process overhead.
- Nodes -- standalone processes that communicate via inputs/outputs. Written in Rust, Python, C, or C++.
- Operators -- lightweight functions that run inside the runtime. Faster than nodes for simple transformations.
```
binaries/
  cli/                    # adora CLI binary
  coordinator/            # Orchestration service
  daemon/                 # Node manager + IPC
  runtime/                # In-process operator runtime
  ros2-bridge-node/       # ROS2 bridge binary
  record-node/            # Dataflow message recorder
  replay-node/            # Recorded message replayer
libraries/
  core/                   # Descriptor parsing, build utilities
  message/                # Inter-component message types
  shared-memory-server/   # Zero-copy IPC
  arrow-convert/          # Arrow data conversion
  recording/              # .adorec recording format
  log-utils/              # Log parsing, merging, formatting
  coordinator-store/      # Persistent coordinator state (redb)
extensions/
  telemetry/              # OpenTelemetry tracing + metrics
  ros2-bridge/            # ROS2 interop (bridge, msg-gen, arrow, python)
  download/               # Download utilities
apis/
  rust/node/              # Rust node API (adora-node-api)
  rust/operator/          # Rust operator API (adora-operator-api)
  python/node/            # Python node API (PyO3)
  python/operator/        # Python operator API (PyO3)
  python/cli/             # Python CLI interface
  c/node/                 # C node API
  c/operator/             # C operator API
  c++/node/               # C++ node API (CXX bridge)
  c++/operator/           # C++ operator API (CXX bridge)
examples/                 # Example dataflows
```
## Language Support

| Language | Node API | Operator API | Docs | Status |
|---|---|---|---|---|
| Rust | `adora-node-api` | `adora-operator-api` | API Reference | First-class |
| Python >= 3.8 | `pip install adora-rs` | included | Getting Started, API Reference | First-class |
| C | `adora-node-api-c` | `adora-operator-api-c` | API Reference | Supported |
| C++ | `adora-node-api-cxx` | `adora-operator-api-cxx` | API Reference | Supported |
| ROS2 >= Foxy | `adora-ros2-bridge` | -- | Bridge Guide | Experimental |
| Platform | Status |
|---|---|
| Linux (x86_64, ARM64, ARM32) | First-class |
| macOS (ARM64) | First-class |
| Windows (x86_64) | Best effort |
| WSL (x86_64) | Best effort |
## Examples

| Example | Language | Description |
|---|---|---|
| rust-dataflow | Rust | Basic Rust node pipeline |
| python-dataflow | Python | Python sender/transformer/receiver |
| python-operator-dataflow | Python | Python operators (in-process) |
| python-dataflow-builder | Python | Pythonic imperative API |
| c-dataflow | C | C node example |
| c++-dataflow | C++ | C++ node example |
| c++-arrow-dataflow | C++ | C++ with Arrow data |
| cmake-dataflow | C/C++ | CMake-based build |
| Example | Language | Description |
|---|---|---|
| module-dataflow | Python | Reusable module composition |
| typed-dataflow | Python | Type annotations with `adora validate` |
| Example | Language | Description |
|---|---|---|
| service-example | Rust | Request/reply with request_id correlation |
| action-example | Rust | Goal/feedback/result with cancellation |
See docs/patterns.md for the full guide.
| Example | Language | Description |
|---|---|---|
| python-async | Python | Async Python nodes |
| python-concurrent-rw | Python | Concurrent read-write patterns |
| python-multiple-arrays | Python | Multi-array handling |
| python-drain | Python | Event draining patterns |
| multiple-daemons | Rust | Distributed multi-daemon setup |
| rust-dataflow-git | Rust | Git-based dataflow loading |
| rust-dataflow-url | Rust | URL-based dataflow loading |
| Example | Language | Description |
|---|---|---|
| python-logging | Python | Python logging integration |
| python-log | Python | Basic Python log output |
| log-sink-tcp | YAML | TCP-based log sink |
| log-sink-file | YAML | File-based log sink |
| log-sink-alert | YAML | Alert-based log sink |
| log-aggregator | Python | Centralized log aggregation via adora/logs |
| Example | Language | Description |
|---|---|---|
| benchmark | Rust/Python | Latency and throughput benchmark |
| ros2-comparison | Python | Adora vs ROS2 comparison |
| cuda-benchmark | Rust/CUDA | GPU zero-copy benchmark |
| Example | Description |
|---|---|
| ros2-bridge/rust | Rust ROS2 topics, services, actions |
| ros2-bridge/python | Python ROS2 integration |
| ros2-bridge/c++ | C++ ROS2 integration |
| ros2-bridge/yaml-bridge | YAML-based ROS2 topic bridge |
| ros2-bridge/yaml-bridge-service | YAML ROS2 service bridge |
| ros2-bridge/yaml-bridge-action | YAML ROS2 action client |
| ros2-bridge/yaml-bridge-action-server | YAML ROS2 action server |
## Development

Rust edition 2024, MSRV 1.85.0, workspace version 0.1.0.
```shell
# Build all (excluding Python packages, which require maturin)
cargo build --all \
  --exclude adora-node-api-python \
  --exclude adora-operator-api-python \
  --exclude adora-ros2-bridge-python

# Build specific package
cargo build -p adora-cli
```

```shell
# Run all tests
cargo test --all \
  --exclude adora-node-api-python \
  --exclude adora-operator-api-python \
  --exclude adora-ros2-bridge-python

# Test single package
cargo test -p adora-core

# Smoke tests (requires coordinator/daemon)
cargo test --test example-smoke -- --test-threads=1
```

```shell
cargo clippy --all
cargo fmt --all -- --check
```

```shell
cargo run --example rust-dataflow
cargo run --example python-dataflow
cargo run --example benchmark --release
```

## Contributing

We welcome contributors of all experience levels. See the contributing guide to get started.
This repository is maintained with AI-assisted agentic engineering. Code generation, reviews, refactoring, testing, and commits are driven by autonomous AI agents -- enabling faster iteration and higher code quality at scale.
## License

Apache-2.0. See NOTICE.md for details.