Conductor - Internet scale Workflow Engine


Orchestrating distributed systems means wrestling with failures, retries, and state recovery. Conductor handles all of that so you don't have to.

Conductor is an open-source, durable workflow engine built at Netflix for orchestrating microservices, AI agents, and durable workflows at internet scale. Trusted in production at Netflix, Tesla, LinkedIn, and J.P. Morgan. Actively maintained by Orkes and a growing community.



Get Running in 60 Seconds

Prerequisites: Node.js v16+ and Java 21+ must be installed.

npm install -g @conductor-oss/conductor-cli
conductor server start

Open http://localhost:8080. Your server is now running with the built-in UI.

Run your first workflow:

# Create a workflow that calls an API and parses the response (no workers needed)
curl -s https://raw.githubusercontent.com/conductor-oss/conductor/main/docs/quickstart/workflow.json -o workflow.json
conductor workflow create workflow.json

Note: Running this command twice returns an error on the second call because the workflow already exists. This is expected behavior. Use conductor workflow update to modify an existing workflow.

conductor workflow start -w hello_workflow --sync

See the Quickstart guide for the full walkthrough, including writing workers and replaying workflows.

Docker Image for Conductor:

docker run -p 8080:8080 conductoross/conductor:latest # replace latest with the published version to pin to a specific version

All CLI commands have equivalent cURL/API calls. See the Quickstart for details.
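As a sketch of what that looks like (assuming the default server at http://localhost:8080 and the hello_workflow definition registered above), starting a workflow over the REST API is a single POST, with the request body supplying the workflow input:

```shell
# Start an execution of hello_workflow via the REST API.
# The empty JSON object is the workflow input; the response body
# is the ID of the new workflow execution.
curl -s -X POST "http://localhost:8080/api/workflow/hello_workflow" \
  -H "Content-Type: application/json" \
  -d '{}'
```

This requires a running Conductor server; without one, curl simply fails to connect.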


Why Conductor is the workflow engine of choice for developers

  • Durable execution: Every step is persisted. Workflows survive crashes, restarts, and network failures with configurable retries and timeouts.
  • Deterministic by design: Orchestration is separated from business logic, so determinism is architectural rather than developer discipline. Workers run any code; the workflow graph stays deterministic by construction.
  • AI agent orchestration: 14+ native LLM providers, MCP tool calling, function calling, human-in-the-loop approval, and vector databases for RAG.
  • Dynamic at runtime: Dynamic forks, tasks, and sub-workflows are resolved at runtime. LLMs can generate JSON workflow definitions that Conductor executes immediately.
  • Full replayability: Restart from the beginning, rerun from any task, or retry just the failed step, on any workflow at any time.
  • Internet scale: Battle-tested at Netflix, Tesla, LinkedIn, and J.P. Morgan. Scales horizontally to billions of workflow executions.
  • Polyglot workers: Write workers in Java, Python, Go, JavaScript, C#, Ruby, or Rust. Workers poll, execute, and report; run them anywhere.
  • Self-hosted, no lock-in: Apache 2.0 licensed, with 5 persistence backends and 6 message brokers. Runs anywhere Docker or a JVM runs.

Ship Agents, Not Framework Code

Conductor workers are plain code: any language, any library, any I/O. No determinism constraints, no SDK ritual. The orchestration layer is declarative and machine-readable, so LLMs generate and compose workflows natively. If an agent crashes at iteration 12, it resumes from iteration 12.

An autonomous think-act agent in Conductor: discover tools via MCP, reason with an LLM, call the chosen tool, repeat until done.

{
  "name": "autonomous_agent",
  "description": "Agent that loops until the task is complete",
  "version": 1,
  "tasks": [
    {
      "name": "discover_tools",
      "taskReferenceName": "discover",
      "type": "LIST_MCP_TOOLS",
      "inputParameters": {
        "mcpServer": "${workflow.input.mcpServerUrl}"
      }
    },
    {
      "name": "agent_loop",
      "taskReferenceName": "loop",
      "type": "DO_WHILE",
      "loopCondition": "if ($.loop['think'].output.result.done == true) { false; } else { true; }",
      "loopOver": [
        {
          "name": "think",
          "taskReferenceName": "think",
          "type": "LLM_CHAT_COMPLETE",
          "inputParameters": {
            "llmProvider": "openai",
            "model": "gpt-4o-mini",
            "messages": [
              {
                "role": "system",
                "message": "You are an autonomous agent. Available tools: ${discover.output.tools}. Previous results: ${loop.output.results}. Respond with JSON: {\"action\": \"tool_name\", \"arguments\": {}, \"done\": false} or {\"answer\": \"final answer\", \"done\": true}."
              },
              { "role": "user", "message": "${workflow.input.task}" }
            ]
          }
        },
        {
          "name": "act",
          "taskReferenceName": "act",
          "type": "SWITCH",
          "expression": "$.think.output.result.done ? 'done' : 'call_tool'",
          "decisionCases": {
            "call_tool": [
              {
                "name": "execute_tool",
                "taskReferenceName": "tool_call",
                "type": "CALL_MCP_TOOL",
                "inputParameters": {
                  "mcpServer": "${workflow.input.mcpServerUrl}",
                  "method": "${think.output.result.action}",
                  "arguments": "${think.output.result.arguments}"
                }
              }
            ]
          }
        }
      ]
    }
  ]
}

Every step is durably persisted, with no framework and no SDK lock-in. Code-first engines force your code to be deterministic so the framework can replay it. Conductor makes the engine deterministic, so your code doesn't have to be.

See the Build Your First AI Agent guide for the full walkthrough.


Conductor Skills for AI Coding Assistants

Conductor Skills let AI coding assistants (Claude Code, Gemini CLI, and others) create, manage, and deploy Conductor workflows directly from your terminal.

Claude

# Install Skills for Claude Code
/plugin marketplace add conductor-oss/conductor-skills
/plugin install conductor@conductor-skills

Install for all detected agents

One command auto-detects every supported agent on your system and installs globally where possible. Re-run it anytime; it only installs for newly detected agents.

macOS / Linux

curl -sSL https://conductor-oss.github.io/conductor-skills/install.sh | bash -s -- --all

Windows (PowerShell) / (cmd)

# powershell
irm https://conductor-oss.github.io/conductor-skills/install.ps1 -OutFile install.ps1; .\install.ps1 -All

# cmd
powershell -c "irm https://conductor-oss.github.io/conductor-skills/install.ps1 -OutFile install.ps1; .\install.ps1 -All"

SDKs

Language Repository Install
☕ Java conductor-oss/java-sdk Maven Central
🐍 Python conductor-oss/python-sdk pip install conductor-python
🟨 JavaScript conductor-oss/javascript-sdk npm install @io-orkes/conductor-javascript
🐹 Go conductor-oss/go-sdk go get github.com/conductor-sdk/conductor-go
🟣 C# conductor-oss/csharp-sdk dotnet add package conductor-csharp
💎 Ruby conductor-oss/ruby-sdk (incubating)
🦀 Rust conductor-oss/rust-sdk (incubating)

Documentation & Community

  • Documentation: architecture, guides, API reference, and cookbook recipes.
  • Slack: community discussions and support.
  • Community Forum: ask questions and share patterns.

Backend Configuration
Backend Config file
Redis + ES7 (default) config-redis.properties
Redis + ES8 config-redis-es8.properties
Redis + OpenSearch config-redis-os.properties
Postgres config-postgres.properties
Postgres + ES7 config-postgres-es7.properties
MySQL + ES7 config-mysql.properties

Build From Source

Requirements and instructions

Requirements: Docker Desktop, Java (JDK) 21+, Node 18 (for UI)

git clone https://github.com/conductor-oss/conductor
cd conductor
./gradlew build

# (optional) Build UI
# ./build_ui.sh

# Start local server
cd server
../gradlew bootRun

See the full build guide for details.


FAQ

Is this the same as Netflix Conductor?

Yes. Conductor OSS is the continuation of the original Netflix Conductor repository, maintained by the open-source community after Netflix handed over stewardship of the project.

Is Conductor open source?

Yes. Conductor is a fully open-source workflow engine licensed under Apache 2.0. You can self-host on your own infrastructure with 5 persistence backends and 6 message brokers.

Is this project actively maintained?

Yes. Orkes is the primary maintainer and offers an enterprise SaaS platform for Conductor across all major cloud providers.

Can Conductor scale to handle my workload?

Yes. Built at Netflix, battle-tested at internet scale. Conductor scales horizontally across multiple server instances to handle billions of workflow executions.

Does Conductor support durable execution?

Yes. Conductor pioneered durable execution patterns, ensuring workflows and durable agents complete reliably despite infrastructure failures or crashes. Every step is persisted and recoverable.

Can I replay a workflow after it completes or fails?

Yes. Conductor preserves full execution history indefinitely. You can restart from the beginning, rerun from a specific task, or retry just the failed step via API or UI.

Can Conductor orchestrate AI agents and LLMs?

Yes. Conductor provides native integration with 14+ LLM providers (Anthropic, OpenAI, Gemini, Bedrock, and more), MCP tool calling, function calling, human-in-the-loop approval, and vector database integration for RAG.

Why does Conductor separate orchestration from code?

Coupling orchestration logic with business logic forces developers to maintain determinism constraints manually: no direct I/O, no system time, no randomness in workflow definitions. Conductor eliminates this entire class of bugs by making the orchestration layer deterministic by construction. Workers are plain code with zero framework constraints: write them in any language, use any library, call any API.

Isn't writing workflows as code more powerful than JSON?

It depends on what you mean by "powerful." In code-first engines, the workflow definition and your business logic live in the same runtime, which means the engine must replay your code to recover state. That forces determinism constraints on your business logic: no direct I/O, no system time, no threads, no randomness. Conductor separates these concerns. The orchestration graph is declarative (JSON), so it is deterministic by construction. Your workers are plain code with zero constraints: use any language, any library, call any API. You get the full power of code where it matters (business logic) without the framework tax where it doesn't (orchestration).

Can JSON workflows handle complex logic like branching, loops, and error handling?

Yes. Conductor supports SWITCH (conditional branching), DO_WHILE (loops with configurable iteration cleanup), FORK_JOIN (parallel execution with dynamic fanout), SUB_WORKFLOW (composition), and DYNAMIC tasks resolved at runtime. These are composable: you can nest loops inside branches inside forks. For error handling, every task supports configurable retries, timeouts, and optional/compensating tasks. The declarative model doesn't limit complexity; it makes complexity visible and debuggable.
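As an illustration (task names, the orderType parameter, and the inline task definition are hypothetical, not from this repository), a SWITCH branch whose task carries its own retry and timeout policy might look like:

```json
{
  "name": "route_order",
  "taskReferenceName": "route",
  "type": "SWITCH",
  "evaluatorType": "value-param",
  "expression": "orderType",
  "inputParameters": { "orderType": "${workflow.input.orderType}" },
  "decisionCases": {
    "express": [
      {
        "name": "ship_express",
        "taskReferenceName": "ship_express",
        "type": "SIMPLE",
        "taskDefinition": {
          "name": "ship_express",
          "retryCount": 3,
          "retryLogic": "EXPONENTIAL_BACKOFF",
          "retryDelaySeconds": 5,
          "timeoutSeconds": 300,
          "timeoutPolicy": "RETRY"
        }
      }
    ]
  },
  "defaultCase": [
    { "name": "ship_standard", "taskReferenceName": "ship_standard", "type": "SIMPLE" }
  ]
}
```

Retry and timeout settings normally live in a registered task definition; embedding them inline as shown keeps the example self-contained.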

How does Conductor handle workflow versioning?

Workflow definitions are versioned by number. Running executions continue on the version they started with, so deploying a new version never breaks in-flight workflows. There's no replay compatibility problem because Conductor doesn't replay your code. The orchestration graph is the source of truth, and each execution is pinned to its definition version. Update orchestration logic without redeploying workers and without worrying about breaking running workflows.

What about developer experience: IDE support, type checking, debugging?

Conductor provides a built-in visual UI for designing, running, and debugging workflows. Every execution is fully observable: you can inspect the input, output, timing, and retry history of every task. For type safety, Conductor validates workflow inputs and task I/O against JSON Schema. Workers are plain code in your language of choice, so you get full IDE support, type checking, and debugging for your business logic. The orchestration layer is visible in the UI, not hidden inside a framework.

Can Conductor handle long-running workflows (days, weeks, months)?

Yes. Conductor is designed for long-running workflows. Executions are fully persisted: a workflow can pause for months waiting for a human approval, an external signal, or a scheduled timer, and resume exactly where it left off. There's no in-memory state to lose. This is the same mechanism that makes AI agent loops durable: if iteration 12 waits three weeks for a human review, iteration 13 starts as soon as the review completes.
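For illustration (the task names are hypothetical), such a pause point can be expressed as a WAIT task. The workflow stays durably parked, holding no compute resources, until the task is completed externally via the task-update API or the UI:

```json
{
  "name": "wait_for_approval",
  "taskReferenceName": "approval",
  "type": "WAIT"
}
```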

Don't I lose flexibility by not having orchestration in code?

You gain flexibility. Because workflows are JSON, LLMs can generate and modify them at runtime with no compile/deploy cycle. Dynamic forks let you fan out to a variable number of parallel tasks determined at runtime. Dynamic sub-workflows let one workflow compose others by name. And because workers are decoupled from orchestration, you can update the workflow graph or swap worker implementations independently. Code-first engines couple these together, so changing orchestration means redeploying and re-versioning your code.
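A dynamic fork can be sketched like this (the prep task and its output fields are illustrative): a prior task emits a list of task definitions and their inputs, and the fork fans out to however many tasks that list contains, with a JOIN collecting all of them:

```json
[
  {
    "name": "dynamic_fan_out",
    "taskReferenceName": "fan_out",
    "type": "FORK_JOIN_DYNAMIC",
    "inputParameters": {
      "dynamicTasks": "${prep.output.tasks}",
      "dynamicTasksInput": "${prep.output.taskInputs}"
    },
    "dynamicForkTasksParam": "dynamicTasks",
    "dynamicForkTasksInputParamName": "dynamicTasksInput"
  },
  {
    "name": "join_results",
    "taskReferenceName": "join_results",
    "type": "JOIN"
  }
]
```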

How does Conductor compare to other workflow engines?

Conductor is an open-source workflow engine with native LLM task types for 14+ providers, built-in MCP integration, durable execution, full replayability, and 7 language SDKs. Unlike code-first engines, Conductor separates orchestration from business logic: determinism is an architectural guarantee, not a developer constraint. Your workers are plain code with zero framework rules. The orchestration layer is declarative, so it's observable, versionable, and composable by LLMs. Battle-tested at Netflix, Tesla, LinkedIn, and J.P. Morgan.

Is Orkes Conductor compatible with Conductor OSS?

100% compatible. Orkes Conductor is built on top of Conductor OSS with full API and workflow compatibility.


Contributing

We welcome contributions from everyone!

Contributors


Roadmap

See the Conductor OSS Roadmap. Want to participate? Reach out.

License

Conductor is licensed under the Apache 2.0 License.