
Workflow Agents

Workflow agents provide deterministic orchestration without any LLM calls. They chain, parallelize, or loop over regular agents using fixed execution patterns. This means zero LLM cost for the orchestration layer itself — only the leaf agents consume tokens.

SequentialAgent chains agents in order, passing each agent's output as the next agent's input:

```rust
use heartbit::{AgentRunner, SequentialAgent};

let researcher = AgentRunner::builder(provider.clone())
    .name("researcher")
    .system_prompt("You research topics.")
    .build()?;
let writer = AgentRunner::builder(provider.clone())
    .name("writer")
    .system_prompt("You write articles.")
    .build()?;
let editor = AgentRunner::builder(provider.clone())
    .name("editor")
    .system_prompt("You edit for clarity.")
    .build()?;

let pipeline = SequentialAgent::builder()
    .agent(researcher)
    .agent(writer)
    .agent(editor)
    .build()?;

let output = pipeline.execute("Write about Rust async").await?;
```

Execution flow:

  1. researcher receives the original input
  2. writer receives the researcher’s output
  3. editor receives the writer’s output
  4. Final result is the editor’s output

Token usage is accumulated across all agents.
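
The chaining above amounts to a fold over the stages. A minimal sketch in plain Rust (illustrative only; run_pipeline and the string-wrapping stages are invented stand-ins, not heartbit API):

```rust
// Sketch of the sequential pattern: fold the input through each stage,
// so every stage receives the previous stage's output.
fn run_pipeline(stages: &[fn(&str) -> String], input: &str) -> String {
    stages.iter().fold(input.to_string(), |acc, stage| stage(&acc))
}
```

With three stages that wrap their input as `research(...)`, `write(...)`, and `edit(...)`, the pipeline yields `edit(write(research(input)))`, mirroring the researcher → writer → editor flow above.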

ParallelAgent runs agents concurrently using tokio::JoinSet. All agents receive the same input and execute simultaneously:

```rust
use heartbit::{AgentRunner, ParallelAgent};

let analyst_a = AgentRunner::builder(provider.clone())
    .name("analyst_a")
    .system_prompt("Analyze from perspective A.")
    .build()?;
let analyst_b = AgentRunner::builder(provider.clone())
    .name("analyst_b")
    .system_prompt("Analyze from perspective B.")
    .build()?;

let parallel = ParallelAgent::builder()
    .agent(analyst_a)
    .agent(analyst_b)
    .build()?;

let output = parallel.execute("Evaluate this proposal").await?;
```

Results are collected and merged in a deterministic order (sorted by agent name). If any agent fails, the entire parallel execution fails fast.
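
The fan-out, deterministic merge, and fail-fast behavior can be sketched with plain threads (illustrative only; heartbit itself uses tokio::JoinSet, and run_parallel here is an invented stand-in):

```rust
use std::thread;

// Sketch of the parallel pattern: every agent gets the same input,
// the first error aborts the whole run, and results are merged in a
// deterministic order by sorting on agent name.
fn run_parallel(
    agents: Vec<(String, fn(&str) -> Result<String, String>)>,
    input: &str,
) -> Result<Vec<(String, String)>, String> {
    // Fan out: one thread per agent, all reading a copy of the same input.
    let handles: Vec<_> = agents
        .into_iter()
        .map(|(name, agent)| {
            let input = input.to_string();
            thread::spawn(move || (name, agent(&input)))
        })
        .collect();

    // Collect results, failing fast on the first agent error.
    let mut results = Vec::new();
    for handle in handles {
        let (name, result) = handle.join().expect("agent thread panicked");
        results.push((name, result?));
    }

    // Deterministic merge order regardless of completion order.
    results.sort_by(|a, b| a.0.cmp(&b.0));
    Ok(results)
}
```

Sorting after collection is what makes the output order independent of which thread finishes first.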

LoopAgent repeats a single agent until a stop condition is met or a maximum iteration count is reached:

```rust
use heartbit::{AgentRunner, LoopAgent};

let refiner = AgentRunner::builder(provider.clone())
    .name("refiner")
    .system_prompt("Improve the text quality.")
    .build()?;

let loop_agent = LoopAgent::builder()
    .agent(refiner)
    .max_iterations(5)
    .should_stop(|text| text.contains("QUALITY: EXCELLENT"))
    .build()?;

let output = loop_agent.execute("Draft: Rust is fast.").await?;
```

On each iteration:

  1. The agent receives the previous iteration’s output (or the original input on first iteration)
  2. The should_stop function evaluates the output
  3. If it returns true or max_iterations is reached, the loop ends
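
A minimal sketch of that iteration loop (plain Rust; run_loop is an invented stand-in, not heartbit API):

```rust
// Sketch of the loop pattern: feed each iteration's output back in
// until the stop condition fires or the iteration budget runs out.
fn run_loop(
    agent: impl Fn(&str) -> String,
    should_stop: impl Fn(&str) -> bool,
    max_iterations: usize,
    input: &str,
) -> String {
    let mut current = input.to_string();
    for _ in 0..max_iterations {
        current = agent(&current);
        if should_stop(&current) {
            break;
        }
    }
    current
}
```

Note that the stop condition is checked after each iteration, so even an immediately satisfied condition still runs the agent at least once.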

All three workflow agents use the builder pattern:

| Method | Available on | Description |
| --- | --- | --- |
| `.agent(AgentRunner)` | All | Add a pre-built agent to the workflow |
| `.agents(Vec<AgentRunner>)` | Sequential, Parallel | Add multiple agents at once |
| `.max_iterations(n)` | Loop | Set maximum loop iterations |
| `.should_stop(fn)` | Loop | Set the stop condition closure |

All workflow agents return AgentOutput, the same type as regular agents:

```rust
pub struct AgentOutput {
    pub result: String,
    pub tool_calls_made: usize,
    pub tokens_used: TokenUsage,
    pub structured: Option<Value>,
    pub estimated_cost_usd: Option<f64>,
}
```

Token usage is accumulated across all agents in the workflow.
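
Accumulation is just a per-field sum over each agent's usage. A sketch, assuming TokenUsage tracks input and output token counts (the real field names may differ):

```rust
// Illustrative TokenUsage; the actual heartbit fields are assumptions here.
#[derive(Debug, Default, Clone, Copy, PartialEq)]
struct TokenUsage {
    input_tokens: u64,
    output_tokens: u64,
}

// Sum the per-agent usage into a single workflow-level total.
fn accumulate(per_agent: &[TokenUsage]) -> TokenUsage {
    per_agent.iter().fold(TokenUsage::default(), |acc, u| TokenUsage {
        input_tokens: acc.input_tokens + u.input_tokens,
        output_tokens: acc.output_tokens + u.output_tokens,
    })
}
```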

| Pattern | Use case |
| --- | --- |
| Sequential | Multi-stage pipelines: research, then write, then edit |
| Parallel | Fan-out analysis: multiple perspectives on the same input |
| Loop | Iterative refinement: keep improving until quality threshold |

Workflow agents can be composed — a SequentialAgent can contain a ParallelAgent as one of its stages, or a LoopAgent can wrap a SequentialAgent for iterative pipeline refinement.
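
Composition works because every workflow agent exposes the same execute interface, so a nested workflow is just another stage. A sketch of that idea (illustrative only; Stage, Leaf, and the string-wrapping execute bodies are invented stand-ins for heartbit's common agent interface):

```rust
// Any stage maps an input string to an output string.
trait Stage {
    fn execute(&self, input: &str) -> String;
}

// A leaf agent, tagged by name.
struct Leaf(&'static str);
impl Stage for Leaf {
    fn execute(&self, input: &str) -> String {
        format!("{}({input})", self.0)
    }
}

// Sequential: fold the input through its children.
struct Sequential(Vec<Box<dyn Stage>>);
impl Stage for Sequential {
    fn execute(&self, input: &str) -> String {
        self.0.iter().fold(input.to_string(), |acc, s| s.execute(&acc))
    }
}

// Parallel: same input to every child, merged results.
struct Parallel(Vec<Box<dyn Stage>>);
impl Stage for Parallel {
    fn execute(&self, input: &str) -> String {
        let parts: Vec<String> = self.0.iter().map(|s| s.execute(input)).collect();
        parts.join(" | ")
    }
}
```

Because Sequential and Parallel implement the same trait as Leaf, a parallel fan-out can sit as the middle stage of a sequential pipeline, exactly as described above.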

Compared to Orchestrator, workflow agents are simpler and cheaper: no LLM decides routing, no delegation tools, just deterministic execution. Use Orchestrator when you need dynamic task decomposition; use workflow agents when the execution pattern is known in advance.