# Built-in Tools

14 tools are available by default when running without a config file (env-based mode). With a config file, built-in tools are added alongside any MCP tools defined in `mcp_servers`.

| Tool | Description |
| --- | --- |
| `bash` | Execute bash commands. The working directory persists between calls. Default timeout: 120s, max: 600s. |
| `read` | Read a file with line numbers. Detects binary files. Max size: 256 KB. |
| `write` | Write content to a file. Creates parent directories. Read-before-write guard. |
| `edit` | Replace an exact string in a file (the string must appear exactly once). Read-before-write guard. |
| `patch` | Apply unified diff patches to one or more files. Single-pass hunk application. |
| `glob` | Find files matching a glob pattern. Skips hidden files. |
| `grep` | Search file contents with regex. Uses `rg` when available, falls back to a built-in search. |
| `list` | List directory contents as an indented tree. Skips common build artifacts. |
| `webfetch` | Fetch content from a URL via HTTP GET. Supports text, markdown, and HTML. Max: 5 MB. |
| `websearch` | Search the web via Exa AI. Requires `EXA_API_KEY`. |
| `todowrite` | Write/replace the full todo list. Only one item may be in progress at a time. |
| `todoread` | Read the current todo list. |
| `skill` | Load skill definitions from `SKILL.md` files. |
| `question` | Ask the user structured questions (only available when the `on_question` callback is set). |
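The read-before-write guard mentioned for `write` and `edit` can be sketched as simple session state: a file may only be written after it has been read in the same session. This is an illustrative sketch with hypothetical names (`SessionGuard`, `record_read`, `check_write`), not heartbit's actual implementation.

```rust
use std::collections::HashSet;

/// Illustrative read-before-write guard: the session tracks which paths
/// have been read, and refuses writes to any path not yet read.
struct SessionGuard {
    read_paths: HashSet<String>,
}

impl SessionGuard {
    fn new() -> Self {
        Self { read_paths: HashSet::new() }
    }

    /// Called after a successful `read` of `path`.
    fn record_read(&mut self, path: &str) {
        self.read_paths.insert(path.to_string());
    }

    /// Called before `write`/`edit`; an Err becomes the tool's error result.
    fn check_write(&self, path: &str) -> Result<(), String> {
        if self.read_paths.contains(path) {
            Ok(())
        } else {
            Err(format!("refusing to write {path}: read it first"))
        }
    }
}

fn main() {
    let mut guard = SessionGuard::new();
    assert!(guard.check_write("src/main.rs").is_err());
    guard.record_read("src/main.rs");
    assert!(guard.check_write("src/main.rs").is_ok());
}
```

The guard forces the model to see a file's current contents before modifying it, which prevents blind overwrites.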

The Blackboard is a shared key -> value store for squad agents (those dispatched via `form_squad`). Sub-agents receive three additional tools:

| Tool | Description |
| --- | --- |
| `blackboard_read` | Read a value by key from the shared blackboard. |
| `blackboard_write` | Write a key-value pair to the shared blackboard. |
| `blackboard_list` | List all keys currently in the blackboard. |

After each sub-agent completes, its result is automatically written to the blackboard under the key `agent:{name}`.
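The blackboard semantics above can be sketched with a plain in-memory map. This is a minimal sketch only: the real store is shared across concurrently running sub-agents, and the type and method names here are assumptions for illustration.

```rust
use std::collections::BTreeMap;

/// Minimal sketch of the shared blackboard: a key -> value map exposing the
/// three operations sub-agents get (read, write, list).
#[derive(Default)]
struct Blackboard {
    entries: BTreeMap<String, String>,
}

impl Blackboard {
    fn write(&mut self, key: &str, value: &str) {
        self.entries.insert(key.to_string(), value.to_string());
    }

    fn read(&self, key: &str) -> Option<&String> {
        self.entries.get(key)
    }

    fn list(&self) -> Vec<&String> {
        self.entries.keys().collect()
    }

    /// Mirrors the automatic "agent:{name}" write after a sub-agent completes.
    fn record_agent_result(&mut self, name: &str, result: &str) {
        self.write(&format!("agent:{name}"), result);
    }
}

fn main() {
    let mut bb = Blackboard::default();
    bb.write("plan", "1. research 2. draft");
    bb.record_agent_result("researcher", "findings...");
    assert_eq!(bb.read("agent:researcher").map(String::as_str), Some("findings..."));
    assert_eq!(bb.list().len(), 2);
}
```

Because results land under predictable `agent:{name}` keys, later squad members can read earlier members' output without the orchestrator relaying it.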

The orchestrator agent has two tools for dispatching work to sub-agents:

| Tool | Description |
| --- | --- |
| `delegate_task` | Dispatch independent parallel subtasks to sub-agents. |
| `form_squad` | Dispatch collaborative subtasks that share a Blackboard. |
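The `delegate_task` dispatch pattern can be sketched with plain threads. This is illustrative only: the real tool dispatches LLM sub-agents, and `dispatch_parallel` is a hypothetical name. The key property is that subtasks are independent, run in parallel, and their results are collected for the orchestrator.

```rust
use std::thread;

/// Illustrative delegate_task-style dispatch: independent subtasks run in
/// parallel; results are collected in dispatch order for the orchestrator.
fn dispatch_parallel(subtasks: Vec<String>) -> Vec<String> {
    let handles: Vec<_> = subtasks
        .into_iter()
        .map(|task| thread::spawn(move || format!("done: {task}")))
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

fn main() {
    let results = dispatch_parallel(vec!["lint".into(), "test".into()]);
    assert_eq!(results, vec!["done: lint", "done: test"]);
}
```

`form_squad` differs in that each worker would additionally receive a handle to the shared Blackboard, so subtasks can build on one another instead of running in isolation.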

Set `response_schema` in config (or `.structured_schema()` on the builder) to constrain an agent's output format. When configured:

1. A synthetic `__respond__` tool is injected into the agent's tool set.
2. The agent calls `__respond__` to produce structured JSON.
3. The result is available in `AgentOutput::structured`.

This works in both standalone and Restate execution paths.
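The `__respond__` mechanism can be sketched as follows. This is a simplified illustration, not heartbit's internals: the schema-constrained tool is just another tool, except that its input is captured as the run's structured result instead of being executed.

```rust
/// Simplified sketch: a stand-in for the real output type, which holds
/// parsed JSON rather than a raw string.
#[derive(Default)]
struct AgentOutput {
    text: String,
    structured: Option<String>,
}

/// When the model calls the synthetic __respond__ tool, its tool input
/// *is* the structured answer; nothing is executed.
fn handle_tool_call(name: &str, input: &str, out: &mut AgentOutput) {
    if name == "__respond__" {
        out.structured = Some(input.to_string());
    }
}

fn main() {
    let mut out = AgentOutput::default();
    handle_tool_call("__respond__", r#"{"answer": 42}"#, &mut out);
    assert_eq!(out.structured.as_deref(), Some(r#"{"answer": 42}"#));
    assert!(out.text.is_empty());
}
```

Routing structured output through a tool call means the provider's own tool-use machinery enforces the schema, rather than post-hoc parsing of free-form text.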

The `--approve` flag enables interactive approval before each tool execution round:

- The CLI prompts the user before tool calls are executed.
- Denied tools receive error results so the LLM can adjust and retry.
- In the Restate path, approval uses per-turn promise keys for durable waiting.
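The denial path described above can be sketched like this (illustrative; the actual types and names differ): instead of silently dropping a denied call, the runner turns the denial into an error result the model can see and react to.

```rust
/// Illustrative approval decision for a pending tool call.
enum Decision {
    Approve,
    Deny(String), // reason, surfaced to the model as an error result
}

/// Either run the tool or return an error result the LLM can adjust to.
fn resolve_tool_call(decision: Decision, run: impl FnOnce() -> String) -> Result<String, String> {
    match decision {
        Decision::Approve => Ok(run()),
        Decision::Deny(reason) => Err(format!("tool call denied by user: {reason}")),
    }
}

fn main() {
    let ok = resolve_tool_call(Decision::Approve, || "file written".to_string());
    assert_eq!(ok, Ok("file written".to_string()));

    let denied = resolve_tool_call(Decision::Deny("too risky".to_string()), || unreachable!());
    assert_eq!(denied, Err("tool call denied by user: too risky".to_string()));
}
```

Returning an error rather than aborting keeps the conversation loop alive: the model reads the denial reason and can propose a different action on its next turn.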

The `on_text` callback receives text deltas as they arrive from the LLM. Both the Anthropic and OpenRouter providers implement SSE streaming. Sub-agents do not stream; only the orchestrator or top-level agent streams text.
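A consumer of text deltas can be sketched as a closure that fires once per chunk while the caller accumulates the full response (an illustrative sketch; `stream_deltas` is a stand-in driver, and the real `on_text` callback signature may differ):

```rust
/// Illustrative driver: invokes the callback once per streamed text delta,
/// the way an SSE loop would as chunks arrive from the provider.
fn stream_deltas(deltas: &[&str], mut on_text: impl FnMut(&str)) {
    for &delta in deltas {
        on_text(delta);
    }
}

fn main() {
    let mut full = String::new();
    stream_deltas(&["Hel", "lo, ", "world"], |delta| full.push_str(delta));
    assert_eq!(full, "Hello, world");
}
```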

Structured `AgentEvent` variants are emitted via the `OnEvent` callback:

| Event | Description |
| --- | --- |
| `RunStarted` | The agent run has begun. |
| `TurnStarted` | A new reasoning turn started. |
| `LlmResponse` | The LLM returned a response. |
| `ToolCallStarted` | A tool call is about to execute. |
| `ToolCallCompleted` | A tool call finished. |
| `ApprovalRequested` | Waiting for HITL approval. |
| `ApprovalDecision` | HITL approval/denial received. |
| `SubAgentsDispatched` | The orchestrator dispatched sub-agents. |
| `SubAgentCompleted` | A sub-agent finished its task. |
| `ContextSummarized` | Context was compacted via summarization. |
| `RunCompleted` | The agent run completed successfully. |
| `GuardrailDenied` | A guardrail blocked the action. |
| `GuardrailWarned` | A guardrail issued a warning. |
| `RunFailed` | The agent run failed with an error. |
| `RetryAttempt` | The provider is retrying after a transient error. |
| `DoomLoopDetected` | Identical tool calls were detected across turns. |
| `SessionPruned` | Old tool results were pruned. |
| `AutoCompactionTriggered` | Context overflow triggered auto-compaction. |
| `ModelEscalated` | The cascade provider escalated to a higher tier. |
| `BudgetExceeded` | The token budget is exhausted. |

Use `--verbose` / `-v` to emit events as JSON to stderr.
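As an example of what sits behind one of these events, a minimal doom-loop check can be sketched as comparing the tool-call signatures of recent turns (illustrative only; this assumes the real detector compares serialized tool name + input, which is not confirmed by the source):

```rust
/// Illustrative doom-loop check: flag when the same (tool, input) signature
/// repeats across `threshold` consecutive turns.
fn detect_doom_loop(turn_signatures: &[String], threshold: usize) -> bool {
    if turn_signatures.len() < threshold {
        return false;
    }
    let tail = &turn_signatures[turn_signatures.len() - threshold..];
    tail.windows(2).all(|pair| pair[0] == pair[1])
}

fn main() {
    let turns = vec![
        r#"grep:{"pattern":"foo"}"#.to_string(),
        r#"grep:{"pattern":"foo"}"#.to_string(),
        r#"grep:{"pattern":"foo"}"#.to_string(),
    ];
    assert!(detect_doom_loop(&turns, 3));
    assert!(!detect_doom_loop(&turns[..2].to_vec(), 3));
}
```

When the check fires, emitting a `DoomLoopDetected` event lets the caller intervene (for example, by nudging the model or aborting) instead of burning tokens on identical calls.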

`estimate_cost(model, usage) -> Option<f64>` returns the estimated USD cost for known models (the Claude 4, 3.5, and 3 generations, including OpenRouter aliases). The estimate accounts for cache read/write token rates. The cost is displayed in CLI output after each run.
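The pricing arithmetic behind such an estimate can be sketched as follows. Every number and model name here is made up for illustration; heartbit's actual price table, cache multipliers, and `Usage` fields are assumptions.

```rust
/// Illustrative token usage for one run (field names are assumptions).
struct Usage {
    input_tokens: u64,
    output_tokens: u64,
    cache_read_tokens: u64,
    cache_write_tokens: u64,
}

/// Sketch of per-model cost estimation: per-million-token rates, with
/// discounted cache reads and a surcharge for cache writes. Returns None
/// for unknown models, matching the Option<f64> signature.
fn estimate_cost(model: &str, u: &Usage) -> Option<f64> {
    // (input $/Mtok, output $/Mtok): hypothetical rates for this sketch.
    let (input_rate, output_rate) = match model {
        "example-sonnet" => (3.0, 15.0),
        "example-haiku" => (0.8, 4.0),
        _ => return None,
    };
    let per_m = |tokens: u64, rate: f64| tokens as f64 / 1_000_000.0 * rate;
    Some(
        per_m(u.input_tokens, input_rate)
            + per_m(u.output_tokens, output_rate)
            + per_m(u.cache_read_tokens, input_rate * 0.1)   // cache reads discounted
            + per_m(u.cache_write_tokens, input_rate * 1.25), // cache writes cost extra
    )
}

fn main() {
    let usage = Usage {
        input_tokens: 1_000_000,
        output_tokens: 0,
        cache_read_tokens: 0,
        cache_write_tokens: 0,
    };
    assert_eq!(estimate_cost("example-sonnet", &usage), Some(3.0));
    assert_eq!(estimate_cost("mystery-model", &usage), None);
}
```

Returning `None` for unknown models keeps the CLI honest: it shows no cost rather than a wrong one.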

Implement the `Tool` trait and register it with an agent:

```rust
use heartbit::{Error, Tool, ToolDefinition, ToolOutput};
use serde_json::Value;
use std::future::Future;
use std::pin::Pin;

pub struct MyTool;

impl Tool for MyTool {
    fn definition(&self) -> ToolDefinition {
        ToolDefinition {
            name: "my_tool".into(),
            description: "Does something useful".into(),
            input_schema: serde_json::json!({
                "type": "object",
                "properties": {
                    "input": { "type": "string", "description": "The input value" }
                },
                "required": ["input"]
            }),
        }
    }

    fn execute(
        &self,
        input: Value,
    ) -> Pin<Box<dyn Future<Output = Result<ToolOutput, Error>> + Send + '_>> {
        Box::pin(async move {
            let input_str = input["input"].as_str().unwrap_or_default();
            Ok(ToolOutput::success(format!("Processed: {input_str}")))
        })
    }
}

// Register with an agent:
// AgentRunner::builder(provider).tools(vec![Arc::new(MyTool)]).build()
```