Every capability Claude Code exposes to the model — reading files, running bash commands, searching the web, calling MCP servers — is a Tool. The tool system is the bridge between the AI's natural-language reasoning and concrete side effects on your machine.
This lesson dissects that bridge: how a tool is defined, how it is registered and filtered, how the orchestration layer decides concurrency, how permissions gate execution, and how results stream back to the model.
The relevant source files:

- `Tool.ts`, `tools.ts`, `tools/utils.ts`
- `services/tools/toolOrchestration.ts`
- `services/tools/toolExecution.ts`
- `services/tools/StreamingToolExecutor.ts`
The full lifecycle of a single tool invocation runs through these stages: definition, registration and filtering, concurrency partitioning, permission gating, execution, and result streaming.
The Tool<Input, Output, P> type is a protocol contract — a TypeScript structural type every tool must satisfy. It is generic over three parameters: the Zod input schema, the output type, and the progress event shape.
```typescript
export type Tool<
  Input extends AnyObject,
  Output,
  P extends ToolProgressData
> = {
  name: string               // primary identifier the model uses
  aliases?: string[]         // legacy names for backward compat
  inputSchema: Input         // Zod schema — source of truth for validation
  maxResultSizeChars: number // overflow → persist to disk
  call(
    args: z.infer<Input>,
    context: ToolUseContext,
    canUseTool: CanUseToolFn,
    parentMessage: AssistantMessage,
    onProgress?: ToolCallProgress<P>
  ): Promise<ToolResult<Output>>
  checkPermissions(
    input: z.infer<Input>,
    context: ToolUseContext
  ): Promise<PermissionResult>
  isConcurrencySafe(input: z.infer<Input>): boolean
  isReadOnly(input: z.infer<Input>): boolean
  isDestructive?(input: z.infer<Input>): boolean
}
```
Rather than making authors supply every method, buildTool() merges a partial ToolDef with safe defaults. The defaults are conservative: assume the tool writes, assume it is not concurrency safe, and defer permission decisions to the general permission system rather than denying outright.
```typescript
// Defaults — applied when a ToolDef omits the key
const TOOL_DEFAULTS = {
  isEnabled: () => true,
  isConcurrencySafe: () => false, // conservative: assume state mutation
  isReadOnly: () => false,
  isDestructive: () => false,
  // Defer to the general permission system by default
  checkPermissions: (input) =>
    Promise.resolve({ behavior: 'allow', updatedInput: input }),
  toAutoClassifierInput: () => '', // skip security classifier
  userFacingName: () => '',
}

export function buildTool<D extends AnyToolDef>(def: D): BuiltTool<D> {
  return {
    ...TOOL_DEFAULTS,
    userFacingName: () => def.name, // sensible fallback
    ...def,
  } as BuiltTool<D>
}
```
BuiltTool<D> mirrors the runtime spread at the type level, so the return type is as narrow as possible — preserving literal types from the definition.
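The spread-ordering trick is easy to see in isolation. Below is a minimal sketch using simplified stand-in types (no Zod, hypothetical `EchoTool`): the `userFacingName` fallback sits between the defaults and the author's definition, so an author-supplied value wins but an omitted one still gets a sensible default.

```typescript
// Simplified stand-ins for the real types (hypothetical, not the actual codebase).
type ToolDef = {
  name: string
  call: (args: unknown) => Promise<unknown>
  isReadOnly?: () => boolean
  isConcurrencySafe?: () => boolean
  userFacingName?: () => string
}

const TOOL_DEFAULTS = {
  isConcurrencySafe: () => false, // fail closed: assume state mutation
  isReadOnly: () => false,
  isDestructive: () => false,
}

function buildTool<D extends ToolDef>(def: D) {
  return {
    ...TOOL_DEFAULTS,
    userFacingName: () => def.name, // fallback: overridden if def supplies one
    ...def,                         // author-supplied keys win
  }
}

// A read-only tool only overrides what it needs.
const EchoTool = buildTool({
  name: 'Echo',
  call: async (args: unknown) => args,
  isReadOnly: () => true,
})

console.log(EchoTool.userFacingName())    // "Echo" (fallback kicked in)
console.log(EchoTool.isReadOnly())        // true  (author override)
console.log(EchoTool.isConcurrencySafe()) // false (conservative default)
```

Because `...def` comes last, nothing the author writes can be silently clobbered by a default.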
A tool's call() returns a ToolResult<T>:
```typescript
export type ToolResult<T> = {
  data: T
  newMessages?: (UserMessage | AssistantMessage | ...)[]
  // Only honored for non-concurrency-safe tools
  contextModifier?: (context: ToolUseContext) => ToolUseContext
}
```
The contextModifier is how a tool (e.g., EnterPlanMode) mutates shared state without reaching into global variables. It is applied serially after the tool completes. Concurrent tools cannot use contextModifier — the comment in StreamingToolExecutor explicitly notes this as a known limitation.
The interface carries a large surface area for rendering and security integration:
- `renderToolUseMessage()` — React node shown while streaming tool input
- `renderToolResultMessage()` — React node for the result in the transcript
- `renderGroupedToolUse()` — batch rendering when multiple same-type tools run together
- `toAutoClassifierInput()` — compact representation for the security classifier; return `''` to skip
- `extractSearchText()` — for transcript search indexing; must match what renders or you get phantom hits
- `interruptBehavior()` — `'cancel'` or `'block'`: what happens when the user types while this tool runs
- `shouldDefer` / `alwaysLoad` — ToolSearch deferred-loading flags
tools.ts is the source of truth for which tools exist. It implements a three-tier assembly pipeline.
getAllBaseTools() returns every tool that could be available in the current build. Feature flags and environment variables gate conditional tools at module load time using Bun's feature() dead-code elimination:
```typescript
const REPLTool = process.env.USER_TYPE === 'ant'
  ? require('./tools/REPLTool/REPLTool.js').REPLTool
  : null

const SleepTool = feature('PROACTIVE') || feature('KAIROS')
  ? require('./tools/SleepTool/SleepTool.js').SleepTool
  : null

export function getAllBaseTools(): Tools {
  return [
    AgentTool, TaskOutputTool, BashTool,
    ...(hasEmbeddedSearchTools() ? [] : [GlobTool, GrepTool]),
    FileReadTool, FileEditTool, FileWriteTool,
    WebFetchTool, TodoWriteTool, WebSearchTool,
    // ... 30+ more tools, conditionally included
    ...(isToolSearchEnabledOptimistic() ? [ToolSearchTool] : []),
  ]
}
```
getTools() applies mode-specific filtering on top of getAllBaseTools(). A tool is removed when:

- simple mode (`CLAUDE_CODE_SIMPLE`) is active — only Bash, Read, and Edit survive
- an `alwaysDenyRules` entry in the permission context matches it
- its own `isEnabled()` returns false — each tool can veto itself

assembleToolPool() then merges the filtered built-ins with MCP tools:

```typescript
export function assembleToolPool(
  permissionContext: ToolPermissionContext,
  mcpTools: Tools,
): Tools {
  const builtInTools = getTools(permissionContext)
  const allowedMcpTools = filterToolsByDenyRules(mcpTools, permissionContext)
  // Built-ins sorted alphabetically as a prefix, then MCP tools sorted
  // alphabetically. Keeps a stable cache breakpoint between the two groups.
  const byName = (a, b) => a.name.localeCompare(b.name)
  return uniqBy(
    [...builtInTools].sort(byName).concat(allowedMcpTools.sort(byName)),
    'name',
  )
}
```
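The effect of the two-group sort and name-based dedupe can be seen in a toy version. This is a sketch with hypothetical tool names, and `uniqBy` is replaced with a Map-based first-wins dedupe:

```typescript
type NamedTool = { name: string }

// Built-ins form a sorted prefix, MCP tools a sorted suffix. Because the
// dedupe keeps the FIRST occurrence, a built-in always beats an MCP tool
// that happens to share its name.
function assemble(builtIns: NamedTool[], mcp: NamedTool[]): NamedTool[] {
  const byName = (a: NamedTool, b: NamedTool) => a.name.localeCompare(b.name)
  const merged = [...builtIns].sort(byName).concat([...mcp].sort(byName))
  const seen = new Map<string, NamedTool>()
  for (const t of merged) if (!seen.has(t.name)) seen.set(t.name, t)
  return [...seen.values()]
}

const pool = assemble(
  [{ name: 'Read' }, { name: 'Bash' }],
  [{ name: 'mcp__linear__createIssue' }, { name: 'Bash' }], // duplicate name
)
console.log(pool.map(t => t.name)) // ['Bash', 'Read', 'mcp__linear__createIssue']
```

Sorting each group independently, instead of one flat sort, means adding an MCP server never reorders the built-in prefix, so the prompt-cache breakpoint between the groups stays stable.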
When a model response contains multiple tool_use blocks, the orchestrator decides which run concurrently and which run serially using partitionToolCalls().
The rule is simple but powerful:
- Runs of adjacent tool calls whose `isConcurrencySafe(input) === true` are batched together and run in parallel.
- The input is parsed before calling `isConcurrencySafe()` — parse failures default to `false` (conservative).

```typescript
// Simplified from partitionToolCalls()
for (const toolUse of toolUseMessages) {
  const safe = isConcurrencySafe(toolUse)
  const lastBatch = acc.at(-1)
  if (safe && lastBatch?.isConcurrencySafe) {
    lastBatch.blocks.push(toolUse) // extend parallel group
  } else {
    acc.push({ isConcurrencySafe: safe, blocks: [toolUse] })
  }
}
```
Concurrent batches use the all() async-generator combinator with a concurrency ceiling from CLAUDE_CODE_MAX_TOOL_USE_CONCURRENCY (default 10).
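The combinator itself isn't reproduced here, but the ceiling behavior can be sketched with a promise-based worker pool (hypothetical `runWithLimit`; the real `all()` is an async-generator combinator):

```typescript
// Run tasks with at most `limit` in flight at once; results keep input order.
async function runWithLimit<T>(
  tasks: Array<() => Promise<T>>,
  limit: number,
): Promise<T[]> {
  const results: T[] = new Array(tasks.length)
  let next = 0
  async function worker() {
    // Each worker pulls the next unclaimed index until tasks run out.
    while (next < tasks.length) {
      const i = next++
      results[i] = await tasks[i]()
    }
  }
  await Promise.all(Array.from({ length: Math.min(limit, tasks.length) }, worker))
  return results
}

runWithLimit(['a', 'b', 'c'].map(s => async () => s.toUpperCase()), 2)
  .then(r => console.log(r)) // ['A', 'B', 'C']
```

Results are written by index, so the output order matches the request order regardless of which task finishes first — the same invariant the executor's in-order yielding preserves.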
Non-safe tools may return a contextModifier to mutate ToolUseContext (e.g., change the permission mode). Serial tools apply the modifier immediately, before the next tool runs. Concurrent tools queue their modifiers and apply them all after the batch completes:
```typescript
// Serial: apply immediately so the next tool sees the updated context
if (update.contextModifier) {
  currentContext = update.contextModifier.modifyContext(currentContext)
}

// Concurrent: queue, apply after the batch
queuedContextModifiers[toolUseID].push(modifyContext)
// ... after all concurrent tools complete:
for (const modifier of modifiers) {
  currentContext = modifier(currentContext)
}
```
StreamingToolExecutor is the real-time variant: it starts executing tools as their blocks stream in from the API, before the model's full response has finished. This is the class used in production streaming mode.
Each tool tracks a ToolStatus:

```typescript
type ToolStatus = 'queued' | 'executing' | 'completed' | 'yielded'
```

A tool is `executing` while its runToolUse() generator is being consumed; it moves to `completed` once its results are ready, and to `yielded` once those results have been emitted. Progress messages (`type: 'progress'`) are stored in `pendingProgress` and emitted immediately, out of order — they don't need to wait for the result.
```typescript
private canExecuteTool(isConcurrencySafe: boolean): boolean {
  const executing = this.tools.filter(t => t.status === 'executing')
  return (
    executing.length === 0 ||
    (isConcurrencySafe && executing.every(t => t.isConcurrencySafe))
  )
}
```
A non-safe tool must wait for all executing tools to finish. A safe tool can only join if all currently executing tools are also safe.
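A standalone version of the gate (hypothetical record shape, lifted out of the class) makes the two rules easy to exercise:

```typescript
type Executing = { isConcurrencySafe: boolean }

// Same predicate as the class method, with the executing set passed in.
function canExecuteTool(executing: Executing[], candidateIsSafe: boolean): boolean {
  return (
    executing.length === 0 ||
    (candidateIsSafe && executing.every(t => t.isConcurrencySafe))
  )
}

console.log(canExecuteTool([], false))                            // true: nothing running
console.log(canExecuteTool([{ isConcurrencySafe: true }], true))  // true: safe joins safe
console.log(canExecuteTool([{ isConcurrencySafe: true }], false)) // false: non-safe waits
console.log(canExecuteTool([{ isConcurrencySafe: false }], true)) // false: can't join non-safe
```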
The executor holds a siblingAbortController — a child of the main abort controller. When a Bash tool produces an error result, it aborts siblings:
```typescript
if (isErrorResult && tool.block.name === BASH_TOOL_NAME) {
  this.hasErrored = true
  this.erroredToolDescription = getToolDescription(tool)
  this.siblingAbortController.abort('sibling_error')
}
```
The per-tool toolAbortController bubbles non-sibling aborts up to the main query controller — critical for ExitPlanMode's "clear context + auto" flow.
Even though tools execute concurrently, results must be emitted in the order the model requested them (the model's tool_result messages are paired by ID). The executor achieves this by iterating this.tools in insertion order and yielding only when the head tool is completed:
```typescript
for (const tool of this.tools) {
  // Progress always goes through immediately
  while (tool.pendingProgress.length > 0) {
    yield { message: tool.pendingProgress.shift()! }
  }
  if (tool.status === 'completed' && tool.results) {
    tool.status = 'yielded'
    for (const msg of tool.results) yield { message: msg }
  } else if (tool.status === 'executing' && !tool.isConcurrencySafe) {
    break // head is a non-safe executing tool — must wait
  }
}
```
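The in-order constraint can be exercised with a simplified synchronous drain (hypothetical record shape, progress handling omitted): a completed tool stuck behind a still-executing non-safe head stays buffered until the head finishes.

```typescript
type Status = 'queued' | 'executing' | 'completed' | 'yielded'
type Rec = { id: string; status: Status; isConcurrencySafe: boolean; results: string[] }

function* drain(tools: Rec[]) {
  for (const tool of tools) {
    if (tool.status === 'completed') {
      tool.status = 'yielded'
      yield* tool.results
    } else if (tool.status === 'executing' && !tool.isConcurrencySafe) {
      break // head must finish before later results are emitted
    }
  }
}

const tools: Rec[] = [
  { id: 'bash-1', status: 'executing', isConcurrencySafe: false, results: [] },
  { id: 'read-1', status: 'completed', isConcurrencySafe: true, results: ['read-1 result'] },
]
console.log([...drain(tools)]) // [] — read-1 is done but blocked behind bash-1

tools[0].status = 'completed'
tools[0].results = ['bash-1 result']
console.log([...drain(tools)]) // ['bash-1 result', 'read-1 result'] — model order preserved
```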
runToolUse() is the central dispatch function. It handles unknown tools, pre-abort checks, and delegates to streamedCheckPermissionsAndCallTool() which wraps the async permission+execution flow in a Stream to multiplex progress and final results.
Input passes through several stages:

1. Schema validation: `inputSchema.safeParse(input)`. Failure returns an InputValidationError tool result immediately, including a hint for deferred tools whose schema wasn't sent.
2. Custom validation: `tool.validateInput()`, for per-tool checks (path traversal, file size limits, etc.).
3. Result mapping: `mapToolResultToToolResultBlockParam()` plus size-budget processing.

```typescript
// A shallow clone is made for hooks/canUseTool to observe
const backfilledClone =
  tool.backfillObservableInput && processedInput !== null
    ? ({ ...processedInput } as typeof processedInput)
    : null

if (backfilledClone) {
  tool.backfillObservableInput!(backfilledClone as Record<string, unknown>)
  processedInput = backfilledClone
}
```
The original parsedInput.data goes to tool.call(). Mutation of the original would alter transcript serialization and break VCR fixture hashes in tests.
The Bash tool has an internal _simulatedSedEdit field used by the permission system after user approval. If the model somehow supplies this field in a tool call, the code strips it before execution:
```typescript
if (tool.name === BASH_TOOL_NAME && '_simulatedSedEdit' in processedInput) {
  const { _simulatedSedEdit: _, ...rest } = processedInput
  processedInput = rest // field stripped, execution proceeds safely
}
```
This is a defense-in-depth measure even though Zod's strictObject should already reject the field.
streamedCheckPermissionsAndCallTool() bridges the callback-based progress API and the generator-based result API into a single AsyncIterable<MessageUpdateLazy>:
```typescript
const stream = new Stream<MessageUpdateLazy>()

checkPermissionsAndCallTool(..., progress => {
  // Progress callback → enqueue a progress message onto the stream
  stream.enqueue({ message: createProgressMessage(...) })
})
  .then(results => {
    for (const r of results) stream.enqueue(r)
  })
  .finally(() => stream.done())

return stream // AsyncIterable that yields progress, then results
```
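A minimal push-based Stream (an assumed simplification of the real class, not its actual implementation) shows the callback-to-AsyncIterable bridge:

```typescript
class Stream<T> implements AsyncIterable<T> {
  private buffer: T[] = []
  private waiters: Array<(r: IteratorResult<T>) => void> = []
  private finished = false

  // Producer side: push a value; wake a waiting consumer if there is one.
  enqueue(value: T) {
    const waiter = this.waiters.shift()
    if (waiter) waiter({ value, done: false })
    else this.buffer.push(value)
  }

  // Producer side: no more values will arrive.
  done() {
    this.finished = true
    for (const w of this.waiters.splice(0)) w({ value: undefined, done: true })
  }

  // Consumer side: drain the buffer, then park until enqueue() or done().
  async *[Symbol.asyncIterator](): AsyncIterator<T> {
    while (true) {
      if (this.buffer.length > 0) {
        yield this.buffer.shift()!
      } else if (this.finished) {
        return
      } else {
        const result = await new Promise<IteratorResult<T>>(res => this.waiters.push(res))
        if (result.done) return
        yield result.value
      }
    }
  }
}

// Progress arrives via callback while results arrive via a promise;
// the consumer sees one ordered async sequence.
const stream = new Stream<string>()
stream.enqueue('progress: permission check')
Promise.resolve(['final result'])
  .then(rs => { for (const r of rs) stream.enqueue(r) })
  .finally(() => stream.done())

;(async () => {
  for await (const msg of stream) console.log(msg)
})()
```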
The ToolPermissionContext (wrapped in DeepImmutable) flows through the entire system. It drives both registration-time filtering and runtime permission checks.
```typescript
export type ToolPermissionContext = DeepImmutable<{
  mode: PermissionMode // 'default' | 'plan' | 'bypassPermissions' | ...
  additionalWorkingDirectories: Map<string, AdditionalWorkingDirectory>
  alwaysAllowRules: ToolPermissionRulesBySource
  alwaysDenyRules: ToolPermissionRulesBySource
  alwaysAskRules: ToolPermissionRulesBySource
  isBypassPermissionsModeAvailable: boolean
  shouldAvoidPermissionPrompts?: boolean // background agents: auto-deny
  awaitAutomatedChecksBeforeDialog?: boolean
}>
```
DeepImmutable prevents any code path from accidentally mutating permissions in-place. The only way to change the context is via a contextModifier returned from ToolResult.
filterToolsByDenyRules() is applied both at registration time (the tool list visible to the model) and at MCP tool assembly time. It uses getDenyRuleForTool(), which understands MCP server-prefix rules like mcp__server that blanket-deny all tools from a server:
```typescript
export function filterToolsByDenyRules<T extends { name: string }>(
  tools: readonly T[],
  permissionContext: ToolPermissionContext,
): T[] {
  return tools.filter(tool => !getDenyRuleForTool(permissionContext, tool))
}
```
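The server-prefix semantics can be approximated like so. This is a sketch: the real rule format is richer, and the function shape here is hypothetical.

```typescript
// A rule that names a server ('mcp__linear') blanket-denies every tool
// from that server ('mcp__linear__*'); an exact-name rule denies one tool.
function getDenyRuleForTool(denyRules: string[], toolName: string): string | undefined {
  return denyRules.find(rule =>
    rule === toolName ||
    (toolName.startsWith('mcp__') && toolName.startsWith(rule + '__'))
  )
}

const tools = ['mcp__linear__createIssue', 'mcp__github__search', 'Bash']
const allowed = tools.filter(t => !getDenyRuleForTool(['mcp__linear'], t))
console.log(allowed) // ['mcp__github__search', 'Bash']
```

Appending `'__'` to the rule before the prefix check prevents a rule like `mcp__lin` from accidentally matching `mcp__linear__*`.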
Key takeaways:

- Tool is a TypeScript structural type. Any object satisfying the interface is a tool — built-in, MCP, or dynamically generated. buildTool() fills in safe fail-closed defaults so authors only override what they need.
- Assembly is a pipeline: getAllBaseTools() → getTools() → assembleToolPool(). Feature flags gate tools at load time; deny rules filter them before the model sees them; isEnabled() is the final veto. MCP tools are sorted into a separate alphabetical suffix to preserve prompt-cache stability.
- isConcurrencySafe(input) is called per-tool-call, not per-tool-type. A Bash tool running `ls` could be safe while one running `rm -rf` is not. The orchestrator partitions tool blocks into concurrent/serial batches at runtime.
- ToolPermissionContext is DeepImmutable. State changes happen via contextModifier functions returned from ToolResult, applied after tool completion. This prevents accidental cross-tool state contamination.
- Bash is special: only Bash errors abort siblings via the siblingAbortController; Bash speculatively starts the security classifier before hooks run; Bash has the _simulatedSedEdit internal field that is defense-stripped. Bash's implicit dependency chains justify treating it differently from purely-read tools.

Review questions:

- What does buildTool() default isConcurrencySafe to, and why?
- In assembleToolPool(), why are built-in tools and MCP tools sorted as two separate alphabetical groups rather than a single flat sort?
- In StreamingToolExecutor, only Bash tool errors abort sibling tools. Which code comment explains the reasoning?
- Why does backfillObservableInput() work on a clone rather than mutating parsedInput.data directly?
- A tool returns a contextModifier in its ToolResult. When is this modifier applied if the tool ran as part of a concurrent batch?