Agents
An agent is an autonomous LLM-powered process that receives a task, reasons about how to accomplish it, uses tools, and delivers a result.
The ReAct Loop
Every agent runs a ReAct (Reason + Act) loop. On each iteration:
- Context is assembled: system prompt, skills, memory, tool definitions, conversation history
- The LLM generates a response — either text or tool calls
- If tool calls: execute them, append results, loop again
- If text: stream it to the client; the conversation turn is complete
The loop runs up to maxIterations (default 25) before stopping. Each iteration is metered for billing.
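The loop above can be sketched in a few lines. This is a minimal illustration, not the real runtime: callModel, the tool registry, and the string-based history are hypothetical stand-ins for context assembly, LLM calls, and tool execution.

```typescript
type ToolCall = { name: string; args: Record<string, unknown> };
type ModelResponse = { text?: string; toolCalls?: ToolCall[] };

// Hypothetical tool registry: one tool that echoes its input.
const tools: Record<string, (args: Record<string, unknown>) => string> = {
  echo: (args) => String(args.value),
};

// Mock model: requests one tool call, then answers with text.
function callModel(history: string[]): ModelResponse {
  if (!history.some((m) => m.startsWith("tool:"))) {
    return { toolCalls: [{ name: "echo", args: { value: "hello" } }] };
  }
  return { text: "done: " + history[history.length - 1] };
}

function runAgent(task: string, maxIterations = 25): string {
  const history: string[] = [task]; // assembled context
  for (let i = 0; i < maxIterations; i++) {
    const res = callModel(history);
    if (res.toolCalls) {
      // Execute each tool call, append results, and loop again.
      for (const call of res.toolCalls) {
        history.push("tool:" + tools[call.name](call.args));
      }
      continue;
    }
    return res.text ?? ""; // a text response ends the turn
  }
  return "stopped: maxIterations reached";
}
```

Note that the text/tool-call branch is the only exit condition besides the iteration cap, which is why maxIterations matters as a safety limit.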
Configuration
- systemPrompt — the core instructions. Injected at the start of every context window
- model — which LLM to use (routed through AI Gateway)
- temperature — controls randomness (0.0–1.0)
- maxIterations — loop limit per conversation turn
- enabledTools — which tools the agent can access (filtered by plan tier)
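Taken together, a configuration might look like the sketch below. The AgentConfig interface and the specific values (model id, tool names) are illustrative assumptions, not the actual schema.

```typescript
// Hypothetical shape mirroring the configuration fields above.
interface AgentConfig {
  systemPrompt: string;   // injected at the start of every context window
  model: string;          // routed through AI Gateway
  temperature: number;    // 0.0–1.0
  maxIterations: number;  // loop limit per conversation turn
  enabledTools: string[]; // filtered by plan tier
}

const config: AgentConfig = {
  systemPrompt: "You are a helpful research assistant.",
  model: "gpt-4o",                      // illustrative model id
  temperature: 0.2,
  maxIterations: 25,                    // the documented default
  enabledTools: ["web_search", "file_read"], // illustrative tool names
};
```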
Subagents
Agency-tier agents can spawn subagents — background agent instances with their own context windows but access to the same workspace. Subagents are useful for parallel research, independent sub-tasks, or divide-and-conquer strategies.
Depth is limited to 3 levels to prevent infinite recursion. Subagent iterations are capped at 15.
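The depth and iteration limits can be enforced with a simple guard at spawn time. A sketch, with illustrative names; only the limits (3 levels, 15 iterations) come from the text above.

```typescript
const MAX_SUBAGENT_DEPTH = 3;       // documented depth limit
const SUBAGENT_MAX_ITERATIONS = 15; // documented iteration cap

// Refuse to spawn beyond the depth limit; otherwise return a config
// with the tighter subagent iteration cap applied.
function spawnSubagent(depth: number): { maxIterations: number } {
  if (depth >= MAX_SUBAGENT_DEPTH) {
    throw new Error("subagent depth limit reached");
  }
  return { maxIterations: SUBAGENT_MAX_ITERATIONS };
}
```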
Streaming
Responses stream via Server-Sent Events (SSE):
- event: text — streamed text chunk
- event: tool_start — tool call initiated
- event: tool_result — tool call completed
- event: done — conversation turn complete with stats
- event: error — error occurred
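A client consumes these as standard SSE frames: an event line, a data line, and a blank-line separator. A minimal parser sketch (a real client would read incrementally from fetch or EventSource rather than a whole string):

```typescript
type SseEvent = { event: string; data: string };

// Split a raw SSE payload into (event, data) pairs.
// Frames are separated by a blank line per the SSE format.
function parseSse(stream: string): SseEvent[] {
  const events: SseEvent[] = [];
  for (const frame of stream.split("\n\n")) {
    let event = "message"; // SSE default event name
    let data = "";
    for (const line of frame.split("\n")) {
      if (line.startsWith("event:")) event = line.slice(6).trim();
      else if (line.startsWith("data:")) data += line.slice(5).trim();
    }
    if (frame.trim()) events.push({ event, data });
  }
  return events;
}
```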