Basic Agent
This guide walks you through building agents with AgentLoop — the core execution engine of the Agents package. You will learn how to send messages, add tools, customize behavior, observe execution, and test agents without making LLM calls.
Hello World
The simplest possible agent sends a message to a language model and returns the response. AgentLoop::default() creates a loop with the default ToolCallingDriver, which connects to whatever LLM provider is configured in your environment (typically via OPENAI_API_KEY or similar). AgentState::empty() creates a fresh, immutable state with no messages, no history, and no execution context. Calling withUserMessage() returns a new state with the message appended — the original remains empty. $loop->execute($state) runs the step loop. The driver sends the message to the LLM, receives a text response with no tool calls, and the loop detects there is nothing more to do. It returns the final AgentState containing the complete execution history.
To read the assistant's reply as plain text, call finalResponse()->toString().
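Putting that together, a minimal hello-world sketch might look like this (the namespaces are assumptions; the class and method names are the ones described above):

```php
<?php

use Agents\AgentLoop;    // namespace is an assumption
use Agents\AgentState;

$loop  = AgentLoop::default();            // default ToolCallingDriver
$state = AgentState::empty()
    ->withUserMessage('What is the capital of France?');

$final = $loop->execute($state);          // runs the step loop to completion

echo $final->finalResponse()->toString(); // the model's text reply
```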
Understanding the Execution Lifecycle
Every call to execute() follows the same lifecycle:
1. Prepare execution — The loop ensures a fresh ExecutionState with a unique execution ID and sets the status to InProgress.
2. Before step — Lifecycle hooks fire. Guard hooks (step limits, token limits, time limits) check whether execution should be stopped before the next LLM call.
3. Driver step — The driver compiles messages from the current state, sends them to the LLM, receives a response, and executes any requested tool calls. The result is captured as an AgentStep.
4. After step — Lifecycle hooks fire again. Hooks can inspect the step result, transform state, or trigger summarization.
5. Continuation check — The loop evaluates whether to continue. It stops when (a) no tool calls were returned or (b) a stop signal was emitted by a hook, unless a hook explicitly forces continuation. If tool calls were present, the loop repeats from step 2.
6. After execution — Final hooks fire and the execution status is set to Completed, Stopped, or Failed.
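The lifecycle above can be summarized in simplified pseudocode (the internal names here are illustrative, not the package's actual internals):

```php
<?php
// Simplified pseudocode of the execute() lifecycle; not the real implementation.
$execution = $state->startExecution();           // 1. fresh ExecutionState, status InProgress
do {
    $hooks->beforeStep($execution);              // 2. guard hooks may signal a stop
    $step      = $driver->step($execution);      // 3. LLM call + tool execution -> AgentStep
    $execution = $execution->withStep($step);
    $hooks->afterStep($execution);               // 4. inspect / transform / summarize
} while ($step->hasToolCalls()                   // 5. continuation check
         && !$execution->shouldStop());
$hooks->afterExecution($execution);              // 6. status: Completed, Stopped, or Failed
```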
Adding a Tool
Tools give the agent the ability to act on the world. You define a tool as a callable, and the LLM decides when and how to invoke it based on the function’s name, parameter types, and docblock. FunctionTool::fromCallable() uses reflection to automatically generate the tool’s JSON schema from the callable’s signature. The function name becomes the tool name, parameter types become schema properties, and any PHPDoc @param descriptions become property descriptions. This means well-typed, well-documented functions produce high-quality tool schemas with zero manual configuration.
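For example, a well-documented callable might be registered like this (the weather function, its return value, and the namespaces are invented for illustration):

```php
<?php

use Agents\AgentLoop;
use Agents\Tools\FunctionTool; // namespace is an assumption

/**
 * Get the current temperature for a city.
 *
 * @param string $city The city name, e.g. "Paris"
 */
function get_temperature(string $city): string
{
    return "21°C in {$city}";  // stub; a real tool would call a weather API
}

// Reflection turns the signature + docblock into the tool's JSON schema.
$loop = AgentLoop::default()
    ->withTool(FunctionTool::fromCallable(get_temperature(...)));
```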
Multiple Tools
You can add multiple tools to a single loop. Each call to withTool() returns a new AgentLoop instance with the additional tool registered:
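Because each call returns a new instance, registration chains naturally (the tool functions below are hypothetical):

```php
<?php
// Each withTool() returns a fresh AgentLoop with one more tool registered.
$loop = AgentLoop::default()
    ->withTool(FunctionTool::fromCallable(get_temperature(...)))
    ->withTool(FunctionTool::fromCallable(search_web(...)))
    ->withTool(FunctionTool::fromCallable(send_email(...)));
// $loop now exposes all three tools to the model.
```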
System Prompt
A system prompt establishes the agent’s persona, instructions, and constraints. It is sent as a cached context prefix on every LLM request, so the model always has it in scope. Both withSystemPrompt() and withUserMessage() accept string|\Stringable, so you can pass xprompt Prompt objects or any Stringable directly:
Because AgentState is immutable, you can create a base state with a system prompt and reuse it across multiple conversations by calling withUserMessage() each time:
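A sketch of that pattern, using only the methods described above (prompt text is invented):

```php
<?php
// The base state is immutable, so it can be reused safely.
$base = AgentState::empty()
    ->withSystemPrompt('You are a concise assistant. Answer in one sentence.');

// Each conversation branches off the same base; $base itself never changes.
$a = $loop->execute($base->withUserMessage('What is PHP?'));
$b = $loop->execute($base->withUserMessage('What is a generator?'));
```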
Stepping Through Execution
Sometimes you need to observe or react to each step as it happens, rather than waiting for the final result. The iterate() method returns a generator that yields the state after every step:
The final yielded state is the same result you would get from calling execute().
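A minimal sketch of consuming the generator (the loop body is illustrative):

```php
<?php
$final = null;
foreach ($loop->iterate($state) as $current) {
    // $current is the full AgentState after the step that just completed;
    // inspect it, log progress, or update a UI here.
    $final = $current;
}
// $final now matches what execute($state) would have returned.
```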
Inspecting Results
The returned AgentState provides rich access to everything that happened during execution:
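Only finalResponse() is documented above; the commented accessors below are hypothetical illustrations of what "rich access" might cover, not the package's confirmed API:

```php
<?php
$final = $loop->execute($state);

echo $final->finalResponse()->toString(); // the assistant's last text reply

// Hypothetical examples of further inspection (names are guesses):
// $final->messages(); // full conversation transcript
// $final->steps();    // each AgentStep, including tool calls and results
```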
Observing Events
The AgentLoop emits events at every significant point in the lifecycle. You can listen for specific event types or wiretap all events:
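A sketch of both styles; the method names and event class are assumptions based on the description, not the confirmed API:

```php
<?php
// Hypothetical listener API, sketched from the description above.
$loop = AgentLoop::default()
    ->withEventListener(StepCompleted::class, function ($event) {
        // react to one specific event type
    })
    ->wiretap(function ($event) {
        // receives every event the loop emits
    });
```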
Customizing the Driver
Choosing a Model
By default, AgentLoop::default() uses whatever LLM provider and model are configured in your environment. To use a specific provider or model, create the driver explicitly:
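For instance (the constructor arguments, provider name, and driver-accepting AgentLoop constructor are all assumptions to be checked against the real signatures):

```php
<?php
// Assumed constructor shape; verify against the actual ToolCallingDriver API.
$driver = new ToolCallingDriver(
    provider: 'openai',   // placeholder provider name
    model: 'gpt-4o',      // placeholder model name
);

$loop = new AgentLoop($driver); // assumes a driver-accepting constructor
```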
ReAct Driver
The ReActDriver implements the Thought/Action/Observation reasoning pattern. Instead of relying on native function-calling APIs, it prompts the model to produce structured decisions about what to do next. This can be useful with models that have weaker function-calling support, or when you want the model’s reasoning to be explicitly visible:
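Swapping in the driver might look like this (the driver-accepting constructor and prompt text are assumptions):

```php
<?php
// Assumes AgentLoop can be constructed from an explicit driver.
$loop = new AgentLoop(new ReActDriver());

$final = $loop->execute(
    AgentState::empty()->withUserMessage('Plan a three-step research task.')
);
// The Thought/Action/Observation trace is recorded in the execution history.
```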
Testing Without an LLM
The FakeAgentDriver lets you write deterministic agent tests by scripting the exact sequence of responses the “model” will produce. No API keys, no network calls, no flaky tests:
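A sketch of such a test; the response-building helper (FakeResponse) and the constructor shape are hypothetical, only FakeAgentDriver itself comes from the text above:

```php
<?php

use Agents\Testing\FakeAgentDriver; // namespace is an assumption

// Scripted turns: one tool call, then a final text answer (hypothetical API).
$driver = new FakeAgentDriver([
    FakeResponse::toolCall('get_temperature', ['city' => 'Paris']),
    FakeResponse::text('It is 21°C in Paris.'),
]);

$final = (new AgentLoop($driver))->execute(
    AgentState::empty()->withUserMessage('Weather in Paris?')
);

assert($final->finalResponse()->toString() === 'It is 21°C in Paris.');
```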
Using AgentBuilder
When your agent needs multiple capabilities — tools, guards, a specific LLM, custom hooks — manual construction becomes verbose. AgentBuilder provides a declarative composition layer:
Each capability implements the CanConfigureAgent interface, so capabilities compose without needing to know about each other.
The UseGuards capability is particularly important for production use. It installs hooks that enforce step limits, token budgets, and execution time limits, preventing runaway agents from burning through your API quota. The defaults are 20 steps, 32768 tokens, and 300 seconds.
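A builder sketch under stated assumptions: the method names are guesses at the declarative surface, while the UseGuards defaults (20 steps, 32768 tokens, 300 seconds) are the documented ones:

```php
<?php
// Sketch only: the builder's method names are assumptions, not the real API.
$loop = (new AgentBuilder())
    ->withCapability(new UseGuards()) // defaults: 20 steps, 32768 tokens, 300s
    ->withTool(FunctionTool::fromCallable(get_temperature(...)))
    ->build();
```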
See AgentBuilder & Capabilities for the full list of built-in capabilities and how to create your own.
Next Steps
- AgentBuilder & Capabilities — Learn how capabilities compose and explore the full catalog (bash, file tools, subagents, summarization, task planning, structured output, and more).
- Agent Templates — Define agents in Markdown, YAML, or JSON when configuration should be data-driven.
- Session Runtime — Persist agent sessions for multi-turn chat interfaces and long-running workflows.