The PendingInference class represents a pending inference execution. It is
returned by the Inference class when you call its create() method. The request is
not sent to the underlying LLM until you actually access the response data, making
PendingInference a lazy handle over a single inference operation.
## Retrieving Text Content
The simplest way to get the model’s response is the get() method, which returns
the response content as a plain string:
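A minimal sketch of the happy path (the namespace import and the `messages:` parameter name are assumptions; adjust them to your installation):

```php
use Cognesy\Polyglot\Inference\Inference; // assumed namespace

// create() returns a PendingInference; no request is sent yet
$pending = (new Inference())->create(messages: 'What is the capital of France?');

// Accessing the result triggers the actual LLM call
echo $pending->get(); // e.g. "Paris"
```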
## Retrieving JSON Data
When you request a JSON response format, use asJsonData() to decode the content
directly into an associative array, or asJson() to get the raw JSON string:
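A hedged sketch; the `response_format` option key mirrors the common OpenAI-style shape and is an assumption here:

```php
$pending = (new Inference())->create(
    messages: 'Return the RGB components of orange as JSON.',
    options: ['response_format' => ['type' => 'json_object']], // assumed option shape
);

$data = $pending->asJsonData(); // decoded associative array
$json = $pending->asJson();     // raw JSON string
```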
## Working with InferenceResponse
For full access to every detail of the model’s reply, call response() to get the
normalized InferenceResponse object:
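For example (request construction as in the earlier sketches is an assumption; the methods used are those documented below):

```php
$response = (new Inference())
    ->create(messages: 'Hello!')
    ->response(); // normalized InferenceResponse

echo $response->content(); // the model's text output
```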
### Available InferenceResponse Methods
| Method | Returns | Description |
|---|---|---|
| content() | string | The model’s text output |
| reasoningContent() | string | Chain-of-thought / thinking content (if supported) |
| toolCalls() | ToolCalls | Collection of tool calls made by the model |
| usage() | InferenceUsage | Token counts for the request |
| finishReason() | InferenceFinishReason | Why the model stopped generating |
| responseData() | HttpResponse | The underlying raw HTTP response |
| hasContent() | bool | Whether the response contains text content |
| hasToolCalls() | bool | Whether the model made any tool calls |
| hasReasoningContent() | bool | Whether reasoning / thinking content is present |
| isPartial() | bool | Whether this is a partial (streaming) response |
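The boolean helpers make it easy to branch on what the response contains; a sketch using only the methods listed above:

```php
if ($response->hasToolCalls()) {
    $toolCalls = $response->toolCalls(); // ToolCalls collection
    // dispatch the requested tool(s) here
} elseif ($response->hasContent()) {
    echo $response->content();
}

if ($response->hasReasoningContent()) {
    error_log('Reasoning: ' . $response->reasoningContent());
}
```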
## Finish Reasons
The finishReason() method returns an InferenceFinishReason enum. Polyglot
normalizes the many vendor-specific strings into a consistent set of values:
| Value | Meaning |
|---|---|
| Stop | The model finished naturally |
| Length | Output was truncated due to token limits |
| ToolCalls | The model wants to invoke a tool |
| ContentFilter | Content was blocked by safety filters |
| Error | An error occurred during generation |
| Other | An unrecognized finish reason |
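Because the finish reason is a native PHP enum, it works well with a match expression; a sketch (the enum's namespace is an assumption):

```php
use Cognesy\Polyglot\Inference\Enums\InferenceFinishReason; // assumed namespace

$note = match ($response->finishReason()) {
    InferenceFinishReason::Stop => 'Completed normally.',
    InferenceFinishReason::Length => 'Truncated - consider raising the token limit.',
    InferenceFinishReason::ToolCalls => 'Model requested a tool invocation.',
    InferenceFinishReason::ContentFilter => 'Blocked by safety filters.',
    default => 'Finished with: ' . $response->finishReason()->name,
};
```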
## Token Usage
The InferenceUsage object provides a detailed token breakdown, including cache and
reasoning tokens:
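A sketch of reading the breakdown; the accessor names below are assumptions based on typical usage objects, not a confirmed API, so check your InferenceUsage class:

```php
$usage = $response->usage();

// Accessor names are assumptions - verify against your InferenceUsage class:
echo $usage->inputTokens;     // prompt-side tokens
echo $usage->outputTokens;    // completion-side tokens
echo $usage->reasoningTokens; // thinking tokens, if reported
echo $usage->total();         // overall count
```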
## Handling Tool Calls
When the model decides to invoke a tool, you can extract the tool call data using
asToolCallJsonData() on PendingInference, or inspect the ToolCalls collection
on the response object:
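A sketch of the response-object route (the `tools:` parameter name and the tool definition shape are assumptions):

```php
$response = (new Inference())
    ->create(
        messages: 'What is the weather in Paris?',
        tools: [$weatherToolDefinition], // assumed parameter name and shape
    )
    ->response();

if ($response->hasToolCalls()) {
    $calls = $response->toolCalls(); // ToolCalls collection
    // inspect each call's name and arguments via the collection's API
}
```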
### Quick JSON Extraction from Tool Calls
If you just need the arguments as a PHP array without inspecting the full response, use the shorthand on PendingInference:
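A sketch (request construction is assumed, as in the earlier examples):

```php
$args = (new Inference())
    ->create(
        messages: 'What is the weather in Paris?',
        tools: [$weatherToolDefinition], // assumed parameter name
    )
    ->asToolCallJsonData(); // e.g. ['location' => 'Paris'] for a single call
```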
Note: When a single tool call is present, asToolCallJsonData() returns that
call’s arguments as an array. When multiple tool calls are present, it returns
an array of all tool call data.
## Streaming Responses
For long-running completions, streaming lets you display output as it arrives. Call stream() to get an InferenceStream and consume deltas:
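A minimal streaming sketch using the deltas() iterator described below:

```php
$stream = (new Inference())
    ->create(messages: 'Write a haiku about autumn.')
    ->stream(); // InferenceStream

foreach ($stream->deltas() as $delta) {
    echo $delta->contentDelta; // print each chunk as it arrives
    flush();
}
```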
### The PartialInferenceDelta Object
Each delta yielded during streaming is a PartialInferenceDelta with the following
public properties:
| Property | Type | Description |
|---|---|---|
| contentDelta | string | New text content in this chunk |
| reasoningContentDelta | string | New reasoning content in this chunk |
| toolId | ToolCallId\|string\|null | Tool call ID |
| toolName | string | Tool name (when streaming tool calls) |
| toolArgs | string | Partial tool arguments JSON |
| finishReason | string | Set on the final delta |
| usage | ?InferenceUsage | Token usage (typically on the final delta) |
| usageIsCumulative | bool | Whether usage counts are cumulative |
### Stream Methods
The InferenceStream class provides several ways to consume and transform the
delta stream:
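The consumption options documented in this section, side by side; pick one per stream, since the stream is one-shot:

```php
$stream = $pending->stream();

// Option 1: iterate the deltas yourself
foreach ($stream->deltas() as $delta) {
    echo $delta->contentDelta;
}

// Option 2: register a per-delta callback instead of iterating
// $stream->onDelta(fn ($delta) => print($delta->contentDelta));

// Option 3: skip deltas entirely and wait for the finalized response
// $response = $stream->final();
```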
### Using the onDelta Callback
Instead of iterating manually, you can register a callback that fires for each
visible delta:
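A sketch; it assumes that final() (or another consumer) drives the stream after the callback has been registered:

```php
$stream = (new Inference())
    ->create(messages: 'Tell me a short story.')
    ->stream();

$stream->onDelta(function ($delta) {
    echo $delta->contentDelta; // fires once per visible delta
});

$response = $stream->final(); // draining the stream triggers the callback (assumption)
```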
### Stream Lifecycle
The stream is one-shot: once deltas() has been fully iterated, calling it
again throws a LogicException. If you need to replay the response, work with
the finalized InferenceResponse returned by $stream->final().
Calling final() before the stream is exhausted will automatically drain all
remaining deltas, ensuring the finalized response is complete.
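A sketch of the lifecycle in practice:

```php
$stream = $pending->stream();

foreach ($stream->deltas() as $delta) {
    echo $delta->contentDelta;
}

// Safe after full iteration; if called earlier, final() drains the rest itself
$response = $stream->final();
echo $response->content(); // full accumulated text

// Calling $stream->deltas() again here would throw a LogicException
```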
## Checking for Streaming Mode
If you need to branch your code based on whether a request was configured for streaming, use the isStreamed() method on PendingInference:
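A sketch (the option key used to enable streaming is an assumption):

```php
$pending = (new Inference())->create(
    messages: 'Summarize this document.',
    options: ['stream' => true], // assumed option key
);

if ($pending->isStreamed()) {
    foreach ($pending->stream()->deltas() as $delta) {
        echo $delta->contentDelta;
    }
} else {
    echo $pending->get();
}
```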