Advanced
Streaming
Overview
Instructor can process an LLM's streamed response and emit partial updates of the response model as new tokens arrive, so you can act on the incoming data, for example to refresh the UI, while the response is still being generated.
Example
<?php
$loader = require 'vendor/autoload.php';
$loader->add('Cognesy\\Instructor\\', __DIR__ . '/../../src/');

use Cognesy\Instructor\Enums\Mode;
use Cognesy\Instructor\Instructor;
use Cognesy\Instructor\Utils\Cli\Console;

class UserRole
{
    /** Monotonically increasing identifier */
    public int $id;
    public string $title = '';
}

class UserDetail
{
    public int $age = 0;
    public string $name = '';
    public string $location = '';
    /** @var UserRole[] */
    public array $roles = [];
    /** @var string[] */
    public array $hobbies = [];
}

// This function will be called every time a new partial update is received
function partialUpdate(UserDetail $partial) {
    // Clear the screen and move the cursor to the top
    Console::clearScreen();
    // Display the partial object
    dump($partial);
    // Wait a bit before clearing the screen to make the partial changes easier to follow.
    // Don't use this in your application :)
    // usleep(250000);
}
?>
Now we can use this data model to extract properties from a text message. As tokens are streamed from the LLM API, the `partialUpdate` function will be called with a partially populated object of type `UserDetail`, which you can use, typically to update the UI.
<?php
$text = <<<TEXT
Jason is 25 years old, he is an engineer and tech lead. He lives in
San Francisco. He likes to play soccer and climb mountains.
TEXT;

$stream = (new Instructor)->withConnection('openai')->request(
    messages: $text,
    responseModel: UserDetail::class,
    options: ['stream' => true],
    mode: Mode::Json,
)->stream();

foreach ($stream->partials() as $partial) {
    partialUpdate($partial);
}

$user = $stream->getLastUpdate();
assert($user->name === 'Jason');
assert($user->age === 25);
?>
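Keep in mind that mid-stream partials still hold their default values for fields that have not arrived yet. Below is a standalone sketch of guarding against incomplete data before rendering; `UserDetailSketch` and `renderPartial` are hypothetical names introduced for this example (a minimal copy of the data class is redeclared so the snippet runs on its own, and this helper is not part of Instructor's API):

```php
<?php
// Minimal copy of the data class so this sketch is self-contained.
class UserDetailSketch
{
    public int $age = 0;
    public string $name = '';
    /** @var string[] */
    public array $hobbies = [];
}

// Hypothetical helper: render only the fields that have arrived so far.
// Fields that are still streaming hold their default values ('' or 0),
// so we skip them rather than display empty placeholders.
function renderPartial(UserDetailSketch $partial): string
{
    $lines = [];
    if ($partial->name !== '') {
        $lines[] = "Name: {$partial->name}";
    }
    if ($partial->age !== 0) {
        $lines[] = "Age: {$partial->age}";
    }
    foreach ($partial->hobbies as $hobby) {
        $lines[] = "- {$hobby}";
    }
    return implode("\n", $lines);
}
```

A renderer like this can replace the `dump($partial)` call in the callback when you want cleaner output than a raw object dump.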