Extras
Working directly with LLMs and JSON - Tools mode
Overview
While working with the Inference class, you can also generate JSON output from the model inference. This is useful, for example, when you need to process the response in a structured way or when you want to store elements of the response in a database.
Example
In this example we use OpenAI tools mode, in which the model generates JSON containing the arguments for a function call. This way we can make the model generate a JSON object with a specific structure of parameters.
<?php
$loader = require 'vendor/autoload.php';
// register the project sources with the autoloader
$loader->add('Cognesy\\Instructor\\', __DIR__ . '/../../src/');

use Cognesy\Instructor\Enums\Mode;
use Cognesy\Instructor\Features\LLM\Inference;

$data = (new Inference)
    ->withConnection('openai') // use the 'openai' connection preset
    ->create(
        messages: [['role' => 'user', 'content' => 'What is capital of France? Respond with function call.']],
        // define the function schema the model must fill in
        tools: [[
            'type' => 'function',
            'function' => [
                'name' => 'extract_data',
                'description' => 'Extract city data',
                'parameters' => [
                    'type' => 'object',
                    'description' => 'City information',
                    'properties' => [
                        'name' => [
                            'type' => 'string',
                            'description' => 'City name',
                        ],
                        'founded' => [
                            'type' => 'integer',
                            'description' => 'Founding year',
                        ],
                        'population' => [
                            'type' => 'integer',
                            'description' => 'Current population',
                        ],
                    ],
                    'required' => ['name', 'founded', 'population'],
                    'additionalProperties' => false,
                ],
            ],
        ]],
        // force the model to call the extract_data function
        toolChoice: [
            'type' => 'function',
            'function' => [
                'name' => 'extract_data',
            ],
        ],
        options: ['max_tokens' => 64],
        mode: Mode::Tools,
    )
    // get the generated JSON (the tool call arguments)
    ->toJson();

echo "USER: What is capital of France\n";
echo "ASSISTANT:\n";
dump($data);
?>
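
As a follow-up, the returned data can be used like any other PHP value. Here is a minimal sketch, assuming `toJson()` returns the decoded tool call arguments as an associative array with the keys defined in the schema above (an assumption based on this example, not a documented guarantee):

<?php
// Sketch only: assumes $data holds the decoded arguments of the
// extract_data call, matching the JSON Schema defined above.
$name = $data['name'] ?? 'unknown';
$founded = $data['founded'] ?? 0;
$population = $data['population'] ?? 0;

echo "City: {$name}, founded: {$founded}, population: {$population}\n";
?>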