Advanced
Custom prompts
Overview
If you want to take control over the prompts Instructor sends to the LLM in different modes, you can use the prompt parameter of the request() or respond() methods. It overrides the default Instructor prompts, allowing you to fully customize how the LLM is instructed to process the input.
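The example below uses respond(); the same prompt parameter can also be passed to request(), which defers execution until get() is called. Here is a minimal sketch (assuming the request()/get() API is available in your Instructor version; User is the class defined in the example below):

<?php
use Cognesy\Instructor\Instructor;

// configure the call with a custom prompt, then execute it
$user = (new Instructor)
    ->request(
        messages: "Our user Jason is 25 years old.",
        responseModel: User::class,
        prompt: "Extract correct and accurate data from the messages.",
    )
    ->get();
?>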
Example
<?php
$loader = require 'vendor/autoload.php';
$loader->add('Cognesy\\Instructor\\', __DIR__ . '/../../src/');

use Cognesy\Instructor\Enums\Mode;
use Cognesy\Instructor\Events\HttpClient\RequestSentToLLM;
use Cognesy\Instructor\Instructor;

class User {
    public int $age;
    public string $name;
}

$instructor = (new Instructor)
    // let's dump the request data to see how the customized prompts look in requests
    ->onEvent(RequestSentToLLM::class, fn(RequestSentToLLM $event) => dump($event));

print("\n# Request for Mode::Tools:\n\n");
$user = $instructor
    ->respond(
        messages: "Our user Jason is 25 years old.",
        responseModel: User::class,
        prompt: "\nYour task is to extract correct and accurate data from the messages using provided tools.\n",
        mode: Mode::Tools
    );
echo "\nRESPONSE:\n";
dump($user);

print("\n# Request for Mode::Json:\n\n");
$user = $instructor
    ->respond(
        messages: "Our user Jason is 25 years old.",
        responseModel: User::class,
        prompt: "\nYour task is to respond correctly with JSON object. Response must follow JSONSchema:\n<|json_schema|>\n",
        mode: Mode::Json
    );
echo "\nRESPONSE:\n";
dump($user);

print("\n# Request for Mode::MdJson:\n\n");
$user = $instructor
    ->respond(
        messages: "Our user Jason is 25 years old.",
        responseModel: User::class,
        prompt: "\nYour task is to respond correctly with strict JSON object containing extracted data within a ```json {} ``` codeblock. Object must validate against this JSONSchema:\n<|json_schema|>\n",
        mode: Mode::MdJson
    );
echo "\nRESPONSE:\n";
dump($user);
?>
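Note the <|json_schema|> placeholder used in the Mode::Json and Mode::MdJson prompts. As far as we can tell, Instructor substitutes it with the JSON Schema generated from the response model, so the LLM receives the concrete schema of the User class. If your custom prompt omits the placeholder, no schema is injected into the prompt text, so make sure the model still gets enough information to produce the expected structure.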