Prompting - Miscellaneous
Chain of Thought
Overview
This approach to “chain of thought” improves data quality by eliciting the LLM's reasoning: the model is prompted to explain how it arrives at the answer before generating the response.
With Instructor you can achieve a ‘modular’ CoT, where the LLM generates separate explanations for different parts of the response, giving you more granular control over the output and a way to improve its quality.
Example
<?php
require 'examples/boot.php';

use Cognesy\Instructor\Features\Schema\Attributes\Instructions;
use Cognesy\Instructor\Instructor;

class Employee {
    // The reasoning field comes first, so the model "thinks" before
    // it fills in the actual data fields.
    #[Instructions('Think step by step to determine the correct year of employment.')]
    public string $reasoning;
    public int $yearOfEmployment;
    // ... other data fields of your employee class
}

$text = 'He was working here for 5 years. Now, in 2019, he is a manager.';

$employee = (new Instructor)->respond(
    messages: [['role' => 'user', 'content' => $text]],
    responseModel: Employee::class
);

dump($employee);

// 5 years of employment before 2019 gives 2014.
assert($employee->yearOfEmployment === 2014);
?>
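
The ‘modular’ aspect mentioned in the overview means you can attach several reasoning fields to a single response model, one per data field you want the model to think through. Below is a minimal sketch of how that could look; the EmployeeRecord class and its field names are hypothetical and not part of the original example, while the #[Instructions] attribute and the respond() call follow the same API as above.

<?php
require 'examples/boot.php';

use Cognesy\Instructor\Features\Schema\Attributes\Instructions;
use Cognesy\Instructor\Instructor;

// Hypothetical class illustrating 'modular' CoT: each reasoning field
// elicits a separate explanation for a different part of the response.
class EmployeeRecord {
    #[Instructions('Think step by step to determine the correct year of employment.')]
    public string $employmentReasoning;
    public int $yearOfEmployment;

    #[Instructions('Explain how you determined the current role of the employee.')]
    public string $roleReasoning;
    public string $currentRole;
}

$text = 'He was working here for 5 years. Now, in 2019, he is a manager.';

$employee = (new Instructor)->respond(
    messages: [['role' => 'user', 'content' => $text]],
    responseModel: EmployeeRecord::class
);

dump($employee);
?>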