Prompting - Few-Shot Prompting
Generate In-Context Examples
Overview
How can we generate examples for our prompt?
Self-Generated In-Context Learning (SG-ICL) is a technique in which the LLM itself generates the demonstration examples for a task, rather than relying on hand-written ones. The generated input-output pairs are then included in the prompt, enabling in-context learning.
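To make the idea concrete, here is a minimal plain-PHP sketch (no libraries) of how self-generated input/output pairs end up in the prompt: each pair is rendered ahead of the actual query. The review strings here are hypothetical placeholders standing in for LLM-generated samples.

```php
<?php
// Hypothetical self-generated examples: (input, output) pairs.
$examples = [
    ['input' => 'Review: Loved every minute of it.', 'output' => 'positive'],
    ['input' => 'Review: A tedious, forgettable film.', 'output' => 'negative'],
];

// Render the examples into the prompt, followed by the actual query.
$prompt = '';
foreach ($examples as $example) {
    $prompt .= "{$example['input']}\nSentiment: {$example['output']}\n\n";
}
$prompt .= "Review: This movie has been very impressive.\nSentiment:";

echo $prompt;
```

Instructor automates exactly this step: the `examples:` parameter of `respond()` accepts such pairs and renders them into the request for you.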
We can implement SG-ICL with Instructor as shown below.
Example
<?php
require 'examples/boot.php';

use Cognesy\Instructor\Extras\Scalar\Scalar;
use Cognesy\Instructor\Extras\Sequence\Sequence;
use Cognesy\Instructor\Features\Core\Data\Example;
use Cognesy\Instructor\Instructor;

enum ReviewSentiment : string {
    case Positive = 'positive';
    case Negative = 'negative';
}

class GeneratedReview {
    public string $review;
    public ReviewSentiment $sentiment;
}

class PredictSentiment {
    private int $n = 4;

    // Classify the review, using self-generated examples for in-context learning.
    public function __invoke(string $review) : ReviewSentiment {
        return (new Instructor)->respond(
            messages: [
                ['role' => 'user', 'content' => "Review: {$review}"],
            ],
            responseModel: Scalar::enum(ReviewSentiment::class),
            examples: $this->generateExamples($review),
        );
    }

    // Ask the LLM to generate $n sample reviews with the requested sentiment.
    private function generate(string $inputReview, ReviewSentiment $sentiment) : array {
        return (new Instructor)->respond(
            messages: [
                ['role' => 'user', 'content' => "Generate {$this->n} various {$sentiment->value} reviews based on the input review:\n{$inputReview}"],
                ['role' => 'user', 'content' => "Generated review:"],
            ],
            responseModel: Sequence::of(GeneratedReview::class),
        )->toArray();
    }

    // Build in-context examples from the generated positive and negative reviews.
    private function generateExamples(string $inputReview) : array {
        $examples = [];
        foreach ([ReviewSentiment::Positive, ReviewSentiment::Negative] as $sentiment) {
            $samples = $this->generate($inputReview, $sentiment);
            foreach ($samples as $sample) {
                $examples[] = Example::fromData($sample->review, $sample->sentiment->value);
            }
        }
        return $examples;
    }
}

$predictSentiment = (new PredictSentiment)('This movie has been very impressive, even considering I lost half of the plot.');

dump($predictSentiment);
?>