Prompting - Zero-Shot Prompting
Auto-Refine The Prompt
Overview
How do we remove irrelevant information from the prompt?
The S2A (System 2 Attention) technique auto-refines a prompt by asking the model to rewrite it so that it contains only the information relevant to the task.
We implement this in two steps:
- Ask the model to rewrite the original prompt, extracting only the relevant context and the user's actual question
- Pass the rewritten, distraction-free prompt back to the model to produce the final answer
Example
<?php
require 'examples/boot.php';

use Cognesy\Instructor\Extras\Scalar\Scalar;
use Cognesy\Instructor\Features\Schema\Attributes\Description;
use Cognesy\Instructor\Instructor;

class RewrittenTask {
    #[Description("Relevant context")]
    public string $relevantContext;
    #[Description("The question from the user")]
    public string $userQuery;
}

class RefineAndSolve {
    private string $prompt = <<<PROMPT
        Given the following text by a user, extract the part
        that is actually relevant to their question. Please
        include the actual question or query that the user
        is asking.

        Text by user:
        {query}
        PROMPT;

    public function __invoke(string $problem) : int {
        $rewrittenPrompt = $this->rewritePrompt($problem);
        return (new Instructor)->respond(
            messages: "{$rewrittenPrompt->relevantContext}\nQuestion: {$rewrittenPrompt->userQuery}",
            responseModel: Scalar::integer('answer'),
        );
    }

    private function rewritePrompt(string $query) : RewrittenTask {
        return (new Instructor)->respond(
            messages: str_replace('{query}', $query, $this->prompt),
            responseModel: RewrittenTask::class,
            model: 'gpt-4o',
        );
    }
}

$answer = (new RefineAndSolve)(problem: <<<PROBLEM
    Mary has 3 times as much candy as Megan.
    Mary then adds 10 more pieces of candy to her collection.
    Max is 5 years older than Mary.
    If Megan has 5 pieces of candy, how many does Mary have in total?
    PROBLEM,
);

echo $answer . "\n";
?>
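For reference, the expected answer is 25: Megan has 5 pieces, so Mary starts with 3 × 5 = 15 and ends with 15 + 10 = 25; the sentence about Max's age is the distraction S2A should strip out. The prompt plumbing of the two steps can also be exercised without an LLM call — a minimal sketch using plain string helpers (the function names below are illustrative, not part of the Instructor API):

```php
<?php
// Step 1 plumbing: fill the rewrite template with the raw user query,
// exactly as rewritePrompt() does with str_replace().
function buildRewritePrompt(string $template, string $query): string {
    return str_replace('{query}', $query, $template);
}

// Step 2 plumbing: compose the final message from the fields the model
// returns in RewrittenTask (relevant context + extracted question).
function buildFinalMessage(string $relevantContext, string $userQuery): string {
    return "{$relevantContext}\nQuestion: {$userQuery}";
}

$template = "Extract only the relevant part.\nText by user:\n{query}";

$step1 = buildRewritePrompt(
    $template,
    "Mary has 3 times as much candy as Megan. Max is 5 years older than Mary."
);

$step2 = buildFinalMessage(
    "Mary has 3 times as much candy as Megan. Megan has 5 pieces.",
    "How many pieces does Mary have in total?"
);

echo $step2, "\n";
```

In the real example, the model itself fills in `relevantContext` and `userQuery`, so the distractor sentence never reaches the second call.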
References
- Weston, J., & Sukhbaatar, S. (2023). System 2 Attention (is something you might need too). arXiv:2311.11829