Prompting - Miscellaneous
Reflection Prompting
Overview
This implementation of Reflection Prompting with Instructor provides a structured way to encourage an LLM to engage in a more thorough and self-critical reasoning process, which can lead to higher quality and more reliable outputs. The response model walks the model through assessing the problem, adopting an expert persona, reasoning step by step, and critiquing its own reasoning before committing to a final answer; self-validation rejects responses that skip the reflection or provide too few reasoning steps.
Example
<?php
require 'examples/boot.php';

use Cognesy\Instructor\Features\Schema\Attributes\Instructions;
use Cognesy\Instructor\Features\Validation\Contracts\CanValidateSelf;
use Cognesy\Instructor\Features\Validation\ValidationResult;
use Cognesy\Instructor\Instructor;
use Cognesy\Polyglot\LLM\Enums\Mode;

// Response model that guides the LLM through assessment, persona-based reasoning, and self-reflection
class ReflectiveResponse implements CanValidateSelf {
    #[Instructions('Is the problem solvable, and what domain expertise does it require')]
    public string $assessment;

    #[Instructions('Describe an expert persona who would be able to solve this problem, their skills and experience')]
    public string $persona;

    #[Instructions("Initial analysis and expert persona's approach to the problem")]
    public string $initialThinking;

    /** @var string[] */
    #[Instructions('Steps of reasoning leading to the final answer - expert persona thinking through the problem')]
    public array $chainOfThought;

    #[Instructions('Critical examination of the reasoning process - what could go wrong, what are the assumptions')]
    public string $reflection;

    #[Instructions('Final answer after reflection')]
    public string $finalOutput;

    // Validation method to ensure thorough reflection
    public function validate(): ValidationResult {
        $errors = [];
        if (empty($this->reflection)) {
            $errors[] = "Reflection is required for a thorough response.";
        }
        if (count($this->chainOfThought) < 2) {
            $errors[] = "Please provide at least two steps in the chain of thought.";
        }
        return ValidationResult::make($errors);
    }
}

$problem = 'Solve the equation x+y=x-y';

$solution = (new Instructor)->withConnection('anthropic')->respond(
    messages: $problem,
    responseModel: ReflectiveResponse::class,
    mode: Mode::MdJson,
    options: ['max_tokens' => 2048]
);

print("Problem:\n$problem\n\n");
dump($solution);
?>
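Because ReflectiveResponse validates itself, this pattern combines naturally with Instructor's validation-based self-correction. As a minimal sketch (assuming respond() accepts a maxRetries parameter, as in Instructor's automatic correction examples), the call below would let Instructor feed validation errors back to the model and request a corrected response when the reflection is missing or the chain of thought is too short:

$solution = (new Instructor)->withConnection('anthropic')->respond(
    messages: $problem,
    responseModel: ReflectiveResponse::class,
    maxRetries: 2, // assumed parameter: re-prompt with validation errors up to 2 times
    mode: Mode::MdJson,
    options: ['max_tokens' => 2048]
);

Mode::MdJson asks the model to return JSON inside a Markdown code block, which is a reasonable default for providers that do not offer a dedicated JSON response mode.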