Instructor for PHP
LLM Inference and Embeddings \ LLM Advanced
Parallel Calls
Overview
Work in progress.
Example
<?php
echo "This example is not yet implemented.\n";
?>
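Until the library example lands, here is a generic sketch of the underlying idea: issuing several HTTP requests concurrently from plain PHP with `curl_multi`. This is not the Instructor for PHP API; the endpoint URL, prompts, and request payload shape are placeholders for illustration only.

```php
<?php
// Hypothetical endpoint and prompts; replace with your provider's API.
$endpoint = 'https://api.example.com/v1/chat/completions';
$prompts  = ['Summarize document A.', 'Summarize document B.'];

$multi   = curl_multi_init();
$handles = [];

// Register one handle per request; none of them start until we drive the multi handle.
foreach ($prompts as $i => $prompt) {
    $ch = curl_init($endpoint);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_HTTPHEADER, ['Content-Type: application/json']);
    curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode(['prompt' => $prompt]));
    curl_multi_add_handle($multi, $ch);
    $handles[$i] = $ch;
}

// Drive all transfers in parallel until every request has completed.
do {
    $status = curl_multi_exec($multi, $running);
    if ($running) {
        curl_multi_select($multi); // wait for activity instead of busy-looping
    }
} while ($running && $status === CURLM_OK);

// Collect responses in the same order the prompts were registered.
$responses = [];
foreach ($handles as $i => $ch) {
    $responses[$i] = curl_multi_getcontent($ch);
    curl_multi_remove_handle($multi, $ch);
    curl_close($ch);
}
curl_multi_close($multi);
```

The point of the pattern is that total latency approaches the slowest single request rather than the sum of all of them, which matters for LLM calls where each round trip can take seconds.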