Configuration
After publishing the configuration file with `php artisan vendor:publish --tag=instructor-config`, you will find it at `config/instructor.php`. This file controls every aspect of the package: LLM provider connections, extraction behavior, HTTP transport, logging, event bridging, code-agent execution, native-agent runtime boundaries, and response caching.
This is Laravel-native configuration. The Laravel integration reads `config('instructor.*')` through Laravel’s config repository and converts those arrays into typed runtime config objects internally. It does not use the standalone `packages/config` YAML loader to parse `config/instructor.php`.
Default Connection
The default connection is used whenever a call does not specify one explicitly. Override it per call with `->connection('name')` on any facade, or by passing an `LLMConfig` object via `->fromConfig(...)`.
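As a sketch, the relevant fragment of `config/instructor.php` might look like the following. The `default` key name is an assumption inferred from the `INSTRUCTOR_CONNECTION` environment variable documented below; verify it against your published file.

```php
<?php
// config/instructor.php (fragment, illustrative; key name assumed)
return [
    // Connection used when a call does not specify one explicitly
    'default' => env('INSTRUCTOR_CONNECTION', 'openai'),

    // ... connection definitions, extraction settings, etc.
];
```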
Connections
Configure multiple LLM provider connections. Each connection defines its driver, API credentials, default model, and token limits. You can define as many connections as you need and switch between them at runtime.
Supported Drivers
| Driver | Provider | Description |
|---|---|---|
| `openai` | OpenAI | GPT-4, GPT-4o, GPT-4o-mini |
| `anthropic` | Anthropic | Claude 3, Claude 3.5, Claude 4 |
| `azure` | Azure OpenAI | Azure-hosted OpenAI models |
| `gemini` | Google | Gemini 1.5, Gemini 2.0 |
| `mistral` | Mistral AI | Mistral, Mixtral models |
| `groq` | Groq | Fast inference with Llama, Mixtral |
| `cohere` | Cohere | Command models |
| `deepseek` | DeepSeek | DeepSeek models |
| `ollama` | Ollama | Local open-source models |
| `perplexity` | Perplexity | Perplexity models |
Adding a Custom Connection
Any OpenAI-compatible API can be used by setting the `openai` driver and pointing `api_url` to your endpoint. Extra keys beyond the standard set (`driver`, `api_url`, `api_key`, `endpoint`, `model`, `max_tokens`, `options`) are automatically merged into the `options` array and forwarded with each request.
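The merge rule can be illustrated in plain PHP. This is an illustrative sketch of the described behavior, not the package's actual implementation; the endpoint and extra keys are hypothetical.

```php
<?php
// Keys outside this standard set are folded into the 'options' array
$standard = ['driver', 'api_url', 'api_key', 'endpoint', 'model', 'max_tokens', 'options'];

$connection = [
    'driver'      => 'openai',
    'api_url'     => 'https://my-gateway.example.com/v1', // hypothetical endpoint
    'api_key'     => 'sk-...',
    'model'       => 'my-model',
    'max_tokens'  => 1024,
    'options'     => ['temperature' => 0.2],
    'top_p'       => 0.9,   // non-standard key...
    'safe_prompt' => true,  // ...also non-standard
];

// Collect the non-standard keys and merge them into 'options'
$extras  = array_diff_key($connection, array_flip($standard));
$options = array_merge($connection['options'] ?? [], $extras);

print_r($options); // temperature, top_p, and safe_prompt together
```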
Embeddings Connections
Configure embedding model connections separately from inference connections. The `embeddings` section has its own `default` key and connection definitions.
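A minimal sketch of the `embeddings` section, assuming it mirrors the layout of the inference connections; the `connections` key and model name are assumptions, so check your published file.

```php
<?php
// config/instructor.php (fragment, illustrative; structure assumed)
return [
    'embeddings' => [
        'default' => 'openai',
        'connections' => [
            'openai' => [
                'driver'  => 'openai',
                'api_key' => env('OPENAI_API_KEY'),
                'model'   => 'text-embedding-3-small',
            ],
        ],
    ],
];
```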
Extraction Settings
Configure defaults for structured output extraction. These values apply to every `StructuredOutput` call unless overridden at runtime.
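For illustration, the extraction defaults might be configured like this. The `extraction` namespace and key names are assumed from the environment variables documented below; verify them against your published file.

```php
<?php
// config/instructor.php (fragment, illustrative; key names assumed)
return [
    'extraction' => [
        // One of: json_schema, json, tools, md_json (see Output Modes)
        'output_mode' => env('INSTRUCTOR_OUTPUT_MODE', 'json_schema'),
        // How many times to re-ask the LLM when validation fails
        'max_retries' => (int) env('INSTRUCTOR_MAX_RETRIES', 2),
    ],
];
```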
Output Modes
The output mode controls how the package instructs the LLM to produce structured output. Different providers have varying levels of support for each mode.

| Mode | Description | Best For |
|---|---|---|
| `json_schema` | Uses JSON Schema for structured output | Most reliable; recommended for OpenAI |
| `json` | Simple JSON mode without schema enforcement | Fallback for models that lack schema support |
| `tools` | Uses tool/function calling to extract structured data | Alternative approach; good cross-provider support |
| `md_json` | Markdown-wrapped JSON | Useful for Gemini and similar models |
HTTP Client Settings
Configure the underlying HTTP transport. The Laravel package ships with its own `LaravelDriver` that wraps Laravel’s HTTP client (`Illuminate\Http\Client\Factory`), which means `Http::fake()` works transparently in your tests.
The package binds `Cognesy\Http\Contracts\CanSendHttpRequests` to the Laravel-backed HTTP transport. All higher-level services (`Inference`, `Embeddings`, `StructuredOutput`) depend on that contract, ensuring consistent HTTP behavior across the entire package.
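Because the transport goes through Laravel's HTTP client, `Http::fake()` intercepts outgoing requests in tests. A sketch; the faked response body is a simplified stand-in for a real provider payload, and the URL pattern assumes the default OpenAI endpoint.

```php
<?php
use Illuminate\Support\Facades\Http;

// Fake every outgoing request so no real API call is made
Http::fake([
    'api.openai.com/*' => Http::response([
        'choices' => [
            ['message' => ['content' => '{"name": "Jane"}']],
        ],
    ], 200),
]);

// ... invoke your extraction code here, then assert on the request
Http::assertSent(fn ($request) => str_contains($request->url(), 'api.openai.com'));
```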
Logging Settings
The package includes a logging pipeline that automatically enriches log entries with Laravel request context (request ID, authenticated user, route, URL).
Logging Presets
| Preset | Description |
|---|---|
| `default` | Development-friendly logging with request enrichment, native-agent and AgentCtrl templates, and noisy streaming events excluded by default |
| `production` | Minimal logging at warning level and above; excludes verbose HTTP, partial-response, and streaming delta events for lower overhead |
| `custom` | Fully configurable pipeline: supply your own `channel`, `level`, `exclude_events`, `include_events`, and `templates` arrays |
The `default` and `production` presets automatically attach lazy enrichers that add the current HTTP request context (request ID, user ID, session ID, route, method, URL) to every log record.
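A sketch of a `custom` preset using the keys listed in the table above; the surrounding `logging` structure and the commented event class are assumptions, so verify against your published file.

```php
<?php
// config/instructor.php (fragment, illustrative; structure assumed)
return [
    'logging' => [
        'enabled' => env('INSTRUCTOR_LOGGING_ENABLED', true),
        'preset'  => 'custom',
        'channel' => env('INSTRUCTOR_LOG_CHANNEL', 'stack'),
        'level'   => env('INSTRUCTOR_LOG_LEVEL', 'warning'),
        // Silence noisy events, keep everything else
        'exclude_events' => [
            // \Cognesy\Instructor\Events\SomeStreamEvent::class, // placeholder name
        ],
        'include_events' => [],
        'templates'      => [],
    ],
];
```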
Events Settings
Configure how Instructor’s internal events are bridged to Laravel’s event dispatcher. When `bridge_events` is empty (the default), every Instructor event is forwarded to Laravel. To limit traffic, list only the event classes you care about. See the Events guide for the full list of available events and listener examples.
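As a sketch, limiting the bridge to specific event classes might look like this. The `events` wrapper and `enabled` key are assumptions inferred from `INSTRUCTOR_DISPATCH_EVENTS`, and the listed class name is a placeholder; see the Events guide for real class names.

```php
<?php
// config/instructor.php (fragment, illustrative; structure assumed)
return [
    'events' => [
        'enabled' => env('INSTRUCTOR_DISPATCH_EVENTS', true),
        // Empty array = forward every Instructor event to Laravel
        'bridge_events' => [
            // \Cognesy\Instructor\Events\SomeEvent::class, // placeholder
        ],
    ],
];
```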
Cache Settings
Configure response caching to avoid redundant API calls for identical inputs.
Native Agents Settings
The `agents` namespace is reserved for native `Cognesy\Agents` runtime integration. This keeps native runtime, persistence, and observability settings separate from AgentCtrl code-agent execution.
The package resolves `tools` and `capabilities` through the container before registration, so constructor dependencies are supported. `definitions` accepts file or directory paths. `schemas` accepts class strings or schema-definition arrays.
If you prefer explicit service-provider wiring, use the first-party tag constants in `Cognesy\Instructor\Laravel\Agents\AgentRegistryTags` for definitions, tools, capabilities, and tagged `SchemaRegistration` objects.
For long-lived sessions, switch `session_store` to `database` and publish the package migration with `php artisan vendor:publish --tag=instructor-migrations`.
For broadcast envelopes, enable `agents.broadcasting.enabled` and set the Laravel broadcasting connection you want to use.
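Putting the persistence and broadcasting settings together, a sketch of the `agents` namespace; only `session_store` and `broadcasting.enabled` come from this page, while the `connection` key and its value are assumptions.

```php
<?php
// config/instructor.php (fragment, illustrative; structure partly assumed)
return [
    'agents' => [
        // 'database' requires the published instructor-migrations
        'session_store' => 'database',
        'broadcasting' => [
            'enabled'    => true,
            'connection' => env('BROADCAST_CONNECTION', 'pusher'), // key name assumed
        ],
    ],
];
```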
Code Agents (AgentCtrl) Settings
AgentCtrl now reads from the dedicated `agent_ctrl` namespace. The facade still falls back to the legacy `agents` key, so existing published configs continue to work.
Telemetry Settings
The `telemetry` namespace is the first-class home for Laravel telemetry wiring.
The `driver` key accepts `null`, `otel`, `langfuse`, `logfire`, or `composite`. The package binds telemetry, projector composition, and the runtime event bridge from this namespace automatically.
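A sketch of the `telemetry` namespace with the documented `driver` values; the environment variable name used here is hypothetical.

```php
<?php
// config/instructor.php (fragment, illustrative)
return [
    'telemetry' => [
        // One of: null, 'otel', 'langfuse', 'logfire', 'composite'
        'driver' => env('INSTRUCTOR_TELEMETRY_DRIVER'), // hypothetical variable name
    ],
];
```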
Environment Variables Reference
| Variable | Default | Description |
|---|---|---|
| `INSTRUCTOR_CONNECTION` | `openai` | Default LLM connection |
| `INSTRUCTOR_OUTPUT_MODE` | `json_schema` | Output mode for extraction |
| `INSTRUCTOR_MAX_RETRIES` | `2` | Max validation retry attempts |
| `INSTRUCTOR_HTTP_DRIVER` | `laravel` | HTTP client driver |
| `INSTRUCTOR_HTTP_TIMEOUT` | `120` | Request timeout (seconds) |
| `INSTRUCTOR_HTTP_CONNECT_TIMEOUT` | `30` | Connection timeout (seconds) |
| `INSTRUCTOR_LOGGING_ENABLED` | `true` | Enable logging |
| `INSTRUCTOR_LOG_CHANNEL` | `stack` | Laravel log channel |
| `INSTRUCTOR_LOG_LEVEL` | `warning` | Minimum log level |
| `INSTRUCTOR_LOGGING_PRESET` | `production` | Logging preset |
| `INSTRUCTOR_DISPATCH_EVENTS` | `true` | Bridge events to Laravel |
| `INSTRUCTOR_CACHE_ENABLED` | `false` | Enable response caching |
| `OPENAI_API_KEY` | — | OpenAI API key |
| `ANTHROPIC_API_KEY` | — | Anthropic API key |
Runtime Configuration
Override any configuration at runtime using the fluent API on the facades, or construct an `LLMConfig` object and pass it directly via `->fromConfig(...)`.
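As a sketch, both runtime overrides named on this page might be used like this. Only `->connection(...)` and `->fromConfig(...)` are taken from this page; the `with(...)`/`get()` extraction calls, the `Person` class, and the `LLMConfig` constructor arguments are assumptions to be confirmed against the package API.

```php
<?php
// Per-call connection override via the fluent API
$result = StructuredOutput::connection('anthropic')
    ->with(messages: $text, responseModel: Person::class) // call signature assumed
    ->get();

// Passing a prebuilt LLMConfig instead
$config = new LLMConfig(/* provider settings */);
$result = StructuredOutput::fromConfig($config)
    ->with(messages: $text, responseModel: Person::class)
    ->get();
```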