Getting started with Instructor requires two things:
  1. Install the cognesy/instructor-struct package
  2. Provide LLM provider credentials

Installation

composer require cognesy/instructor-struct
# @doctest id="c625"
Instructor requires PHP 8.3 or later.

Providing API Keys

Instructor reads provider credentials from environment variables. The simplest approach is to set them in your shell or a .env file at the root of your project:
# .env
OPENAI_API_KEY=sk-your-key-here
# @doctest id="31ab"
For other providers, set the corresponding variable:
ANTHROPIC_API_KEY=your-key
GEMINI_API_KEY=your-key
GROQ_API_KEY=your-key
MISTRAL_API_KEY=your-key
# @doctest id="42f9"
Never commit API keys to version control. Add .env to your .gitignore file.
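Instructor reads these variables at request time, so a misconfigured deployment only fails at the first LLM call. To fail fast at bootstrap instead, you can add a small guard of your own — requireEnv below is a hypothetical helper, not part of Instructor:

```php
<?php

// Hypothetical helper (not part of Instructor): fetch a required
// environment variable or fail with a descriptive error.
function requireEnv(string $name): string
{
    $value = getenv($name);
    if ($value === false || $value === '') {
        throw new RuntimeException("Missing environment variable: {$name}");
    }
    return $value;
}
```

Calling requireEnv('OPENAI_API_KEY') once during application startup surfaces a missing key immediately, with a clearer message than a failed API request.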

Preset-Based Setup

Presets are the fastest way to get started. A preset name maps to a provider configuration that reads credentials from the environment:
use Cognesy\Instructor\StructuredOutput;

$result = StructuredOutput::using('openai')
    ->with(
        messages: 'What is the capital of France?',
        responseModel: City::class,
    )
    ->get();
// @doctest id="0fe8"
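The examples on this page pass City::class as the response model. Instructor infers the output schema from the class definition, so a minimal version could look like the following — the exact fields are an illustration, not something the library prescribes:

```php
<?php

// Illustrative response model for the examples on this page.
// Instructor derives the output schema from typed public properties.
class City
{
    public string $name;
    public string $country;
    public int $population;
}
```

On success, $result is an instance of City with its properties populated from the model's response.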
You can switch providers by changing the preset name:
// Use Anthropic instead of OpenAI
$result = StructuredOutput::using('anthropic')
    ->with(
        messages: 'What is the capital of France?',
        responseModel: City::class,
    )
    ->get();
// @doctest id="6a8f"

Explicit Provider Configuration

When you need full control over the driver, model, API base URL, or other connection parameters, use LLMConfig directly:
use Cognesy\Instructor\StructuredOutput;
use Cognesy\Polyglot\Inference\Config\LLMConfig;

$result = StructuredOutput::fromConfig(
    LLMConfig::fromDsn('driver=openai,model=gpt-4o-mini')
)->with(
    messages: 'What is the capital of France?',
    responseModel: City::class,
)->get();
// @doctest id="1038"
You can also construct LLMConfig from an array for more detailed configuration:
use Cognesy\Polyglot\Inference\Config\LLMConfig;

$config = LLMConfig::fromArray([
    'driver' => 'openai',
    'model' => 'gpt-4o-mini',
    'apiKey' => $_ENV['OPENAI_API_KEY'],
    'apiUrl' => 'https://api.openai.com/v1',
    'maxTokens' => 4096,
]);

$result = StructuredOutput::fromConfig($config)
    ->with(
        messages: 'What is the capital of France?',
        responseModel: City::class,
    )
    ->get();
// @doctest id="0369"

Runtime Configuration

StructuredOutput handles single requests. When you need to configure behavior that applies across multiple requests — retries, output mode, event listeners, or custom pipeline extensions — use StructuredOutputRuntime:
use Cognesy\Instructor\StructuredOutput;
use Cognesy\Instructor\StructuredOutputRuntime;
use Cognesy\Polyglot\Inference\Config\LLMConfig;

$runtime = StructuredOutputRuntime::fromConfig(
    LLMConfig::fromPreset('openai')
)->withMaxRetries(3);

$structured = (new StructuredOutput)->withRuntime($runtime);

$city = $structured
    ->with(
        messages: 'What is the capital of France?',
        responseModel: City::class,
    )
    ->get();
// @doctest id="aaf4"

What Belongs Where

Understanding the separation of concerns helps you structure your application:
Layer                     Responsibility                 Examples
LLMConfig                 Provider connection details    Driver, model, API key, base URL, max tokens
StructuredOutputConfig    Extraction behavior            Output mode, retry prompt template, schema naming
StructuredOutputRuntime   Runtime behavior               Max retries, event listeners, custom validators/transformers
StructuredOutput          Single request                 Messages, response model, system prompt, examples

Output Modes

Instructor supports multiple strategies for getting structured output from the LLM. The default mode (Tools) uses the provider’s function/tool calling API. You can switch modes via the runtime:
use Cognesy\Instructor\Enums\OutputMode;

$runtime = StructuredOutputRuntime::fromConfig(
    LLMConfig::fromPreset('openai')
)->withOutputMode(OutputMode::JsonSchema);
// @doctest id="39f8"
Available modes:
Mode                       Description
OutputMode::Tools          Uses the provider's tool/function calling API (default)
OutputMode::Json           Requests JSON output via the provider's JSON mode
OutputMode::JsonSchema     Sends a JSON Schema and requests strict conformance
OutputMode::MdJson         Asks the LLM to return JSON inside a Markdown code block
OutputMode::Text           Extracts JSON from unstructured text responses
OutputMode::Unrestricted   No output constraints; extraction is best-effort

Event Listeners

The runtime exposes a full event system for monitoring and debugging:
use Cognesy\Instructor\Events\StructuredOutput\StructuredOutputRequestReceived;

$runtime = StructuredOutputRuntime::fromConfig(
    LLMConfig::fromPreset('openai')
);

// Listen for a specific event
$runtime->onEvent(
    StructuredOutputRequestReceived::class,
    fn($event) => logger()->info('Request received', [
        'requestId' => $event->data['requestId'],
        'executionId' => $event->data['executionId'],
        'phaseId' => $event->data['phaseId'],
    ]),
);

// Or wiretap all events
$runtime->wiretap(
    fn($event) => logger()->debug(get_class($event)),
);
// @doctest id="e676"

Using a Local Model with Ollama

Instructor works with local models through Ollama. Install Ollama, pull a model, and point Instructor at the local endpoint:
use Cognesy\Instructor\StructuredOutput;
use Cognesy\Polyglot\Inference\Config\LLMConfig;

$result = StructuredOutput::fromConfig(
    LLMConfig::fromDsn('driver=ollama,model=llama3.1')
)->with(
    messages: 'What is the capital of France?',
    responseModel: City::class,
)->get();
// @doctest id="cc02"

Framework Integration

Instructor is a standalone library that works in any PHP application. It does not require published config files, service providers, or framework-specific bindings. For Laravel-specific installation, configuration, facades, events, and testing, see the dedicated Laravel package documentation.

Next Steps

  • Quickstart — run your first extraction
  • Usage — the full request-building API
  • Configuration — advanced configuration options
  • Modes — output mode details and trade-offs
  • LLM Providers — supported providers and driver options