Inference is the main facade for text generation and structured outputs. Use it for one-off calls, provider switching, and streaming.

Quick Start

<?php
use Cognesy\Polyglot\Inference\Inference;

$text = (new Inference())
    ->withMessages('What is the capital of France?')
    ->get();
// @doctest id="9f99"
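
If you need more than the generated text, such as token usage or the finish reason, the facade can hand back the whole response object. A minimal sketch, assuming a response() method that returns an InferenceResponse with content() and usage() accessors (these names are assumptions, not confirmed by the snippet above):

```php
<?php
use Cognesy\Polyglot\Inference\Inference;

// Assumption: response() returns the full InferenceResponse
// object rather than just the generated text.
$response = (new Inference())
    ->withMessages('What is the capital of France?')
    ->response();

echo $response->content();   // the generated text
// Usage data, if the provider reports it:
// $response->usage()
```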

Use a Specific Preset

Presets come from config/llm.php.
<?php
use Cognesy\Polyglot\Inference\Inference;

$text = Inference::using('anthropic')
    ->withMessages('Give me three deployment checklist items.')
    ->get();
// @doctest id="3cfc"

Override Model and Options Per Request

Use with() to set the messages, model, and provider options for a single call; options are passed through to the underlying provider.

<?php
use Cognesy\Polyglot\Inference\Inference;

$text = (new Inference())
    ->with(
        messages: 'Write a 2-line product summary.',
        model: 'gpt-4.1-nano',
        options: ['temperature' => 0.2, 'max_tokens' => 120],
    )
    ->get();
// @doctest id="4d03"

Stream Output

Call withStreaming() before stream() to consume partial responses as they arrive.

<?php
use Cognesy\Polyglot\Inference\Inference;

$stream = (new Inference())
    ->withMessages('Explain event sourcing in simple terms.')
    ->withStreaming()
    ->stream();

foreach ($stream->responses() as $partial) {
    echo $partial->contentDelta;
}
// @doctest id="45a0"
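
After printing deltas you often want the assembled text as well. A sketch under the assumption that the stream object exposes a final() method returning the accumulated response once the stream is fully consumed (this accessor is an assumption, not shown in the snippet above):

```php
<?php
use Cognesy\Polyglot\Inference\Inference;

$stream = (new Inference())
    ->withMessages('Explain event sourcing in simple terms.')
    ->withStreaming()
    ->stream();

foreach ($stream->responses() as $partial) {
    echo $partial->contentDelta;   // print tokens as they arrive
}

// Assumption: final() returns the complete accumulated response
// once the generator above has been exhausted.
$full = $stream->final();
echo $full->content();
```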

Switch Providers at Runtime

Each preset name resolves to a provider configuration, so switching providers only requires choosing a different preset.

<?php
use Cognesy\Polyglot\Inference\Inference;

$presets = ['openai', 'anthropic', 'ollama'];

foreach ($presets as $preset) {
    $text = Inference::using($preset)
        ->withMessages('One sentence: what is dependency injection?')
        ->get();
    echo "{$preset}: {$text}\n";
}
// @doctest id="05b4"
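
Structured Outputs

The overview mentions structured outputs, but the snippets above only return plain text. A hedged sketch of JSON-mode usage, assuming an asJsonData() terminal method that requests a JSON response and decodes it into a PHP array (the method name is an assumption based on the facade style above):

```php
<?php
use Cognesy\Polyglot\Inference\Inference;

// Assumption: asJsonData() asks the provider for a JSON-formatted
// reply and decodes it into an associative array.
$data = (new Inference())
    ->withMessages('List two EU capitals as JSON: {"capitals": ["...", "..."]}')
    ->asJsonData();

print_r($data['capitals'] ?? []);
```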

See Also