# LLMProvider

`LLMProvider` is a builder that wraps an `LLMConfig` and an optional explicit driver. It implements `CanResolveLLMConfig` and `HasExplicitInferenceDriver`, which the runtime uses during assembly.

Namespace: `Cognesy\Polyglot\Inference\LLMProvider`
## Creating a Provider
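A provider is typically created from a named preset or assembled manually. A minimal sketch, assuming a `using(...)` preset shortcut and a plain `new()` factory on the builder (both method names are assumptions):

```php
<?php

use Cognesy\Polyglot\Inference\LLMProvider;

// From a named preset such as 'openai' (preset-based shortcut)
$provider = LLMProvider::using('openai');

// Or start empty and supply configuration later via mutators
$provider = LLMProvider::new();
```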
## Customizing a Provider
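Configuration happens through mutators. A sketch assuming `withConfig(...)` and `withInferenceDriver(...)` as the mutator names (the exact names are assumptions):

```php
<?php

use Cognesy\Polyglot\Inference\LLMProvider;

// Each mutator returns a NEW provider instance; $base is never modified.
$base = LLMProvider::using('openai');

$custom = $base
    ->withConfig($myLLMConfig)          // replace the resolved LLMConfig
    ->withInferenceDriver($myDriver);   // set an explicit, pre-built driver
```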
All mutators return a new immutable instance.

## How the Runtime Uses It
When you call `InferenceRuntime::fromProvider($provider)`, the runtime:
- Calls `$provider->resolveConfig()` to get the `LLMConfig`
- Checks if `$provider->explicitInferenceDriver()` returns a driver
- If an explicit driver exists, uses it directly
- Otherwise, looks up the driver name from the config and creates one via the `InferenceDriverRegistry`
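Putting the steps above together (the import path for `InferenceRuntime` is an assumption):

```php
<?php

use Cognesy\Polyglot\Inference\LLMProvider;

$provider = LLMProvider::using('anthropic');

// The runtime resolves the provider's LLMConfig, then uses the explicit
// driver if one was set, or builds one via the InferenceDriverRegistry.
$runtime = InferenceRuntime::fromProvider($provider);
```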
# EmbeddingsProvider

`EmbeddingsProvider` serves the same role for embeddings. It wraps an `EmbeddingsConfig` and an optional explicit driver.

Namespace: `Cognesy\Polyglot\Embeddings\EmbeddingsProvider`
## Creating a Provider
Unlike `LLMProvider`, `EmbeddingsProvider` does not have a `using(...)` shortcut for presets. Use `Embeddings::using(...)` or construct the config explicitly.
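A sketch of creating one directly, assuming a `new()` factory and a `withConfig(...)` mutator analogous to `LLMProvider` (both names are assumptions):

```php
<?php

use Cognesy\Polyglot\Embeddings\EmbeddingsProvider;

// Build the EmbeddingsConfig explicitly, then wrap it in a provider
$provider = EmbeddingsProvider::new()
    ->withConfig($embeddingsConfig);
```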
## Customizing a Provider
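As with `LLMProvider`, mutators return new instances. A sketch with assumed mutator names:

```php
<?php

// withConfig(...) / withEmbeddingsDriver(...) names are assumptions;
// each call returns a new provider, leaving $provider unchanged.
$custom = $provider
    ->withConfig($otherConfig)
    ->withEmbeddingsDriver($myDriver);
```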
# Driver Factories

## Inference Driver Registry
The `InferenceDriverRegistry` manages the mapping between driver names and their factory callables. Polyglot ships with a default set of bundled drivers via `BundledInferenceDrivers::registry()`.
Supported inference drivers include:
| Driver Name | Class | Notes |
|---|---|---|
| `a21` | `A21Driver` | A21 Labs |
| `anthropic` | `AnthropicDriver` | Anthropic Messages API |
| `azure` | `AzureDriver` | Azure OpenAI |
| `bedrock-openai` | `BedrockOpenAIDriver` | AWS Bedrock (OpenAI-compatible) |
| `cerebras` | `CerebrasDriver` | Cerebras |
| `cohere` | `CohereV2Driver` | Cohere v2 |
| `deepseek` | `DeepseekDriver` | DeepSeek |
| `fireworks` | `FireworksDriver` | Fireworks AI |
| `gemini` | `GeminiDriver` | Google Gemini native API |
| `gemini-oai` | `GeminiOAIDriver` | Gemini via OpenAI-compatible endpoint |
| `glm` | `GlmDriver` | GLM |
| `groq` | `GroqDriver` | Groq |
| `huggingface` | `HuggingFaceDriver` | Hugging Face |
| `inception` | `InceptionDriver` | Inception |
| `meta` | `MetaDriver` | Meta Llama API |
| `minimaxi` | `MinimaxiDriver` | Minimaxi |
| `mistral` | `MistralDriver` | Mistral |
| `openai` | `OpenAIDriver` | OpenAI Chat Completions API |
| `openai-responses` | `OpenAIResponsesDriver` | OpenAI Responses API |
| `openresponses` | `OpenResponsesDriver` | Open Responses API |
| `openrouter` | `OpenRouterDriver` | OpenRouter |
| `perplexity` | `PerplexityDriver` | Perplexity |
| `qwen` | `QwenDriver` | Alibaba Qwen |
| `sambanova` | `SambaNovaDriver` | SambaNova |
| `xai` | `XAiDriver` | xAI (Grok) |
| `moonshot` | `OpenAICompatibleDriver` | Moonshot (via OpenAI-compatible) |
| `ollama` | `OpenAICompatibleDriver` | Ollama (via OpenAI-compatible) |
| `openai-compatible` | `OpenAICompatibleDriver` | Generic OpenAI-compatible APIs |
| `together` | `OpenAICompatibleDriver` | Together AI (via OpenAI-compatible) |
Custom drivers can be registered either as a class name (instantiated with `LLMConfig`, `CanSendHttpRequests`, and `CanHandleEvents` in its constructor) or as a callable factory:
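A sketch of both registration styles; `withDriverFactory(...)` is a hypothetical method name standing in for the registry's actual registration API, and because the registry is immutable it is assumed to return a new registry instance:

```php
<?php

$registry = BundledInferenceDrivers::registry()
    // Class-name style: the registry instantiates the driver itself,
    // passing LLMConfig, CanSendHttpRequests, and CanHandleEvents.
    ->withDriverFactory('my-driver', MyDriver::class)
    // Callable-factory style: you control construction directly.
    ->withDriverFactory(
        'my-other-driver',
        fn ($config, $httpClient, $events) => new MyOtherDriver($config, $httpClient, $events),
    );
```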
## Embeddings Driver Registry
The `EmbeddingsDriverRegistry` follows the same immutable, instance-based pattern as `InferenceDriverRegistry`. Bundled embeddings drivers are provided via `BundledEmbeddingsDrivers::registry()` and include: `openai`, `azure`, `cohere`, `gemini`, `jina`, `mistral`, and `ollama`.
Custom embeddings drivers can be registered through the registry:
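A sketch of the same pattern on the embeddings side; `withDriverFactory(...)` is again a hypothetical method name:

```php
<?php

// The registry is immutable, so registration returns a new instance.
$registry = BundledEmbeddingsDrivers::registry()
    ->withDriverFactory(
        'my-embeddings',
        fn ($config, $httpClient, $events) => new MyEmbeddingsDriver($config, $httpClient, $events),
    );
```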
Both `InferenceDriverRegistry` and `EmbeddingsDriverRegistry` use immutable, instance-based registration, so driver registrations can vary per runtime.
# Key Contracts

The provider system is built on a small set of interfaces.

## Provider Contracts
| Interface | Purpose |
|---|---|
| `CanResolveLLMConfig` | Returns an `LLMConfig` from a provider |
| `HasExplicitInferenceDriver` | Optionally returns a pre-built inference driver |
| `CanAcceptLLMConfig` | Allows setting an `LLMConfig` on a provider |
| `CanResolveEmbeddingsConfig` | Returns an `EmbeddingsConfig` from a provider |
| `HasExplicitEmbeddingsDriver` | Optionally returns a pre-built embeddings driver |
## Driver Contracts
| Interface | Purpose |
|---|---|
| `CanProcessInferenceRequest` | Main inference driver contract (make responses, stream deltas, report capabilities) |
| `CanHandleVectorization` | Main embeddings driver contract (handle requests, parse responses) |
| `CanProvideInferenceDrivers` | Registry that creates inference drivers by name |
## Adapter Contracts
| Interface | Purpose |
|---|---|
| `CanTranslateInferenceRequest` | Converts `InferenceRequest` to `HttpRequest` |
| `CanTranslateInferenceResponse` | Converts `HttpResponse` to `InferenceResponse` or stream deltas |
| `CanMapMessages` | Maps typed `Messages` to provider format |
| `CanMapRequestBody` | Assembles the request body |
| `CanMapUsage` | Extracts token usage from response data |
`CanProcessInferenceRequest` also includes a `capabilities()` method that reports what features a driver supports (e.g., streaming, tool calls, structured output). This can be used to make runtime decisions about which features to use with a given provider:
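For example (the method names on the capabilities object are assumptions):

```php
<?php

$capabilities = $driver->capabilities();

// Gate feature usage on what the driver reports it supports
if ($capabilities->supportsStreaming()) {
    // request streamed response deltas
}

if (! $capabilities->supportsToolCalls()) {
    // fall back to prompt-based tool emulation
}
```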