LLMConfig and the
Polyglot runtime that powers the underlying inference calls. Once the configuration is resolved,
your application code stays the same regardless of which provider you use.
## Selecting a Provider
The most common approach is to use one of the presets defined in the `config/llm/presets` directory. Each preset file defines the API
URL, driver, default model, and other provider-specific settings.
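For example, switching providers can be as simple as referencing a different preset name. The sketch below is illustrative: it assumes an `Inference` entry point with a `using()` method that accepts a preset name; the exact class and method names may differ in your version of the library.

```php
<?php
// Hypothetical sketch — class and method names are assumptions,
// not confirmed by this page; check the library's API reference.
use Cognesy\Polyglot\Inference\Inference;

// Select the Anthropic preset; application code stays the same
// if you later swap 'anthropic' for another preset name.
$answer = (new Inference)
    ->using('anthropic')
    ->withMessages('What is the capital of France?')
    ->get();
```

Because the preset resolves the API URL, driver, and default model, only the preset name changes when you move between providers.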
## Supported Providers
The following providers have built-in presets:

| Provider | Preset name |
|---|---|
| A21 | a21 |
| Anthropic | anthropic |
| AWS Bedrock | aws-bedrock |
| Azure OpenAI | azure |
| Cerebras | cerebras |
| Cohere | cohere |
| DeepSeek | deepseek |
| DeepSeek (Reasoning) | deepseek-r |
| Fireworks | fireworks |
| Google Gemini | gemini |
| Gemini (OpenAI-compatible) | gemini-oai |
| GLM | glm |
| Groq | groq |
| Hugging Face | huggingface |
| Inception | inception |
| Meta | meta |
| MiniMaxi | minimaxi |
| MiniMaxi (OpenAI-compatible) | minimaxi-oai |
| Mistral | mistral |
| Moonshot / Kimi | moonshot-kimi |
| Ollama | ollama |
| OpenAI | openai |
| OpenAI Responses | openai-responses |
| OpenRouter | openrouter |
| Perplexity | perplexity |
| Qwen | qwen |
| SambaNova | sambanova |
| Together | together |
| xAI | xai |
## Custom Providers
Any OpenAI-compatible API can be used by building an `LLMConfig` manually:
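A minimal sketch follows. The constructor parameter names below (`apiUrl`, `apiKey`, `model`, `driver`) are assumptions based on the settings this page says a preset defines; verify them against the `LLMConfig` class in your installed version.

```php
<?php
// Hypothetical sketch — parameter names are assumptions and may
// differ from the actual LLMConfig constructor signature.
use Cognesy\Polyglot\Inference\LLMConfig;

$config = new LLMConfig(
    apiUrl: 'https://api.example.com/v1',   // base URL of your OpenAI-compatible endpoint (placeholder)
    apiKey: getenv('EXAMPLE_API_KEY'),      // credential loaded from the environment
    model: 'example-model-1',               // default model to request (placeholder)
    driver: 'openai',                       // reuse the OpenAI-compatible driver
);
```

Once built, the config can be passed to the inference layer in place of a named preset, so self-hosted or niche OpenAI-compatible endpoints work without a built-in preset.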