Two configuration types are used: `LLMConfig` for inference and `EmbeddingsConfig` for embeddings.
LLMConfig
- `driver`: which inference driver to use (`openai`, `anthropic`, `openai-compatible`, etc.)
- `apiUrl`, `endpoint`, `apiKey`: transport/auth settings
- `model`, `maxTokens`, `contextLength`, `maxOutputLength`
- `metadata`, `queryParams`, `pricing`, `options`
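The field list above can be sketched as a TypeScript shape. This is an illustrative sketch only: the field names come from the list, but the types and the sample values are assumptions, not the library's actual definitions.

```typescript
// Hypothetical shape for LLMConfig; types are assumed, not authoritative.
interface LLMConfig {
  driver: string;                      // e.g. "openai", "anthropic", "openai-compatible"
  apiUrl?: string;                     // transport: base URL of the API
  endpoint?: string;                   // transport: path or named endpoint
  apiKey?: string;                     // auth
  model?: string;
  maxTokens?: number;
  contextLength?: number;
  maxOutputLength?: number;
  metadata?: Record<string, string>;
  queryParams?: Record<string, string>;
  pricing?: Record<string, number>;    // structure assumed
  options?: Record<string, unknown>;   // driver-specific extras
}

// Example value (placeholder credentials, illustrative model name).
const config: LLMConfig = {
  driver: "openai",
  apiUrl: "https://api.openai.com/v1",
  apiKey: "YOUR_API_KEY",
  model: "gpt-4o-mini",
  maxTokens: 1024,
};
```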
EmbeddingsConfig
- `driver`: embeddings driver (`openai`, `cohere`, `gemini`, etc.)
- `apiUrl`, `endpoint`, `apiKey`
- `model`, `dimensions`, `maxInputs`, `metadata`
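For symmetry, the embeddings config can be sketched the same way. Again, only the field names come from the list above; the types and the meaning of `maxInputs` (assumed here to be a per-request batch cap) are guesses.

```typescript
// Hypothetical shape for EmbeddingsConfig; types are assumed.
interface EmbeddingsConfig {
  driver: string;              // e.g. "openai", "cohere", "gemini"
  apiUrl?: string;
  endpoint?: string;
  apiKey?: string;
  model?: string;
  dimensions?: number;         // output vector size, if the driver supports it
  maxInputs?: number;          // assumed: max texts per batch request
  metadata?: Record<string, string>;
}

const embConfig: EmbeddingsConfig = {
  driver: "openai",
  model: "text-embedding-3-small",
  dimensions: 1536,
};
```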
Resolution Flow
For embeddings, resolution produces an `EmbeddingsProvider` and an `EmbeddingsRuntime`.
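The resolution step can be sketched as follows. Only the `EmbeddingsProvider` and `EmbeddingsRuntime` names appear in the source; the constructor signatures and the `resolveEmbeddings` helper are hypothetical.

```typescript
// Hypothetical sketch: a config resolves into a driver-specific provider,
// which is then wrapped by a runtime. Signatures are assumptions.
class EmbeddingsProvider {
  constructor(public readonly driver: string) {}
}

class EmbeddingsRuntime {
  constructor(public readonly provider: EmbeddingsProvider) {}
}

function resolveEmbeddings(config: { driver: string }): EmbeddingsRuntime {
  const provider = new EmbeddingsProvider(config.driver);
  return new EmbeddingsRuntime(provider);
}

const runtime = resolveEmbeddings({ driver: "cohere" });
```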
DSN and Overrides
Both providers support DSN input.

Notes
- Use `driver`, not `providerType`.
- HTTP client selection is handled in runtime construction (`InferenceRuntime`/`EmbeddingsRuntime`), not as a config field.
- Retry policy is explicit (`withRetryPolicy(...)`), not embedded in generic `options`.
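The notes above can be illustrated in code. Everything here is an assumption for illustration: the DSN format (scheme names the driver, credentials carry the key), the `parseDsn` helper, and the `withRetryPolicy` signature are not the library's documented API.

```typescript
// Hypothetical DSN parsing: scheme -> driver, URL credentials -> apiKey.
function parseDsn(dsn: string): { driver: string; apiUrl: string; apiKey?: string } {
  const url = new URL(dsn);
  return {
    driver: url.protocol.replace(":", ""),            // scheme names the driver
    apiUrl: `https://${url.host}${url.pathname}`,     // assumed https transport
    apiKey: url.password || undefined,
  };
}

// Hypothetical explicit retry policy, kept out of the generic `options` bag.
interface RetryPolicy {
  maxAttempts: number;
  backoffMs: number;
}

class InferenceRuntime {
  private retry: RetryPolicy | undefined;
  withRetryPolicy(policy: RetryPolicy): this {
    this.retry = policy;   // explicit field, not buried in options
    return this;
  }
  get retryPolicy(): RetryPolicy | undefined {
    return this.retry;
  }
}

const base = parseDsn("openai://user:KEY@api.openai.com/v1");
const rt = new InferenceRuntime().withRetryPolicy({ maxAttempts: 3, backoffMs: 500 });
```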