Configuration Options
Instructor provides extensive configuration options through fluent API methods to customize its behavior, processing, and integration with various LLM providers.

Request Configuration

Configure how Instructor processes your input and builds requests.

Response Configuration
Define how Instructor should process and validate responses.

StructuredOutputConfig: use ResponseCachePolicy::Memory if you need second-pass replay of streamed updates.
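As a rough illustration of the response-side options named in this section (outputMode, maxRetries, responseCachePolicy), a fluent call might look like the sketch below. The withMaxRetries/withOutputMode/withResponseCachePolicy method names, the namespace path, and the Person DTO are assumptions, not verified API; check them against your installed Instructor version.

```php
<?php
// Hedged sketch only: method names and the namespace below are inferred
// from the option names in this section, not confirmed API.
use Cognesy\Instructor\StructuredOutput;

$person = (new StructuredOutput)
    ->withMaxRetries(2)                // re-ask the model on validation failure
    ->withOutputMode(OutputMode::Json) // request plain JSON output
    ->with(
        messages: 'Jason is 28 years old',
        responseModel: Person::class,  // Person is a hypothetical user-defined DTO
    )
    ->get();
```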
Advanced Configuration
Fine-tune Instructor’s internal processing.

LLM Provider Configuration
Configure connection and communication with LLM providers.

StructuredOutputRuntime also supports:

- fromConfig(LLMConfig $config) - start from an explicit LLMConfig
- fromResolver(CanResolveLLMConfig $resolver) - plug in your own resolver
- fromDefaults() - use defaults with optional event/http overrides
- fluent accessors: config(), events(), validators(), transformers(), deserializers(), extractors()
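A minimal sketch of how the runtime factories and accessors listed above might be used. Only the method names come from this document; the surrounding class paths, variables, and call shapes are assumptions.

```php
<?php
// Hedged sketch: method names are from this section; everything else
// (class location, $llmConfig variable) is assumed for illustration.
$runtime = StructuredOutputRuntime::fromDefaults();         // defaults with optional event/http overrides
$custom  = StructuredOutputRuntime::fromConfig($llmConfig); // start from an explicit LLMConfig

$cfg        = $runtime->config();     // fluent accessors
$validators = $runtime->validators();
```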
Processing Pipeline Overrides
Customize validation, transformation, and deserialization.

StructuredOutputConfig Knobs

Besides outputMode, maxRetries, and responseCachePolicy, you can tune:

- schemaName / schemaDescription - schema metadata sent to the model
- toolName / toolDescription - tool-call metadata in OutputMode::Tools
- retryPrompt - feedback prompt used for recovery attempts
- modePrompts - per-mode prompt templates
- useObjectReferences - schema rendering behavior for object references
- defaultToStdClass - fallback type for schema-less payloads
- throwOnTransformationFailure - fail-fast behavior for the transform step
- chatStructure - order of template sections used to build the final prompt
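These knobs would typically be set when building a StructuredOutputConfig. The named-argument constructor shape below is an assumption for illustration; only the option names come from this section.

```php
<?php
// Hedged sketch: constructor shape and example values are assumed;
// the knob names are taken from this section.
$config = new StructuredOutputConfig(
    schemaName: 'person',
    schemaDescription: 'A single person record',
    retryPrompt: 'The previous response failed validation. Fix these errors:',
    defaultToStdClass: true,           // fall back to stdClass for schema-less payloads
    throwOnTransformationFailure: false,
);
```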