- `RequestMaterializer` is the legacy/default path.
- `StructuredPromptRequestMaterializer` is the new path using prompt classes and markdown templates.

Select between them via `StructuredOutputRuntime::withRequestMaterializer()`.
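As a sketch (wiring details assumed; the runtime API shape and the materializer's constructor arguments may differ), switching to the new path looks like:

```php
// Hypothetical wiring: swap in the new structured prompt materializer.
$runtime = $runtime->withRequestMaterializer(
    new StructuredPromptRequestMaterializer()
);
```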
## System And Prompt Text
The two most common customization points are the system message and the prompt text:

- System text sets the model's persona and overall behavior. Use it for stable instructions that apply across many requests.
- Prompt text provides task-specific instructions for this particular extraction. On the new structured prompt path it is rendered inside the single system prompt body alongside the mode-specific extraction instructions.
Both are set via the `with()` method:
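A minimal sketch of setting both (named arguments assumed from the fluent API; `Person` is a hypothetical response model):

```php
$person = (new StructuredOutput)
    ->with(
        messages: 'Jason is 28 years old and lives in Chicago.',
        responseModel: Person::class,
        // Stable persona, reused across many requests
        system: 'You are a careful data extraction assistant.',
        // Task-specific instructions for this extraction
        prompt: "Extract the person's name, age, and city from the text.",
    )
    ->get();
```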
## Examples
Few-shot examples are another prompt component. On the new structured prompt path they are rendered as markdown inside the system prompt to demonstrate the expected extraction style. Each example is an instance of the `Example` class.
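A sketch of passing few-shot examples (the `Example` constructor argument names and the `Person` model are assumptions):

```php
$person = (new StructuredOutput)
    ->with(
        messages: $text,
        responseModel: Person::class,
        examples: [
            // Demonstrates the expected input-to-output mapping
            new Example(
                input: 'Ann, 31, lives in Berlin.',
                output: ['name' => 'Ann', 'age' => 31],
            ),
        ],
    )
    ->get();
```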
## Cached Context
Some providers (notably Anthropic) support prompt caching, where stable parts of the conversation are cached between requests to reduce latency and cost. Use `withCachedContext()` to mark content as cacheable:
Content passed to `withCachedContext()` is marked with cache control headers where the provider supports them.
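A sketch (parameter names assumed) of caching a large, stable document while the question varies per request:

```php
$report = (new StructuredOutput)
    ->withCachedContext(
        // Stable content: marked with cache control headers on supported providers
        system: $longStableInstructions,
        messages: $largeDocumentText,
    )
    ->with(
        // Variable content: changes per request, not cached
        messages: $question,
        responseModel: Report::class,
    )
    ->get();
```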
## Mode-Specific Prompts
Instructor uses a default prompt for each output mode that tells the model how to format its response. On the legacy path these prompts are inline strings. On the new path they are prompt classes backed by markdown templates and configured in `StructuredOutputConfig`.
| Mode | Default prompt behavior |
|---|---|
| `Tools` | "Extract correct and accurate data from the input using provided tools." |
| `Json` | Includes the JSON Schema and asks for a strict JSON response |
| `JsonSchema` | Asks for a strict JSON response following the provided schema |
| `MdJson` | Includes the JSON Schema and asks for JSON inside a Markdown code block |
## Overriding Mode Prompts
Legacy inline prompt override:

## Template Placeholders
Mode prompts support the `<|json_schema|>` placeholder, which Instructor replaces with
the JSON Schema generated from your response model. This is particularly important for
Json and MdJson modes, where the schema must be embedded in the prompt:
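As a sketch, a prompt override that embeds the placeholder (the `prompt` argument and the `Invoice` model are assumptions):

```php
$invoice = (new StructuredOutput)
    ->with(
        messages: $emailText,
        responseModel: Invoice::class,
        mode: OutputMode::Json,
        // <|json_schema|> is replaced with the schema generated from Invoice
        prompt: "Respond only with JSON matching this schema:\n<|json_schema|>\n",
    )
    ->get();
```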
Tool Name And Description
In `OutputMode::Tools`, the tool definition sent to the model includes a name and description. These provide semantic context that can improve extraction quality:
The defaults are `extracted_data` and "Function call based on user instructions.", respectively. Overriding them with task-specific values can help the model understand what the tool represents.
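A sketch of overriding both (the `toolName`/`toolDescription` parameter names and the `Invoice` model are assumptions):

```php
$invoice = (new StructuredOutput)
    ->with(
        messages: $emailText,
        responseModel: Invoice::class,
        mode: OutputMode::Tools,
        // Task-specific tool metadata instead of the generic defaults
        toolName: 'extract_invoice',
        toolDescription: 'Extracts invoice number, date, and line items from a customer email.',
    )
    ->get();
```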
`OutputMode::Json` and `OutputMode::MdJson` ignore the tool name and description since they do not use tool calling.
## Retry Prompt
When validation fails and retries are enabled, Instructor appends a retry prompt to the conversation. On the new structured prompt path the default is supplied by the deserialization error prompt class.
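A sketch of enabling retries (the `maxRetries` parameter name is an assumption):

```php
$person = (new StructuredOutput)
    ->with(
        messages: $text,
        responseModel: Person::class,
        // On validation failure, the retry prompt is appended and the request retried
        maxRetries: 2,
    )
    ->get();
```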
## Chat Structure
Instructor assembles the final prompt from named sections in a specific order. The default structure includes sections for system messages, cached context, prompt, examples, messages, and retries. You can reorder or extend this through `StructuredOutputConfig`:
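A sketch of reordering sections (the section identifiers and the `withChatStructure()` method name are assumptions; check `StructuredOutputConfig` for the real names):

```php
$config = (new StructuredOutputConfig)
    ->withChatStructure([
        'system',
        'cached-context',
        'prompt',
        'examples',
        'messages',
        'retries',
    ]);
```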