Symptoms
- Errors like “model not found,” “parameter not supported,” or “context length exceeded”
- Unexpected or degraded response quality from certain models
- Requests that succeed on one model but fail on another
- Tool calls or JSON output that work with some models but not others
Check Model Availability
Verify that the model identifier in your preset or request matches a model that is currently available from the provider. Model names are case-sensitive and must match exactly.
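A minimal sketch of such a check, assuming you can fetch the provider's model list (the model ids and the `resolve_model` helper below are illustrative, not a specific client API):

```python
# Validate a requested model id against the provider's model list before
# sending a request. AVAILABLE_MODELS stands in for whatever your
# provider's list-models endpoint returns; the names are illustrative.

AVAILABLE_MODELS = {"gpt-4.1", "gpt-4.1-nano", "claude-haiku-4-5"}

def resolve_model(requested: str) -> str:
    """Return the model id if available, raising a clear error otherwise."""
    if requested in AVAILABLE_MODELS:
        return requested
    # A case-insensitive match catches the most common typo: wrong casing.
    lowered = requested.lower()
    for model in AVAILABLE_MODELS:
        if model.lower() == lowered:
            raise ValueError(
                f"Model '{requested}' not found; did you mean '{model}'? "
                "Model names are case-sensitive."
            )
    raise ValueError(f"Model '{requested}' not found.")
```

Failing fast with a clear message is usually easier to debug than the provider's generic "model not found" error.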
Context Length Limits
Each model has a maximum context length, measured in tokens. If your input exceeds this limit, the provider returns an error. The contextLength field in the preset records this limit for reference, but the actual enforcement happens at the provider.
Common context windows:
| Model | Approximate Context Window |
|---|---|
| GPT-4.1 | 1,000,000 tokens |
| GPT-4.1-nano | 1,000,000 tokens |
| Claude Haiku 4.5 | 200,000 tokens |
| Gemini models | varies by model |
| Llama 3.1 (via Ollama) | 128,000 tokens |
If a request exceeds the context window, consider:
- Summarizing or truncating the input
- Splitting the request into smaller chunks
- Switching to a model with a larger context window
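The first two options can be sketched with a rough pre-flight check. Real tokenizers vary by model; the 4-characters-per-token heuristic below is a coarse approximation, useful only for catching obvious overruns early:

```python
# Rough pre-flight check against a model's context window.

def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def split_into_chunks(text: str, max_tokens: int) -> list[str]:
    """Split text into pieces that each fit under max_tokens (estimated)."""
    max_chars = max_tokens * 4
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

CONTEXT_LIMIT = 200_000  # e.g. Claude Haiku 4.5

document = "some very long input " * 50_000
if estimate_tokens(document) > CONTEXT_LIMIT:
    chunks = split_into_chunks(document, CONTEXT_LIMIT)
```

For anything beyond a sanity check, use the model's actual tokenizer to count tokens; the heuristic can be off by a factor of two or more for code or non-English text.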
Tool and Function Calling Support
Not all models support tool (function) calling. If you pass tools to a model that does not support them, the provider may return an error or silently ignore the tools.
When debugging tool-related failures, first confirm the request works without tools:
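A sketch of that isolation step, assuming a generic completion call (`send_request` below is a stand-in stub, not a real client API):

```python
# Retry a failing tool-enabled request without tools to isolate the cause.

def send_request(model, messages, tools=None):
    # Stub provider call: this one pretends the model rejects tool calls.
    if tools:
        raise RuntimeError("parameter not supported: tools")
    return {"content": "ok"}

def diagnose_tools(model, messages, tools):
    try:
        send_request(model, messages, tools=tools)
        return "tools supported"
    except RuntimeError:
        try:
            send_request(model, messages)  # identical request, tools removed
            return "model rejects tools; request itself is fine"
        except RuntimeError:
            return "failure unrelated to tools"
```

If the tool-free request also fails, look at authentication, configuration, or connectivity rather than tool support.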
JSON and Structured Output Support
Models vary in their support for structured output formats:
- JSON Schema mode — the model is constrained to output JSON matching a specific schema. Only some models support this.
- JSON object mode — the model is instructed to output valid JSON, but without schema enforcement.
- Plain text — all models support this.
The responseFormat option controls this, but the actual behavior depends on the model.
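One way to cope with this variation is to degrade gracefully across the three modes. The `complete` function and the mode names below are illustrative stand-ins, not a specific client library:

```python
# Try structured-output modes from strictest to loosest, falling back
# when a mode is unsupported by the current model.
import json

def complete(model, prompt, response_format):
    # Stub provider call: this one supports json_object but not json_schema.
    if response_format == "json_schema":
        raise RuntimeError("response_format not supported")
    return '{"answer": 42}'

def structured_request(model, prompt):
    for mode in ("json_schema", "json_object", "text"):
        try:
            raw = complete(model, prompt, response_format=mode)
            return json.loads(raw)  # in "text" mode, parsing may still fail
        except RuntimeError:
            continue  # mode unsupported; try the next, looser mode
    raise RuntimeError("no structured output mode succeeded")
```

Remember that without schema enforcement the model may still emit invalid JSON, so keep the `json.loads` parse (and its possible failure) in your error handling.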
Streaming Support
Most modern models support streaming, but some do not. If enabling streaming causes errors, test with a non-streaming request first.
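A sketch of that comparison, with both calls as illustrative stubs (the failure behavior is invented for the example):

```python
# Compare a streaming and a non-streaming call to isolate streaming
# as the failure cause.

def complete(model, prompt):
    return "full response"

def complete_streaming(model, prompt):
    # Stub: this model rejects streaming outright.
    raise RuntimeError("streaming not supported for this model")

def supports_streaming(model, prompt):
    try:
        for _chunk in complete_streaming(model, prompt):
            pass
        return True
    except RuntimeError:
        # If the non-streaming call succeeds, streaming itself is the problem.
        complete(model, prompt)
        return False
```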
Vision and Multimodal Capabilities
Only certain models support image inputs. Sending images to a text-only model will cause an error. Check the provider’s documentation to confirm which models accept multimodal input.
Implement Model Fallbacks
For production applications, implement a fallback strategy that tries alternative models when the preferred model fails:
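A minimal sketch of such a chain, assuming a generic completion call (`complete` and the model ids below are illustrative stubs):

```python
# Try models in preference order until one succeeds, keeping an error
# trail so failures remain debuggable.

FALLBACK_CHAIN = ["gpt-4.1", "gpt-4.1-nano", "claude-haiku-4-5"]

def complete(model, prompt):
    # Stub: pretend the first-choice model is currently unavailable.
    if model == "gpt-4.1":
        raise RuntimeError("model not found")
    return f"answer from {model}"

def complete_with_fallback(prompt, models=FALLBACK_CHAIN):
    errors = []
    for model in models:
        try:
            return complete(model, prompt)
        except RuntimeError as exc:
            errors.append(f"{model}: {exc}")  # record why each model failed
    raise RuntimeError("all models failed: " + "; ".join(errors))
```

In practice you may want to restrict fallbacks to models with comparable capabilities (context window, tool support), since silently falling back to a weaker model can mask quality regressions.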
Debugging Approach
When a model-specific issue arises, use this systematic approach:
- Reduce to plain text. Remove tools, response format, and streaming. If the plain-text request still fails, the problem is not model-capability related; check authentication, configuration, or the connection instead.
- Add features one at a time. Re-enable streaming, then response format, then tools. The first feature that causes failure identifies the unsupported capability.
- Check the provider’s model documentation. Verify that the specific model version supports the feature you need.
- Try a different model from the same provider to confirm whether the issue is model-specific or provider-wide.
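The first two steps above can be sketched as a small harness. The `send` stub and its failure behavior (rejecting tools) are invented for illustration:

```python
# Strip the request to plain text, then re-enable one feature at a time;
# the first feature that fails names the unsupported capability.

def send(model, prompt, stream=False, response_format="text", tools=None):
    # Stub provider call: this one rejects tool calls only.
    if tools:
        raise RuntimeError("tools not supported")
    return "ok"

def find_unsupported_feature(model, prompt):
    # Step 1: plain text. If this raises, the problem is not model capability.
    send(model, prompt)
    # Step 2: add features one at a time.
    features = [
        ("streaming", {"stream": True}),
        ("response_format", {"response_format": "json_object"}),
        ("tools", {"tools": [{"name": "lookup"}]}),
    ]
    for name, kwargs in features:
        try:
            send(model, prompt, **kwargs)
        except RuntimeError:
            return name
    return None  # every feature worked
```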