Overview
Instructor offers a simplified way to work with LLM providers' APIs that support caching, so you can focus on your business logic while still taking advantage of lower latency and costs.

Note: Context caching is automatic for all OpenAI API calls. Read more in the OpenAI API documentation.
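Because OpenAI's caching matches on a repeated prompt prefix, you benefit most by keeping large static content (instructions, schemas, examples) at the start of the message list and appending per-request data last. The sketch below illustrates that layout with a hypothetical helper; `build_messages` and `STATIC_SYSTEM_PROMPT` are illustrative names, not part of Instructor's API, and no network call is made.

```python
# Hypothetical helper: keep the stable system prompt first so repeated
# requests share an identical, cacheable prefix.
STATIC_SYSTEM_PROMPT = "You extract structured data from user text. " * 200

def build_messages(user_input: str) -> list[dict]:
    """Stable content first, per-request content last."""
    return [
        {"role": "system", "content": STATIC_SYSTEM_PROMPT},
        {"role": "user", "content": user_input},
    ]

a = build_messages("Order #1: 2 widgets")
b = build_messages("Order #2: 5 gadgets")
# The large system message is byte-identical across requests, so the
# provider can reuse its cached prefix; only the user message differs.
assert a[0] == b[0]
assert a[1] != b[1]
```

Pass the resulting message list to your Instructor-wrapped client as usual; the caching itself happens on the provider side.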