Overview
Instructor offers a simplified way to work with LLM providers’ APIs that support caching, so you can focus on your business logic while still taking advantage of lower latency and costs.

Note 1: Instructor supports context caching for the Anthropic API and the OpenAI API.
Note 2: Context caching is automatic for all OpenAI API calls. Read more in the OpenAI API documentation.
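For context, Anthropic’s API requires the reusable portion of a prompt to be explicitly marked for caching, while OpenAI applies caching automatically to eligible prompts. Below is a minimal sketch of the raw Anthropic content block that Instructor manages for you; this is illustrative of the underlying request format, not Instructor code:

```php
<?php
// Under the hood, Anthropic's prompt caching requires marking the reusable
// part of the prompt with a `cache_control` marker on a content block:
$largeSharedContext = file_get_contents(__DIR__ . '/README.md');

$systemBlock = [
    'type' => 'text',
    'text' => $largeSharedContext,
    'cache_control' => ['type' => 'ephemeral'], // marks this block for caching
];
// Instructor adds this marker for you on Anthropic; OpenAI needs no marker.
```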
Example
When you need to process multiple requests that share the same context, you can use context caching to improve performance and reduce costs. In this example we will analyze the README.md file of this GitHub project and generate a structured description of it for multiple audiences.

Let’s start by defining the data model for the project details, i.e. the properties we want to extract or generate from the README file.
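A minimal sketch of such a `Project` data model is shown below. The property names are illustrative assumptions, not necessarily the exact fields used in the project’s example:

```php
<?php
// Structured description of the project, extracted or generated from README.md.
// Property names below are illustrative assumptions.
class Project
{
    public string $name;
    public string $targetAudience;
    public string $description;
    /** @var string[] key features mentioned in the README */
    public array $keyFeatures;
    /** @var string[] potential applications relevant to the target audience */
    public array $applications;
}
```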
Now, let’s ask the model to describe the project for a specific audience: P&C insurance CIOs.
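A sketch of the request flow is below. It assumes a cached-context API along these lines; the class and method names (`Instructor`, `withConnection()`, `withCachedContext()`, `respond()`) are assumptions for illustration and may differ from the library’s actual API, so consult the Instructor documentation for the exact calls:

```php
<?php
use Cognesy\Instructor\Instructor;

// Load the README once - this large, repeated context is what we cache.
$content = file_get_contents(__DIR__ . '/README.md');

// NOTE: the methods below are assumed names for illustration only.
$cached = (new Instructor)
    ->withConnection('anthropic')
    ->withCachedContext(
        system: 'Answer questions about the project described in the README.md'
            . ' file provided below.',
        input: $content, // large, reused context - a good caching candidate
    );

// The first call writes the shared context to the provider's cache.
$project = $cached->respond(
    messages: 'Describe the project for P&C insurance CIOs.',
    responseModel: Project::class,
);
```

Subsequent calls that reuse the same cached context, for example a description aimed at a different audience, should hit the cache, lowering both latency and input-token costs.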