Extras
Image processing - car damage detection
Overview
This example demonstrates how to extract structured data from an image using Instructor. The image is loaded from a file and converted to base64 format before it is sent to the OpenAI API.
We will extract data from a photo of a car with visible damage. The response model describes each damaged element, where the damage is located, and how severe it is.
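For context, the conversion that happens before the request is sent is roughly equivalent to the plain-PHP sketch below: read the file, base64-encode its contents, and wrap the result in a data URI. This is only an illustration of the idea; the Image class does this for you and its internals may differ.
<?php
// Rough sketch of the file-to-base64 conversion (illustration only;
// the Image class handles this step internally).
$path = __DIR__ . '/car-damage.jpg';
$mimeType = mime_content_type($path) ?: 'image/jpeg'; // requires the fileinfo extension
$dataUri = 'data:' . $mimeType . ';base64,' . base64_encode(file_get_contents($path));
// $dataUri can now be embedded as the image part of a chat message.
?>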
Scanned image
Here’s the image we’re going to extract data from.
Example
<?php
$loader = require 'vendor/autoload.php';
$loader->add('Cognesy\\Instructor\\', __DIR__ . '/../../src/');
use Cognesy\Instructor\Extras\Image\Image;
use Cognesy\Instructor\Features\Schema\Attributes\Description;
use Cognesy\Instructor\Utils\Str;
enum DamageSeverity : string {
    case Minor = 'minor';
    case Moderate = 'moderate';
    case Severe = 'severe';
    case Total = 'total';
}

enum DamageLocation : string {
    case Front = 'front';
    case Rear = 'rear';
    case Left = 'left';
    case Right = 'right';
    case Top = 'top';
    case Bottom = 'bottom';
}

class Damage {
    #[Description('Identify damaged element')]
    public string $element;
    /** @var DamageLocation[] */
    public array $locations;
    public DamageSeverity $severity;
    public string $description;
}

class DamageAssessment {
    public string $make;
    public string $model;
    public string $bodyColor;
    /** @var Damage[] */
    public array $damages = [];
    public string $summary;
}
$assessment = Image::fromFile(__DIR__ . '/car-damage.jpg')
    ->toData(
        responseModel: DamageAssessment::class,
        prompt: 'Identify and assess each car damage location and severity separately.',
        connection: 'openai',
        model: 'gpt-4o',
        options: ['max_tokens' => 4096]
    );
dump($assessment);
assert(Str::contains($assessment->make, 'Toyota', false));
assert(Str::contains($assessment->model, 'Prius', false));
assert(Str::contains($assessment->bodyColor, 'white', false));
assert(count($assessment->damages) > 0);
?>
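Because toData() returns a typed DamageAssessment instance, the result can be used like any other PHP object. Continuing the example above, you could print a short damage report:
<?php
// Print a simple report based on the extracted assessment.
foreach ($assessment->damages as $damage) {
    $locations = implode(', ', array_map(fn($loc) => $loc->value, $damage->locations));
    printf(
        "%s [%s] at %s: %s\n",
        $damage->element,
        $damage->severity->value,
        $locations,
        $damage->description
    );
}
echo "Summary: {$assessment->summary}\n";
?>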