Providing examples to LLM
To improve the results of LLM inference, you can provide examples of the expected output. This helps the LLM understand the context and the expected structure of the output. It is typically useful in the OutputMode::Json and OutputMode::MdJson modes, where the output
is expected to be a JSON object.
Instructor’s request() method accepts an array of examples as the examples parameter,
where each example is an instance of the Example class.
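Below is a minimal sketch of passing examples to request(). The namespaces and the request()/get() call chain are assumptions based on typical Instructor usage and may differ in your version:

```php
<?php
// Assumed namespaces - adjust to your Instructor version.
use Cognesy\Instructor\Instructor;
use Cognesy\Instructor\Extras\Example\Example;

class User {
    public string $name;
    public int $age;
}

$user = (new Instructor)->request(
    messages: "Our user Jason is 25 years old.",
    responseModel: User::class,
    examples: [
        // Each example pairs an input message with the expected output data.
        new Example(
            input: "John is 50 and works as a teacher.",
            output: ['name' => 'John', 'age' => 50],
        ),
    ],
)->get();
```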
Example class
The Example constructor has two main arguments: input and output.
The input property is a string which describes the input message, while the output
property is an array which represents the expected output.
Instructor will append the list of examples to the prompt sent to the LLM, with the output
array data rendered as JSON text.
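For illustration, here is a standalone Example instance; the exact way it is rendered into the prompt may differ:

```php
<?php
use Cognesy\Instructor\Extras\Example\Example; // namespace assumed

$example = new Example(
    input: "Ian, 27, has just joined the team.",
    output: ['name' => 'Ian', 'age' => 27],
);

// When appended to the prompt, the output array is rendered as JSON,
// roughly like:
//   INPUT:  Ian, 27, has just joined the team.
//   OUTPUT: {"name":"Ian","age":27}
```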
Modifying the example template
You can use a template string as the input for the Example class. The template string may contain placeholders, which will be replaced with the actual values during execution. Currently, the following placeholders are supported:

- {input}: replaced with the actual input message
- {output}: replaced with the actual output data
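A sketch of a custom template; the template constructor argument name is an assumption based on the description above:

```php
<?php
use Cognesy\Instructor\Extras\Example\Example; // namespace assumed

$example = new Example(
    input: "We have recently hired Ian, who is 27 years old.",
    output: ['name' => 'Ian', 'age' => 27],
    // 'template' is an assumed parameter name; {input} and {output}
    // are replaced with the input message and JSON-rendered output.
    template: "EXAMPLE INPUT:\n{input}\nEXAMPLE OUTPUT:\n{output}\n",
);
```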
Convenience factory methods
You can also create Example instances using the fromText(), fromChat() and fromData()
static helper methods. All of them accept $output as an array of the expected output data
and differ in the way the input data is provided.
Make example from text
The Example::fromText() method accepts a string as input. It is equivalent to creating
an instance of Example using the constructor.
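A sketch, assuming the same input/output parameter names as the constructor:

```php
<?php
use Cognesy\Instructor\Extras\Example\Example; // namespace assumed

// Equivalent to: new Example(input: ..., output: ...)
$example = Example::fromText(
    input: "John is 50 and works as a teacher.",
    output: ['name' => 'John', 'age' => 50],
);
```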
Make example from chat
The Example::fromChat() method accepts an array of messages, which may be useful when
you want to use a chat or a chat fragment as a demonstration of the input.
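A sketch, assuming a messages parameter that takes the usual role/content message arrays:

```php
<?php
use Cognesy\Instructor\Extras\Example\Example; // namespace assumed

$example = Example::fromChat(
    messages: [
        ['role' => 'user', 'content' => 'Who is our new hire?'],
        ['role' => 'assistant', 'content' => 'John, a 50 year old teacher.'],
    ],
    output: ['name' => 'John', 'age' => 50],
);
```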
Make example from data
The Example::fromData() method accepts any data type and uses the Json::encode() method to
convert it to a string. It can be useful for providing a complex data structure as an example
input.
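A sketch, assuming fromData() takes the input value as its first argument:

```php
<?php
use Cognesy\Instructor\Extras\Example\Example; // namespace assumed

// The input data is converted to a string with Json::encode()
// before being rendered into the prompt.
$example = Example::fromData(
    input: ['name' => 'John', 'age' => 50, 'occupation' => 'teacher'],
    output: ['name' => 'John', 'age' => 50],
);
```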