Extras
Working directly with LLMs and JSON - MdJSON mode
Overview
While working with the Inference class, you can also generate JSON output from the model inference. This is useful, for example, when you need to process the response in a structured way or when you want to store the elements of the response in a database.
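A minimal sketch of this flow is shown below. It assumes the Inference class and the OutputMode enum are available under the namespaces used here, that a connection preset named 'openai' is configured, and that the response exposes a toJson() helper; adjust these names to match your version of the library.

```php
<?php
// A minimal sketch - the namespaces, method names and preset name below
// are assumptions; adjust them to your version of the library.
use Cognesy\Polyglot\LLM\Inference;
use Cognesy\Polyglot\LLM\Enums\OutputMode;

$data = (new Inference)
    ->withConnection('openai') // assumed name of a configured connection preset
    ->create(
        messages: [['role' => 'user', 'content' => 'Describe user Jason, 28 years old, as JSON.']],
        mode: OutputMode::MdJson, // JSON emulation mode covered in the example below
    )
    ->toJson(); // assumed helper decoding the model response into an associative array

// The decoded array can be processed in a structured way or stored in a database.
echo ($data['name'] ?? 'unknown') . PHP_EOL;
```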
Example
In this example we will use the MdJson emulation mode, which tries to force the model to generate JSON output by asking it to respond with a JSON object inside a Markdown code block. This is useful for models which do not support JSON output directly. We will also provide an example of the expected JSON output in the prompt to guide the model towards generating the correct response, as illustrated in the sketch below.
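The sketch below extends the previous one: the prompt now contains an example of the expected JSON structure, and OutputMode::MdJson asks the model to reply with a JSON object wrapped in a Markdown code block. The prompt text and the example user data are illustrative, and the class, enum, and method names remain assumptions about the library's API.

```php
<?php
// Same sketch as above, now with an example of the expected JSON structure
// embedded in the prompt; all names remain assumptions about the API.
use Cognesy\Polyglot\LLM\Inference;
use Cognesy\Polyglot\LLM\Enums\OutputMode;

// Example of the expected JSON output included in the prompt to guide the model.
$prompt = 'Jason is 28 years old. Respond with the user data as JSON. '
    . 'Example of the expected JSON output: {"name": "Mary", "age": 37}';

$data = (new Inference)
    ->withConnection('openai') // assumed connection preset name
    ->create(
        messages: [['role' => 'user', 'content' => $prompt]],
        mode: OutputMode::MdJson, // ask for a JSON object inside a Markdown code block
    )
    ->toJson(); // assumed helper decoding the Markdown-wrapped JSON into an array

var_dump($data); // e.g. ['name' => 'Jason', 'age' => 28]
```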