Overview
Groq is an LLM provider offering very fast inference thanks to their custom hardware. They provide several models, including Llama2, Mixtral, and Gemma. Supported modes depend on the specific model, but generally include:
- OutputMode::MdJson - fallback mode
- OutputMode::Json - recommended
- OutputMode::Tools - supported
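As a rough sketch of what the recommended JSON mode corresponds to at the API level: Groq exposes an OpenAI-compatible chat endpoint, and JSON mode maps onto its `response_format` parameter. The model name and the helper function below are illustrative assumptions, not part of this library:

```python
import json

def build_groq_json_request(prompt: str, model: str = "mixtral-8x7b-32768") -> dict:
    # Groq's chat API is OpenAI-compatible; JSON mode is requested via
    # response_format. The model name here is an assumption - pick any
    # model from Groq's current catalog.
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "Respond with valid JSON only."},
            {"role": "user", "content": prompt},
        ],
        "response_format": {"type": "json_object"},
        "temperature": 0,
    }

request = build_groq_json_request("Extract the city from: 'I live in Paris.'")
print(json.dumps(request, indent=2))
```

In MdJson mode, by contrast, no such parameter is sent; the model is merely prompted to wrap its JSON answer in a markdown code block, which is why it serves as the fallback for models without native JSON support.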