
0.23

@simonw released this 28 Feb 16:57

Support for schemas: getting supported models to output JSON that matches a specified JSON schema. See also Structured data extraction from unstructured content using LLM schemas for background on this feature. #776

  • New llm prompt --schema '{JSON schema goes here}' option for specifying a schema that should be used for the output from the model. The schemas documentation has more details and a tutorial.
  • Schemas can also be defined using a concise schema specification, for example llm prompt --schema 'name, bio, age int'. #790
  • Schemas can also be specified by passing a filename and through several other methods. #780
  • New llm schemas family of commands: llm schemas list, llm schemas show, and llm schemas dsl for debugging the new concise schema language. #781
  • Schemas can now be saved to templates using llm --schema X --save template-name or through modifying the template YAML. #778
  • The llm logs command now has new options for extracting data collected using schemas: --data, --data-key, --data-array, --data-ids. #782
  • New llm logs --id-gt X and --id-gte X options. #801
  • New llm models --schemas option for listing models that support schemas. #797
  • model.prompt(..., schema={...}) parameter for specifying a schema from Python. This accepts either a dictionary JSON schema definition or a Pydantic BaseModel subclass, see schemas in the Python API docs.
  • The default OpenAI plugin now enables schemas across all supported models. Run llm models --schemas for a list of these.
  • The llm-anthropic and llm-gemini plugins have been upgraded to add schema support for those models. Here's documentation on how to add schema support to a model plugin.
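From Python, the schema parameter accepts either a plain JSON schema dictionary or a Pydantic BaseModel subclass. A minimal sketch of the dictionary form, roughly equivalent to the concise spec 'name, bio, age int'; the model name and prompt text here are illustrative placeholders, and the live call (which needs the llm package and an API key) is shown commented out:

```python
import json

# JSON schema dictionary, roughly equivalent to the concise
# spec 'name, bio, age int':
dog_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "bio": {"type": "string"},
        "age": {"type": "integer"},
    },
    "required": ["name", "bio", "age"],
}

# Live call (commented out; requires `pip install llm` and a
# configured key -- model name and prompt are placeholders):
#
#   import llm
#   model = llm.get_model("gpt-4o-mini")
#   response = model.prompt("Invent a dog", schema=dog_schema)
#   dog = json.loads(response.text())

print(json.dumps(sorted(dog_schema["properties"])))
```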

Other smaller changes:

  • GPT-4.5 preview is now a supported model: llm -m gpt-4.5 'a joke about a pelican and a wolf' #795
  • The prompt string is now optional when calling model.prompt() from the Python API, so model.prompt(attachments=llm.Attachment(url=url)) now works. #784
  • extra-openai-models.yaml now supports a reasoning: true option. Thanks, Kasper Primdal Lauritzen. #766
  • LLM now depends on Pydantic v2 or higher. Pydantic v1 is no longer supported. #520