llm prompt --schema X option and model.prompt(..., schema=) parameter (#777)
Refs #776
* Implemented new llm prompt --schema and model.prompt(schema=)
* Log schema to responses.schema_id and schemas table
* Include schema in llm logs Markdown output
* Test for schema=pydantic_model
* Initial --schema CLI documentation
* Python docs for schema=
* Advanced plugin docs on schemas
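The bullets above say that `model.prompt(schema=)` accepts a Pydantic model class while plugins always receive a plain dictionary. A hedged sketch of how that normalization could work — the `resolve_schema` helper name here is hypothetical, not taken from the llm codebase; Pydantic v2 model classes expose `model_json_schema()`:

```python
def resolve_schema(schema):
    """Normalize a schema= argument to a plain dict (hypothetical helper)."""
    if schema is None or isinstance(schema, dict):
        return schema
    # Pydantic v2 model classes provide model_json_schema()
    if hasattr(schema, "model_json_schema"):
        return schema.model_json_schema()
    raise TypeError("schema must be a dict or a Pydantic model class")


# A stand-in with the same classmethod Pydantic v2 models expose
class FakeDog:
    @classmethod
    def model_json_schema(cls):
        return {"type": "object", "properties": {"name": {"type": "string"}}}


assert resolve_schema({"type": "object"}) == {"type": "object"}
assert resolve_schema(FakeDog)["type"] == "object"
```

This mirrors the documented behaviour that `prompt.schema` is always a dictionary by the time it reaches a plugin's `.execute()` method.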
docs/changelog.md (+1 −1)
````diff
@@ -411,7 +411,7 @@ There's also a new {ref}`llm.Collection <embeddings-python-collections>` class f
 - The output format for `llm logs` has changed. Previously it was JSON - it's now a much more readable Markdown format suitable for pasting into other documents. [#160](https://github.com/simonw/llm/issues/160)
 - The new `llm logs --json` option can be used to get the old JSON format.
 - Pass `llm logs --conversation ID` or `--cid ID` to see the full logs for a specific conversation.
-- You can now combine piped input and a prompt in a single command: `cat script.py | llm 'explain this code'`. This works even for models that do not support {ref}`system prompts <system-prompts>`. [#153](https://github.com/simonw/llm/issues/153)
+- You can now combine piped input and a prompt in a single command: `cat script.py | llm 'explain this code'`. This works even for models that do not support {ref}`system prompts <usage-system-prompts>`. [#153](https://github.com/simonw/llm/issues/153)
 - Additional {ref}`openai-compatible-models` can now be configured with custom HTTP headers. This enables platforms such as [openrouter.ai](https://openrouter.ai/) to be used with LLM, which can provide Claude access even without an Anthropic API key.
 - Keys set in `keys.json` are now used in preference to environment variables. [#158](https://github.com/simonw/llm/issues/158)
 - The documentation now includes a {ref}`plugin directory <plugin-directory>` listing all available plugins for LLM. [#173](https://github.com/simonw/llm/issues/173)
````
docs/plugins/advanced-model-plugins.md (+15)
````diff
@@ -90,6 +90,21 @@ def register_models(register):
     )
 ```
 
+(advanced-model-plugins-schemas)=
+
+## Supporting schemas
+
+If your model supports {ref}`structured output <python-api-schemas>` against a defined JSON schema you can implement support by first adding `supports_schema = True` to the class:
+
+```python
+class MyModel(llm.KeyModel):
+    ...
+    supports_schema = True
+```
+
+And then adding code to your `.execute()` method that checks for `prompt.schema` and, if it is present, uses that to prompt the model. `prompt.schema` will always be a Python dictionary, even if the user passed in a Pydantic model class.
+
+Check the [llm-gemini](https://github.com/simonw/llm-gemini) and [llm-anthropic](https://github.com/simonw/llm-anthropic) plugins for examples of this pattern in action.
````
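The pattern the new plugin docs describe — check `prompt.schema` in `.execute()` and adjust the outgoing request — can be sketched with stand-in classes. The `Prompt`/`MyModel` shapes and request field names below are illustrative assumptions, not the real llm plugin API or any provider's wire format:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Prompt:
    text: str
    schema: Optional[dict] = None  # always a plain dict by this point, per the docs


class MyModel:
    supports_schema = True

    def build_request_body(self, prompt: Prompt) -> dict:
        """Hypothetical helper a plugin's .execute() might call."""
        body = {"input": prompt.text}
        if prompt.schema:
            # Attach the schema in whatever shape the backing API expects;
            # this OpenAI-style field name is only an example
            body["response_format"] = {
                "type": "json_schema",
                "json_schema": {"schema": prompt.schema},
            }
        return body


body = MyModel().build_request_body(Prompt("invent a dog", {"type": "object"}))
assert "response_format" in body
```

When no schema is supplied the request body is left untouched, so models without `supports_schema` behave exactly as before.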
docs/usage.md

````diff
 See {ref}`prompt templates <prompt-templates>` for more.
 
-(conversation)=
+(usage-schemas)=
+### Schemas
+
+Some models include the ability to return JSON that matches a provided [JSON schema](https://json-schema.org/). Models from OpenAI, Anthropic and Google Gemini all include this capability.
+
+LLM has alpha functionality for specifying a schema to use for the response to a prompt.
+
+Create the schema as a JSON string, then pass that to the `--schema` option. For example:
+
+```bash
+llm --schema '{
+  "type": "object",
+  "properties": {
+    "dogs": {
+      "type": "array",
+      "items": {
+        "type": "object",
+        "properties": {
+          "name": {
+            "type": "string"
+          },
+          "bio": {
+            "type": "string"
+          }
+        }
+      }
+    }
+  }
+}' -m gpt-4o-mini 'invent two dogs'
+```
+The JSON returned from the model should match that schema.
+
+Be warned that different models may support different dialects of the JSON schema specification.
+
+(usage-conversation)=
 ### Continuing a conversation
 
 By default, the tool will start a new conversation each time you run it.
````
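The `--schema` value in the dogs example above is ordinary JSON, so it can be sanity-checked with the standard library before being passed on the command line. This is just a quick local check, not part of the documented workflow:

```python
import json

# The same schema string used with --schema in the usage docs above
schema = json.loads("""
{
  "type": "object",
  "properties": {
    "dogs": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "name": {"type": "string"},
          "bio": {"type": "string"}
        }
      }
    }
  }
}
""")

# Each item in the "dogs" array should be an object with name and bio strings
item_props = schema["properties"]["dogs"]["items"]["properties"]
assert set(item_props) == {"name", "bio"}
```

If the string fails to parse here it will also fail as a `--schema` value, which makes typos cheap to catch.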