
Commit 000e984

--extract support for templates, closes #681

1 parent: 67d4a99

4 files changed: +31 −7 lines


docs/templates.md (+17 −4)

@@ -26,6 +26,11 @@ You can also save default parameters:
 llm --system 'Summarize this text in the voice of $voice' \
   --model gpt-4 -p voice GlaDOS --save summarize
 ```
+If you add `--extract` the setting to {ref}`extract the first fenced code block <usage-extract-fenced-code>` will be persisted in the template.
+```bash
+llm --system 'write a Python function' --extract --save python-function
+llm -t python-function 'reverse a string'
+```
 ## Using a template
 
 You can execute a named template using the `-t/--template` option:
@@ -100,7 +105,7 @@ curl -s 'https://til.simonwillison.net/macos/imovie-slides-and-audio' | \
 Output:
 > In a fantastical steampunk world, Simon Willison decided to merge an old MP3 recording with slides from the talk using iMovie. After exporting the slides as images and importing them into iMovie, he had to disable the default Ken Burns effect using the "Crop" tool. Then, Simon manually synchronized the audio by adjusting the duration of each image. Finally, he published the masterpiece to YouTube, with the whimsical magic of steampunk-infused illustrations leaving his viewers in awe.
 
-## System templates
+### System templates
 
 When working with models that support system prompts (such as `gpt-3.5-turbo` and `gpt-4`) you can set a system prompt using a `system:` key like so:
 
@@ -116,7 +121,7 @@ system: You speak like an excitable Victorian adventurer
 prompt: 'Summarize this: $input'
 ```
 
-## Additional template variables
+### Additional template variables
 
 Templates that work against the user's normal input (content that is either piped to the tool via standard input or passed as a command-line argument) use just the `$input` variable.
 
@@ -157,7 +162,7 @@ I got this:
 > My previous test subject seemed to have learned something new about iMovie. They exported keynote slides as individual images [...] Quite impressive for a human.
 
 (prompt-default-parameters)=
-## Specifying default parameters
+### Specifying default parameters
 
 You can also specify default values for parameters, using a `defaults:` key.
 
@@ -185,7 +190,15 @@ I got this:
 
 > Text, summarize in Yoda's voice, I will: "Hmm, young padawan. Summary of this text, you seek. Hmmm. ...
 
-## Setting a default model for a template
+### Configuring code extraction
+
+To configure the {ref}`extract first fenced code block <usage-extract-fenced-code>` setting for the template, add this:
+
+```yaml
+extract: true
+```
+
+### Setting a default model for a template
 
 Templates executed using `llm -t template-name` will execute using the default model that the user has configured for the tool - or `gpt-3.5-turbo` if they have not configured their own default.
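The documentation above leans on the {ref}`usage-extract-fenced-code` behavior without showing it. As a rough sketch (the function name and regex here are illustrative assumptions, not the project's actual implementation), pulling the first fenced code block out of a model response might look like:

```python
import re


def extract_first_fenced_code_block(text):
    # Find the first ``` fence (with an optional language hint) and
    # capture everything up to the matching closing fence.
    match = re.search(r"```[\w+-]*\n(.*?)```", text, re.DOTALL)
    return match.group(1) if match else None


response = (
    "Sure, here you go:\n"
    "```python\n"
    "def reverse(s):\n"
    "    return s[::-1]\n"
    "```\n"
    "Hope that helps!"
)
print(extract_first_fenced_code_block(response))
```

With `extract: true` saved in the template, something along these lines would run on every response from `llm -t python-function`, so only the code reaches stdout.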

llm/cli.py (+6 −3)

@@ -262,9 +262,6 @@ def prompt(
 
     model_aliases = get_model_aliases()
 
-    if extract:
-        no_stream = True
-
     def read_prompt():
         nonlocal prompt
 
@@ -319,6 +316,8 @@ def read_prompt():
             to_save["system"] = system
         if param:
            to_save["defaults"] = dict(param)
+        if extract:
+            to_save["extract"] = True
         path.write_text(
             yaml.dump(
                 to_save,
@@ -335,6 +334,7 @@ def read_prompt():
         if system:
             raise click.ClickException("Cannot use -t/--template and --system together")
         template_obj = load_template(template)
+        extract = template_obj.extract
         prompt = read_prompt()
         try:
             prompt, system = template_obj.evaluate(prompt, params)
@@ -343,6 +343,9 @@ def read_prompt():
         if model_id is None and template_obj.model:
             model_id = template_obj.model
 
+    if extract:
+        no_stream = True
+
     conversation = None
     if conversation_id or _continue:
         # Load the conversation - loads most recent if no ID provided
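The `to_save` changes above only write an `extract` key when the flag was passed, so saved templates stay minimal; the `if extract: no_stream = True` block also moves below template loading so a template's own `extract` setting can disable streaming. A standalone sketch of the save logic (`build_template_dict` is a hypothetical helper, not a function in the codebase):

```python
import yaml  # PyYAML, the same library the CLI uses to persist templates


def build_template_dict(prompt=None, system=None, param=None, extract=False):
    # Mirrors the diff: each key is written only when a value was provided,
    # and extract is persisted as a literal True rather than a default False.
    to_save = {}
    if prompt:
        to_save["prompt"] = prompt
    if system:
        to_save["system"] = system
    if param:
        to_save["defaults"] = dict(param)
    if extract:
        to_save["extract"] = True
    return to_save


print(yaml.dump(build_template_dict(system="write python", extract=True)))
```

Because unset options are simply omitted, a template saved without `--extract` round-trips unchanged under the model's `extra = "forbid"` validation.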

llm/templates.py (+2 −0)

@@ -9,6 +9,8 @@ class Template(BaseModel):
     system: Optional[str] = None
     model: Optional[str] = None
     defaults: Optional[Dict[str, Any]] = None
+    # Should first fenced code block be extracted?
+    extract: Optional[bool] = None
 
     class Config:
         extra = "forbid"
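Since `extract` is just another optional field on the pydantic model, templates loaded from YAML pick it up automatically, while `extra = "forbid"` still rejects misspelled keys. A self-contained sketch (the field list is abbreviated from the real model, and the loading code is illustrative):

```python
from typing import Any, Dict, Optional

import yaml
from pydantic import BaseModel


class Template(BaseModel):
    prompt: Optional[str] = None
    system: Optional[str] = None
    model: Optional[str] = None
    defaults: Optional[Dict[str, Any]] = None
    # Should first fenced code block be extracted?
    extract: Optional[bool] = None

    class Config:
        extra = "forbid"  # unknown keys in a template file raise an error


# Parse a template file's YAML and validate it against the model.
loaded = yaml.safe_load("system: write a Python function\nextract: true")
template = Template(**loaded)
print(template.extract)  # True
```

This is what lets `extract = template_obj.extract` in `llm/cli.py` read the persisted setting back out when the template is used.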

tests/test_templates.py (+6 −0)

@@ -91,6 +91,12 @@ def test_templates_list(templates_path, args):
             {"prompt": "Say hello as $name", "defaults": {"name": "default-name"}},
             None,
         ),
+        # -x/--extract should be persisted:
+        (
+            ["--system", "write python", "--extract"],
+            {"system": "write python", "extract": True},
+            None,
+        ),
     ),
 )
 def test_templates_prompt_save(templates_path, args, expected_prompt, expected_error):
