Tag: llm

Set default pipe/redirect encoding in Python

[via] Changing default encoding of Python? - Stack Overflow

I ran into an issue using llm today where I was unable to save a response to a file using a pipe:

llm logs -n 1 | Out-File response.txt
This would give me the error "UnicodeEncodeError: 'charmap' codec can't encode character '\u2192' in position 2831: character maps to <undefined>".

Setting the "PYTHONIOENCODING" environment variable to "utf8" fixes the issue. On Windows, when Python's stdout is redirected to a pipe or file, it defaults to the legacy system code page (the 'charmap' codec, typically cp1252) rather than UTF-8. The last response I got back from the model contained a '→' (U+2192) character, which that code page can't represent, so the write failed.
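A quick way to see which encoding Python picks for a redirected stdout is to print it to stderr, so the message still reaches the console while stdout is piped:

python -c "import sys; print(sys.stdout.encoding, file=sys.stderr)" | Out-Null

Without PYTHONIOENCODING set, this reports a legacy code page like cp1252; with it set, it reports utf-8.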

So now, in my PowerShell profile, I've added a line that sets the default to utf8:

$env:PYTHONIOENCODING = 'utf8'
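If you're on Python 3.7 or newer, enabling UTF-8 mode is a slightly broader alternative: it switches the filesystem encoding to UTF-8 as well, not just stdio.

$env:PYTHONUTF8 = '1'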
# / 2025 / 04 / 21

Update all llm plugins

Quick one-liner to update all llm plugins using PowerShell:

llm plugins | ConvertFrom-Json | % { llm install -U $_.name }
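The same loop spelled out, if you want to see each plugin as it updates (this relies on llm plugins emitting a JSON array of objects with a "name" field, which is what the one-liner above depends on too):

$plugins = llm plugins | ConvertFrom-Json
foreach ($plugin in $plugins) {
    # llm install passes through to pip install, so -U upgrades the package
    Write-Host "Updating $($plugin.name)..."
    llm install -U $plugin.name
}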
# / 2025 / 04 / 21

LLM templates

david-jarman/llm-templates: LLM templates to share

Simon Willison's LLM tool now supports sharing and re-using prompt templates. This means you can create YAML prompt templates on GitHub and then consume them from anywhere using the syntax llm -t gh:{username}/{template-name}.

I have created my own repo where I will be uploading the prompt templates I use. The one I've been getting the most value out of recently is "update-docs". I use this prompt/model combination to update documentation in my codebases after I've refactored code or added new functionality.

The setup: I use "files-to-prompt" to build the context of the codebase, including samples, then add the single markdown document I want updated at the end. I've found that asking the AI to do too many things at once ends up with really bad results. I've also been playing around with different models. I haven't come to a conclusion on which is the absolute best for updating documentation, but so far o4-mini has given me better vibes than GPT-4.1.
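A template is just a YAML file with optional system, prompt, and model keys. As a rough sketch of what something like update-docs could look like (the real gh:david-jarman/update-docs likely differs):

# Sketch only: the actual update-docs template may differ
system: >
  You are a technical writer. Treat the provided codebase and samples as
  ground truth, and rewrite the markdown document at the end of the input
  so it matches the current code. Update only that document.
prompt: $input
model: o4-mini

The $input placeholder is where llm substitutes whatever is piped in.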

Here is the one-liner command I use to update each document:

files-to-prompt -c -e cs -e md -e csproj --ignore "bin*" --ignore "obj*" /path/to/code /path/to/samples /path/to/doc.md | llm -t gh:david-jarman/update-docs
You can override the model in the llm call using "-m <model>":

llm -t gh:david-jarman/update-docs -m gemini-2.5-pro-exp-03-25
The next thing I'd like to tackle is creating a fragment provider for this scenario so I don't have to pass so many paths to files-to-prompt. That's a bit clunky, and I think it would be more elegant to have a fragment provider that knows about my codebase structure and can bring in the samples and code without me needing to specify them each time.
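As a starting point, llm 0.24 added a register_fragment_loaders plugin hook, so a loader that expands a single argument into the code, samples, and target doc could be a small Python plugin along these lines (the myrepo prefix and directory layout here are hypothetical):

import pathlib

import llm


@llm.hookimpl
def register_fragment_loaders(register):
    # Makes "llm -f myrepo:docs/some-doc.md" resolve through myrepo_loader
    register("myrepo", myrepo_loader)


def myrepo_loader(argument: str) -> list[llm.Fragment]:
    # Hypothetical layout: code and samples live in fixed directories,
    # and the argument names the one doc to update
    root = pathlib.Path.home() / "src" / "myrepo"
    paths = sorted(root.glob("src/**/*.cs")) + sorted(root.glob("samples/**/*.cs"))
    fragments = [llm.Fragment(p.read_text(), str(p)) for p in paths]
    doc = root / argument
    fragments.append(llm.Fragment(doc.read_text(), str(doc)))
    return fragments

With that installed, the files-to-prompt step would collapse to something like llm -f myrepo:docs/getting-started.md -t gh:david-jarman/update-docs.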
# / 2025 / 04 / 18