Tag: ai

We are the art by Brandon Sanderson

Brandon Sanderson’s Keynote speech

Brandon Sanderson offers his view on why AI art is not art. I highly recommend watching the whole video; it's very entertaining, and he makes a lot of good points from the perspective of those who make art.

I’m more of a consumer than a creator (although I do play guitar, which scratches the itch), and my view on art and AI is that what makes art valuable to me is the context in which it is made. It’s why I can hear the same song played by two different musicians and one version moves me more than the other.

My example of this is listening to Townes Van Zandt play covers of songs; they hit different. Knowing who he was as a person and the struggles he had throughout his life really changes how you experience a song. The pain is something you can hear in a raspy voice that came from years of smoking and hard living.

To echo, but slightly modify, Brandon’s point here: no matter how good AI gets at making art, it won’t have the same impact on me as a consumer, because it’s completely devoid of any context that would give it meaning.
# / 2026 / 02 / 02

Get notifications from Claude Code on Windows with WSL

My ~/.claude/settings.json with full solution

I've been looking for a way to get notified when Claude Code needs my input or is finished. Big shout out to u/Ok-Engineering2612 on Reddit for this post: WSL Toast Notifications with Hooks in Claude Code : r/ClaudeAI. I had been trying to do the same thing with BurntToast, but I had forgotten how WSL interops with Windows.

The settings from the Reddit thread did need a little tweaking ($PAYLOAD is no longer supported; Claude Code now sends the JSON structure via stdin). Here's my change to the command:

 "command": "input=$(cat) && powershell.exe -NoProfile -Command \"Import-Module BurntToast; New-BurntToastNotification -Text 'Claude Code Notification', '$(echo \"$input\" | jq -r '.message')'\""
Here is the full documentation for hooks: Hooks reference
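For context, that command string lives under a Notification hook in ~/.claude/settings.json. Here is a minimal sketch of the surrounding structure, based on my reading of the hooks reference (double-check the event name and matcher fields against the docs):

```json
{
  "hooks": {
    "Notification": [
      {
        "matcher": "",
        "hooks": [
          {
            "type": "command",
            "command": "input=$(cat) && powershell.exe -NoProfile -Command \"Import-Module BurntToast; New-BurntToastNotification -Text 'Claude Code Notification', '$(echo \"$input\" | jq -r '.message')'\""
          }
        ]
      }
    ]
  }
}
```

The `input=$(cat)` part is what reads the JSON payload from stdin, and jq pulls out the `.message` field to display in the toast.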
# / 2026 / 01 / 09

LLMs - unexpected side effects

Simon Willison on helping people write code again

+1 to what Simon is saying. I’ve been able to do a lot more on side projects now, especially as a busy parent. I will kick off a prompt in Claude Code (sometimes on my phone with Claude Code on the web) and then go back to playing with my daughter, or cooking, or doing dishes. I can prototype ideas I have and look at the results whenever I have time. It’s really made me fall in love with programming again! I just wish I could find a better workflow for it at work. I’m struggling to get out of the “small toy projects” phase and into using it on larger projects. Probably less of a me problem and more of a tech / organization problem.
# / 2026 / 01 / 04

Set default pipe/redirect encoding in Python

[via] Changing default encoding of Python? - StackOverflow

I ran into an issue using llm today where I was unable to save a response to a file using a pipe:

llm logs -n 1 | Out-File response.txt
This would give me the error "UnicodeEncodeError: 'charmap' codec can't encode character '\u2192' in position 2831: character maps to <undefined>"

If you set the "PYTHONIOENCODING" environment variable to "utf8", it fixes the issue. When output is piped or redirected on Windows, Python falls back to the legacy system code page (the 'charmap' codec in the error above), which can't represent characters like '→' (U+2192). Since the last response I got back from the model contained one of those characters, the error was thrown.
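You can reproduce the failure from any shell by forcing a narrow encoding (bash shown here; the PowerShell equivalent just sets $env:PYTHONIOENCODING the same way):

```shell
# Forcing a narrow encoding reproduces the error from the post
# (|| true keeps the pipeline happy since python exits non-zero)
(PYTHONIOENCODING=ascii python3 -c 'print("\u2192")' 2>&1 || true) | grep -o UnicodeEncodeError

# With utf8, the same character prints fine
PYTHONIOENCODING=utf8 python3 -c 'print("\u2192")'   # prints →
```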

So now, in my PowerShell profile, I've added a line to set the default to utf8, which fixes the issue.

$env:PYTHONIOENCODING = 'utf8'
# / 2025 / 04 / 21

LLM templates

david-jarman/llm-templates: LLM templates to share

Simon Willison's LLM tool now supports sharing and re-using prompt templates. This means you can create YAML prompt templates on GitHub and then consume them from anywhere using the syntax llm -t gh:{username}/{template-name}.

I have created my own repo where I will be uploading the prompt templates I use. The template I've been getting the most value out of recently is "update-docs". I use this prompt/model combination to update documentation in my codebases after I've refactored code or added new functionality. The setup is that I use "files-to-prompt" to build the context of the codebase, including samples, then add the single markdown document I want updated at the end. I've found that asking the AI to do too many things at once ends up with really bad results. I've also been playing around with different models. I haven't come to a conclusion on which is the absolute best for updating documentation, but so far o4-mini has given me better vibes than GPT-4.1.

Here is the one-liner command I use to update each document:

files-to-prompt -c -e cs -e md -e csproj --ignore "bin*" --ignore "obj*" /path/to/code /path/to/samples /path/to/doc.md | llm -t gh:david-jarman/update-docs
You can override the model in the llm call using "-m <model>"

llm -t gh:david-jarman/update-docs -m gemini-2.5-pro-exp-03-25
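For anyone who hasn't looked at the template format: a template is just a YAML file in the repo. As a purely hypothetical sketch (the wording below is illustrative, not the contents of my actual update-docs template; the system key comes from llm's template format):

```yaml
# Hypothetical sketch of an update-docs-style template, not the real file
system: >
  The input is a codebase, some samples, and finally one markdown
  document. Update only that final document so it matches the current
  code. Keep its tone and structure; change only what is out of date.
```

Because the piped files-to-prompt output becomes the prompt, the template only needs to carry the system instructions.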
The next thing I'd like to tackle is creating a fragment provider for this scenario so I don't have to add so many paths to files-to-prompt. It's a bit clunky and I think it would be more elegant to just have a fragment provider that knows about my codebase structure and can bring in the samples and code without me needing to specify it each time.
# / 2025 / 04 / 18

Security issues with MCP

The “S” in MCP Stands for Security

Great article that outlines some of the attack vectors of the Model Context Protocol. I’ve been playing around with it recently in Claude Code and by attempting to integrate it into the llm CLI by simonw.

As with any dependency, it’s good to vet the source before using it. The same is true for MCP servers, which are usually Docker containers, npm packages, or Python tools.
# / 2025 / 04 / 06

Pokemon Red RL

Train RL agents to play Pokemon Red - GitHub

I'm very late to the trend of AI playing the Pokemon Game Boy games, but I just started playing Pokemon Red myself on the iOS Delta emulator and have been having lots of fun. To be clear, this is my first time doing anything with Pokemon. It wasn't something I was into as a child, but for some reason I'm discovering it as an adult and enjoying it.

I just wanted to make a quick post to show how I got the PokemonRedExperiments project running on my MacBook Pro M4 using uv.

Full steps:

# Clone the repo
git clone https://github.com/PWhiddy/PokemonRedExperiments.git

# Install ffmpeg
brew install ffmpeg

# Copy ROM to git root path
cd PokemonRedExperiments
cp /path/to/pokemon-red.gb PokemonRed.gb
# Validate the ROM. Should produce ea9bcae617fdf159b045185467ae58b2e4a48b9a
shasum ./PokemonRed.gb

# Set up python environment
cd baselines
uv venv --python 3.10
uv pip install -r requirements.txt

# Start the pre-trained RL agent
uv run ./run_pretrained_interactive.py
I first tried using Python 3.12, as the README suggested Python 3.10+, but I found that there are package dependency conflicts with 3.12, so I changed my uv command to use 3.10 and everything worked. This is why I love uv: I can very easily try out other versions of Python and not worry about messing up other projects.

Running Pokemon Red with RL
# / 2025 / 03 / 25

Claude Code Initial Impressions

Claude Code Announcement

Anthropic announced a new hybrid reasoning model today. It's a great idea to have a single model for both reasoning and quick responses.

What I'm more interested in is their new Claude Code tool. It's an interactive CLI that is similar to GitHub Copilot or Cursor, but only runs in your terminal as of now. Here is the link for setting it up: https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview

I was hoping that this tool would just use my existing Claude plan, but no, of course you actually pay for the tokens it uses. I'm sure this was a very conscious decision, as this tool uses A LOT of tokens right now. I mean, it's breathtaking.

The first thing I did was load it up on my link blog codebase and run the /init command to generate a readme file for the codebase. I immediately ran the /cost command to see how much that operation cost. Thirty cents. That may not sound like much, but for as small as my codebase is, I was expecting it to be only a few cents.

I then gave it a very specific task: add validation to my admin post authoring form. I gave it a fair bit of instruction, as the docs recommend treating the tool like a software engineer you would delegate a task to, so I gave it hints on how to find validation rules and all that. I then sent it off. It ran for something like 2 minutes making the change, prompting me for permission to perform tool actions (e.g. run some bash commands, run the build, etc). After a total of 10 minutes of use, I was up to $1.50 in spend, the code did not build, and I realized that the tool call to build the code was broken. Edit: it turns out PowerShell is not officially supported yet. You must use bash or zsh to launch claude.

I'm still excited about this tool and will keep playing around with it. I'll probably have to reload my anthropic wallet with more credits soon as it is expensive, but so far it seems like a really cool concept, and I hope they keep improving it and driving down the cost.
# / 2025 / 02 / 24