I found the problem and it’s really bad. Looking at your log, here’s the catastrophic command that was run: rm -rf tests/ patches/ plan/ ~/
Not that it’s a foolproof solution, but I’ve aliased rm to a trash command for almost ten years now to move files to the trash instead of deleting them. A trash command is built into macOS Sequoia and newer.
The one downside is that trash doesn’t support the -f or -r flags, which often trips up Claude Code. Sometimes Claude Code will switch to using rmdir instead, so I’m now aliasing that as well.
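For reference, the aliases amount to two lines in a shell profile. A minimal sketch, assuming the built-in trash on Sequoia (or a Homebrew equivalent on older systems):

```bash
# ~/.zshrc: send deletions to the Trash instead of unlinking them.
# Note: trash doesn't accept -r or -f, so `rm -rf` invocations will error.
alias rm='trash'
alias rmdir='trash'
```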
I’m not sure how much moving the home directory to the trash would have helped here, though, since the trash command does permanently delete files and directories prefixed with a period.
After seeing Wispr Flow mentioned a few times, I was curious about using speech-to-text to interact with Claude Code. The idea of paying $12/month and potentially sharing all my prompts wasn’t ideal, so I decided to see what Claude Code could help me build.
With a bit of experimentation, I settled on using sox for audio recording, since it seemed the best at silence detection, and parakeet-mlx for transcribing. I tried various improvements to silence detection, but found turning up my input volume helped the most.
With three commands in a script, I have a decent local speech-to-text solution:
```bash
#!/bin/bash
RECORDING_FILE="/tmp/record-recording.wav"
TRANSCRIPT_FILE="/tmp/record-recording.txt"

# Record at 16 kHz until ~1 second of silence, capped at 60 seconds.
rec -q "${RECORDING_FILE}" rate 16k pad 0.2 0 silence 1 0.05 1% 1 1.0 1% trim 0 60

# Transcribe to /tmp/record-recording.txt (flags from memory; check parakeet-mlx --help).
parakeet-mlx "${RECORDING_FILE}" --output-format txt --output-dir /tmp

# Print the transcript for whatever invoked the script.
cat "${TRANSCRIPT_FILE}"
```
The real script is a little more verbose, but this demonstrates the core of it. I use Hammerspoon to trigger the script and type the response for me. I also have it display a recording and transcribing status indicator in the menu bar.
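It’s also easy to test from a terminal before wiring it into Hammerspoon. A quick sketch, assuming the script is saved as record.sh (the name is made up) and prints only the transcript on stdout:

```bash
# Speak, pause for a second, and the transcript prints.
./record.sh

# Or copy it straight to the clipboard.
./record.sh | pbcopy
```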
My next step is to decide what to use for transcription on Linux so I can use it on my other machine. I might also create a second script that pipes the results through an LLM for use outside of prompts.
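As a rough sketch of that second script, assuming something like Simon Willison’s llm CLI is installed and configured (the system prompt here is just illustrative):

```bash
# Post-process raw dictation into clean text before using it elsewhere.
cat /tmp/record-recording.txt \
  | llm -s "Fix punctuation and capitalization. Return only the corrected text."
```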
The release also features the Ministral 3 series: three edge-focused models (3B, 8B, and 14B parameters) designed for a strong cost-to-performance ratio. These smaller models include multimodal and multilingual capabilities, making them suitable for edge deployments.
I’m excited to see small models continuing to advance. While I’m all in on Claude Code for code-related tasks, I enjoy using local models for simple copywriting tasks or for tasks requiring complete privacy.