Category: Developer tooling
Vendor-agnostic AI tooling — portable skills across coding agents
One canonical source folder. Symlinks from each tool's expected directory back to that source. The same skill files work in Cursor, Claude Code, and any other AI coding agent without forks. Here's the layout, the sync logic, and why locking your skill library into one tool is a mistake worth avoiding.
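To make the pattern concrete, here is a minimal sketch of the sync logic in Python. The canonical `skills/` folder and the per-tool target directories (`.cursor/skills`, `.claude/skills`) are assumptions for illustration, not necessarily the post's actual paths.

```python
#!/usr/bin/env python3
"""Minimal sketch: link each tool's expected skills directory to one
canonical source folder. Paths are illustrative assumptions."""
from pathlib import Path

SOURCE = Path("skills").resolve()   # the one canonical source folder
TOOL_DIRS = [                       # where each tool expects to find skills
    Path(".cursor/skills"),
    Path(".claude/skills"),
]

for target in TOOL_DIRS:
    target.parent.mkdir(parents=True, exist_ok=True)
    if target.is_symlink() or target.exists():
        continue  # already linked, or a real directory we shouldn't clobber
    target.symlink_to(SOURCE, target_is_directory=True)
    print(f"{target} -> {SOURCE}")
```

Run it once per checkout; every tool then reads the same files, so an edit in `skills/` shows up everywhere at once.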
When your AI writes 200 lines and it could be 50
The single most common failure mode I see with AI-generated code is overengineering. Not bad code, just too much of it. This post is a tour of the specific shapes overengineering takes, with before/after diffs from real sessions and the one-line mental check that fixes most of it.
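An invented before/after in the spirit of those diffs (not taken from the post's sessions): the same behavior at a fraction of the size.

```python
import json

# Before: a caching "config system" that only ever serves one key.
class ConfigLoader:
    def __init__(self, path="config.json"):
        self.path = path
        self._cache = None

    def _load(self):
        if self._cache is None:
            with open(self.path) as f:
                self._cache = json.load(f)
        return self._cache

    def get_timeout(self):
        return self._load().get("timeout", 30)

# After: the same behavior with the machinery deleted.
def get_timeout(path="config.json"):
    with open(path) as f:
        return json.load(f).get("timeout", 30)
```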
Behavioral guardrails for AI coding assistants
There are four behaviors I want any AI coding agent to follow, regardless of what model is behind it. Think before coding. Keep it simple. Make surgical changes. Define what done means. Each one came from a real mistake I watched the agent make. Here's why I encoded them as a rule file.
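A hedged sketch of how those four behaviors might read as a rule file; the wording is invented, only the four behaviors come from the post.

```markdown
# Behavioral guardrails (illustrative sketch, not the post's actual file)

- **Think before coding.** State a two-sentence plan before editing anything.
- **Keep it simple.** Prefer the smallest change that solves the problem; no speculative abstractions.
- **Make surgical changes.** Touch only the files and lines the task requires.
- **Define what done means.** Name the check that proves the task is complete before you start.
```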
Org-wide standards as code
One standards repo, one wrapper that turns those standards into AI-agent rules, skills, and a security-review subagent. Add the wrapper as a workspace folder next to any project and the agent automatically follows the policy. Here's the pattern, the folder layout, and why this beats per-project copies.
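A hypothetical layout for the pattern (all names invented for illustration):

```text
standards/                  # the org-wide standards repo: the source of truth
  security.md
  style.md
standards-wrapper/          # the wrapper: drop it into any workspace
  rules/                    # standards rendered as agent rules
  skills/                   # standards rendered as skills
  agents/
    security-review.md      # the read-only security-review subagent
my-project/                 # any project, left untouched
```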
A read-only AI security reviewer
A subagent that reads your security rules, looks at the code you just wrote, and tells you what's wrong, ranked by severity and backed by citations. This is the actual definition file, the rule set it consults, the output it produces, and the small choices that make it useful instead of a noise generator.
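A hedged sketch of what such a definition file can look like. The frontmatter fields (`name`, `description`, `tools`) follow Claude Code's subagent format; the path, prompt wording, and severity scheme are invented.

```markdown
---
name: security-reviewer
description: Read-only review of recent changes against the security rule set.
tools: Read, Grep, Glob   # read-only on purpose: no Edit, Write, or Bash
---
Read the rules in rules/security.md (illustrative path), then review the
changes you are pointed at. Report findings grouped by severity
(critical / high / medium / low). For every finding, cite the rule it
violates and the file and line where it occurs. If nothing qualifies, say
so briefly rather than padding the report with low-value findings.
```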
Subagents — specialized AI that doesn't pollute your context
Asking your main AI agent to do a heavy analysis task burns context you can never get back. Subagents are how you spawn a fresh, scoped, sometimes read-only AI instance to do that work — and hand back only the answer, not the noise. Here's the model, and when to reach for it.
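The contract, reduced to an illustrative stub (not a real API):

```python
def run_subagent(task: str, scope: list[str]) -> str:
    """Illustrative stub. A subagent gets a fresh, empty context, is scoped
    to `task` and `scope`, and may read thousands of lines while working.
    Only the returned answer string crosses back into the parent's context;
    everything it read stays behind."""
    ...

# The parent pays a few hundred tokens for the answer, not the whole
# transcript of what the subagent read to produce it.
summary = run_subagent("audit error handling", ["src/**/*.py"])
```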
Building debug skills that inspect live systems
A debug skill turns "let me crack open the database and see what the agent thought it was doing" into a thirty-second question to your AI agent. This is a walkthrough of one, start to finish, with the actual layout I use, the gotchas, and the small decisions that determine whether the skill is genuinely useful or just visible noise.
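The shape of one, as a hedged sketch (assuming a SKILL.md-style layout like Claude Code's Agent Skills; the skill name and files are invented):

```text
skills/
  debug-inspect-agent-db/
    SKILL.md       # when to reach for this skill, plus the exact commands to run
    queries.sql    # canned inspection queries the commands reference
```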
A taxonomy of AI agent skills
After I'd written thirty-three skills, a structure emerged. Six types. A naming convention that the agent itself uses to find the right one. This is the taxonomy I landed on, why each type exists, and the small design decisions that made the whole drawer easier to navigate.
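The six types themselves aren't listed in this teaser, so here is a purely invented illustration of the mechanism: a type prefix in the skill name that the agent can match against the task at hand.

```text
debug-inspect-agent-db    # "debug-" prefix: skills that inspect live state
howto-cut-a-release       # "howto-" prefix: step-by-step procedures
```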
AI agent skills — why I wrote 33 of them
I started with one: a small Markdown file with a few shell commands inside that the agent could pull up by name. Two months later there were thirty-three. Here's the story of how the skill drawer filled up, and why it kept paying for itself.
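That first skill, sketched (the frontmatter shape follows Claude Code's Agent Skills format; the name and command are invented):

```markdown
---
name: tail-app-logs
description: Show the most recent application log lines when debugging a local run.
---
Run:

    tail -n 100 logs/app.log

Summarize any errors or stack traces in the output.
```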
Rules as project memory
AI coding agents start every conversation with no memory. Rules are the small Markdown files that fix that — they tell the agent the conventions of your project, attached to the right files at the right time. Here's how they work and how I think about writing them.
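A minimal sketch in Cursor's rule format, where `.mdc` frontmatter attaches the rule to matching files; the `globs` and `alwaysApply` fields are real Cursor syntax, the contents are invented.

```markdown
---
description: Conventions for API route handlers
globs: src/api/**/*.ts
alwaysApply: false
---
- Validate request bodies at the boundary, never deeper in the stack.
- Return errors as { code, message } objects, not bare strings.
```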