The tools we build shape how we think
There’s an old idea that the tools we use shape the way we think. Marshall McLuhan said something close to it about media. Carpenters say it about hand tools. Programmers say it about the languages they spend the most time in — Lisp will give you one model of computation; Haskell will give you another; the model you carry forward is the model you used most.
I’ve been thinking about a version of that idea for the last few months, watching what happens to my own mental shape as I’ve been building AI tooling around the way I work. The straightforward effect is the one everyone expects: the tools save time, work compounds, capabilities stack. That’s been true for me. What I didn’t expect was that, three months in, I’d be approaching unfamiliar problems with a different mental shape than I had before. Not because my brain changed. Because the system around my brain changed, and the way I think about problems started reflecting the system I’m thinking inside.
This is a slow post. I’m not sure I’m going to land it cleanly. But I want to write it down anyway because the experience of noticing it has been one of the more genuinely interesting things to come out of this period, and I haven’t seen it described in the practitioner posts I read.
The first loop — tools change the work
This one is well-documented and easy to talk about. The tools I’ve built — rules, skills, subagents, the journaling integration — have made specific work cheaper. I’ve covered each of those in their own posts in this series. The first loop is:
Build a tool. The tool reduces friction on a recurring task. The recurring task becomes faster. You do it more reliably. The output gets steadier.
That’s the whole loop. It’s real. It’s worth doing. It’s also not the interesting loop.
The second loop — changed work changes how you think about work
The interesting loop is the one that runs over weeks and months, almost invisibly, after the first loop has been operating for a while.
The shape of it, in my experience:
The tools handle the recurring tasks well. You stop spending mental energy on them. The freed mental energy goes into noticing patterns at a higher level. Those patterns suggest new tools. The new tools change the recurring tasks again. Each turn of the loop, the level of abstraction at which you operate moves up a notch.
The first time I noticed this happening, I was debugging an issue in the AI agent project I work on. Two years ago, “debug an issue” would have meant: open the codebase, find the entry point, trace the data flow by hand, look at logs, reconstruct what happened. A reasonable engineer’s debugging approach. It worked.
Now, “debug an issue” means: ask the agent to inspect the persisted state, ask the agent to replay a slice of the pipeline with mocked inputs, ask the agent to summarize the trace. Different actions. The actions themselves are not what changed.
What changed is the question I ask first. Two years ago, my first question was “what is the code doing?”. Now my first question is “what state is the system in?”. The shift from code to system is small in words. It’s enormous in practice. It changes which evidence I look at first, which intuitions I trust, which kinds of bugs I expect to find.
That’s the second loop in action. The tools made it cheap to inspect state. Cheap state inspection became my default move. That default reshaped which questions I asked. The reshaped questions are now operating at a different level of abstraction than they used to.
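To make “cheap state inspection” concrete, here is a minimal sketch of the kind of helper a debug skill might wrap. Everything in it is illustrative: the state-file path, the field names, and the `inspect_state` function are hypothetical stand-ins for whatever your system actually persists.

```python
import json
from pathlib import Path

# Hypothetical location of the agent's persisted state.
STATE_FILE = Path("runs/current/state.json")

def inspect_state(state_file: Path = STATE_FILE) -> None:
    """Print a one-screen answer to 'what state is the system in?'."""
    if not state_file.exists():
        print(f"no state file at {state_file}; has the pipeline run yet?")
        return
    state = json.loads(state_file.read_text())

    # The fields below are invented; the point is that the first debugging
    # move reads persisted state, not source code.
    print(f"stage:      {state.get('stage', '<unknown>')}")
    print(f"last_event: {state.get('last_event', '<none>')}")
    print(f"pending:    {len(state.get('queue', []))} queued items")
    print(f"last_error: {state.get('last_error', '<none>')}")

if __name__ == "__main__":
    inspect_state()
```

The specific fields don’t matter. What matters is that once something like this exists, reading state becomes a one-command move, and one-command moves become defaults.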
Some specific shifts I’ve noticed
A few concrete examples of what the changed mental shape looks like.
I now think in pipelines first, code second. Before all this tooling, my mental model of any project was a graph of files and functions, navigated by name. Now it’s a graph of stages (input, processing, persistence, output, communication), and the files are things that implement the stages. That view came from months of working with explicitly pipeline-shaped systems, where the agent’s state machine has named stages and the debug skills inspect them. It has since migrated outward into how I think about other systems too, even ones that don’t have explicit pipelines.
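To give the pipeline-first view a concrete form, here is a minimal sketch of what it looks like written down. The stage names come from the paragraph above; the file names and the `STAGE_OWNERS` mapping are invented for illustration.

```python
from enum import Enum, auto

class Stage(Enum):
    """Stages of the pipeline-first view; real systems will name their own."""
    INPUT = auto()
    PROCESSING = auto()
    PERSISTENCE = auto()
    OUTPUT = auto()
    COMMUNICATION = auto()

# Hypothetical mapping from stages to the files that implement them.
# The point of the view: navigate by stage first, by filename second.
STAGE_OWNERS: dict[Stage, list[str]] = {
    Stage.INPUT: ["ingest.py"],
    Stage.PROCESSING: ["transform.py", "rules.py"],
    Stage.PERSISTENCE: ["store.py"],
    Stage.OUTPUT: ["render.py"],
    Stage.COMMUNICATION: ["notify.py"],
}
```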
I notice the cost of asking before the cost of answering. When I’m trying to figure something out, the first thing I evaluate now isn’t “how hard is the answer?” but “how cheap is it to make this question askable?”. If the question isn’t expressible as input to any tool I have, I notice. Sometimes the noticing is a sign I should write a tool. Sometimes it’s a sign the question isn’t well-formed yet. Either way, framing-as-tool-input has become the first move.
I treat my own future self as an audience that needs structured input. This is the journaling effect. When something happens that future-me will need to recall, I write it in a form that’s structured, searchable, and decoupled from this week’s chat history. The shift is from storing memory in conversation to storing memory in artifacts. It feels like a small organizational habit. It’s actually a different model of how knowledge accumulates over time.
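As a sketch of what “memory in artifacts” might look like, here is a hypothetical journal-entry shape as a Python dataclass. The fields are my assumptions for illustration, not the actual schema of the journaling integration.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class JournalEntry:
    """A structured, searchable record addressed to future-me."""
    title: str
    summary: str                 # what happened, in a sentence or two
    decision: str                # what was decided, if anything
    tags: list[str] = field(default_factory=list)  # for later search
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Illustrative usage; the content is invented.
entry = JournalEntry(
    title="Switched trace storage to JSONL",
    summary="Per-run traces now append to a JSONL file per run.",
    decision="Keep one file per run; rotate nothing.",
    tags=["tracing", "storage"],
)
```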
I ask the agent to push back before I commit to a plan. The behavioral guardrails post is the source of this one. The “Think Before Coding” rule asks the agent to surface assumptions, propose alternatives, and push back when warranted. Once that pattern was real, I started pre-applying it to my own thinking. Before I commit to a plan, I now run the same check on myself: what am I assuming? what alternatives haven’t I considered? what’s the simpler version of this? The agent’s discipline became my discipline.
I default to surgical changes. Same source. The “Surgical Changes” rule encodes the test that every changed line should trace directly to the user’s request. Once I’d been operating with that rule for a while, I started applying it to my own work, including work the agent isn’t involved in. “Why am I touching this?” is now a question I ask myself reflexively when reviewing my own diffs.
In each case, a constraint or pattern I encoded in the AI tooling has migrated upward into my own thinking. The agent didn’t teach me these things; most of them were engineering hygiene I already agreed with in theory. What changed is that operating an agent that consistently applied these patterns made them feel less like aspirations and more like the actual shape of work.
What’s actually happening, at a slightly deeper level
I want to try to say what I think is going on, even though I’m not sure I have it right.
When you spend a long time operating in a system with certain shapes (explicit pipelines, structured artifacts, written rules, audited reviews), those shapes become available to your thinking even outside the system. Cognitive scientists have names for something like this, distributed cognition and the extended mind among them, and the idea that repeated tool use shapes mental models isn’t novel. The new thing, for me, is watching it happen with AI tooling specifically, where the shapes you’re internalizing aren’t the shape of a calculator or a programming language, but the shape of a structured way of breaking down problems and assigning sub-problems to specialized handlers.
The tools I’ve described in this series each embody a particular way of decomposing work:
- Rules embody the conventions of the project in a form that can be loaded automatically.
- Skills embody recurring procedures in a form that can be invoked by name.
- Subagents embody specialized roles in a form that can be spawned for one task.
- The journaling integration embodies legibility across time in a form that survives sessions.
- The portable layout embodies vendor independence in a form that survives tools.
Each of those is a small, concrete instance of a more general intellectual habit. Encode the conventions. Cache the procedures. Specialize the roles. Persist the records. Decouple from the runtime. Those five habits are not new — engineers and writers have practiced them for decades. What’s new is having a system that requires me to practice them, every day, on every task, because the system breaks down if I don’t.
That requirement is the discipline. The discipline reshapes how I think.
Why this matters beyond me
I don’t want to overgeneralize from a single practitioner’s experience. But I think there’s a broader implication worth at least naming, if not defending.
If the second loop is real, if working inside well-shaped AI systems quietly reshapes the working engineer’s cognitive habits, then the question of which AI systems we build today is much bigger than the question of which features ship in the next release. The systems are formative. The habits they install in their users will outlive any specific tool.
Bad systems install bad habits. A system that encourages quick acceptance of plausible-looking output installs a habit of low scrutiny. A system that hides its reasoning installs a habit of incurious trust. A system that rewards verbosity installs a habit of inflation. These aren’t hypothetical worries; they’re patterns I’ve seen in workflows around me, and I’ve caught myself slipping into them on bad days.
Good systems install good habits. A system that pushes back, surfaces assumptions, asks for explicit success criteria, audits its own output, and persists its decisions in legible artifacts installs habits of rigor, structure, and humility. These also aren’t hypothetical; they’re the habits I’ve watched myself develop in the projects where the system was shaped right.
The choice of which kind of system to build for yourself, and which kind to encourage in your team, is more consequential than it appears at the start. The system isn’t just helping with tasks. The system is teaching you, slowly, how to think.
The honest closing
This post might be wrong. It might be over-reading my own experience, projecting more importance onto the second loop than it deserves. Three months is a short window for claims about mental shape.
I’d want to come back to this in a year and see whether the patterns I’m describing have stabilized into something durable or whether they were the temporary novelty of a new tool. I suspect they’ll stabilize. I’m not certain.
What I am certain about is the noticing. Whatever’s happening, something has been quietly different in how I approach problems, and I’ve felt it without searching for it. The first loop — tools save time — is the one everyone talks about. The second loop — changed work changes how we think about work — is the one I think will turn out to matter more, in the long run, for the people who lean into it deliberately.
That’s the whole post. The tools we build shape how we think. We’ve known that about hammers and pens. It’s also true about AI agents, and the shaping happens whether we notice it or not. Better to notice it and shape the shaping, while we still can.