Our job is to understand

A number of programmers, me included, feel that sometime between 2025 and 2026, we lost our craft. We used to write our own code by hand. The arrival of powerful LLMs, and of the coding agents built upon them, has shifted us from writers of code to managers and reviewers of agents that produce code.

This shift is not happening everywhere, but it is general and taking place at great speed. For many in the industry, me included, to keep coding by hand is to be left behind. The sheer speed at which you can produce with agents outcompetes most hand coding. Last year, when Dario Amodei said that AI would write 90% of the code, I smiled and thought he’d better lay off the Friscan hype pipe. By the end of 2026, it will be hard to challenge that number.

Elsewhere I listed my objections to using coding agents, as a cathartic exercise of reasoning in public to work through them. Because, emotionally, I loved hand crafting my code. But history is clear: mechanical looms beat manual ones for everyday production. And the same thing is happening to code now.

The two objections I had left were quality and understandability. How do you achieve either of these when working at breakneck speed with coding agents?

I am starting to see, both in my work and that of others (though certainly not in everyone’s), that the quality problem is solvable. It is solved by defining clear specifications, and testing standards based on them. The existence of dark factories already proves this possible. On my end, I’ve now carried out an extensive surface testing program on an existing SaaS product, with most of the tests designed and implemented by agents. New code, also written by agents, carries its own new tests. The level of quality we have now is better than before we used agents, despite a massive increase in velocity.

If the quality problem is solvable and being solved, that only leaves understandability. Lack of understanding has recently been described as cognitive debt. If, before agents, good projects were defined by low technical debt, I wonder if in this new age what makes a project good will be low cognitive debt; that is, a high degree of understanding of the system by those who created it.

Why is understanding so important? I wonder that myself. I’ve spent my entire programming career trying to create understandable software, raging against systems that were hard to understand, and marvelling over software masterpieces.

My current view is that understanding is true ownership. When you understand something deeply, you can see how it functions, and how it is connected to everything around it. Understanding allows you to adapt something when it’s broken, or when it needs to change. Understanding also lets you see the implications of changes, for the system itself and for everything around it.

A system can be seen in Christopher Alexander’s terms: a field of centers, with each center made of centers. What makes the system good (alive) is that changes to one center also strengthen other centers. This is what context allows: figuring out which changes strengthen the system.

This is why I was always anti-magic in software: if you don’t let me understand and you package that as a feature, you’re pulling away my ownership of the tool. My agency. Agency is power, and power should be distributed, not concentrated.

Even models with one-million-token context windows still have large context limitations compared to humans. We’re able to understand much larger contexts, and in subtle ways. Whether it’s our guts or our spirits that bring the subtlety in (I think it’s both), the amount and quality of context we humans are capable of dwarfs anything that current models can do. I also think that if we develop AI with human-level context, then we’ll probably have no choice but to call it AGI.

But going back to our current time and the present of the craft: our job is to understand. Not to execute, but to understand. We need to understand what needs are around us, how they are currently (not) satisfied, and how they can be better served. Behind those interfaces, we need to understand how things happen. Even without reviewing the code, we need to understand what the data flows are made of, and what’s possible. How it all connects to the lower levels of the machine.

To understand is not just a process of making: it’s also bearing witness to something. It borders on meditation on the real meaning of what’s being created. This is the first, and now perhaps the last, thing left of our craft, since everything in the middle has been massively compressed. The capacity to execute used to be the ultimate ownership, but that has now been made much more accessible.

If I were to create something I don’t really understand, I would not have true ownership of it. If a team runs a project and no one really has the full picture of it, how strong would their position be if something went wrong?

Steve Yegge recounts a scene from a science fiction book where, in the middle of a battle, a cannon fails to work and its maintainers, who for generations have cared for it by polishing it with fine wines and soft cloths while remaining completely ignorant of its workings, are promptly slaughtered. Call me old fashioned, but I feel very insecure if I’m in the business of making cannons I don’t understand, polishing them by shooting elegant prompts at my agents to keep things well documented, implemented and tested in ways I cannot really fathom.

Our understanding might have to be based on even greater abstractions, but I don’t think that giving up on understanding is a wise choice. Time will tell. Therefore, my new mantra is: our job is to understand. Understanding used to be central to our craft. It now feels as if our craft has been stripped of almost everything else, and understanding is the only thing left.

And I also think that this revolution is coming to most knowledge work, in 2026 as well. Code just happened to be at the epicenter of the wave.

