So, it’s February 2026, and state-of-the-art AI agents (LLMs that can execute actions) can write code at tremendous speed, and their output is not half bad. Boom. Where do we go from here?
Before AI, strong teams were fast by force of stellar talent, watertight standards, and a lot of hours worked. Think of John Carmack leading id Software: a blistering pace with tremendous quality, all hand-coded. Think also of the other end of the continuum: big organizations where the output of engineers was perhaps 10-20 lines of code per day.
Now, sheer speed is automatic. Agents can churn out code faster than almost any of us can read it, let alone understand it. Ultrafast software development has become a commodity. Ouch. And computers are so fast that even this code runs fast enough for most practical purposes. The challenge for teams still building software is now to cope with this speed.
My current definition of “coping” with the speed of AI is to 1) maintain a high level of quality; 2) maintain a high level of understanding of the system. Quality and understandability.
Quality is perhaps the easier of the two (not that it’s easy). My definition of quality is threefold: 1) defining what makes the system useful and good (i.e., X is good for Y and Z); 2) having a description of the system so accurate that it can be validated against the actual system; 3) the actual system passing those tests. In short: having a clear purpose for the system, having a strong definition of how the system works, and making sure the system is what it’s supposed to be.
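To make points 2 and 3 concrete, here is a minimal sketch of what such an executable description might look like, using property-based tests. The `dedupe` function and its properties are invented purely for illustration; any real system would have its own.

```python
# A toy illustration of quality points 2 and 3: a description of the system
# precise enough to be checked against the actual system, and the system
# passing those checks. `dedupe` is a hypothetical example, not a real system.
from hypothesis import given, strategies as st


def dedupe(items: list[int]) -> list[int]:
    """Remove duplicates while keeping the first occurrence of each item."""
    seen: set[int] = set()
    out: list[int] = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out


# Point 2: the description -- properties stated precisely enough to validate.
@given(st.lists(st.integers()))
def test_no_duplicates(items):
    result = dedupe(items)
    assert len(result) == len(set(result))


@given(st.lists(st.integers()))
def test_keeps_every_element(items):
    assert set(dedupe(items)) == set(items)


@given(st.lists(st.integers()))
def test_preserves_first_occurrence_order(items):
    expected = [x for i, x in enumerate(items) if x not in items[:i]]
    assert dedupe(items) == expected

# Point 3: the actual system passing those tests, e.g. by running pytest.
```

The point is not the library or the function but the shape: the description is precise enough that a machine can check the actual system against it.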
Understandability is harder. It means not only that the system is well specified and meets its specification, but that the entire system fits in the human heads of those responsible for it. Elsewhere I’ve defined understanding as the representation of the system in one’s head (and a system as a representation of a part of reality). Understandability is an inverse function of complexity: the simpler a system is, the more possible it becomes to understand it.
Here I’m reminded of C.A.R. Hoare’s wonderful quote:
There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.
The second method is to achieve quality without understanding. You specify the system with the help of AI (or the other way around) until it’s so clear that everyone can agree on and validate what the system does. I suspect it has actually become more worthwhile to do this, because AI can do most of it. Before, you needed a lot of human hours to build a huge system and ensure it had reasonable quality. Now you’re just a couple billion tokens away from doing it, which is orders of magnitude faster and cheaper than the old way. You still have a maintenance nightmare, but you can also throw tokens at it, perhaps successfully. I don’t know. I’m not sure if anybody knows yet.
The first method is to have quality AND understandability, by virtue of simplicity. Even though you can now produce 10x the code you could before, you still converge on a small system with crystal-clear primitives that you fully understand. Perhaps it even means you still fully understand the code.
I would wager that no team producing software can ignore the quality problem, and that software built without quality is as doomed as before, only on a much shorter timeframe. But the understanding problem is an open question, and thus a fork in the road. Or perhaps a gradient.
Understanding takes time up front. Human heads are not getting 10x faster or bigger. At least in the short term, understanding slows you down. And it is hard work. Is this a tortoise-versus-hare dynamic? Or is it like trying to hand-forge artillery shells in an age of industrialized warfare?
Besides the (essential) concerns of human agency and job continuity, will teams that put in the work to understand their AI-built systems be able to function, and even outcompete those that don’t? At this point, I’m concerned about the survival of understanding in the open market of software development, not its triumph. Is there inherent operational value in creating systems that are understood by those who build them? Does understandability always lead to quality through simplicity? These are not rhetorical questions. My gut screams “YES”, but the question and the answer need to be explicit.
If we choose both (quality and understandability), how do we go about it now? These are my questions:
- How can we use AI to reach quality and understandability at the same time?
- Can we find new dynamics that allow us to go faster than before and yet have even more understanding of what’s going on? In other words: is there an angle where this AI revolution actually enhances understanding instead of making it impossible? Can AI help us become much better at creating good representations of the systems in our heads?