A substitute for thinking

There’s no substitute for thinking.

Yesterday I had that thought and felt 70% defiant, 30% unsure. The insecurity comes from a constant feeling that if I offload more of my thinking to AI, I could go faster and get more done. Think less, move faster.

If my job consisted of moving a pile of rocks from one place to another, I wouldn’t be concerned about letting a machine help me do it. The bargain is good: I move far more rocks with the machine’s help, and I’m better off as a result. I then exercise to stay fit, since I can no longer rely on moving rocks to keep me fit.

Will the same thing happen to thinking, if our work is to think? Can we rely on the good-enough intellectual output of AIs to do most knowledge work? The argument about skill rot could be handled by the equivalent of a mental gym: if our knowledge work shrinks to lightly directing the AI, we then need to stay mentally fit by doing intellectually demanding things outside of work, just as we stay physically fit by training outside of machine-assisted manual work.

I think the key difference (if there is one) is that I can judge whether the rocks have been moved well without having to move them myself. That is generally not true of something you think through: the process of thinking is also what gives you the tools to judge the output.

The difficult question is: is the AI output good enough that the speed at which I can churn it out offsets the loss in quality compared to what I would produce by thinking hard and taking my time? Another unsettling parallel is that of high- vs. low-level languages. Programs written in low-level languages are generally faster than those in high-level languages, because the programmer can optimize more. But for many applications that speed doesn’t matter, and the high-level languages win.

You can also use AI to do hard thinking. That doesn’t make me go much faster, but it helps me go deeper, and I already use AI like this. What I’m discussing above is not using AI to think, but rather using AI to avoid having to think much about what’s being done and just get it done quickly.

The defiance comes from a feeling that, whenever I let the AI think for me (which is perhaps not often enough to prove it), there are subtle but important gaps in my work. Gaps that the speed gains don’t offset. I call it defiance and not certainty because 1) the AI wave is huge, and thinking slowly is now the underdog (at least in the little corners of my 2026 mind); 2) I’m not sure about this yet.

I think this is the main tension I’m experiencing in using AI for knowledge work. It’s not about purpose, or about staying sharp. It’s about whether going slower and deeper is more valuable than going fast and shallow.

Will this be the case? And if so, where?

To go fast, use AI. To go far, think.