I’ve not gotten tired of thinking about (and attempting to practice) simplicity in almost everything I do. And, when pressed about the need for simplicity, I always resort to saying: simplicity makes things easier to understand.
Not long ago, a colleague (thanks Chris!) introduced me to Rich Hickey’s Simple Made Easy talk, where he gives great definitions of what’s simple versus what’s easy, and carefully separates the two. Simple things stand apart; complex things are interleaved. Complexity makes things very hard to understand, and we humans are extremely limited at understanding complexity.
I think that perhaps this was the point Rich was making: simplicity is the property that makes something easier to understand.
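To make the contrast concrete, here’s a toy sketch of my own (not an example from Rich’s talk, and all the names are hypothetical): one function that interleaves parsing, validation, and formatting, next to the same behavior with each concern standing apart.

```python
# Complex: three concerns braided into one function. To understand
# any one of them, you have to read (and hold in your head) all three.
def handle_signup_complected(raw: str) -> str:
    name, _, email = raw.partition(",")
    if not name.strip():
        raise ValueError("name is required")
    if "@" not in email:
        raise ValueError("invalid email")
    return f"Welcome, {name.strip()} <{email.strip()}>"


# Simple: each concern stands apart and can be read, tested,
# and changed without untangling the others.
def parse_signup(raw: str) -> tuple[str, str]:
    name, _, email = raw.partition(",")
    return name.strip(), email.strip()


def validate_signup(name: str, email: str) -> None:
    if not name:
        raise ValueError("name is required")
    if "@" not in email:
        raise ValueError("invalid email")


def format_welcome(name: str, email: str) -> str:
    return f"Welcome, {name} <{email}>"


def handle_signup(raw: str) -> str:
    name, email = parse_signup(raw)
    validate_signup(name, email)
    return format_welcome(name, email)


print(handle_signup("Ada, ada@example.com"))  # Welcome, Ada <ada@example.com>
```

Both versions do the same thing; the difference is how many ideas you must juggle at once to understand any one line.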
Now, there’s more to a thing than understanding it. One is perfectly capable of doing things one dimly understands. Programming is full of such workflows, and many knowledge workers also work on things they barely understand. On the other end of respectability, quantum mechanics itself is founded on the basis of “if you think you understand it, you don’t understand it” (Feynman). So, as humans, we can engage in activities we dimly (or not at all) understand, and even win Nobel Prizes for it.
Some people are quite OK with it. Some people are almost incapable of it. I’m one of the latter. But I really don’t know why. It doesn’t feel like something I chose.
The feeling I have is that the moment I stop understanding something, I cannot carry it further. Or rather, I can, but I know that whatever comes out of it can’t really be any good.
To avoid being paralyzed, I keep a few semi-understood things in my head and re-attempt to understand them later. This has worked well for things like closures. I haven’t understood Gödel’s incompleteness theorems yet, though.
Is understanding the construction of an interconnected vision or mental system? Is it an illusion? Are we going to throw it away if/when AI gets so good at predicting the world (exactly like quantum mechanics) that we can let it make our decisions for us, even if neither we nor the AI itself understands its basis?
I don’t know. I’d like to know the answers to these questions, even if they were grim.
For now, in my little corner of work, I’m trying to build information systems (and a theory about them) that are thoroughly understandable by anyone with enough time and disposition. Why? I’m not really sure. But my intuition tells me that that’s the way to build something that is truly useful, that lasts, and that brings beauty to the world. So I’ll keep on doing that.
But here are two more data points that, as we’d say in Spanish, “carry water to my mill”:
- I’ve recently seen a group of smart people get completely bogged down in a swamp of their own making, simply because they didn’t fully understand the abstractions they were building.
- Reading and listening to some greats of programming (Wirth, Hoare, Armstrong), I’m shocked by how much they emphasize simplicity and understanding, pretty much considering them to be the measure of the art.
In the face of fears of obsolescence, let’s see how far I can take this line. And if you read this and are of the same opinion, more power to you.