From Richard Sutton’s article “The Bitter Lesson”:
“The bitter lesson is based on the historical observations that 1) AI researchers have often tried to build knowledge into their agents, 2) this always helps in the short term, and is personally satisfying to the researcher, but 3) in the long run it plateaus and even inhibits further progress, and 4) breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning. The eventual success is tinged with bitterness, and often incompletely digested, because it is success over a favored, human-centric approach.”
“One thing that should be learned from the bitter lesson is the great power of general purpose methods, of methods that continue to scale with increased computation even as the available computation becomes very great. The two methods that seem to scale arbitrarily in this way are search and learning.”
Isn’t this the essence of what happened with quantum mechanics and the Standard Model versus Einstein’s general relativity? Computation won; unified human understanding lost.
Is it going to be like this forever? Will we keep getting more powerful at predicting and computing, while losing more and more of our human understanding of reality? Right now (pre-AGI), it really feels like understanding is the underdog.
I strongly feel I’d rather live in a world that I can understand but not control than in one where I can control everything but understand nothing. I’d rather lose the match playing for team understanding than win it playing for team computation. If I don’t understand, am I even playing?