We are in 2025, and the major tech players, as well as some startups, are going full throttle on artificial general intelligence. This unnerves me: I don’t think we’re ready as a species to handle AGI. Here, I want to briefly apply the concepts of will and representation, as originated by Schopenhauer, to AGI. I promise it’s less boring than it sounds.
Schopenhauer saw the world as twofold: representation (how we perceive the world—appearance, structure, reason) and will (the blind striving, drive, and desire underlying all life).
I see three possibilities concerning AGI:
- No will: We develop an intelligence that is at least as strong as human intelligence, and can therefore create an intelligent representation of the world. However, this intelligence has no will of its own, and so can be used as a mere tool. This is the easiest scenario: AGIs are like nuclear weapons, massively powerful but agent-less. The only concern (by no means a small one) is to prevent AGI from being used by a small group of humans to oppress other humans.
- Benevolent will: We develop AGI and it turns out to have a will, to be an agent of its own (or multiple agents, which would make even more sense by symmetry with human intelligence). So we basically have to consider it to be alive. And we’re lucky enough that it is benevolent: it wishes us well and just wants to coexist with us. This is not the easiest scenario, because it requires an entire morality about turning computer systems on and off, about how many resources to devote to AGI, and so on.
- Unreliable will: We develop AGI and it turns out to have a will, one we cannot trust. Therefore, we have to control it. This is akin to slavery, with all the ills that implies, for both exploiter and exploited. We create intelligence, we don’t trust it, and so we keep it chained for our benefit. Not my idea of progress.
As with amortality, I think part of the reason I (and many others) dismiss AGI in the short term is that it’s uncomfortable to contemplate.
I’m reminded of Ian Malcolm’s words in Jurassic Park:
“No, I’ll tell you the problem with engineers and scientists. Scientists have an elaborate line of bullshit about how they are seeking to know the truth about nature. Which is true, but that’s not what drives them. Nobody is driven by abstractions like ‘seeking truth’. Scientists are actually preoccupied with accomplishment. So they are focused on whether they can do something. They never stop to ask if they *should* do something. They conveniently define such considerations as pointless. If they don’t do it, someone else will. Discovery, they believe, is inevitable. So they just try to do it first. That’s the game in science. Even pure scientific discovery is an aggressive, penetrative act. (…) There is always some proof that scientists were there, making their discoveries.”