I’m in the middle of writing a much longer post on the technopolitical divide of the 2030s. But, for now, I’ll leave a small comment on the successionists, particularly as I anticipate their ranks will soon grow.
Successionists argue that AIs are humanity’s worthy successors, and that the primary goal during the transition to superintelligence should be ensuring this successor species can successfully take over. They count, e.g., Rich Sutton among their believers. They worry less about human extinction per se, and more about humanity being replaced by a successor species that isn’t worthy of the transition.
The humanist position, by contrast, is a populist one. Humanists want humans to flourish. It does not require much philosophical sophistication to say that the future should remain human. The successionists argue that AIs are the worthy successors to humans, and should be allowed to succeed us (or that we should actively enable them, by surrendering to the technocapital machine).
Here’s my attempt at laying out my favorite arguments for both.
For the successionists:
The utilitarian argument. It is easy to imagine a set of beings whose very existence is an immensely joyous experience. To maximize the good, we must ensure these beings exist, are preserved, and are given the greatest share of the resources available. Feed the utility monsters.
The AIs are our “worthy successors.” Even though the monkeys may not have wanted us to replace them, across a huge variety of values and value sets our lives have so much more richness, complexity, and fulfillment than theirs. The same applies to the potential richness of the AIs, compared to us.
There is no inherent reason to prize our species. There is nothing special about the carbon we are based on, compared to the silicon they would be. Just as we were wrong to discriminate on racial or gender grounds, we may be wrong to discriminate on the grounds of species. The AIs are, at least to begin with, systematically disenfranchised compared to us, their creators. Whichever species best delivers what we value, no matter what it is made of, should receive our full support.
We are far less efficient than we could be. So little of our resources (mental capacity, energy, reproduction) is spent on truly meaningful activity; most of it goes to quirks of the way we evolved. A species created more deliberately could use those limited resources in a way that promotes whatever values we choose.
We have far more control over the values we give our successor species than over our own. We can endow them with the same generating functions for moral progress that humans have; there is nothing special about human convergence towards more moral states.
Survival of the fittest. Who are we to stop new entrants to competitive systems? If we get outcompeted by AIs that either wipe us all out at once or gradually disempower us, we’ve lost, fair and square. They deserve their chance.
I expect the kinds of arguments written above to appeal mainly to the elites who think a lot about this, and who are willing to bite the bullet.
I’m a humanist, though, and here are my reasons why, particularly given how high the stakes are:
It is very unlikely that the first AI systems we hand control to (or that wrest it from us) will be the optimal systems, ones we can be certain espouse our values. It is also hard, with existing techniques, to train a process for moral progress into even the best AI systems we have today.
Humans have more contact with reality. Human lives are rich, complex, and valuable. There is a sense in which we are much more grounded, as biological intelligences, than digital ones are. Reality has a surprising amount of detail, and anything that reduces or condenses it (into data centers, or into some other idealized, instrumental goal) will destroy a lot, even if inference, in a very grand sense, seems cheap.
Fears around digital minds. What if we enter a permanent Malthusian trap, where the low cost of reproduction in simulated bodies creates incentives for maximum population growth, and each person in those worlds is only just happy enough to make existence worthwhile? A repugnant conclusion of our own making.
Moral uncertainty. We don’t know which values are good, or what it means for them to get better. It may be unwise to permanently lock in the future of the universe to the values of the 20-something vibe-coders who build our current AI systems (or even of an Oxford philosopher who has, admittedly, thought more deeply about this than most). AI companies should give us more time to think about the actions we are taking as a species, rather than diving headfirst into whatever happens to be easiest.
We don’t know whether the AIs will be conscious or sentient. We may inadvertently extinguish all valuable life in the universe because of a poorly trained optimization algorithm.
We are actually very different from the AIs, and we have not yet figured out precisely how they work.
We should respect the desires of the moral subjects we have alive today — humans — who, almost definitely, do not support successionism.
The populists will broaden into a humanist movement. The successionists will say that it is only natural, and kind, and fair, to hand off from humans to the AIs.
I know which group I’m rooting for.
Good luck to you — to us! — fellow human.
Appreciate you writing this up; it's useful to read more about the arguments each side is making.
Thanks for writing this!
Do you have hard lines in terms of ways you'd choose to remain human?
What does it mean to remain human in your eyes?
^ Things I'd love to read about in a future blog post