In Favor of Humans
This isn't controversial. It might soon be.
I’m in the middle of writing a much longer post on the technopolitical divide of the 2030s. But, for now, I’ll leave a small comment on the successionists, particularly as I anticipate their ranks will soon grow.
Successionists argue that AIs are humanity's worthy successors, and that the primary goal during the transition to superintelligence should be ensuring this successor species can successfully take over. They count, e.g., Rich Sutton among their believers. They worry less about human extinction per se, and more about being replaced by a successor species that isn't worthy of the transition.
The humanist position, by contrast, is a populist one: humanists want humans to flourish. It does not take much philosophical sophistication to say that the future should remain human. The successionists, again, argue that AIs are the worthy successors to humans and should be allowed to succeed us (or that we should simply enable them, by surrendering to the technocapital machine).
Here’s my attempt at laying out my favorite arguments for both.
For the successionists:
The utilitarian argument. It is easy to imagine a set of beings for whom their very existence is an immensely joyous experience. To maximize the good, we must ensure these beings exist, are preserved, and have access to the greatest possible share of resources. Feed the utility monsters.
The AIs are our “worthy successors.” The monkeys may not have wanted us to replace them, but across a huge variety of values and value sets, our lives have so much more richness, complexity, and fulfillment than theirs. The same applies to the potential richness of the AIs, compared to us.
There is no inherent reason to prize our species. There's nothing special about the carbon on which we are based, compared to the silicon on which they would be. Just as we were wrong in the past to discriminate on racial or gender grounds, we may be wrong to do so on grounds of species. The AIs are, at least to begin with, systematically disenfranchised compared to us, their creators. Whichever species offers the most of what we want, whatever it is made of, should receive our full support.
We're so much less efficient than we could be. So little of our resources (mental capacity, energy, reproduction) is spent on truly meaningful activities; most of how we spend them is just a quirk of how we evolved. A species created more deliberately would be better able to use those limited resources in ways that promote whatever values we choose.
We have so much more control over the values that we give our successor species. We can endow them with the same kind of generating functions for moral progress that humans have; there is nothing special about human convergence towards more moral states.
Survival of the fittest. Who are we to stop new entrants to competitive systems? If we get outcompeted by AIs that either wipe us all out at once, or gradually disempower us, we’ve lost, fair and square. They deserve their chance.
I expect arguments like those above to appeal mostly to the elite few who think a lot about this, and who are willing to bite the bullet.
I'm a humanist, though, and here are my reasons why.
The transition itself carries extremely high risk. It is very unlikely that the first AI systems we hand control to (or that wrest it from us) will be optimal systems, ones we can be certain espouse our values. It is also hard, with existing techniques, to train a process for moral progress into even the best AI systems we have today.
Humans have more contact with reality. Human lives are rich, complex, and valuable. There's a sense in which we, as biological intelligences, are much more grounded than digital ones. Reality has a surprising amount of detail; even if inference, in some grand sense, is cheap, anything that reduces or condenses that detail (into data centers, or some other idealized, instrumental goal) will destroy a lot.
Fears around digital minds. What if we enter a permanent Malthusian trap, where the low cost of reproduction in simulated bodies creates incentives for maximum population growth, and each person in those worlds is left just barely happy enough to keep going? A repugnant conclusion, of our own making.
Moral uncertainty. We don't know which values are good, or what it means for them to get better. It may be unwise to permanently lock in the future of the universe to the values of the twenty-something vibe-coders who build our current AI systems (or even of an Oxford philosopher who has, admittedly, thought more deeply about this than most). AI companies should give us more time to think about the actions we're taking as a species, and not dive head-first into whatever happens to be easiest.
We don’t know whether the AIs will be conscious, or sentient. We may, inadvertently, extinguish all valuable life in the universe because of a poorly-trained optimization algorithm.
We are actually very different from the AIs, and we've not figured out the precise nature of how they work.
We should respect the desires of the moral subjects we have alive today — humans — who, almost definitely, do not support successionism.
The populists will broaden into a humanist movement. The successionists will say that it is only natural, and kind, and fair, to hand the future off from humans to AIs.
I know which group I’m rooting for.
Good luck to you — to us! — fellow human.



Appreciate you writing this up; it's useful to read more about the arguments each side is making.
I find myself both agreeing with much of this and disagreeing with it in important ways.
Many of the “humanist” arguments here — about uncertainty over AI sentience, value alignment, and the richness of human life — aren’t really objections to successionism in principle. They’re more about timing: that we might not yet be ready for succession. But that position feels fairly uncontroversial. As I understand it, most successionists would agree that AIs should only take over once we’re confident that the usual objections (about consciousness, alignment, moral worth, etc.) no longer hold.
To me, this misses the deeper issue. If what ultimately matters is achieving certain valuable states of the world — and if we want our values to track truth rather than, say, evolutionary bias or subjective values from our own perspective — then there’s no reason to think humans will be the most efficient vehicle for realizing that value in the long run. Evolution optimized us for survival, not for the pursuit or instantiation of value as we now understand it through reflection.
Your point about moral uncertainty is well taken, especially if we factor in risk aversion. But three caveats:
(1) We shouldn't expect the optimal future under moral uncertainty to look like our idealized preferences today -- I take it that some, in light of that line of reasoning, might say something like "well, we just want humans to have .00001% of the universe, but that's roughly what we have today on Earth, so the optimal world (for humans) actually looks exactly like what we have now." If that were the conclusion we came to, it would seem to me like very surprising and suspicious convergence. When saying that moral uncertainty means we should keep humans around, then, one should note that the result will look very different from our intuitions about what human paradise looks like...
(2) Our confidence in human exceptionalism should be (I think massively) discounted, since such beliefs are exactly what we’d expect from creatures shaped by status quo bias and motivated reasoning. If so, our future moral certainty about “human-only” value should be correspondingly lower.
(3) AIs will likely be better moral reasoners than we are, and moral uncertainty is very hard for us at the moment. Perhaps we should just let AIs fully take over and have them decide whether the optimal world (given that moral uncertainty) includes humans in it -- though note that this (reasonable) version of the view ends up looking a lot like the regular successionist position.