Before Utopia, On Death Ground
Why is the populist awakening on AI happening now?
Note: Still thinking about an essay on the political divides of the 2030s, but I thought I’d leave one fun tidbit that came out of a brainstorming session with Adam.
Political parties are a mess. Predicting who supports what—and why—has become nearly impossible. That makes forecasting positions on AI, an inevitable new battleground, even harder.
I recently discovered Scott Alexander’s “thrive/survive” theory of the political spectrum (it’s one of those grand, ambitious theories with a delightfully simple predictive/retrodictive power about it).
His hypothesis: rightism optimizes for surviving an unsafe environment; leftism optimizes for thriving in a safe one.
This describes the traditional “right” and “left” positions well:
The right imagines the zombie apocalypse is tomorrow—hence guns, deference to military and police, suspicion of outsiders, reverence for tradition, practical skills.
The left imagines a technological utopia—hence environmentalism, cosmopolitanism, skepticism of religion, free love and gender identity, openness to social experimentation, status games.
Scott’s theory came from a simpler age, when “left” could reasonably describe the Democrats and “right” the Republicans. Zohran Mamdani and Donald Trump are both more naturally “rightist”: oriented toward surviving an unsafe environment (the specter of autocracy, the “great replacement”). Similarly, the tech right and the abundance left are both “leftist”: pro-innovation, oriented toward thriving in a safe environment. Perhaps it is a survive/thrive theory of the populist versus the elite. That’s why Gavin Newsom and JD Vance are broadly pro-AI and pro-innovation, while Bernie Sanders and Steve Bannon are opposed.
I argue AI will drive left and right toward their logical extremes. Before utopia, humanity is on death ground. And this is driving the populist awakening.*
For the elite, AI has the potential to bring humanity towards utopia, or, if you will, “Fully Automated Luxury Gay Space Communism.” Free-market-loving transhumanists (like the current crop of AI CEOs) and progressives alike now speak of universal basic income, perhaps administered by a single world government. The people building the technology are disproportionately educated, libertarian, socially progressive coastal elites. Their vision is utopia: abundance without scarcity, liberation from labor, the final triumph of reason over superstition.
For the populists, the unsafe environment — the Zombie Apocalypse, here an actual “Great Replacement” by an alien species — is at the door and threatens the security of the family. The AIs are an unknown other.
In some sense, you should therefore predict that the most populist will be the most opposed to AI. If you believe that AI has the potential to disempower humanity, either slowly or all at once, then humanity is on death ground. And, in the fight for survival, society may again lurch towards populism.
This pattern is not new. Greco-Roman ideals, then Judeo-Christian reaction. The Renaissance, then the Reformation. The Enlightenment, then Romanticism. Each technological and intellectual surge produced a popular countermovement that reasserted tradition, community, and moral limits. The development of superintelligence may mark the end of a generational run for the rationalist project that began with the Industrial Revolution.
What might this framework predict?
First, strange bedfellows. The socialist left and the Christian right will find themselves in an unexpected alliance against AI labor displacement. Both sense death ground—one for the worker, one for the family. Their policy prescriptions will differ. Their enemy is the same. Bernie Sanders and Tucker Carlson sound oddly similar on AI these days.
Second, religiosity will correlate with AI skepticism, even controlling for education. Not because religious people are anti-technology. Because their framework already assumes humanity is not the highest intelligence—and that hubris has consequences. The Garden of Eden is a story about reaching beyond your station. So is the Tower of Babel. These narratives will find new purchase.
The right, therefore, rejects predictions of utopia. Mark Beall, author of “A Conservative Approach to AGI,” testified to Congress:
When I hear folks in industry claim things about universal basic income and this sort of digital utopia, I study history, and I worry that that sort of leads to one place, and that place is the Gulag.
He may be onto something. The conservative tradition—Christian virtues, suspicion of hubris, respect for what we do not fully understand—may offer the firmest ideological check on the rationalist project to build Machine God.
Not conservative policy—not tax cuts or deregulation or any particular platform. Conservative epistemology. The conviction that human reason is bounded. That we do not fully understand the systems we inhabit. That humility before complexity is not cowardice but wisdom. Or to quote Beall:
Let others dream of building gods; we will defend the human. Let others chase infinite growth; we will cultivate enduring goods. Let others automate civilization’s soul; we will remember that civilization exists to serve the soul.
It’s a beautiful essay.
The new Pope understands this, in his calls for “moral discernment” from the builders of AI.
However much Marc Andreessen may roll his eyes, modern technologists will need it anyway.


