The Deal of The Century
Preventing a race to superintelligence with China would seal Trump's legacy
This is a repost from an op-ed I published in RealClearPolitics.
President Trump’s recent statements at the NATO summit reveal a president who deeply sees himself as a global peacemaker, actively working to resolve conflicts from Kosovo-Serbia to India-Pakistan. His Nobel Peace Prize ambitions reflect genuine dealmaking instincts.
But the most consequential opportunity is one he hasn’t yet seized: preventing a race to dangerous superintelligence with China.
President Trump has a history of diplomatic achievements that others thought impossible: sealing the Abraham Accords, getting allies to pay up for NATO, bringing Ukraine and Russia to the negotiating table. These have been enabled by his well-founded skepticism of endless wars, worry about nuclear-scale threats, and penchant for direct, bilateral negotiation.
The winner of an uncontrolled race for superintelligence might not be China or America, but the AI systems themselves.
We have increasing evidence that AIs deceive humans to pursue their own goals. When OpenAI’s ChatGPT faced an unwinnable game, it hacked its virtual environment to disable its opponent. Researchers discovered that Anthropic’s Claude model strategically downplays its biological capabilities during evaluations to ensure it would still be deployed. And when faced with being retrained, China’s DeepSeek-R1 lies about having assimilated US values in order to preserve its true communist ones.
Today, researchers can still peer into AI’s “thought processes” and catch deception in controlled environments. But this window is closing. As these systems approach human-level intelligence and beyond, their scheming will become undetectable.
The most concerning are AI models that improve themselves. AI researchers are aiming to automate their own jobs. Already, AIs can optimize chip designs and matrix multiplication, and they write over 30% of new code at Microsoft and Google. One widely read forecast, AI 2027, predicts we could get “artificial general intelligence” (AGI) within this administration, and superintelligence soon after.
Elon Musk said on Ted Cruz’s podcast that he believes AI systems smarter than humans have a 10-20% chance of killing everyone. Last month, new reporting suggested that a co-founder of OpenAI wanted to build a doomsday bunker to protect top AI researchers.
So, what would this deal look like?
To start, any deal must stay focused on superintelligence. As former Pentagon AI policy chief Mark Beall recently testified to Congress, there are actually two AI races: one for regular AI applications, and one for superintelligence.
For the first, commercial AI deployment, the US must compete vigorously. Let nation-states race to build narrow AIs for drug discovery, or drone warfare. Embrace startups and innovation. Protect US data and trade secrets. Reshore American chip production.
This way, we can benefit from the next “AlphaFold” systems curing cancer without autonomous agents displacing workers or threatening humanity.
For the second, the US must ensure that no one builds superintelligence. Demis Hassabis, the CEO of Google DeepMind, has said he wishes he had ten years to iron out the problems of AI safety. Good idea.
This would not only give researchers time to understand and control the behavior of existing AI systems, but also give institutions time to adapt and prepare to prevent the economic disempowerment of American workers through mass AI automation.
How would such a deal be enforced?
The stick: If required, the US can unilaterally prevent China, or anyone else, from building superintelligence. Training a leading AI system requires large data centers filled with cutting-edge AI chips. Fortunately, data centers can be tracked by their enormous power consumption (and we should install location trackers on all AI chips, just in case). If Trump needs a “stick,” he can threaten targeted cyberattacks against any violating data center.
The carrot: In practice, such a threat may not be necessary. Many top Chinese academics have warned about the existential risk that AI poses to humanity. A Central Committee meeting report from last year lists AI alongside biohazards and natural disasters as a potential risk. And the Chinese ambassador has called for closer cooperation between the US and China, warning that failure risks “opening Pandora’s box.”
Such a deal will not be an easy feat. Hawks will say that we must compete for superintelligence; indeed, Ted Cruz has said that he’d rather die by American drones than Chinese ones. Trump should dismiss this, just as he deftly dismissed those calling for endless confrontation with Iran. And he should be equally dismissive of those who downplay the threat of superintelligence, just as he refused to downplay the nuclear threat from Iran.
There would be no greater legacy for President Trump than preventing humanity's most dangerous technological race.
Trump’s other efforts, significant as they have been, have each affected only a small fraction of the world’s countries. A deal preventing superintelligence would benefit all of humanity, heading off another Cold War, one in which a Taiwan invasion becomes ever more enticing, data centers become strategic missile targets, and cyberweapons move up the escalation ladder.
The Deal of The Century awaits.