In this very short post, I don’t make recommendations about whether the U.S. government should prevent the development of superintelligence, just a quick observation that it can.
I’m not sure whether the U.S. government could do it immediately. It’s unclear whether the National Reconnaissance Office has trained its satellites on the right patches of land, or whether the NSA’s signals intelligence listens to the right groups of people. But the government simply has the capacity.
The broad strokes of this argument have been made by me and others in different posts and articles, but I’ll collate them here for a single point of reference.
This relies on three central claims:
1) The U.S. can detect the first superintelligence training run, anywhere on Earth.
Training AIs that can undergo recursive self-improvement, which in practice means matching the capability of the average OpenAI research engineer, will require a huge amount of compute, at least the very first time we do it.
This may be lower bounded at two orders of magnitude more compute than the largest known training run to date (Grok 4? GPT-4.5?), which puts it at Stargate levels of compute; a rough back-of-envelope sketch follows below.
It is difficult to hide data centers housing that much compute. Even if we figure out distributed training, superintelligence will not be trained on networked laptops. And even if it were, such activity would be detected: AI-enhanced "automated intelligence" goes a long way.
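To make the scale concrete, here is a rough back-of-envelope sketch. Every number in it is an assumption chosen for illustration (the size of today’s largest run, per-accelerator throughput, utilization, run length, power draw), not a published estimate:

```python
# Back-of-envelope sketch of the footprint of a first superintelligence
# training run. All inputs below are illustrative assumptions, not
# published figures.

LARGEST_KNOWN_RUN_FLOP = 5e26   # assumed scale of the largest run to date
SCALE_UP = 100                  # "two orders of magnitude" above that
TARGET_FLOP = LARGEST_KNOWN_RUN_FLOP * SCALE_UP  # ~5e28 FLOP

GPU_PEAK_FLOPS = 2e15           # assumed ~2 PFLOP/s per accelerator at low precision
UTILIZATION = 0.4               # assumed fraction of peak actually sustained
RUN_DAYS = 120                  # assumed duration of the run
POWER_PER_GPU_KW = 1.5          # assumed all-in draw per accelerator, incl. cooling/overhead

sustained_flops = GPU_PEAK_FLOPS * UTILIZATION
run_seconds = RUN_DAYS * 24 * 3600
gpus_needed = TARGET_FLOP / (sustained_flops * run_seconds)
site_power_gw = gpus_needed * POWER_PER_GPU_KW / 1e6

print(f"Accelerators needed: {gpus_needed:,.0f}")      # ~6 million
print(f"Rough site power:    {site_power_gw:.1f} GW")  # ~9 GW
```

Under these assumptions the run needs millions of accelerators and gigawatts of power, an electrical and thermal footprint that overhead imagery and grid monitoring would struggle to miss, even if it were split across sites.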
2) The U.S. can determine whether these training runs constitute superintelligence.
Even using the coarsest measures laid out in the various attempted definitions of superintelligence (my favorite is no. 3), the intelligence community can form an all-things-considered view based on compute, training intent, model evals, and other observables, and then act accordingly.
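Purely as an illustration of how coarse those measures can be while still supporting a decision, here is a toy aggregation of the signals above. The threshold, weights, and field names are all hypothetical; the point is only that a crude rule over compute, intent, evals, and observables is enough to flag a run for action:

```python
from dataclasses import dataclass

# Toy illustration only: a crude rule over the kinds of signals the
# intelligence community could plausibly collect. All field names and
# thresholds here are hypothetical.

@dataclass
class TrainingRunReport:
    estimated_flop: float        # compute estimate from imagery, grid data, chip tracking
    agi_intent: bool             # declared or inferred intent to build general agents
    frontier_eval_score: float   # 0-1, reported or leaked benchmark performance
    anomalous_observables: bool  # unusual power draw, procurement, hiring, etc.

FLOP_THRESHOLD = 1e28  # hypothetical hard trigger, ~100x today's largest runs

def warrants_action(run: TrainingRunReport) -> bool:
    """Flag a run if compute alone crosses the hard threshold, or if
    somewhat smaller compute is corroborated by intent, capability,
    and observables."""
    if run.estimated_flop >= FLOP_THRESHOLD:
        return True
    corroborating = sum([
        run.agi_intent,
        run.frontier_eval_score > 0.9,
        run.anomalous_observables,
    ])
    return run.estimated_flop >= FLOP_THRESHOLD / 10 and corroborating >= 2

# Example: a run just under the hard threshold, but with corroborating signals.
print(warrants_action(TrainingRunReport(5e27, True, 0.95, True)))  # True
```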
3) The U.S. can, if it chooses, prevent these training runs from happening.
In the U.S., no company would attempt such a training run if Congress passed a law or the President issued an order that, under statutes like ECRA or IEEPA, directed agencies to halt covered training or block the necessary transactions. For adversaries abroad, the central argument is laid out in “Superintelligence Strategy,” which proposes “Mutually Assured AI Malfunction”: credible threats of sabotage, including cyber operations, and, in extremis, conventional strikes would deter destabilizing runs.
I highlight these points because, in my view, the government’s capacity is often mistakenly treated as a point of contention in these debates. This says nothing about the level of political will it would take for the U.S. government to commit to such an endeavor, or about the number of Nvidia and OpenAI lobbyists and China hawks it would have to turn away.
But if the political consensus converged on “if anyone builds it, everyone dies,” the government could make sure no one builds it.