AI Chips Can Be Tracked Cheaply And Securely
Despite what Nvidia’s lobbyists may say
Note: this post is longer than the 500-word quick takes I normally write for the blog; the vast majority of the research is courtesy of Sophie. This was partly our own attempt to understand whether any of the arguments against location tracking were legitimate, and to explain, in hopefully understandable, not-super-technical terms, what we learned.
Semiconductor chips are the lifeblood of AI companies and militaries. We should know where they are.
A popular policy proposal, therefore, is to track their locations with a software update. This could curb rampant smuggling to China. Furthermore, under any future international treaty on AI, both the US and China could verify that all compute is accounted for.
Even those who support exporting AI chips should favor tracking their locations. If we’re going to pursue a diffusion strategy (exporting chips to allied nations while restricting adversaries), we need a way to verify that those chips stay where they’re supposed to be.
Skeptics of location tracking, including, unsurprisingly, Nvidia themselves, tend to make three main arguments against the proposal:
Location verification is not technically feasible / too expensive.
Location verification requires backdoors/chips to emit data, which makes them structurally insecure.
Location verification can’t be done in a tamper-proof way, and is trivially cost-effective to circumvent.
We disagree with (1) and (2), which seem like willful misrepresentations, and believe both halves of (3), tampering and cheap circumvention, can be effectively mitigated; we take them separately below.

1. Location verification is not technically feasible / too expensive.
IAPS has written one of the most promising location verification proposals to date, built around “remote attestation”: a security mechanism that “allows a device (or a computing environment) to prove to a remote party that it is in a certain state, running specific software, or other internal details without revealing more information than necessary” (IAPS). The setup is as follows:
Establish a set of “landmark servers” that can receive signals transmitted from chips.
Update the chip software to enable it to transmit cryptographic proof of identity to those landmark servers.
Measure the time it takes for the signal to travel from the AI chip to the landmark server.
Using the standard signal propagation speed in fiber optic cables, calculate the maximum distance the chip could be from the landmark server.
Triangulate using multiple landmark servers to identify the area in which the chip is most likely located and/or rule out the possibility that the chip is in a restricted location.
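The arithmetic behind steps 3–5 is simple: signals in fiber travel at roughly two-thirds the speed of light (about 200 km per millisecond), so half the measured round-trip time bounds how far the chip can be from each landmark, and intersecting those bounds across landmarks narrows down the location. A minimal sketch; the flat-plane coordinates and specific numbers are illustrative assumptions, not IAPS’s parameters:

```python
import math

# Signal propagation in fiber is roughly 2/3 the speed of light: ~200 km/ms.
SPEED_KM_PER_MS = 200.0

def max_distance_km(rtt_ms: float) -> float:
    """Upper bound on chip-to-landmark distance implied by a round-trip time:
    the signal spends at most half the RTT travelling one way."""
    return (rtt_ms / 2) * SPEED_KM_PER_MS

def consistent_with_rtts(candidate, landmarks_with_rtts) -> bool:
    """Check whether a candidate chip position is consistent with every
    landmark's measured RTT. Positions are (x, y) in km on a flat plane,
    a toy stand-in for real geodesic coordinates."""
    for (lx, ly), rtt_ms in landmarks_with_rtts:
        dist = math.hypot(candidate[0] - lx, candidate[1] - ly)
        if dist > max_distance_km(rtt_ms):
            return False  # too far from this landmark to explain its RTT
    return True

# Three landmarks, each reporting an RTT for the same chip.
landmarks = [((0, 0), 5.0), ((800, 0), 4.5), ((0, 800), 9.0)]
print(consistent_with_rtts((400, 50), landmarks))   # within all three bounds
print(consistent_with_rtts((1200, 0), landmarks))   # ruled out by the first
```

Each RTT only ever *rules out* positions; it is the intersection of several landmarks’ constraints that does the locating.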
IAPS estimates that this can be done for less than $15m/yr with existing technology. Per year, that’s about the size of a seed round for a reasonably-hyped SF AI-wrapper startup, or the cost of roughly 1/10th of one mile of California high-speed rail. Cheap. And it makes sense: at the moment, it’s easier to find a lost AirPods case than a $60,000 Nvidia GB200. Apple has already solved this problem.
What makes the feasibility debate appear especially disingenuous is that NVIDIA has already built and deployed similar systems. They offer a Remote Attestation Service that establishes device identity, maintains secure cryptographically signed channels, prevents spoofing, etc.; the company is already operating verification infrastructure at scale. Instead of “building a system from scratch,” the technical gap appears to be closer to “adapting / expanding existing security infrastructure” to include location data.
2. Location verification requires chips to emit signals/data, which makes them structurally insecure.
Proponents of this argument claim that adding “backdoors” that can emit a chip’s location, or even just a signal, creates a security vulnerability: an attacker could exploit it to exfiltrate the model weights loaded onto the chip, or the outputs of its operation. Deliberately engineering such vulnerabilities would make chips more prone to hacking and sabotage, and thus less attractive to buyers.
This argument misunderstands what data location verification actually transmits. IAPS’ proposal doesn’t even send coordinates, just a simple ping and response with cryptographic proof of identity, so there’s no “room” to encode arbitrary information without breaking the cryptographic proof, and no real “data” is being emitted.
Essentially, while a new communication pathway is established, in the form of the chip sending pings to landmark servers, these pings contain only identity and timing metadata with no computational workload data, so the privacy concerns are minimal.
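To make the “no room for extra data” point concrete, here is a toy sketch of a signed ping. It uses an HMAC over a landmark-supplied nonce as a stand-in for the chip’s real attestation signature; the key handling and message layout are invented for illustration, and a real deployment would use asymmetric keys so the landmark never holds the chip’s secret:

```python
import hashlib
import hmac
import os
import time

# Toy stand-in for a per-chip secret provisioned at manufacture time.
CHIP_KEY = os.urandom(32)

def chip_respond(nonce: bytes) -> bytes:
    """Chip side: sign the landmark's fresh 16-byte nonce. The reply contains
    nothing but the nonce and its MAC -- no coordinates, no workload data."""
    return nonce + hmac.new(CHIP_KEY, nonce, hashlib.sha256).digest()

def landmark_verify(nonce: bytes, reply: bytes) -> bool:
    """Landmark side: check the reply echoes our nonce and carries a valid MAC."""
    echoed, mac = reply[:16], reply[16:]
    expected = hmac.new(CHIP_KEY, nonce, hashlib.sha256).digest()
    return echoed == nonce and hmac.compare_digest(mac, expected)

nonce = os.urandom(16)
t0 = time.perf_counter()
reply = chip_respond(nonce)
rtt = time.perf_counter() - t0  # in the real system, measured over the network
print(landmark_verify(nonce, reply))           # a valid, fresh reply verifies
print(landmark_verify(os.urandom(16), reply))  # a stale/replayed reply does not
```

Because the reply is fully determined by the nonce and the key, any attempt to smuggle extra bits into it breaks verification; the only information the landmark learns is “this chip answered, and it took this long.”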
Often, what critics are really worried about is not the attestation security features that location verification is built on, but the enforcement mechanisms that have been proposed as add-ons (e.g. geofencing and kill switches).
3. Location verification can’t be done in a tamper-proof way. (h/t Pranav)
There are four major ways a delay-based location verification system could be tampered with:
A. Increasing Timing Delays
It’s possible to deliberately route network traffic along a circuitous path, or to delay the transmission of signals emitted by chips, artificially inflating the time it takes a chip’s signal to reach a landmark server. Researchers have been able to shift the presumed geolocation of devices by up to 1,000 km with a 74% probability of evading detection using this technique.
At first, this tampering might seem counterintuitive: if the whole idea of delay-based location verification is to establish geographical boundaries to determine where a chip is, then wouldn’t increasing the geographical boundaries make it more obvious that a chip has indeed been smuggled somewhere it isn’t supposed to be?
The problem is that increasing delays increases ambiguity. Consider Kaliningrad and Gdansk: the cities are approximately 100 km apart, one in Russia (restricted) and one in Poland (allowed). If an adversary can inflate the uncertainty radius to, say, 1,000 km, then the inferred geolocation covers both.
Greater ambiguity makes triangulation across several different landmark servers significantly less precise. Without the added delay, officials might have been able to say with some degree of certainty that the chip was located in Russia; after this kind of tampering, they can’t.
However, the effectiveness of this kind of tampering is usually limited to situations where chips have been smuggled to locations near borders with non-export-controlled countries.
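Back-of-the-envelope: with ~200 km/ms propagation in fiber, each millisecond of artificially added round-trip time widens the apparent distance bound by about 100 km, since the delay is split across the round trip. So a 1,000 km shift of the kind described above requires only ~10 ms of added latency, which is easy to hide in ordinary internet jitter:

```python
# Each added millisecond of RTT looks like ~100 km of extra distance:
# ~200 km/ms propagation speed, halved because an RTT covers a round trip.
KM_PER_MS_OF_RTT = 200.0 / 2

def apparent_shift_km(added_rtt_ms: float) -> float:
    """Extra apparent distance created by artificially delaying a ping."""
    return added_rtt_ms * KM_PER_MS_OF_RTT

print(apparent_shift_km(10.0))  # 10 ms of added delay -> 1000.0 km of ambiguity
```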
Two promising mitigation strategies: (1) set hard time limits (e.g. “if RTT > x milliseconds, we will assume you’re smuggling regardless”), and (2) establish baseline RTT distributions for verified chips in a given location (e.g. Gdansk). New, more ambiguous RTT measurements can then be compared against the baseline, giving a standardized statistical test for flagging anomalies (e.g. any RTT more than 2 standard deviations above the baseline mean gets investigated).
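Mitigation (2) is just standard outlier detection. A minimal sketch, assuming we’ve already collected baseline RTTs from chips verified to be in Gdansk (all numbers are invented):

```python
import statistics

def flag_anomalies(baseline_rtts_ms, new_rtts_ms, z_threshold=2.0):
    """Flag any new RTT more than `z_threshold` standard deviations
    above the mean of the verified baseline distribution."""
    mean = statistics.mean(baseline_rtts_ms)
    stdev = statistics.stdev(baseline_rtts_ms)  # sample standard deviation
    cutoff = mean + z_threshold * stdev
    return [rtt for rtt in new_rtts_ms if rtt > cutoff]

# Baseline: verified chips in Gdansk ping a landmark in roughly 9.7-10.4 ms.
baseline = [9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7, 10.2]
# New measurements: two look normal, one is suspiciously slow.
print(flag_anomalies(baseline, [10.0, 10.3, 18.5]))
```

A real system would use a more robust test than a 2-sigma cut over a handful of samples, but the shape of the check is the same: compare new measurements against a distribution you trust.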
B. Decreasing Timing Delays
Using dark fiber, it’s possible to artificially decrease RTT measurements, making chips appear closer to landmarks than they truly are. A dedicated dark fiber link has significantly lower latency than the ordinary internet, not because the light travels faster, but because it avoids congestion and routing overhead.
This kind of tampering is considerably more difficult to mitigate. However, it can be countered by distributing landmarks such that the cost of spoofing exceeds the value of smuggling: requiring dark fiber connections to 5+ strategically placed global landmarks transforms a $60,000 smuggled chip into a multi-million-dollar evasion project, and leasing substantial amounts of dark fiber would create its own audit trail.
C. Selectively speeding up / delaying measurements to specific landmarks
Substantially more difficult than uniformly speeding up or slowing down signals from all chips, this approach requires knowing where landmarks are located and consistently speeding up or slowing down pings sent to specific landmarks to “spoof” a particular location. While it is possible for dedicated actors, it faces the same logistical difficulties as (B).
D. Landmark takeover
The final major risk of tampering comes from an adversary taking control of landmarks themselves, whether by cyber intrusion, insider compromise, or physical access to data centers, effectively allowing them to forge attestation responses and falsify location data.
Mitigation strategies include placing landmarks only in allied nations with strong cybersecurity standards, requiring chips to successfully verify against at least 3–5 geographically distributed landmarks (which also raises the bar for (B) and (C)), protecting landmark signing keys with tamper-resistant hardware, and implementing automated anomaly detection that flags landmarks returning implausible measurements.
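The multi-landmark requirement can be sketched as a simple quorum rule, under which a single compromised landmark forging a “verified” result cannot flip the outcome on its own (the landmark names and threshold are illustrative):

```python
def quorum_verified(landmark_results: dict, min_agreeing: int = 3) -> bool:
    """Accept a chip's attestation only if at least `min_agreeing` landmarks
    independently report a valid, consistent measurement."""
    return sum(1 for ok in landmark_results.values() if ok) >= min_agreeing

# One compromised landmark forges success, but only one honest landmark
# agrees, so the 3-of-5 quorum still fails.
results = {"frankfurt": True, "tokyo": False, "virginia": False,
           "sydney": False, "compromised": True}
print(quorum_verified(results))
```

The same structure also makes the selective-delay attack in (C) harder: the adversary must fool several geographically dispersed landmarks at once, not just one.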
4. Location verification is trivially cost-effective to circumvent. (h/t Pranav again)
This argument is more nuanced: it correctly asserts that sophisticated actors with large budgets (e.g. state-sponsored agencies and well-funded corporations) could circumvent location verification, including through many of the methods above, at relatively low cost. Given that major AI projects often involve billion-dollar budgets, spending millions on circumvention infrastructure is trivial relative to the value of smuggled compute.
However, location verification still has significant utility as an informational tool.
Currently, after chips are exported to, say, Malaysia, BIS essentially has no visibility into whether they stay there or get diverted to China. With location verification, if 1,000 out of 2,000 chips shipped to a specific Malaysian distributor fail to verify or show suspicious patterns, that’s a concrete signal. BIS can investigate that company specifically, add them to the Entity List, increase scrutiny on their future orders, coordinate with allies for physical inspections, etc.
Even if adversaries successfully spoof location for some chips, large-scale operations are likely to create detectable anomalies and paper trails. For example, leasing large amounts of dark fiber or establishing relay infrastructure between Malaysia and China has a high probability of appearing in telecommunications records, and such efforts would likely involve multiple vendors and create procurement trails that intelligence agencies can track.
Essentially, location verification gives enforcement agencies specific leads instead of forcing them to investigate blindly across thousands of shell companies and intermediaries.
Other drawbacks
Location verification is mostly reactive rather than preventative: once it’s been established that a certain chip may be inside the territory of an export-controlled country, there are limited options for recovery.
However, with know-your-customer protocols, detection of violations enables future exclusion from legitimate supply chains, making the initially reactive detection mechanism also function as a preventative tool.
Location verification confirms where chips are, not how they’re being used: a chip legitimately located in Singapore could still be processing Chinese military AI workloads via remote access, cloud connections, or distributed computing arrangements that verification cannot detect.
However, the primary strategic advantage is preventing China from amassing physical concentrations of advanced compute for training frontier models. Remote access to distributed Singapore-based chips faces bandwidth limitations, latency constraints, and network monitoring that make it a poor substitute for domestic data centers.
Conclusion
Currently, we’re leaving hundreds of thousands of advanced AI chips unaccounted for in the global supply chain, and we have mature, affordable technology to fix this. Location verification may not be perfect, but catching even half of the hundreds of thousands of smuggled chips would be transformative for U.S. security interests, and IAPS’s proposal is inexpensive and easy to implement relative to the potential upside.
Ultimately, location verification shifts the economics of smuggling. Instead of operating freely once chips arrive, smugglers must either accept detection risk or invest in ongoing circumvention infrastructure that is likely to create its own detectable signatures. Even imperfect verification makes large-scale diversion substantially harder to pull off undetected.
Combined with measures such as in-person audits and know-your-customer protocols, the policy would enable the U.S. government to chip away at the expansive smuggling operations that are currently undermining our economic and national security.
Location verification isn’t tamper-proof, but meaningful tampering is likely to leave costly, detectable footprints that give regulators valuable evidence they currently lack.
Despite what NVIDIA lobbyists may claim, it is a policy well worth implementing.
Thanks to Asher Brass, Pranav Gade, Shaun Ee, & Will Hodgkins for their helpful feedback!