
On March 1, 2026, Iranian drones struck three AWS data centers in the Gulf, knocking Abu Dhabi Commercial Bank offline for two days and degrading 109 cloud services across the UAE and Bahrain.[1] Four days later, Reuters reported that the US Commerce Department had drafted rules that would turn Nvidia into the enforcement arm of American export controls — monitoring chip deployments worldwide and conducting physical inspections of large installations.[2]
Two events. Five days. Two entirely different layers of the same infrastructure stack. The drones proved that a data center has a street address and someone can hit it. The draft rule revealed something more unsettling: the chips inside that data center may soon operate only at Washington's pleasure — licensed, monitored, and potentially degradable via firmware, without a missile in sight.
These are not the same threat. And no government, regulator, or risk framework treats them as part of the same problem.
The wrong question
The risk frameworks that govern AI infrastructure — from DORA to NIST to NIS2 — share a foundational assumption: dependencies break accidentally.[3] The EU’s Digital Operational Resilience Act (DORA) requires financial entities to assess concentration risk and maintain exit strategies — in case a provider goes bankrupt, suffers a breach, or discontinues a product.[4] NIST’s supply chain risk management category covers vendor assessment and monitoring — generically, without distinguishing accidental outages from deliberate cutoffs.[5] Even NIS2, which comes closest to acknowledging the real threat — Recital 90 warns of “undue influence by a third country on suppliers and service providers” — stops at identification without building a framework for analyzing how such influence operates, at which layer, or on what timeline.[6]
The question these frameworks don’t ask: what happens if someone deliberately activates the dependency?
The distinction between disruption and coercion is not semantic. It changes the risk analysis and the investment decision that follows. A disruption model tells you to build redundancy — a second cloud provider, a backup data center, a disaster recovery plan. A coercion model tells you to assess who holds leverage over your infrastructure and whether your strategic alignment with them is durable. Redundancy doesn’t help if both your primary and backup providers answer to the same government. Disaster recovery doesn’t help if the disaster is a licensing decision, not a lightning strike.
Farrell and Newman called this weaponized interdependence: when a country or company controls the chokepoint of a global network, it can surveil what flows through and cut off anyone it chooses.[7] The AI stack concentrates that power at every layer. The emerging chip governance regime would give Nvidia and US regulators visibility into where every advanced GPU operates worldwide — visibility that does not yet exist at scale, which is precisely why the Chip Security Act and the March 2026 draft rules seek to mandate it. AWS, Azure, and Google already hold infrastructure-level metadata on every workload running in their regions: which instances are active, what APIs they call, how much compute they consume, and where the data flows — even when the workload itself is encrypted. OpenAI logs every API query by geography, user, and volume. Each chokepoint holder can deny access — partially, gradually, reversibly — through mechanisms that look less like warfare and more like a licensing dispute.
Governments have spent decades developing frameworks to address two of the three threats to their infrastructure. Legal access — the CLOUD Act, FISA, GDPR transfer restrictions — has an established literature and a set of partial mitigations; I traced the specific legal pathways in an earlier analysis of how two “sovereign” clouds answer to the same American court.[8] Physical destruction — geographic distribution, disaster recovery, hardened facilities — has been a part of military doctrine since the Cold War. The third threat — deliberate disabling of infrastructure through the dependency stack — lacks a framework or doctrine. It is the orphan category, the one that the events of March 1-5 forced into the open.
Every layer of the AI stack — chips, cloud, models — contains a foreign-controlled off switch. Each switch is held by a different actor, operates through a different mechanism, and activates on a different timeline. They compound: no single mitigation addresses all three. And the only escape — building the entire stack domestically — is structurally blocked for most nations by the same forces that prevent them from building frontier AI.
Three switches. Three actors. Three precedents that prove each one works.
What disable means — and what it doesn’t
Before tracing the three switches, a definition. Disable, as used here, does not mean hacking. It does not mean unauthorized access, malware, or a cyber attack. Those threats are real, but they are already modeled by NIST, addressed by the US cybersecurity agency CISA, and covered by every enterprise security program on the planet.
Disable means the exercise of legitimate mechanisms — licensing terms, export compliance, firmware attestation, terms of service — that are designed into the product and activated through normal commercial or regulatory channels. When Microsoft suspended Russian cloud services, it was complying with EU sanctions law. When OpenAI blocks API access in China, it is enforcing its own terms of service. When the Chip Security Act proposes location verification, it is proposing a regulatory requirement, not a vulnerability. There is nothing to detect, no intrusion to block, no patch to apply. The “attack” is the vendor doing exactly what the contract allows — or the regulator requiring exactly what the statute authorizes.
This is why existing cybersecurity frameworks miss it. NIST models unauthorized access. DORA models provider failure. Neither models the scenario where the provider is functioning exactly as designed, and the degradation is the intended outcome. Disable operates through the front door, under color of law, with full contractual authority. That is what makes it more dangerous than hacking — and what makes it invisible to every framework designed to defend against unauthorized access.
The chip switch
Nvidia controls approximately 90% of data center GPU revenue worldwide, a figure that has declined only modestly as AMD and custom silicon gain share.[9] The Chip Security Act, introduced in Congress in May 2025 with bipartisan support, would require every export-controlled AI chip to include location verification mechanisms within 180 days of enactment.[10] Nvidia itself has built location verification technology using confidential computing and communication latency between chips and Nvidia-operated servers, initially for its Blackwell-generation processors.[11] The March 2026 draft rule would go further: for chip installations seeking an exemption from new licensing requirements, Nvidia would be required to monitor deployments and mandate software that prevents chips from being linked to unauthorized clusters — effectively turning a chip company into the enforcement arm of American export controls.[12]
The structural parallel is the F-35 Joint Strike Fighter — a military procurement program, not a commercial product, but the dependency architecture is identical. The Pentagon officially denies that the F-35 has a kill switch, and every partner nation confirms this.[13] The denial is technically accurate and strategically irrelevant. The F-35’s Autonomic Logistics Information System routes every partner nation’s aircraft health data, flight records, and maintenance schedules through a single computer hub at Lockheed Martin’s Fort Worth facility — the only such hub in the world.[14] International operators cannot access the jet’s source code or conduct independent test operations outside the continental United States.[15] Mission Data Files — the threat intelligence that keeps the jet’s sensors current — require frequent updates during conflict; without them, an F-35 can fly but cannot effectively fight.[16]
Turkey proved the model works. After investing $1.4 billion and serving as a program partner since 2002, Turkey was expelled from the F-35 program in July 2019 for purchasing Russian S-400 air defense systems.[17] No F-35 ever reached Turkish soil. The jets Turkey had paid for sit in storage at American air bases. In December 2020, Turkey became the first NATO ally sanctioned under CAATSA — legislation designed to penalize procurement of Russian defense equipment, applied for the first time to an ally.[18] No missile was required. The dependency architecture did the work.
The F-35 is a government-to-government military sale; Nvidia chips are commercial products. The coercion mechanisms differ — military procurement operates through the Foreign Military Sales program, while chip control operates through export regulations and firmware governance. But the dependency structure is the same: layered technical obligations that make the hardware functional only as long as the relationship holds. The chip governance regime now forming replicates the F-35 model at semiconductor scale. Periodic licensing that mirrors Mission Data File updates. Location verification that parallels telemetry flows to Fort Worth. The same implicit threat of supply cutoff that made Turkey’s expulsion possible. Time to effect: weeks to months — though export control designations can take effect immediately, as Huawei discovered in May 2019.[19]
There is a structural irony in building governance mechanisms into the chip layer. This piece does not advocate for or against chip-level controls — only observes that they are forming, and that they break every pretense of sovereignty for any nation that doesn’t build its own silicon. But the history of embedded governance mechanisms carries a warning that applies regardless of whether the controls are desirable. Historically, surveillance and control capabilities built into infrastructure have been exploited by actors other than their intended users — and often not by rival governments but by criminal actors or unknown third parties. Disable and exploitation are different threats, but they share the same infrastructure, and building one enables the other.
The NSA’s Tailored Access Operations intercepted Cisco networking equipment in transit and implanted JETPLOW firmware backdoors — a supply chain interdiction that Cisco learned of only through the Snowden documents, and that contributed to an 18% decline in the company’s China orders.[20] Ericsson’s lawful intercept modules, built into Vodafone Greece’s switches by design, were exploited in an operation that subsequent investigations linked to US intelligence, wiretapping the Greek Prime Minister and 105 other targets — senior officials, military commanders, journalists, and activists — for nine months around the 2004 Athens Olympics.[21] Wiretap systems in AT&T, Verizon, and seven other US carriers — mandated by the Communications Assistance for Law Enforcement Act (CALEA) since 1994 — were compromised by China’s Salt Typhoon campaign, which accessed Trump’s communications and obtained the near-complete FBI surveillance target list.[22] Senator Warner called it the worst telecom hack in American history.
But the pattern extends beyond state-on-state espionage into territory that should concern every enterprise, not just governments. The NSA influenced a cryptographic random number generator called Dual EC DRBG to be adopted as a NIST standard, reportedly paying RSA Security $10 million to make it the default in the widely used RSA BSAFE library.[23] Juniper Networks used it in its ScreenOS firewall software, with custom parameters intended to prevent exploitation. In 2012, a party later linked by investigators to a Chinese state-backed hacking unit changed those parameters, creating a passive capability to decrypt VPN traffic across affected versions of Juniper’s NetScreen firewalls for three years before discovery.[24] A government-designed backdoor in a cryptographic standard was re-exploited through a parameter swap that required no unauthorized access to the systems it compromised. Juniper’s customers — which included US federal agencies — were exposed not by the NSA’s original design but by someone else who understood it.
The same dynamic played out with the NSA’s EternalBlue exploit, a Windows SMB vulnerability the agency hoarded for years as an intelligence tool. When the Shadow Brokers group leaked it in 2017, it was weaponized within weeks — first as WannaCry ransomware, which shut down NHS hospitals across the UK, then as NotPetya, which halted Maersk’s global shipping operations. Combined damages exceeded $10 billion.[25] Capabilities designed for targeted intelligence collection were used for indiscriminate destruction.
Nvidia’s own chief security officer drew a line in August 2025: mandatory hardware-level controls are “a gift to hackers and hostile actors,” but optional, user-controlled software tools are acceptable.[26] Four months later, Nvidia built location verification as exactly that — an optional software service, customer-installed, read-only. The Chip Security Act would make it mandatory. The March 2026 draft rules would make Nvidia the enforcer. The line the CSO drew is the line the legislation erases.
Whether or not chip governance mechanisms are built, the disable pattern is not confined to silicon. It operates at every layer of the stack — and the second switch moves faster.
The cloud switch
In March 2024, Microsoft notified Russian organizations that approximately 50 cloud products would be suspended.[27] By mid-May, disconnections rolled through in batches — Power BI, Azure services, SharePoint, Teams, and specialized design and manufacturing software disappeared from Russian enterprise environments over a period of weeks.[28] Enterprise and government users lost access; individual accounts were initially excluded. Microsoft’s notification letter cited the EU’s 12th sanctions package. The framing was regulatory compliance. The effect was indistinguishable from a targeted infrastructure attack: critical business systems ceased functioning, data became inaccessible, and organizations scrambled to migrate workloads they had been told would always be available.
The Russia precedent illustrates a crucial feature of the cloud switch: it can be activated by multiple jurisdictions, not just Washington. Microsoft acted under EU sanctions law. The US could activate the same switch through the CLOUD Act, Treasury Department sanctions, or Commerce Department entity list designations. The question for any nation dependent on US hyperscalers is not whether Washington will flip the switch, but how many governments hold the authority to flip it — and whether your alignment with all of them is durable.
AWS, Microsoft, and Google collectively control roughly 63-74% of the global cloud infrastructure market, depending on whether the measurement includes all cloud services or only IaaS.[29] This concentration would matter less if these were neutral infrastructure providers. They are not. All three serve as contractors under the Pentagon’s Joint Warfighting Cloud Capability program, operating across all classification levels from unclassified to TS/SCI.[30] Their data centers host AI workloads for intelligence agencies and defense contractors alongside commercial banking and government services for allied and non-allied nations alike.
This dual-use reality makes data centers targetable under the same legal doctrines that have governed the bombing of telecommunications infrastructure in every major conflict since the 1990s. NATO systematically destroyed Yugoslavia’s telecom network in 1999 — transmitters were attacked across the country, and the RTS broadcast headquarters was struck in a raid that killed 16 employees — under a doctrine that dual-use communications infrastructure serving military command and control constitutes a legitimate military target.[31] The international war crimes tribunal validated this reasoning. The US hit Iraqi telecom facilities on March 27, 2003, as part of the broader infrastructure campaign; a USAF planner during the 1991 Gulf War stated the goal was to “put every household in an autonomous mode and make them feel they were isolated.”[32] The March 1 drone strikes on AWS are the first application of this targeting logic — whether deliberate or incidental — to cloud infrastructure. Iran’s Fars News Agency claimed the strikes targeted the facilities for their role in supporting military and intelligence activities. Whether that claim is a post-hoc justification or a genuine targeting rationale, the precedent holds: a hyperscaler’s production infrastructure was struck, and the downstream effect on civilian services was immediate.
The cloud switch operates faster than the chip switch — hours to days rather than weeks. And it creates a compounding interaction with data-residency laws that the chip switch does not. When every availability zone in a country goes offline, the residency requirements designed to protect sovereign data become the mechanism that traps it in the blast radius. UAE banking data that must remain in the UAE cannot be transferred to Frankfurt.[33] The cloud switch doesn’t just deny service; it locks data inside the geography where the damage occurred.
The model switch
OpenAI does not serve China, Russia, Iran, North Korea, Syria, or Cuba.[34] This is not a technical limitation — it’s a geographic restriction enforced through IP-based blocking, account registration controls, and payment verification, with enforcement escalating sharply from July 2024.[35] Meta’s Llama license — the most widely used open-weight model family — requires licensees to comply with all applicable trade laws and regulations, which effectively bars any entity subject to US sanctions from legal use.[36] Meta separately withheld multimodal Llama 3.2 models from the European Union over data protection concerns — demonstrating that geographic restriction operates on regulatory grounds as well as sanctions grounds.[37]
The model layer is the most porous of the three. Open-weight models can be downloaded and run locally, creating a partial escape route. But “locally” still means on chips (layer one) hosted in cloud infrastructure (layer two). A nation that downloads Llama weights but runs them on Nvidia GPUs in an AWS region has escaped the model switch while remaining fully exposed to the other two. And the porosity decreases as models grow: serving a frontier model at production scale requires GPU clusters that most organizations cannot self-host. A government ministry running ChatGPT through an API is fully exposed. A research lab with its own GPU cluster running open-weight models is partially insulated. The insulation is real but thin. API revocation takes effect in minutes.
Three switches, one surface
Each switch is dangerous alone. Together, they create something qualitatively different — a coercion surface that no single mitigation addresses and no existing framework models.
The switches operate at different speeds, through different actors, under different legal authorities. A chip licensing dispute unfolds over months. A cloud suspension over days. A model API cutoff in minutes. A coordinated activation across all three layers — export controls tightening on chips, sanctions compliance triggering cloud suspensions, terms-of-service restrictions narrowing model access — would degrade a nation’s AI capability in stages, each stage making the next harder to mitigate. No coordinated activation across all three layers has been directed at a single nation as a deliberate campaign, but the Russia sanctions came closest. Between 2022 and 2024, Russia experienced chip export controls (October 2022, updated October 2023), cloud service suspensions (Microsoft, mid-2024), and model API restrictions (OpenAI, from launch) — arriving through different actors, under different legal authorities, over different months. The switches were tested individually. The cumulative effect was a progressive three-layer degradation of Russia’s commercial AI infrastructure — achieved without central coordination, through independent actors responding to the same geopolitical rupture.[38]
The chip switch is the slowest but the deepest: it affects not just current operations but future capability, because chips take months to procure and years to replace at scale. The model switch is the fastest but the shallowest: open-weight alternatives exist, and a technically sophisticated actor can route around it. The cloud switch sits in the middle — fast enough to cause immediate operational pain, deep enough to trap data in inaccessible infrastructure.
The three switches also differ in legal maturity, and tracking where each one sits matters for assessing how fast the coercion surface is forming. The cloud switch is fully operational: the CLOUD Act, FISA, and EU sanctions regulations give governments standing compulsion authority, tested and exercised. The model switch is contractual: providers enforce geographic restrictions through terms of service, without government compulsion, which is why it’s the most porous. The chip switch is the one being built in real time. Export controls restrict where chips can be sold, but there is no standing legal authority compelling a chipmaker to monitor deployed hardware, verify its location, or degrade its function remotely. The Chip Security Act would create that authority. The March 2026 draft rules would operationalize it. Nvidia has already built the voluntary software. A year ago, none of this existed. Today, it is a bill in committee and a draft rule under review. The switch can’t be flipped yet. It is being wired.
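The taxonomy above can be encoded as data — a schematic sketch, not a risk model. The timescales and maturity labels are this essay’s own characterizations, and the `Switch` structure and example values are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Switch:
    layer: str
    holder: str            # who can activate the switch
    authority: str         # legal basis, per the analysis above
    activation_days: int   # rough order-of-magnitude timescale
    maturity: str          # how established the legal mechanism is

# The essay's characterization of the three switches, encoded as data.
SWITCHES = [
    Switch("model", "model provider (e.g. OpenAI, Meta)",
           "terms of service", 0, "contractual, fully operational"),
    Switch("cloud", "hyperscaler (AWS, Azure, Google)",
           "sanctions law / CLOUD Act / FISA", 2, "statutory, fully operational"),
    Switch("chip", "chipmaker + US regulators (Nvidia, Commerce)",
           "export controls (draft)", 60, "being legislated"),
]

# Fastest switch first: model APIs cut off in minutes, chips over months.
for s in sorted(SWITCHES, key=lambda s: s.activation_days):
    print(f"{s.layer:5s} | ~{s.activation_days:>2d}d | {s.authority:32s} | {s.maturity}")
```

The point the sorted output makes is the one in the text: legal maturity and activation speed run in opposite directions — the fastest switch is the least governed, and the slowest is the one being wired into law.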
The compounding effect is that no mitigation strategy addresses all three simultaneously. Building a sovereign cloud routes around the cloud switch but leaves the chip and model dependencies intact. Training domestic models routes around the model switch, but still requires foreign chips on foreign cloud. Even running open-weight models on domestic cloud infrastructure powered by domestic chips — the full-stack solution — requires a talent pipeline, a software ecosystem, and a fabrication capability that most nations lack. The switches don’t just stack. They interlock.
The exit that doesn’t exist
Every strategy governments actually pursue addresses one or two switches while leaving the others open.
Data residency — keeping everything in-country — addresses the legal dimension. Your courts, your jurisdiction, no CLOUD Act exposure. But it consolidates all physical infrastructure into a single targetable geography and does nothing about chip or model dependencies. The UAE built precisely this architecture and discovered its failure mode on March 1.
Geographic distribution — spreading infrastructure across allied countries — addresses the physical dimension. No single point of failure, no blast radius trap. But it reintroduces the legal exposure that data residency was designed to prevent, and still does nothing about chips or models.
Sovereign cloud — building domestic infrastructure on local providers — addresses both legal and some physical risks. But OVHcloud, T-Systems, and Scaleway still run Nvidia GPUs. The chip dependency survives the cloud migration. France’s SecNumCloud certification requires French ownership at every layer of the cloud stack — and makes no provision for the silicon underneath it. On February 25, 2026, four days before the drone strikes, the UAE Central Bank announced the “world’s first sovereign financial cloud” with Core42, a subsidiary of G42 — itself the recipient of $1.5 billion in Microsoft investment, operating Nvidia GPU clusters. The announcement promised “data sovereignty,” “robust protection against cyber threats,” and “continuous availability of critical financial services.”[47] Four days later, that sovereignty — all critical financial infrastructure concentrated in one targetable geography, exactly as promised — was what put the UAE in the crosshairs.
If these switches exist, why hasn’t the market priced them in? Every European bank still uses AWS. Every Japanese AI lab still buys Nvidia. The answer is that enterprises are making a rational bet: the US-allied relationship is stable enough that the switches won’t be used against them. That bet has been correct for decades. But Turkey made the same bet — and crossed a single red line. The switches are most dangerous not in a stable alliance but in a deteriorating one — and exercising them against allies would accelerate exactly the alternative-seeking behavior that erodes the leverage. The reader’s job is to assess the durability of their own alignment, not to assume it.
The only strategy that addresses all three switches simultaneously is to build the entire stack domestically: your own chips, your own cloud, your own models, trained by your own talent, running in your own facilities. Brookings concluded in February 2026 that this “full-stack AI sovereignty is structurally infeasible for almost any country” — and their ten-layer analysis found that even the United States lacks complete sovereignty, since TSMC fabricates Nvidia’s most advanced chips on ASML’s lithography machines.[39]
The structural forces that block this exit are not primarily financial. They are the subject of what this Substack’s country series has been documenting for the past year.
Japan has committed ¥2 trillion to AI and still faces a talent doom loop: low AI salaries drive researchers abroad, companies can’t build AI capability, they buy from US providers, the domestic ecosystem stays small, and there’s no market pressure to raise salaries. The loop reinforces itself.[40] India produces world-class AI engineers by the hundreds of thousands and exports them — the talent pipeline functions as a drain, not a reservoir, because US compensation and research environments exert gravitational pull that no government AI mission can counteract.[41] The Gulf states had the capital and the infrastructure, and just discovered their geography is a target.
Europe has the most sophisticated regulatory architecture on Earth — GDPR, the AI Act, EUCS, DORA, the Data Act — and no European company has produced a frontier model. The EU’s Apply AI Strategy, published in October 2025, warns of “external dependencies of the AI stack that can be weaponised” while targeting €20 billion in public and private AI infrastructure investment that will still run on Nvidia chips.[42]
The pattern across these cases is consistent. These are not resource gaps that funding can close. They are system-level optimizations for outcomes that conflict with frontier AI development — stable employment over talent mobility, hardware manufacturing over software culture, regulatory ambition over industrial capacity. No government commitment can override a labor market that prices AI talent below the global clearing rate, or a corporate culture that treats software as a cost center rather than a strategic capability.
China is the exception that clarifies the rule. It is the only nation attempting full-stack sovereignty across all three layers simultaneously: SMIC for chip fabrication, Huawei Cloud and Alibaba Cloud for infrastructure, DeepSeek and Qwen for models. It is paying a performance penalty at every layer — SMIC’s most advanced process lags TSMC by two or more generations, Huawei’s Ascend chips trail Nvidia in training throughput, and Chinese cloud providers lack the global scale of the hyperscalers.[43] DeepSeek demonstrated that algorithmic efficiency can partially compensate for hardware disadvantage — its training methods extracted more capability per GPU than American labs had assumed possible. But the training still required thousands of restricted Nvidia GPUs that the export control regime was designed to prevent; the House Select Committee on China documented at least 60,000 Nvidia chips in DeepSeek’s infrastructure, many obtained through Southeast Asian intermediaries.[44] Algorithmic efficiency reduces the quantity of chips needed. It does not change the identity of the switch-holder. If you need 10,000 Nvidia GPUs instead of 50,000, Nvidia still holds the switch — you just need them to flip fewer of them.
China’s approach is a demonstration of what full-stack sovereignty actually costs — measured not in dollars but in structural preconditions most national systems cannot produce: domestic talent retained by state direction rather than market compensation, a fabrication program willing to accept years of inferior performance, and the political will to build inferior alternatives to every layer of a technology stack the rest of the world accesses freely. The cost is real. The question is whether anyone else is willing to pay it.
Choosing your dependencies
The verdict is not independence. For most nations, independence at every layer of the AI stack is structurally blocked by the same forces that prevent them from building frontier AI.
The realistic endpoint is managed interdependence — accepting dependency as a permanent condition but negotiating its terms. For a nation, the coercion audit is a sovereignty assessment: which layers of the stack do we control, which do we accept as dependencies, and what bilateral relationships underpin each dependency? For an enterprise, it’s a vendor risk methodology that no existing framework requires: at each layer of the stack, who holds the switch, under what legal authority can it be activated, and what is your organization’s exposure if the relationship between your government and that authority deteriorates? A European bank running AI workloads on AWS with Nvidia GPUs calling the OpenAI API has three switches, three actors, and three jurisdictional exposure points — and its DORA concentration risk assessment addresses none of them as coercion vectors.
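That audit question can be sketched in a few lines. The vendor-to-jurisdiction mapping below is an illustrative assumption — real exposure depends on contracts, entity structures, and deployment regions — but it shows why redundancy inside a single jurisdiction is not mitigation:

```python
# A minimal coercion-audit sketch. The mapping of vendors to the governments
# that can activate their switch is a simplified assumption for illustration.
JURISDICTION = {
    "Nvidia": {"US"},
    "AWS": {"US", "EU"},    # EU sanctions also bind, as the Microsoft-Russia case showed
    "Azure": {"US", "EU"},
    "OpenAI": {"US"},
    "Llama (self-hosted)": set(),  # open weights escape the model switch only
}

def coercion_audit(stack: dict[str, str]) -> dict[str, set[str]]:
    """For each layer of a workload's stack, which governments hold the switch?"""
    return {layer: JURISDICTION.get(vendor, {"unknown"})
            for layer, vendor in stack.items()}

# The European bank from the text: three layers, overlapping switch-holders.
bank = {"chip": "Nvidia", "cloud": "AWS", "model": "OpenAI"}
audit = coercion_audit(bank)
exposed_to = set().union(*audit.values())
print(audit)       # per-layer exposure
print(exposed_to)  # the bank answers to both Washington and Brussels
```

The output is the audit the text describes: not "how many providers do we have," but "how many governments can reach all of them at once."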
The full operational translation — severity modeling by organizational profile, trigger indicators for reassessment, contractual remediation of force majeure gaps — goes beyond what any single analysis can deliver. But the first step is the vocabulary: knowing which switches exist, who holds each one, and at what speed each operates.
It means maintaining relationships with multiple providers at each layer — not for redundancy in the disruption sense, but to create competitive tension among the switch-holders. An organization that runs workloads on both AWS and a domestic cloud provider, trains on both Nvidia and AMD hardware, and deploys both proprietary and open-weight models has not eliminated dependency. It has diversified the leverage against it. That is a different calculation from disaster recovery, and it requires a different budget line.
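One way to make that calculation concrete is a concentration score over switch-holders across the stack, in the spirit of a Herfindahl index — a hedged sketch, not an established metric, and the holder labels are hypothetical:

```python
from collections import Counter

def leverage_concentration(holders: list[str]) -> float:
    """Herfindahl-style concentration of switch-holders across stack layers:
    1.0 means one actor holds every switch; lower means leverage is spread."""
    counts = Counter(holders)
    n = len(holders)
    return sum((c / n) ** 2 for c in counts.values())

# Hypothetical profiles: one switch-holder per layer of a three-layer stack.
single_holder = ["US-gov", "US-gov", "US-gov"]          # Nvidia + AWS + OpenAI
diversified   = ["US-gov", "domestic", "open-weights"]  # mixed, per the text

print(leverage_concentration(single_holder))  # 1.0
print(leverage_concentration(diversified))    # ~0.33
```

A score of 1.0 is the European bank from earlier: three switches, one hand on all of them. Diversification lowers the score without eliminating any single dependency — which is exactly the distinction the text draws between redundancy and leverage.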
It means building emergency migration frameworks before the emergency arrives. The digital embassy concept — commercial contracts pre-authorizing rapid cross-border migration of critical systems — has been explored by Gulf states since March 1.[45] Ukraine’s wartime data evacuation, which moved 10 petabytes of sovereign data to AWS regions beyond Russian artillery range within weeks of the 2022 invasion, provides the operational precedent.[46] Both required legal preparation completed before the crisis — precisely the preparation most nations have not done.
And it means conducting sovereignty risk assessments that model coercion, not just disruption — asking not only “what if AWS goes down” but “what if Washington decides our chips should run slower, our cloud access should be restricted, and our model APIs should be revoked, in that order, over a period of months.” That scenario has no precedent as a coordinated campaign against a US ally. Turkey is the closest case for any single layer, and Russia is the closest for the cumulative effect. But every mechanism required to execute it already exists: export controls for chips, sanctions compliance for cloud, terms of service for models. The switches are built. The question is who decides when to flip them — and whether you’ve structured your infrastructure for the possibility that one day, someone will.
The ungoverned
Every nation’s AI strategy has a layer where someone else holds the off switch. The question isn’t whether you’re sovereign. It’s how many layers deep your dependency goes, and whether you’ve chosen your dependencies or had them chosen for you.
That’s the conclusion for nations that play by the rules. DORA, NIS2, the AI Act, SecNumCloud, data residency, sovereign cloud — all of it is governance for the governed. Rules for the rule-followers. The frameworks are real. The compliance obligations are real. And they apply exclusively to actors who were never the threat.
Location verification, firmware attestation, cluster authorization — these make compliant actors visible. They do nothing about the actors who were never going to comply. Build switches into every GPU, and you create a world that cleanly divides into two: actors who can be coerced and those who cannot be found.[48]
It is 2031. In twelve minutes, across four continents, an AI-generated video of the American president ordering a nuclear strike against a Middle Eastern country hits social media in fourteen languages. Synthetic audio of the ECB president confirming the insolvency of three eurozone banks triggers the fastest sovereign debt selloff since 2012. Fabricated satellite imagery of naval formations in the Taiwan Strait floods intelligence-sharing networks. None of it is real. The bank runs are. The riots in Frankfurt are. The military mobilization in the Western Pacific is.
The source is traced within hours to an underground facility — thousands of smuggled GPUs running uncensored models trained outside Western labs, generating and distributing content at a speed no human operation could match. No firmware attestation ever touched those chips. No API ever logged those queries. No framework ever governed that facility. It never applied for a cloud account. It never accepted the terms of service. It never filed a concentration risk assessment.
B-2s fly out of Whiteman Air Force Base carrying GBU-57 bunker busters — 30,000 pounds each, built to punch through sixty meters of hardened earth.
The facility is a crater. The models were copied to dozens of nodes before the first bomb fell. The content is still generating. Global markets have crashed. The bank run is spreading faster than any central bank can respond. Forty-three US embassies are in flames. Soldiers are firing on crowds in Jakarta, in Karachi, in Cairo — crowds enraged by a strike that was never launched, ordered by a president who never spoke, in a video that was never real.
Access found nothing to surveil. Disable found nothing to degrade. Destroy hit the target.
The B-2 arrived twelve hours later. The damage was done in twelve minutes.
Notes
[1] AWS Health Dashboard status updates, March 1-2, 2026; CNBC, “Iran war: Digital services down in UAE after data center drone strikes,” March 3, 2026; DefenseScoop, “Commercial data centers emerge as targets in modern warfare after drones hit 3 AWS facilities,” March 3, 2026. Abu Dhabi Commercial Bank confirmed its platforms and mobile app were unavailable for approximately 48 hours. The National (UAE), March 4, 2026.
[2] Reuters, “US mulls new rules for AI chip exports, including requiring US investments by foreign firms,” March 5, 2026; Bloomberg, “US Drafts Rules for Sweeping Power Over Nvidia’s Global Sales,” March 5, 2026. Specifically, the draft establishes a tiered system: chip installations seeking export license exemptions would require Nvidia monitoring and software preventing unauthorized clustering; installations up to 200,000 processors would be subject to physical visits by US export control officials; installations above 200,000 would require intergovernmental agreements and US AI data center investment commitments. The draft is described as the sixth iteration of a replacement for the Biden-era AI Diffusion Framework, which the Trump administration rescinded on May 13, 2025. A White House official told Axios the draft “does not reflect what President Trump has said on export controls nor does it reflect the direction of the Trump administration.” Axios characterized the draft as “Diffusion 2.0.”
[3] DORA: Regulation (EU) 2022/2554, effective January 17, 2025. NIST: Cybersecurity Framework 2.0, February 2024. NIS2: Directive (EU) 2022/2555. FSB: Third-Party Risk Management Toolkit, December 2023.
[4] DORA Article 29(2) requires assessment of ICT concentration risk including whether providers are “not easily substitutable” and mandates identification of “alternative solutions.” Article 28 requires contractual exit strategies. Article 31 empowers ESAs to designate Critical Third-Party Providers for direct EU-level oversight.
[5] NIST CSF 2.0 Govern function, GV.SC-01 through GV.SC-10. All ten subcategories are threat-agnostic — they address vendor assessment, risk integration, and monitoring without distinguishing deliberate coercion from accidental failure.
[6] NIS2 Recital 90 warns of “undue influence by a third country on suppliers and service providers, in particular in the case of alternative models of governance” and references “concealed vulnerabilities or backdoors.” This appears in a recital (interpretive context, not directly binding operative text) and identifies these as “non-technical risk factors” for coordinated security assessments — but does not create a systematic framework for analyzing coercion mechanisms by stack layer. Additional frameworks that model disruption but not coercion: FSB, “Toolkit for Enhancing Third-Party Risk Management and Oversight,” December 2023. Language throughout addresses “disruption, outage or failure” at third-party service providers. On DORA’s geographic provisions: DORA Article 29(2)(c) requires assessment of risks arising from “ICT third-party service providers’ location relative to the financial entity.” This geographic risk provision acknowledges jurisdictional exposure as a factor in concentration risk but treats it as a compliance assessment input, not as a vector for deliberate coercion. On the EU ICT Supply Chain Security Toolbox: adopted February 13, 2026 (European Commission), it explicitly addresses foreign interference as a non-technical risk. However, the toolbox builds on the 5G Toolbox framework and is designed primarily for telecommunications and critical infrastructure, not AI compute specifically.
[7] Farrell, Henry, and Abraham L. Newman. “Weaponized Interdependence: How Global Economic Networks Shape State Coercion.” International Security 44, no. 1 (2019): 42-79. The paper’s two core mechanisms: the “panopticon effect” (hub control enables surveillance of network participants) and the “chokepoint effect” (hub control enables denial of access). Extended in their 2025 Foreign Affairs article “The Weaponized World Economy.” A senior Trump administration official described weaponized interdependence as a “beautiful thing” and used the framework as a template for strengthening chip export controls.
[8] See “Two Sovereign Clouds, One Legal Wall,” The AI Realist. The piece traces the specific legal pathways through which the CLOUD Act and FISA Section 702 reach data stored in “sovereign” European cloud offerings, regardless of data residency or entity structure. The access dimension of AI infrastructure risk is covered in detail there and is not reproduced here.
[9] BIS Paper No. 154, “The AI Supply Chain” (Gambacorta & Shreeti, March 2025), reports Nvidia’s data center GPU revenue share at 92% as of 2023, sourced from IoT Analytics/Statista. This measures data center GPU revenue, not the broader AI accelerator market, which includes custom ASICs (Google TPUs, Amazon Trainium). Wells Fargo estimated the broader AI accelerator market at 80-90% Nvidia as of mid-2025. AMD’s MI300X and hyperscaler custom silicon have likely reduced the figure modestly by early 2026.
[10] Chip Security Act: S.1705 (Senate, introduced by Sen. Tom Cotton, R-AR) and H.R.3447 (House, introduced by Rep. Bill Huizenga, R-MI, and Rep. Bill Foster, D-IL). Section 4(a)(1) requires “location verification” within “not later than 180 days after the date of the enactment.” 32 House cosponsors as of February 2026, including the Chair and Ranking Member of the House Select Committee on China. Still in committee; no floor vote as of March 6, 2026. The bill has drawn opposition: the Center for Cybersecurity Policy argued it “threatens to create new cyber vulnerabilities” by mandating hardware-level tracking mechanisms that could be exploited by adversaries.
[11] Reuters, “Nvidia builds location verification tech that could help fight chip smuggling,” December 9, 2025 (Stephen Nellis, Michael Martina). Technology uses confidential computing capabilities and communication latency measurement to estimate the country where a chip is operating. Designed initially for Blackwell-generation chips; backward compatibility to Hopper and Ampere under consideration. Nvidia separately announced an opt-in fleet management service streaming GPU telemetry data to an Nvidia-hosted portal (Nvidia blog, December 10, 2025) — described as “read-only” and unable to “modify GPU configurations.”
[12] Reuters, March 5, 2026. The monitoring and anti-clustering software provisions are conditions for exemption from licensing requirements for installations under 1,000 chips, not blanket mandates on all chip sales. Commerce Department did not respond to requests for comment.
[13] Pentagon: F-35 Joint Program Office statement, March 2025. Belgium, Switzerland, Czech Republic, Germany issued similar denials. Interesting Engineering, The Aviationist, and The Defense Post all covered the denials.
[14] GAO-20-316, “Weapon System Sustainment: DOD Needs a Strategy for Re-Designing the F-35’s Central Logistics System,” March 2020, p. 1: ALIS is “one of three major components that make up the F-35, along with the airframe and engine” (attributed to a DOD official). The Autonomic Logistics Operating Unit (ALOU) is described as “the central computer unit that all F-35 data are sent through... There is only one ALOU.” Located at Lockheed Martin USAF Plant 4, Fort Worth, TX. GAO-16-439 flagged the single ALOU as a single point of failure. Note: ALIS has been transitioning to a successor system called ODIN (Operational Data Integrated Network) since 2022. ODIN replacement hardware is deployed but software transition remains incomplete as of March 2026. The structural dependency argument in the body applies to both systems — the centralization persists under ODIN. The F-35 ecosystem spans approximately 24-25 million lines of code across aircraft and off-board systems including ALIS/ODIN.
[15] International operators are prohibited from conducting F-35 test operations outside the continental United States. Partner nations have not been granted access to the F-35’s source code. The Aviationist, FlightGlobal, Defense News.
[16] Mission Data Files are compiled at the 513th Electronic Warfare Group, Eglin AFB, by a team of approximately 90 personnel. BAE Systems holds a USAF contract to support MDF production. MDFs require “rapid and frequent” updates during conflict. Aviation Today, January 2023; The War Zone. Israel is the sole exception: the F-35I “Adir” carries homegrown electronic warfare systems developed by Elbit Systems, operates outside the standard ALIS framework, and is the only variant with independent depot-level maintenance capabilities — an arrangement no other partner has negotiated, secured at a cost of decades and billions in development.
[17] Turkey invested $1.4 billion combined in aircraft procurement and industrial participation. Formal removal announced July 17, 2019, via White House statement and Pentagon press conference. Defense News, July 17, 2019. Six aircraft built for Turkey remained at Luke AFB and Eglin AFB; USAF acquired them for $862 million in July 2020.
[18] CAATSA sanctions imposed December 14, 2020, targeting the Presidency of Defense Industries (SSB) and four officials under Section 231, which penalizes significant transactions with Russia’s defense and intelligence sectors. Paul Weiss, Gibson Dunn, CRS, and FDD confirm Turkey was the first NATO ally sanctioned under this framework.
[19] Export control designations can take effect immediately — the Huawei entity listing in May 2019 was effective the same day. The “weeks to months” timeline applies to firmware-based governance mechanisms that depend on license renewal cycles or attestation infrastructure not yet fully deployed.
[20] Glenn Greenwald, No Place to Hide (May 2014), published a photograph from Snowden documents showing NSA employees opening a Cisco box during a supply chain interdiction operation. An internal NSA document described these as “some of the most productive operations in TAO.” The JETPLOW firmware implant — a persistence backdoor for Cisco PIX and ASA firewalls — was catalogued in the NSA’s ANT catalog, published by Der Spiegel in December 2013 (Jacob Appelbaum et al.). Bruce Schneier’s analysis: Schneier on Security, January 2014. Cisco’s CEO John Chambers personally complained to President Obama. Cisco’s China orders declined approximately 18% in the quarters following the Snowden revelations, per Cisco’s Q2 FY2014 earnings call. Note: the Cisco interdiction was a TAO (Tailored Access Operations) program, not PRISM. PRISM was a separate program compelling internet companies to provide data under FISA Section 702. Different programs, different legal authorities, different mechanisms.
[21] The “Athens Affair”: Ericsson’s AXE telephone switches for Vodafone Greece included built-in lawful intercept capability (the RES module), compliant with ETSI standards. Unknown actors exploited this pre-installed capability to wiretap 106 cell phones — including Prime Minister Costas Karamanlis, cabinet ministers, military officials, and journalists — for approximately nine months (June 2004 - March 2005). Kostas Tsalikidis, Vodafone’s network planning manager, was found dead in an apparent suicide the day after the malware was removed. In 2015, Greek authorities issued an international arrest warrant for CIA official William George Basil; Snowden documents confirmed NSA had “performed CNE operations against Greek communications providers” and maintained approximately 60 “fingerprints” for exploiting lawful intercept systems globally, including Ericsson’s. No court has convicted anyone in connection with the wiretapping; the US government has neither confirmed nor denied involvement. Attribution rests on the arrest warrant, the Snowden documents, and circumstantial evidence including the timing around the 2004 Olympics. Vassilios Prevelakis and Diomidis Spinellis, “The Athens Affair,” IEEE Spectrum, July 2007; The Intercept, September 2015 (James Bamford).
[22] Salt Typhoon: Chinese state-affiliated hackers (linked to the Ministry of State Security) compromised nine US telecommunications companies — AT&T, Verizon, T-Mobile, Lumen, Spectrum, Consolidated Communications, Windstream, and others — by exploiting CALEA-mandated lawful intercept systems. The hackers accessed metadata from over a million users, obtained the personal communications of President Trump and Vice President Vance, and acquired what the Wall Street Journal characterized as the near-complete list of phone numbers under active FBI/NSA wiretap. The “near-complete” qualifier originates from press reporting (WSJ, Washington Post), not from official government statements; the FBI’s public disclosures confirm the breach of lawful intercept systems and “the copying of select information subject to court-ordered US law enforcement requests” but have not publicly quantified the full scope of data obtained. Senator Mark Warner, chairman of the Senate Intelligence Committee, called it “the worst telecom hack in our nation’s history.” FBI confirmed the campaign targeted 80+ countries and 600+ organizations and had been ongoing since at least 2019. US Treasury sanctioned Sichuan Juxinhe Network Technology Co. in January 2025. Wikipedia, “2024 global telecommunications hack”; CRS Report IF12798; Nextgov, August 2025; EFF, October 2024.
[23] In 2013, Reuters reported that the NSA paid RSA Security $10 million to make Dual EC DRBG the default random number generator in the RSA BSAFE cryptographic library. RSA responded that they “categorically deny” knowingly colluding with the NSA to adopt a flawed algorithm. The algorithm had been standardized by NIST in SP 800-90A despite concerns raised by Dan Shumow and Niels Ferguson at Microsoft in 2007 about a possible kleptographic backdoor. In September 2013, following the Snowden disclosures, NIST reopened SP 800-90A for public comment and subsequently withdrew Dual EC DRBG. Reuters, December 2013; Bruce Schneier, “The Strange Story of Dual EC DRBG,” November 2007.
[24] Juniper Networks announced in December 2015 that “unauthorized code” had been discovered in ScreenOS, the operating system for its NetScreen firewalls. Academic analysis (Checkoway et al., ACM CCS 2016) found that the ScreenOS VPN implementation had been vulnerable to passive decryption since 2008 via Dual EC, and that in 2012, a party changed the elliptic curve Q parameter — enabling a different actor to exploit the same backdoor mechanism. Bloomberg reported in 2021 that investigators suspected Chinese government-backed hackers (APT 5); Juniper’s general counsel described it as the work of “a sophisticated nation-state hacking unit.” Only specific ScreenOS versions were affected (6.2.0r15–r18 and 6.3.0r12–r20), not the entire NetScreen installed base. Juniper said it added Dual EC “at the request of a customer” but did not name the customer. Bloomberg, September 2021; Matthew Green, “On the Juniper Backdoor,” December 2015; ACM CCS 2016.
[25] The NSA’s EternalBlue exploit targeted a vulnerability in Windows Server Message Block (SMB) protocol. The agency is believed to have used it for targeted intelligence collection for at least five years before the Shadow Brokers group leaked it in April 2017. EternalBlue was weaponized twice in rapid succession by different actors. WannaCry (May 12, 2017), attributed to North Korea’s Lazarus Group, encrypted systems at NHS hospitals across the UK, forcing diversion of ambulances and cancellation of surgeries; estimated damages $4-8 billion (Cyence/CBS). NotPetya (June 27, 2017), attributed to Russia’s GRU (Sandworm), caused even greater destruction: it halted Maersk’s global container shipping operations (requiring reinstallation of 45,000 PCs and 4,000 servers), knocked out Merck pharmaceutical production, and caused an estimated $10+ billion in total damages — making it the most destructive cyberattack in history. Both exploited EternalBlue despite Microsoft having released a patch (MS17-010) in March 2017. UK National Audit Office, “Investigation: WannaCry Cyber Attack and the NHS,” October 2017; Wired, “The Untold Story of NotPetya,” August 2018.
[26] Nvidia CSO David Reber Jr., "No Backdoors. No Kill Switches. No Spyware," Nvidia Blog, August 5, 2025. Published in English and Chinese. Reber argued that "hard-coded, single-point controls" and mandatory hardware-level mechanisms are "a gift to hackers and hostile actors," but explicitly distinguished these from "optional software features, controlled by the user," which he endorsed as "responsible, secure computing." The post compared mandatory chip governance proposals to the NSA's failed 1993 Clipper Chip initiative. Triggered by China's Cyberspace Administration summoning Nvidia executives over alleged backdoor capabilities in H20 chips. Nvidia's December 2025 location verification feature falls within the "optional software" category Reber endorsed — the Chip Security Act and March 2026 draft rules would move it into the "mandatory hardware-level control" category he warned against.
[27] The Record (Recorded Future News), “Russians will no longer be able to access Microsoft cloud services, business intelligence tools,” March 19, 2024; Bleeping Computer, “Microsoft to shut down 50 cloud services for Russian businesses,” same date. Softline, Microsoft’s Russian distributor, reported receiving a list of approximately 50 products to be suspended.
[28] Actual disconnections began in mid-May 2024 in batches, per Interfax (“Microsoft starts rolling disconnection of its cloud products in Russia”) and The Moscow Times. Microsoft’s notification letter cited EU 12th sanctions package (Regulation 2023/2878) compliance. Enterprise and government users affected; individual accounts excluded. TASS confirmed the scope. This was EU sanctions compliance — not a unilateral Microsoft decision — though the operational mechanism was indistinguishable from a targeted cutoff.
[29] BIS Paper 154 reports “nearly 74%” for the top three IaaS providers, citing Gartner (2024). This measures IaaS specifically, “the most relevant [segment] for AI models.” Synergy Research’s broader cloud infrastructure services figures show the Big Three at approximately 63% (including hosted private cloud). Canalys reports a similar ~63% for Q4 2024. The 63-74% range in the body reflects the measurement difference between all cloud services (lower) and IaaS alone (higher). All figures are revenue-based.
[30] JWCC (Joint Warfighting Cloud Capability) contracts awarded to AWS, Microsoft, Google, and Oracle across all classification levels (unclassified through TS/SCI). Department of Defense press releases, 2022-2023.
[31] NATO bombing of Yugoslavia, March-June 1999. Multiple telecom and TV transmitters destroyed (the frequently cited figure of 17 could not be independently verified against primary NATO or ICTY sources). Graphite bombs (BLU-114/B) disabled more than 70% of Serbia’s electricity supply on May 2, 1999, cascading to telecommunications. The Radio Television of Serbia headquarters in Belgrade was struck on April 23, 1999, killing 16 employees. The ICTY’s “Final Report to the Prosecutor” concluded the RTS building was a legitimate military target because the TV network was “part of the overall military communication system of the Serbian government.” Nine major power plants were targeted across 19 total sites. Wikipedia, “NATO bombing of Yugoslavia”; RAND MR-1351, Chapter 6; ICTY Final Report.
[32] US forces hit Iraqi telecommunications facilities on March 27, 2003. Washington Institute for Near East Policy, “Infrastructure Targeting and Postwar Iraq,” 2003, noted that military planners had learned from 1991 that “targeting certain forms of infrastructure (e.g., the national electrical grid or public telecommunications) causes more disruption to civilians than to the enemy military.” The isolation quote is widely attributed to Brig. Gen. Buster Glosson, architect of the 1991 air campaign, as reported in secondary sources including CESR/ReliefWeb, “Water Under Siege in Iraq.” The specific wording could not be verified against a primary transcript.
[33] See “Data Residency Is a Blast Radius,” The AI Realist, March 2, 2026, for the full analysis of how UAE data residency law (Federal Decree-Law No. 45 of 2021) and sector-specific banking and healthcare mandates prevented cross-region failover during the March 1 disruption.
[34] OpenAI’s API supported countries page lists approximately 160 countries; China (mainland), Russia, Iran, North Korea, Syria, Cuba, and others are excluded. The exclusion predates the company’s API launch. OpenAI Help Center; ChinaTalk, “OpenAI Pulls the Plug on China,” July 2024.
[35] Active enforcement of geographic restrictions escalated July 9, 2024, with API traffic from unsupported countries blocked. Prior to this, Chinese developers had de facto API access via VPNs and third-party intermediaries. TechCrunch, The Register, BankInfoSecurity.
[36] Meta Llama Community License Agreement (versions 2, 3, 3.1, 3.2, 4): Section 1(b)(iv) requires that use comply with “applicable laws and regulations (including trade compliance laws and regulations).” The license does not explicitly name sanctioned entities or countries — the exclusion operates through the compliance requirement, placing the legal burden on the licensee.
[37] Meta withheld multimodal versions of Llama 3.2 from the EU over data protection/AI Act concerns. Slator, September 2024. The precedent demonstrates that geographic restriction of “open” models operates on regulatory grounds as well as sanctions grounds.
[38] Russia’s three-layer degradation: Chip layer — BIS entity list designations and October 2022 export controls cut off advanced GPU supply; updated October 2023 to close the A800/H800 loophole. Cloud layer — Microsoft suspended ~50 products from mid-2024; AWS and Google similarly restricted access. Model layer — OpenAI blocked Russian API access from launch; Western model providers uniformly geofenced Russia. No central authority coordinated the three-layer response; it emerged from independent actors (BIS, the EU, Microsoft, OpenAI) responding to the same geopolitical event through different legal instruments on different timelines.
[39] Brookings Institution, “Is AI Sovereignty Possible? Balancing Autonomy and Interdependence,” February 17, 2026. Authors: Brooke Tanner, Cameron F. Kerry, Andrew W. Wyckoff, Nicoleta Kyosovska, Andrea Renda, Elham Tabassi. The quote is verbatim from the executive summary and identified as the report’s central finding. Brookings maps sovereignty across 10 layers: minerals/energy, compute hardware, digital infrastructure, networks, software/standards, data assets, models, applications, plus governance and talent as crosscutting enablers.
[40] See “Japan Built the Bullet Train. Why Can’t It Build an LLM?,” The AI Realist. Japan’s structural constraints include a compensation system that pays AI researchers a fraction of US equivalents, an employment culture that penalizes job-hopping, and a demographic trajectory that closes the talent window before domestic capability matures.
[41] See “India Has a Million AI Engineers. So Why Can’t It Build an LLM?,” The AI Realist. India’s AI talent pipeline functions as an export mechanism, with the most capable engineers drawn to US compensation and research environments.
[42] European Commission, “Apply AI Strategy,” October 2025. Available via EUR-Lex. The €20 billion figure is a mobilization target combining public and private investment annually over the digital decade, not a direct European Commission budget commitment. The strategy includes “AI Factories” (EuroHPC supercomputing facilities) while acknowledging “external dependencies of the AI stack that can be weaponised” by state and non-state actors.
[43] SMIC’s most advanced production process is approximately 7nm (N+2), compared to TSMC’s 3nm (N3E/N3P). Huawei’s Ascend 910B/C chips are competitive for inference workloads but lag Nvidia’s H100/H200 for large-scale training.
[44] House Select Committee on China, “DeepSeek Unmasked” report; The Wire China, March 2026. The 60,000-chip figure is from a partisan congressional body and has not been independently verified. DeepSeek has not publicly disclosed its hardware inventory.
[45] The “digital embassy” concept — commercial contracts pre-authorizing rapid cross-border migration of critical systems — has been explored by Gulf states following the March 1 strikes. See Semafor, March 2, 2026.
[46] Ukraine’s wartime data evacuation moved 10+ petabytes of sovereign data — population registries, land records, tax data — to AWS regions outside Ukraine in 2022, under emergency legislation passed one week before the Russian invasion. AWS, “Safeguarding Ukraine’s Data to Preserve Its Present and Build Its Future,” June 2022; Liam Maxwell, AWS re:Invent 2022. Estonia’s data embassy in Luxembourg (operational since 2017) provides the peacetime precedent.
[47] CBUAE announced the Sovereign Financial Cloud Services Infrastructure (SFCSI) partnership with Core42 on February 25, 2026. Core42 is a subsidiary of G42, which received a $1.5 billion investment from Microsoft in April 2024 and operates AI infrastructure on Nvidia GPU clusters. The SFCSI promises “centralised, highly secure, dedicated and isolated infrastructure that ensures data sovereignty” and “continuous availability of critical financial services for the entire UAE financial sector.” The platform was announced as a partnership agreement, not a deployed system — the actual infrastructure that ran UAE financial services on March 1 remained AWS. CBUAE press release via Zawya, February 25, 2026; The National (UAE), February 25, 2026; Gulf News, February 25, 2026.
[48] BIS export control rationale for advanced semiconductors explicitly cites weapons of mass destruction applications, including nuclear weapons simulation and design. The October 2022 semiconductor export controls (87 FR 62186) reference end-use concerns including “weapons of mass destruction, military modernization, and human rights abuses.” Advanced GPUs capable of large-scale AI training are equally capable of generating synthetic media at industrial scale — deepfakes, fabricated satellite imagery, coordinated disinformation — which represents a distinct but potentially equally destabilizing dual-use concern.