- Only 35 of the 85 countries at the Responsible AI in the Military Domain summit in A Coruña signed a non-binding declaration on principles for AI use in warfare.
- The United States and China, the world’s leading military AI powers, opted out, exposing strategic fault lines in global governance of emerging defence technologies.
What happened: AI norms face superpower resistance
A summit aimed at establishing ethical guidelines for artificial intelligence in warfare ended with a sharp divide, as the United States and China declined to endorse a new international declaration.
The Responsible AI in the Military Domain (REAIM) summit in A Coruña, Spain, concluded with only 35 of the 85 participating nations signing a non-binding statement of principles. The proposed framework emphasised maintaining human control over AI systems, clear chains of military command, and thorough risk assessment.
The refusal by the world’s two leading military AI powers to sign is widely viewed as a significant setback for efforts to create global norms. It underscores how strategic rivalry is stifling cooperation on governing emerging battlefield technologies, from autonomous weapons to AI-driven intelligence.
Dutch Defence Minister Ruben Brekelmans described the dynamic as a “prisoner’s dilemma,” where nations support responsible AI in theory but fear being placed at a disadvantage if adversaries do not follow the same rules.
The standoff highlights the widening gap between rapidly advancing military AI capabilities and stalled international efforts to regulate them. The outcome in Spain suggests that, for now, the race for AI supremacy is overriding the push to establish shared ethical boundaries for its use in war.
Why it’s important
The refusal by Washington and Beijing — the two nations with the most advanced and strategically focused military AI programmes — to back even a non-binding statement marks a clear fault line in AI governance. It highlights how contentious the subject of military automation and autonomy has become as AI capabilities advance rapidly.
Unlike the earlier summits in The Hague (2023) and Seoul (2024), which yielded broader but less concrete commitments endorsed by more countries, this year’s summit produced a clearer set of principles that many major powers were unwilling to fully endorse.
From a security perspective, these developments reinforce the idea that AI is no longer a niche technological issue but a core variable in international strategic competition, with significant implications for military planning, alliance cohesion and arms-control regimes. In financial markets, analysts note that uncertainty in governance frameworks tends to increase investment in dual-use AI systems as firms and states race to secure competitive advantages.
