Description
This paper critically examines the integration of artificial intelligence (AI) into nuclear command and control systems and its implications for traditional nuclear deterrence and the emerging AI arms race among major powers. While Kenneth Waltz’s deterrence logic posits that nuclear weapons induce caution among rational actors through the threat and fear of mutual destruction, AI challenges this stability by introducing speed, opacity, and algorithmic bias into decision-making processes. Russia’s pursuit of AI capabilities, driven by a strategic imperative to counter perceived technological advances by the United States and China, exemplifies how AI integration risks fuelling a new arms race that could pose serious challenges to strategic stability. This paper argues that incorporating AI into nuclear command and control systems risks undermining the rational deterrence that has long prevented nuclear conflict. Furthermore, competition for AI superiority in nuclear command and control is likely to produce an arms race as states strive to outmatch one another’s technological capabilities, making decisions to use nuclear weapons (both strategic and tactical) less predictable and more likely. To manage these risks, the paper advocates transparency protocols and trust-building measures to limit AI’s role in nuclear command. Ultimately, preserving stability in an AI-enhanced world requires balancing te