Description
This article revisits Thomas Schelling's foundational theories of deterrence and strategic interaction—particularly his emphasis on risk manipulation in brinkmanship—by exploring how generative artificial intelligence (GenAI) disrupts these core principles. It introduces "synthetic deterrence": a new form of coercion in which AI-generated simulations replace traditional demonstrations of capability. Unlike conventional deterrence, which depends on credible threats and costly signaling, synthetic deterrence uses machine-generated content to fabricate signals, simulate adversary behavior, and manipulate perception. GenAI enables strategic ambiguity and weaponized uncertainty, blurring the line between deliberate and accidental escalation. By reshaping how threats are constructed and interpreted, GenAI redefines deterrence as a cognitive contest over belief and perception—one that turns less on projecting force than on influencing how threats are perceived and responded to. This shift challenges core assumptions about rationality, signaling, and risk in an era when machines shape the tempo, credibility, and meaning of strategic communication.

This article makes three key contributions. First, it advances deterrence and bargaining theory by framing GenAI not as a passive tool but as an active strategic agent that shapes perception and behavior. Second, it theorizes synthetic deterrence as a distinct mode of coercion grounded in AI-generated ambiguity and illusion. Third, it shows how GenAI-driven deception undermines crisis stability by eroding trust in signals, deepening ambiguity, and accelerating decision cycles. Together, these dynamics demand a fundamental reassessment of coercive strategy in an age of machine-shaped perception.