20–23 Jun 2023
Europe/London timezone

Artificial Intelligence and its Impact on International Relations Theory

22 Jun 2023, 09:00

Description

Will the use of artificial intelligence in strategic and conventional weapon systems be stabilising or destabilising? Does such technological innovation escalate tensions on the battlefield? Or will it act as a deterrent, triggering a response only when certain thresholds or red lines are crossed? This paper argues that AI-equipped weapon systems will have limited destabilising effects on the military balance between two rival states. To make this case, the "offence-defence balance" as conceptualised by Charles Glaser is operationalised to argue that AI-based weapon acquisition will initially shift the balance in favour of offence, but that this first-mover advantage will erode as more states replicate the technological innovation and offset offensive gains by investing in defensive technologies. Further, the application of such AI-based technologies on the tactical battlefield will clarify states' intentions by signalling strong escalatory action if the adversary crosses certain red lines. In other words, the state signals that it is determined to initiate specific measures if its tolerance threshold is breached, without backing down or tempering the situation through diplomatic means. Such mechanisms will curb intense security competition and unnecessary insecurity spirals by making explicit the conditions that can cause instability. This runs counter to the conventional wisdom, asserted and even tested through wargames, that AI-enabled weapons will escalate conflict because the incentive to strike first, fast, and accurately will propel states to delegate substantial autonomy to the system without any human in the loop. That proposition is logically plausible, but its escalatory tendency is overstated in practical terms: it presumes that the other side is ill-equipped, whereas multiple defensive mechanisms such as radar blinding, decoys, and denial technologies are deployed to offset the effect. Further, removing the human element from the loop is highly contested; the military, as a bureaucratic organisation, will resist any move to eliminate its authority on the battlefield. The paper concludes that the effect of AI-enabled weapon systems on escalation is limited. Indeed, in some situations they can act as a deterrent, dissuading others from escalating by setting clear red lines. As the nature of warfare changes, the tendency to remove the human element from the decision-making loop is increasing; as a policy prescription, however, this paper suggests that human-machine teaming is the optimal choice for responsible statecraft.
