Description
Artificial intelligence and machine learning techniques are being developed to improve decision-making around the resort to force. These technologies are valued for their capacity to rapidly collect and analyse big data, model unique courses of action, offer probabilistic recommendations and predictions regarding the type and degree of force required, and evaluate the benefits, risks, and costs of action and inaction. Those concerned with these developments highlight the possibility of automation bias in human-machine teaming, and the de-skilling of individual operators and policymakers. A potentially deeper challenge, however, is the uncritical integration of AI and machine-learning technologies into decision-making institutions, both military and political. This technification elevates datafied knowledge production at the expense of political wisdom, and cultivates an insensitivity to the tragic qualities of violence. Drawing on the lessons of tragedy, I argue that the speed, inflexibility, and false confidence of algorithmically assisted decision-making are likely to lead to more imprudent and immoral uses of force, not fewer.