17–20 Jun 2025
Europe/London timezone

AI, War and (In)Humanity

18 Jun 2025, 13:15

Description

By removing, reducing and reconfiguring human activity on the battlefield, the use of artificial intelligence (AI) in war has significant implications for armed conflicts, their regulation, and humanitarianism as a whole. Whatever precision and reliability might be achieved through the increased automation of wartime decisions and actions, such as the selection and engagement of targets, the capacity of a machine to apply human traits such as empathy and caution is dubious at best. From a humanitarian perspective, outsourcing life-and-death decisions to machines is highly problematic.

In response, this paper makes a threefold argument: first, that the definition of military AI should be broadened to encompass its role as a tool for human ends; second, that military AI (its development and use, as well as attitudes towards it) risks an unprecedented and problematic removal of humanity from war and its regulation; and third, that humanity should serve as a criterion for the use of AI in war, in order to fortify the humanitarian project in the face of contemporary challenges and to ensure the most robust protections possible for civilians and combatants alike.

The application of AI need not, and should not, mean the complete removal of humans from military actions and decisions, the delegation of human duties to machines, or the replacement of human beings with technology. Rather, AI's use should be limited exclusively to supplementing and facilitating human agency and decision-making: a tool at the service of human actors, extending their agency and augmenting their judgement; a technological means for strictly human ends.
