17–20 Jun 2025
Europe/London timezone

Proxy Responsibility for AI-based Decisions in the Resort-to-Force

20 Jun 2025, 09:00

Description

Integrating AI into military decision processes on the resort to force raises new moral challenges, particularly with regard to responsibility for decisions made or significantly influenced by AI-enabled systems. A key question is: who is responsible when AI-enabled systems significantly influence such decisions? I argue that while we cannot attribute responsibility to the systems themselves, we must identify proxy responsibilities (Sienknecht 2024) in their environment to substitute for the missing human actor. In this paper, I elaborate the concept of proxy responsibility within human-machine teaming in the context of the resort to force. I propose to compartmentalize the resort-to-force decision-making process and to implement an AI department that integrates the various institutions and systems involved. This AI department would be an institutional response to an institutional problem, since decisions on the use of force are usually made and shaped by multiple institutions. However, a substitute is never as good as the original, so at the end of the paper I discuss the potential risks of an institutional response and suggest possible safeguards against such pitfalls. This approach helps to mitigate the complexity of integrating AI into military decisions and aims to contribute to the ethical and responsible use of AI in such decisions.
