Description
This paper examines the profound ethical challenges of integrating AI-powered decision-support systems (AI-DSS) into high-stakes crisis environments. By accelerating decision-making tempos and influencing life-and-death judgments, these systems raise urgent questions about moral authorship, accountability, and the preservation of meaningful human oversight. While debates over lethal autonomous weapons (LAWS) have received significant attention, far less scrutiny has been devoted to the more subtle yet equally consequential ways in which AI-DSS shape human ethical reasoning during crises.
Building on James Moor’s hierarchy of ethical agents and recent scholarship on machine ethics, the paper develops a hybrid exemplar framework that integrates deontological, consequentialist, and virtue ethics. This pluralist model is designed not to replace human ethical deliberation but to enhance it by embedding structured moral reasoning directly into AI-DSS design and operation. Key features include tiered ethical overrides, dissent logging, trust calibration mechanisms, and transparent audit trails: tools that reinforce human accountability and guard against the uncritical delegation of moral authority to machines.
Moving beyond abstract theory, the paper translates its ethical framework into practical design and governance principles. It proposes concrete metrics for evaluating human–AI interaction, including the transparency of system reasoning, the frequency of human overrides, and the psychological effects of AI use on decision-makers. By embedding ethical safeguards directly into AI-DSS, this approach seeks to mitigate escalation risks, preserve human moral agency, and uphold the legitimacy of crisis management processes.
The paper concludes by offering a roadmap for responsible innovation and oversight, urging institutions to treat AI-DSS not merely as technical tools but as shared cognitive entities that co-shape decisions. In doing so, it reframes the AI ethics debate, shifting focus from autonomous weapons to the deeper systemic transformation of human judgment and accountability in an era of algorithmic warfare.