Description
When and why is it right or wrong to assign human characteristics to military robots? Addressing this question is increasingly important in the emergent age of warfighting enabled by artificial intelligence (AI) and characterised by human-machine teaming. Humans’ anthropomorphising of non-human entities is a deeply ingrained practice of sensemaking. And, when it comes to the militarisation of robotics and AI technologies, anthropomorphism might usefully facilitate human trust in AI-enabled robots. On the other hand, the resemblance (designed or perceived) of military robots to human beings might also contribute to confusion and injustices in wartime. For example, some military ethicists have opposed the anthropomorphisation of ‘autonomous weapon systems’ out of a concern to avoid the erroneous attribution of moral responsibility to machines. Arguably, however, the ethical implications of deploying anthropomorphic warbots extend beyond this concern, and a universal prohibition on anthropomorphism would be impractical and undesirable. Accordingly, this paper frames the rise of anthropomorphic warbots as presenting a challenge of ‘artificial humanity’: a challenge to reconsider the meaning and value of human-ness and humane-ness in war, one that can be approached as a matter of engineering, ethics, and/or human sociality. The paper then outlines an ethical framework of ‘responsible anthropomorphism’ applicable to AI-enabled warbots. A range of potential moral problems and possible solutions is explored by reference to three visions of anthropomorphic robots: as ‘users’ of force, as ‘assistants’ for warriors, and as ‘victims’ in war.