Description
The ideas of scientific objectivity and rationality have animated the accelerated development and use of autonomous weapons systems, machine learning techniques, sensors, and related technologies in warfare. Yet the supposed objectivity of algorithms in the application of violence is questionable in many ways. Whatever the remedial potential of algorithmic techniques for humanity, machine rationalities in warfare can also perpetuate and amplify arbitrary values and realities of violence. Looking closely at the various stages of an autonomous system used to apply violence, I argue that the idea of scientific objectivity in violence is illusory at best. These technologies – particularly those aimed at targeting humans – are, in the first place, fraught with imperial rationalities at both political and philosophical levels, with the policing of remote populations, particularly in the Global South, central to the ideas that constitute algorithmic warfare. The ‘objectivity’ that defines the techniques of algorithmic warfare rests on statistical instruments that find patterns, classify and label humans, and make decisions according to the principles of speed and efficiency. Yet these practices are closely bound up with logics of disciplinary control (Downey, 2024) that produce biases, the ‘hallucination of threats into being’, psychic imprisonment (Hoijtink, Arentze & Gould, 2024), and a host of other dehumanising effects: collateral bodies, ‘digital dehumanisation’, the use of vulnerable populations as test subjects, algorithmic distance, and so on. This paper therefore questions the epistemological foundations of the use of algorithms and machine autonomy in the application of violence and proposes critical approaches to the governance of algorithmic warfare that are cognisant of lived realities.