Description
This paper explores how military Artificial Intelligence (AI) simultaneously reveals and conceals the reality of the battlefield. The application of military AI promises greater accuracy, situational awareness, and depth of field in conflict. Yet by narrowing its focus to specific labels, identifiers, and geographical scopes, it both reveals and conceals important details of the battlefield. AI systems offer a way of seeing the world in minute detail, but one confined to the framing and limitations of the algorithm. Likewise, AI Decision-Support Systems (AI-DSS) may present a series of options for courses of action, yet these too are limited. Overall, while bringing certain things to light, the use of military AI puts others in the dark, and what is cast aside is deemed irrelevant and becomes invisible in conflict decision-making. This is a shift from willing knowing to willing unknowing, in which a choice is made about what permits the use of force on the battlefield and which details we are willing to miss or hide by giving them less weight. This is problematic for broader situational awareness, the application of violence, and humanitarian safety. This paper therefore explores this dangerous dichotomy of military AI.