Description
A robot now vacuums my floor while I focus on other things. There is thus no intentionality behind the vacuuming; it simply “goes on” (without me). This situation is ethically and ontologically unproblematic. In other activities, however, such as war, intentionality has hitherto been understood as intrinsic to the activity: as Clausewitz observed, war is essentially constituted by hostile intentions that result in fighting. What would it mean, then, for war simply to “go on” without us? Is the telos of autonomous weapons “autonomous war”? The practices and forces of war have, of course, already been mechanized. But would autonomous hostility amount to the mechanization of war itself? Is the concept of “autonomous war” (rooted in “autonomous hostility”) even conceivable: logically, ontologically, and phenomenologically? My aim in this paper is to formulate these questions so as to make clear the methodological work that urgently needs to be done at this critical historical juncture. To this end, I: (1) scrutinize the concept of autonomy vis-à-vis war; (2) problematize the “intentionality gap” opened up by autonomous weapons; and (3) problematize the notion of “hostile intentionality” as essential to our understanding of war. Overall, I argue that we are methodologically ill-equipped even to raise these important questions, let alone answer them. Rapidly evolving technologies in the context of war must therefore be understood as threatening an ethical, methodological, and even ontological upheaval.