Description
The proliferation of artificial‑intelligence (AI) applications across all sectors of society has become a defining political issue of our time, framed by narratives that oscillate between opportunity and threat. Within this broader debate, AI in the military domain occupies a particularly prominent position. Recent scholarship has converged on human control, in particular its definition and mechanisms, as a central regulatory consensus for military AI. Yet the conceptual and operational foundations underpinning proposals for human control remain insufficiently examined.
This paper offers a critical, theoretical inquiry into human control of military AI. Drawing on post‑ and transhumanist literature, it problematises the prevailing conception of “humans” and the broader human‑technology interaction framework that dominate current control debates. The central argument is that the regulatory imperative of human control conflicts with the techno‑optimist discourse that pervades political, military, societal, and industrial circles. By exposing this tension, the study aims to enrich normative discussions about the governance of AI in the military domain.