Description
In light of rapid developments in artificial intelligence (AI), governments have focused on the regulation and funding of technologies for both military and civilian purposes. Finding themselves in a global competition, the European Union, China, and the United States have presented innovation policies that lay out their visions of AI. We argue that, despite different innovation cultures and institutional settings, the three governmental actors' visions of future AI development are all shaped by their perceptions of human-computer interaction and their interest in guaranteeing security. Our analysis shows that across governmental documents, AI is perceived as a capability that can be used to advance (supra)national interests while anticipated risks are managed. This correlates with human-centered perceptions of technology and assumptions about human-AI relationships of trust, implying notions of interpretability and human control. We draw on interdisciplinary debates in critical security studies, human-computer interaction, and Science and Technology Studies to gain a better understanding of innovation politics embedded in economic competition. Governmental visions of technology speak to the identity-security nexus in International Relations (IR): they reflect both affective and instrumental accounts of trust that can inhibit efforts to control technological developments, yet also offer grounds for cooperation based on shared perceptions.