Description
The concept of Responsible AI has emerged as an influential governance framework for addressing the challenges associated with the development and use of AI technologies across both civilian and military domains. Drawing on Science and Technology Studies, this article argues for conceptualizing Responsible AI as a narrative sustained by various actors across multiple spaces and in pursuit of different political objectives. It conducts a narrative praxiography of how state actors create and sustain visions of what constitutes the ‘responsible’ use of AI technologies, particularly where AI systems are employed to support human decision-making. This is illustrated through a study of three states with distinct narratives and practices of Responsible AI: France, the UK, and the US. Based on an analysis of open-access sources such as strategies and policies, contextualized by interviews with AI governance experts from these states, the article proceeds in two main steps. First, it unpacks the official narratives maintained by state actors and their differences across the three contexts, showing that Responsible AI is a broad concept that can be narrated in various ways. Second, it assesses whether these states’ practices align with their narratives, highlighting the need to question the objectives behind them.