Description
Today’s dynamics of AI development are discussed not only in terms of coexistence but also as an existential risk to humanity’s survival. Various international actors (from states to international organizations and big-tech companies) have begun articulating their positions and priorities, which produce and reproduce similar trends. The paper asks how policy actors employ AI-related risks to ground policy legitimation. It examines state actors (the US, China, the United Kingdom, and Japan) and organizations (the Council of Europe, the United Nations, and the European Union). Methodologically, the analysis focuses on documents released by these actors that are specifically dedicated to AI and AI governance. The paper argues that although these actors share risk as a leading notion used to raise urgency, they reveal their own political preferences. These preferences concern not only their approaches to technology but also how the actors position themselves on the international stage.