Description
The dissemination of disinformation on social media platforms often transcends screens and poses security threats to society. The widespread riots seen across the United Kingdom during the summer of 2024 demonstrate how the online proliferation of information disorder can fuel the flames of unrest. In this case, it led to various forms of public disorder, including looting, physical violence and arson, as well as racist, Islamophobic and xenophobic attacks. With new legislation coming into force in the UK and various forms of moderation already set out in social media platform policy at the time of the events, the question remains: are these deterrents effective when implemented during tangible threats such as the 2024 riots? To date, academic research has rarely evaluated legislation and platform policy concurrently, leaving a gap in understanding the factors behind the effective moderation of mis- and disinformation. This research will address that gap by conducting a systematic content analysis of the Online Safety Act 2023 and of social media platform policy on X (formerly Twitter), Facebook, Reddit and Telegram, in order to evaluate their practicality when applied to the 2024 riots in the UK. These platforms have been selected for the diversity of their policies and strategies towards the moderation of mis- and disinformation. This research argues that without a unified front between governments and social media corporations, the online dynamics driving the spread of information disorder, dynamics capable of sparking and influencing events such as those seen in the UK, will continue to thrive.