In February 2019 we launched Hate Speech: Measures and Counter-Measures in the Turing's Public Policy Programme, with the objective of "measuring, analysing and countering online hate speech with advanced computational methods." We set out to produce impact-focused research that would both advance understanding of online hate and help policymakers, regulators, legal experts, security services and civil society activists to tackle its harmful effects.

To this end, we have built artificial intelligence (AI) tools to detect and categorise harmful content, created data pipelines and statistical models to monitor and analyse harmful content, and worked closely with stakeholders to shape policymaking, civic discourse and regulation. Since we started, online harms have become a key issue in political discourse, and just last month the UK Government published the Online Safety Bill, landmark legislation to protect people online. 

We are pleased to announce a new phase of our research with the launch of an Online Harms Observatory, in partnership with the Department for Digital, Culture, Media and Sport. It will provide real-time insights into the scope, prevalence and dynamics of harmful online content, using a mix of large-scale data analysis, AI and survey data. The Observatory scales up our work on online hate and extends to other sources of harm online, particularly harassment, extremism and misinformation. It aims to create a step change in how we understand the cross-cutting threats posed by this nexus of toxic content.

To deliver the Observatory we have created a genuinely multi-disciplinary team, comprising computer scientists, data scientists, social scientists and social psychologists (and more!). We are always looking to build new relationships and foster more innovative research. Contact Dr Bertie Vidgen at [email protected] to find out more.

Three complementary strands of work on online harms: a retrospective of the past two years

Building artificial intelligence for detecting and categorising harmful online content

We have built AI tools and labelled datasets for detecting Islamophobia, misogyny, Sinophobia, hateful memes, contextual abuse, hateful users and more. In all cases we have created and released the models and data, enabling others to build on our work. We have also critically evaluated AI detection tools, producing a review of open-source labelled datasets, a review of new directions in abusive content detection research, a method for assessing the interpretability of abusive content classifiers, and HateCheck, a suite of diagnostic functional tests for hate speech detection models. Our work has been published in ACL, NAACL, EACL, PLOS ONE and WOAH. Listen to a new Turing Podcast about detecting online hate, watch our CogX talk and see coverage in the BBC, MIT Tech Review and the Wall Street Journal.
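To give a flavour of the functional-testing idea behind HateCheck: each test pairs a templated input with a gold label, and a classifier is checked against every case. The sketch below is purely illustrative (it is not the actual HateCheck code, and the test cases and helper names are invented for this example); it shows why a naive keyword-based classifier fails on negation and counter-speech, the kinds of failure such diagnostic tests are designed to expose.

```python
# Illustrative sketch of functional testing for hate speech classifiers,
# modelled loosely on HateCheck's template style. Not the real HateCheck code.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class FunctionalTest:
    name: str      # the functionality being probed, e.g. "negated hate"
    text: str      # a test input generated from a template
    expected: str  # gold label: "hateful" or "non-hateful"


def pass_rate(classify: Callable[[str], str], tests: List[FunctionalTest]) -> float:
    """Return the share of functional tests the classifier passes."""
    passed = sum(classify(t.text) == t.expected for t in tests)
    return passed / len(tests)


# Hypothetical test cases (invented for illustration).
tests = [
    FunctionalTest("direct hate", "I hate all members of that group.", "hateful"),
    FunctionalTest("negated hate", "I don't hate that group at all.", "non-hateful"),
    FunctionalTest("counter-speech", "Saying you hate that group is wrong.", "non-hateful"),
]

# A naive keyword classifier passes the first case but fails the
# negation and counter-speech cases, since all three contain "hate".
naive = lambda text: "hateful" if "hate" in text else "non-hateful"
print(pass_rate(naive, tests))  # 1 of 3 cases passed: prints 0.3333...
```

Breaking accuracy down per functionality like this pinpoints *where* a model fails, rather than hiding weaknesses inside a single aggregate score.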

Analysing the dynamics, prevalence and causes of harmful online content

At the start of the project we addressed one of the most basic, but remarkably difficult to answer, questions: "How much online abuse is there?", releasing our first Public Policy Briefing Paper. We have since studied Islamophobic far-right Twitter users and followers of the Russian-backed propaganda site RT on Twitter, and have begun an exciting stream of work on vulnerability to online misinformation. Our latest work (forthcoming) examines how extremist content diffuses on social media, using a mix of network science and simulations.

Supporting the work of policymakers and regulators

We have worked with stakeholders across government, regulation and civil society, including DCMS, with whom we are partnering for the new Online Harms Observatory. We have also worked closely with Ofcom to inform their new regulatory duties under the AVMSD, producing a 25,000-word report on online hate. We have hosted numerous closed and open events for stakeholders, culminating in the publication of A Research Agenda for Online Hate at the end of 2020. We have submitted evidence to numerous consultations, including the Online Harms White Paper, the House of Lords Select Committee on Democracy and Digital Technologies, and the Law Commission's work on hate crime and online abuse. We are particularly proud to have contributed to the excellent work of the Carnegie Trust in tackling online harms (including their Code of Practice on Online Hate Crime) and the coalition of civil society organisations they convene.

The Online Harms Observatory is supported by Wave 1 of the UKRI Strategic Priorities Fund under EPSRC Grant EP/T001569/1 and EPSRC Grant EP/W006022/1, particularly the "Criminal Justice System" theme within those grants, and by The Alan Turing Institute. The Observatory is being delivered in partnership with the Department for Digital, Culture, Media & Sport.