Hate speech: measures and counter-measures

Measuring, analysing and countering online hate speech with advanced computational methods


Hateful content online is a growing problem in the UK. It can pollute civic discourse, inflict harm on targeted victims, create and exacerbate social divisions, and erode trust in the host platforms. The Hate speech: measures and counter-measures project is developing and applying advanced computational methods to systematically measure, analyse and counter hate speech across different online domains, including social media and news platforms.

This project is funded by the UKRI Strategic Priorities Fund (ASG).

Explaining the science

This research uses advanced computational methods, including supervised machine learning, stochastic modelling and natural language processing, to detect and analyse hate speech. Initial work focuses on developing supervised machine learning classifiers that detect and categorise different strengths and targets of hate speech.
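In broad terms, a supervised classifier of this kind is trained on texts that human annotators have already labelled, and then predicts labels for unseen texts. The sketch below illustrates the idea with a toy TF-IDF and logistic regression pipeline; the example texts, labels and model choice are placeholders for illustration only, not the project's actual data or architecture.

```python
# Minimal sketch of a supervised text classifier:
# TF-IDF features + logistic regression on a toy labelled set.
# All texts and labels here are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy annotated training data: 1 = hateful, 0 = not hateful
texts = [
    "I hate that group, they should leave",
    "What a lovely day in the park",
    "Those people are vermin",
    "Great match last night, well played",
]
labels = [1, 0, 1, 0]

# Pipeline: turn raw text into n-gram features, then classify
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # unigram + bigram features
    LogisticRegression(),
)
model.fit(texts, labels)

# Predict a label for unseen text
prediction = model.predict(["they should all leave this country"])
print(prediction)
```

A production system would of course train on thousands of expert-annotated examples and use richer models, but the train-then-predict structure is the same.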

Project aims

The aim is to understand the scale and scope of online hateful content, taking into account its different forms, from ‘everyday’ subtle actions to overt acts of aggression and criminality, and the different targets, such as ethnic minorities and women. The project also aims to understand the dynamics and drivers of hate, providing granular insight into when, where and why it manifests.


The tools being developed for automatically identifying and categorising hateful content will be of interest to government, policymakers, companies, and other researchers. The project researchers will work across stakeholders to establish best practices, share findings and offer insight. The code and data involved are being made as accessible as possible, and the researchers will be blogging about their work as the project progresses.

Recent updates

May 2020

Researchers from the Universities of Oxford, Surrey, Sheffield and the George Washington University, led by The Alan Turing Institute’s Hate Speech: Measures & Counter-measures project, have developed a tool that uses deep learning to detect East Asian prejudice on social media. The tool is available open source, along with the training dataset and annotation codebook. It can be used immediately for research into the prevalence, causes and dynamics of East Asian prejudice online and could help with moderating such content. You can find the paper describing the methodology and results on arXiv.


August 2019

'Challenges and frontiers in abusive content detection' presented by Bertie Vidgen at ALW3: 3rd Workshop on Abusive Language Online.

July 2019

Turing article: The Turing’s Public Policy Programme responds to the Online Harms White Paper

June 2019

Helen Margetts and Bertie Vidgen presented at CogX on 11 June; a recording of the talk is available.

March 2019

Blog post by Bertie Vidgen, 'Four ways social media platforms could stop the spread of hateful content in aftermath of terror attacks'

January 2019

Blog post reporting on the project's first classification work.


Researchers and collaborators

Contact info

Bertie Vidgen [email protected]