Online hate research hub

Collating and organising resources for research and policymaking on online hate


Welcome to The Alan Turing Institute's hub for online hate research. This is an ongoing project to collate and organise resources for research and policymaking on online hate. These resources aim to cover all aspects of research, policymaking, the law and civil society activism to monitor, understand and counter online hate. Some of the resources may cross into closely related areas, such as offline hate, online harassment and online extremism. Resources are focused on the UK but include international work as well. 

If you have any suggestions or feedback, or would like your outputs to be featured here, please email Dr. Bertie Vidgen, [email protected] 

This resource is supported by Wave 1 of the UKRI Strategic Priorities Fund under the EPSRC Grant EP/T001569/1, particularly the Criminal Justice System theme within that grant, and The Alan Turing Institute.

Turing resources for online hate research



Reading list from The Alan Turing Institute

Reading list for the 'Hate speech: Measures and counter-measures' project at The Alan Turing Institute containing materials on: online hate speech detection, dynamics of online abuse, online misogyny, machine learning and natural language processing.

Hate speech data

A comprehensive catalogue of datasets annotated for hate speech, online abuse, and offensive language. They may be useful for e.g. training a natural language processing system to detect this language.

The list is maintained by Leon Derczynski (IT University of Copenhagen) and Bertie Vidgen (The Alan Turing Institute).
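As a toy illustration of how an annotated dataset like these might be used, the sketch below trains a minimal bag-of-words classifier on a few labelled examples. The examples and labels here are invented for illustration and are not drawn from any catalogued dataset; real systems use far richer features and models.

```python
from collections import Counter

# Toy labelled examples in the common (text, label) format used by many
# annotated abuse datasets. These examples are invented for illustration.
train = [
    ("you are all awful people", "abusive"),
    ("have a lovely day everyone", "not_abusive"),
    ("what an awful, awful group", "abusive"),
    ("lovely weather today", "not_abusive"),
]

# Count word frequencies per label (a minimal bag-of-words model).
counts = {"abusive": Counter(), "not_abusive": Counter()}
for text, label in train:
    counts[label].update(text.lower().split())

def classify(text):
    """Pick the label whose training vocabulary overlaps the text most."""
    words = text.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in counts.items()}
    return max(scores, key=scores.get)

print(classify("awful people"))  # matches vocabulary from the abusive examples
```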


Tracking and countering online abuse

Off the shelf measurement tools



Hate Sonar

Hate Sonar is a hate speech detection library for Python. It takes a text input and returns a classification of hate speech, offensive language or neither.
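A detector of this kind typically returns a confidence score for each of the three classes. The snippet below is a hypothetical illustration of that response shape and of picking the top class; the field names and values are invented, so consult the library's documentation for its actual output format.

```python
# Hypothetical example of the three-way response such a detector returns.
# Values are invented for illustration, not a real model output.
result = {
    "text": "an example comment",
    "classes": [
        {"class_name": "hate_speech", "confidence": 0.05},
        {"class_name": "offensive_language", "confidence": 0.15},
        {"class_name": "neither", "confidence": 0.80},
    ],
}

# Pick the label with the highest confidence.
top = max(result["classes"], key=lambda c: c["confidence"])
print(top["class_name"])  # neither
```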

Perspective API

Perspective API uses machine learning models to score the perceived impact a comment might have on a conversation. Developers and publishers can use this score to give real-time feedback to commenters or help moderators do their job, or allow readers to more easily find relevant information.
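A minimal sketch of how a developer might build a Perspective API request for a toxicity score and read the returned summary score. The endpoint, API key handling and network call are omitted, and the response dict here is an invented illustration of the documented response shape, not real API output.

```python
import json

# Sketch of a request body asking Perspective for a TOXICITY score.
# The comment text is an invented example.
payload = {
    "comment": {"text": "This is an example comment."},
    "requestedAttributes": {"TOXICITY": {}},
    "languages": ["en"],
}
body = json.dumps(payload)

# Illustration of the response shape: a summary score in [0, 1].
# This dict is invented, not a real API response.
response = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.12, "type": "PROBABILITY"}}
    }
}
score = response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
print(score)  # 0.12
```

A score near 0 suggests the comment is unlikely to be perceived as toxic; publishers can threshold it to flag comments for moderators.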

Parallel dots

Parallel Dots identifies abusive and offensive language, using Long Short-Term Memory (LSTM) models to classify the text. It is trained separately on social media data and on news data, so that it can handle both casual and formal language.


Hatebase

Hatebase is the world's largest structured repository of regionalized, multilingual hate speech.


Factmata API

The Factmata API helps users understand any piece of online content in terms of its quality, safety and credibility. It scores content on nine signals: Hate Speech, Sexism, Political Bias, Clickbait, Insults, Obscenity, Toxicity, Identity Hate and Threats.

iFeel Sentiment Analysis Framework

iFeel is a web application that allows detection of sentiments in any form of text including unstructured social media data. iFeel is free and gives access to 18 sentiment analysis methods.

The Online Hate Index (ADL)

The Online Hate Index (OHI) is a joint initiative of ADL’s Center for Technology and Society and UC Berkeley’s D-Lab, and is designed to transform human understanding of hate speech via machine learning into a scalable tool that can be deployed on internet content to discover the scope and spread of online hate speech.

GATE Hate tagger

A service from the University of Sheffield that tags abusive utterances in any text. It includes a "type" feature indicating the kind of abuse, if any (e.g. sexist, racist), and a "target" feature indicating whether the abuse was aimed at the addressee or some other party. The service can be run on any English language text.
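To make the two features concrete, the sketch below shows the kind of annotation record such a tagger produces. The field names follow the description above, but the record structure and example values are invented for illustration; the service's actual output format will differ.

```python
# Illustration of the annotation described above: each abusive utterance
# carries a "type" and a "target" feature. The utterance and values are
# invented placeholders.
annotation = {
    "text": "an example abusive remark",
    "abusive": True,
    "type": "sexist",        # kind of abuse, e.g. sexist, racist
    "target": "addressee",   # the addressee, or some other party
}

# Downstream code can then filter annotations by target:
aimed_at_addressee = annotation["abusive"] and annotation["target"] == "addressee"
print(aimed_at_addressee)  # True
```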


Hatemeter

The Hatemeter project aims to systematize, augment and share knowledge on anti-Muslim hatred online, and to increase the efficiency and effectiveness of NGOs/CSOs in preventing and tackling Islamophobia at the EU level, by developing and testing an ICT tool (the Hatemeter platform).

Tech solutions for countering online abuse




Tune (Jigsaw)

Tune is the Chrome extension for the Perspective API (Jigsaw). It lets users customise how much toxicity they see in comments across the internet.

Opt out tools

Opt Out is a browser extension for Firefox that works like an ad blocker, but hides misogyny from the feed instead of adverts.

BBC Own It app

The Own it app is part of the BBC’s commitment to supporting young people in today’s changing digital environment. Using a combination of self-reporting and ‘machine learning’, the app builds up a picture of the user’s digital wellbeing and serves relevant content, information and interventions designed to help them understand the impact that their online behaviours can have on themselves, and on others.

Block Together

Block Together is a web app intended to help cope with harassment and abuse on Twitter through the use of block lists, which can also be shared amongst users.


HeartMob

HeartMob is a platform where users can report and document hate across platforms and receive support and guidance from the community.

No Homo Phobes

No Homo Phobes tracks the daily, weekly and all-time use of homophobic language on Twitter using simple keyword searches.
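Keyword tracking of this kind can be sketched in a few lines: scan a stream of tweets and count how many contain a tracked term. The keywords and tweets below are neutral placeholders, not the site's actual search terms.

```python
# Minimal sketch of keyword-based tracking. Placeholder terms stand in
# for the tracked slurs.
KEYWORDS = {"slur_a", "slur_b"}

def count_matches(tweets):
    """Count tweets containing at least one tracked keyword."""
    total = 0
    for tweet in tweets:
        words = set(tweet.lower().split())
        if words & KEYWORDS:
            total += 1
    return total

sample = ["nothing to see here", "contains slur_a sadly", "slur_b again"]
print(count_matches(sample))  # 2
```

Simple keyword matching is cheap and transparent but misses context (e.g. reclaimed or quoted uses), which is one reason the detection tools above use machine learning instead.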

Hack Harassment

The Hack Harassment group, set up by Intel, Vox Media and Recode, is a collaboration of partners across the technology industry who use the latest technologies to develop solutions and tools that help users deal with the toxic effects of digital abuse.

Hack Hate (2020)

An entirely virtual hackathon, run from October 2020 onwards by 'Police Rewired'. Teams work with experts to devise new prototypes or find new insights in open data that could positively impact the fight against hate crime. It is supported by a range of charities and police organisations.

Evidence on prevalence of online hate



Home Office - 2018/2019 hate crime figures

Home Office statistics on hate crimes recorded by the police in England and Wales, as well as information on hate crime from the Crime Survey for England and Wales 2018 to 2019.

Home Office - 2017/2018 hate crime figures

Home Office statistics on hate crimes recorded by the police in England and Wales, as well as information on hate crime from the Crime Survey for England and Wales 2017 to 2018. Experimental figures on online hate crime are also provided.

Crown Prosecution Service statistics on hate crime (2018-2019)

Crown Prosecution Service statistics on hate crimes in the UK from 2018-2019. It brings together information on CPS performance in prosecuting racist and religious hate crime, homophobic and transphobic crime, crimes against older people and disability hate crime.

Crown Prosecution Service statistics on hate crime (2017-2018)

Crown Prosecution Service statistics on hate crimes in the UK from 2017-2018. It brings together information on CPS performance in prosecuting racist and religious hate crime, homophobic and transphobic crime, crimes against older people and disability hate crime.

Turing Public Policy Briefing Reports

A systematic review of the prevalence of online abuse in the UK, conducted by The Alan Turing Institute. It contains evidence from five sources: government statistics, surveys, platform transparency reports, academic studies and reports from civil society.


Academic research on online hate

Research groups, centres and projects



The Alan Turing Institute

The 'Hate speech: Measures and counter-measures' project at The Alan Turing Institute is developing and applying advanced computational methods to systematically measure, analyse and counter hate speech across different online domains, including social media and news platforms.

University of Leicester

The Centre for Hate Studies at the University of Leicester is researching issues of hate and extremism. As well as undertaking large-scale studies, they are also regularly commissioned by organisations within the public, private and third sector to conduct smaller, tailored pieces of research.

Cardiff University

HateLab at Cardiff University is a global hub for data and insight into hate speech and crime. HateLab uses data science methods, including ethical forms of AI, to measure and counter the problem of hate both online and offline. The Online Hate Speech Dashboard has been developed by academics with policy partners to provide aggregate trends of online hate over time and space.


VOX-Pol

The VOX-Pol Network of Excellence (NoE) is a European Union Framework Programme 7 (FP7)-funded academic research network focused on researching the prevalence, contours, functions, and impacts of Violent Online Political Extremism and responses to it.

Gonzaga University

The Gonzaga Institute for Hate Studies promotes inquiry, scholarship, and action-service toward understanding what hate is, how it arises and manifests, how it is experienced in context, what and how it may be addressed appropriately and effectively.

Aarhus University

The Research on Online Political Hostility Project at Aarhus University provides in-depth knowledge of the causes, consequences, and counter-strategies related to online political hostility in all its forms.

King’s College London

The International Centre for the Study of Radicalisation at King’s works across a number of different academic disciplines and in several languages, conducting thematic research on issues such as extremism and terrorism. Researchers aim to harness the capacity of big data to bring an empirical understanding to the study of international security and terrorism issues.


MANDOLA

The MANDOLA project aims to improve understanding of the prevalence and spread of online hate speech and empower ordinary citizens to monitor and report hate speech. It is a consortium of various research institutes, consultancies and universities.

International Network for Hate Studies

The International Network for Hate Studies aims to provide an accessible forum through which anyone can engage with the study of hate and hate crime.

California State University, San Bernardino

The Center for the Study of Hate and Extremism is a nonpartisan research and policy center that examines the ways in which bigotry, advocacy of extreme methods, or terrorism, both domestically and internationally, deny civil or human rights to people on the basis of relevant status characteristics. The center seeks to aid scholars, community activists, government officials, law enforcement, the media and others with objective information for their examination and implementation of law, education and policy.

Free University of Berlin

The NOHATE project at the Free University of Berlin aims to analyse hateful communication on social media platforms, in online forums and commentary sections in order to identify underlying causes and dynamics as well as develop methods and software for (early) recognition of hateful communication and potential strategies for de-escalation.

George Washington University

The Program on Extremism at George Washington University provides analysis on issues related to violent and non-violent extremism. Through academic enquiry, The Program produces empirical work that strengthens extremism research as a distinct field of study. The Program aims to develop pragmatic policy solutions that resonate with policymakers, civic leaders, and the general public.

Swansea University

The Cyber Threats Research Centre (CYTREC) at Swansea University explores a range of online threats, from terrorism, extremism and cybercrime, to child sexual exploitation online and grooming. CYTREC is an interdisciplinary centre. Its experts have backgrounds in law, criminology, political science, linguistics and psychology. It is also collaborative, and engages with non-academic stakeholders at all stages of the research process.

Ontario Tech University

The Centre on Hate, Bias and Extremism at Ontario Tech University explores the ways in which hate, bias and extremism challenge values of inclusion and equity, along lines of race, ethnicity, religion, gender, sexual orientation, disability and other relevant status characteristics, both singly and interactively. It recognizes the historical continuities that underlie contemporary patterns of discrimination, exclusion and violence directed toward those who are targeted.

The University of Sheffield

NLP and computer science researchers at the University of Sheffield study various aspects of online hate and misinformation, with a particular focus on abuse directed against MPs. They have published several papers in these areas.

The Cyberbullying Research Center

The Cyberbullying Research Center is dedicated to providing up-to-date information about the nature, extent, causes, and consequences of cyberbullying among adolescents. It is directed by Dr. Sameer Hinduja (Florida Atlantic University) and Dr. Justin W. Patchin (University of Wisconsin-Eau Claire). It has operated since 2005.


When Law and Hate Collide

The 'When Law and Hate Collide: Perspectives on Hate Crime' project aimed to investigate a range of key issues, such as 'What is Hate Crime?', 'Do we need a Hate Crime Law?' and 'Why do offenders commit Hate Crimes?'

Journals and conferences



Patterns of Prejudice

Patterns of Prejudice provides a forum for exploring the historical roots and contemporary varieties of social exclusion and the demonization or stigmatization of racial, ethnic, national or religious Others across the world. It probes the language and construction of ’race’, nation, colour and ethnicity, as well as the linkages between these categories.

Race and Class

Race & Class is a fully peer reviewed journal containing contributions from scientists, artists, novelists, journalists, politicians and black and Third World activists and scholars. It bridges the gap between academia, activism and policy.

Journal of Hate Studies

The Journal of Hate Studies is an international scholarly journal promoting the sharing of interdisciplinary ideas and research relating to the study of what hate is, where it comes from, and how to combat it. It presents essays, theory, and research that deepen the understanding of the development and expression of hate. The Journal aims to provide a deeper understanding of the processes that encourage the expression of hate so that methods of challenging and stopping its expression may be based on theory and research.

Hate Studies Conferences

A list of upcoming international events on Hate Studies.

Journal special issues



Interdisciplinary Approaches to Tackling Online Abuse

This SI in First Monday brings together a diverse range of perspectives to help the field of online abuse research mature, both by generating new insights and knowledge and by providing a moment to reflect on where the field stands. Submissions are invited on all topics related to the study of online abuse, especially research which engages with the ethics, explainability, fairness and use of content detection systems. The deadline is towards the end of 2020.

Detecting, Understanding and Countering Online Harms

This SI in OSNEM seeks high-quality scientific articles (including data-driven, experimental and theoretical research) which examine harmful behaviours, communities, discourses and ideas in online social networks and media. We welcome submissions on any online harm but particularly encourage papers which focus on online hate, misinformation, disinformation, extremism and terrorism. Data-driven approaches, supported by publicly available datasets, are strongly encouraged. The deadline is towards the end of 2020.

Intelligent Systems for Tackling Online Harms

The aim of this SI in Personal and Ubiquitous Computing is to bring together a community of researchers interested in tackling online harms and mitigating their impact on social media. We seek novel research contributions on misinformation- and harm-aware intelligent systems assisting users in making informed decisions in the context of online misinformation, hate speech and other forms of online harms. The deadline is towards the end of 2020.

The Use of Artificial Intelligence to Address Online Bullying and Abuse

This SI in The International Journal of Bullying Prevention invites submissions from a range of disciplines that examine various aspects of AI applications to address abuse. This includes, but is not limited to: communication, education, psychology, sociology, philosophy, computer science and engineering, human-computer interaction, and science and technology studies. The deadline is towards the end of 2020.

Special Issue on Offensive Language

This SI in the Journal for Language Technology and Computational Linguistics presents four papers on offensive language detection and classification. This SI is now live and the deadline has passed.

Workshops (Computer science conferences)



Workshop on Online Abuse & Harms

The Workshop on Online Abuse and Harms (WOAH) provides a venue for research into online hate and harassment, co-located with ACL or EMNLP.

Workshop on Social Threats in Online Conversations

The Workshop on Social Threats in Online Conversations is a venue for research using advanced NLP to detect, understand, and defend against current and future threats in online social platforms.

Workshop on Trolling, Aggression and Cyberbullying

The Workshop on Trolling, Aggression and Cyberbullying focuses on the applications of NLP and Machine Learning to tackle abusive online behaviour.


Shared tasks (Computer science conferences)



OffensEval 2020 (SemEval): Multilingual Offensive Language Identification in Social Media

OffensEval 2020 features a multilingual dataset with five languages (Arabic, Danish, English, Greek and Turkish). The data are annotated using the hierarchical tagset proposed in the Offensive Language Identification Dataset (OLID) and used in OffensEval 2019. Three sub-tasks have been created (A: offensive language identification; B: automatic categorisation of offence types; C: offence target identification).
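The three sub-tasks follow OLID's hierarchical label scheme, in which each level only applies conditionally on the previous one. The sketch below encodes that hierarchy as a dict and checks that a label sequence respects it; the short label codes (OFF/NOT, TIN/UNT, IND/GRP/OTH) are the ones used in OLID, while the validation helper itself is an illustrative construction.

```python
# OLID's hierarchy behind the three sub-tasks: A) offensive or not,
# B) targeted or untargeted (only for OFF), C) target type (only for TIN).
OLID = {
    "A": {"OFF": "offensive", "NOT": "not offensive"},
    "B": {"TIN": "targeted insult/threat", "UNT": "untargeted"},
    "C": {"IND": "individual", "GRP": "group", "OTH": "other"},
}

def validate(labels):
    """Check that a (possibly partial) label sequence respects the
    hierarchy: level B applies only to OFF, level C only to TIN."""
    if not labels or labels[0] not in OLID["A"]:
        return False
    if len(labels) > 1 and (labels[0] != "OFF" or labels[1] not in OLID["B"]):
        return False
    if len(labels) > 2 and (labels[1] != "TIN" or labels[2] not in OLID["C"]):
        return False
    return len(labels) <= 3

print(validate(["OFF", "TIN", "GRP"]))  # True: a targeted insult against a group
print(validate(["NOT", "TIN"]))         # False: level B only applies to OFF
```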

OffensEval 2019: Identifying and Categorizing Offensive Language in Social Media

OffensEval 2019 features a monolingual dataset which breaks down offensive content into the type and target of offence. Three sub-tasks have been created (A: offensive language identification; B: automatic categorisation of offence types; C: offence target identification).

HASOC 2019: Hate Speech and Offensive Content Identification in Indo-European Languages

HASOC 2019 aims to leverage the synergies of both Twitter and Facebook. The HASOC shared task comprises three sub-tasks (A: binary hate speech and offensive language identification for English, German and Hindi; B: fine-grained classification of hateful/offensive posts into Hate, Offence and Profane; C: identification of the type of offence as Targeted Insult or Untargeted).

Germeval 2019: Shared Task on the Identification of Offensive Language

GermEval 2019 offers three detection subtasks in the German language (A: whether a tweet includes some form of offensive language or not, B: Fine-grained classification into Profanity, Insult and Abuse, C: classification of explicit and implicit offensive language).

hatEval 2019: Shared Task on Multilingual Detection of Hate

hatEval 2019 consists of hate speech detection on Twitter with two specific targets, immigrants and women, in a multilingual perspective covering Spanish and English. There are two subtasks. A: Hate speech detection against immigrants and women, in which systems predict whether a tweet in English or Spanish with a given target (women or immigrants) is hateful or not hateful. B: Aggressive behaviour and target classification for hateful tweets in English and Spanish, identifying whether they are aggressive or not aggressive and whether the target harassed is an individual or a group.

Evalita 2020: Task on Automatic misogyny identification (AMI)

AMI at Evalita 2020 offers two datasets: a raw dataset of tweets manually labelled at two levels (Misogyny/Not and Aggressive/Not) and a synthetic dataset annotated only for misogyny. Two tasks are proposed (A: performance ranked according to the average F1; B: performance ranked according to a weighted combination of the AUC computed on the raw test dataset and three per-term AUC-based bias scores computed on the synthetic dataset).

Evalita 2018: Task on Automatic Misogyny Identification (AMI)

AMI at Evalita 2018 offers two balanced datasets for the Italian and the English languages. The corpora have been manually labelled by several annotators according to three levels: Misogyny (Misogyny vs Not Misogyny), Misogynistic Category (Discredit, Derailing, Dominance, Sexual Harassment & Threats of Violence, Stereotype & Objectification) and Target (Active vs Passive).


Government, regulation and the law (UK)



APPG on Hate Crime

The All-Party Parliamentary Group on Hate Crime brings together civil society, parliamentarians, law enforcement, academics, and specialist support agencies to improve public knowledge and awareness of hate crime in the UK.

Mayor’s Office of Policing and Crime (MOPAC) - Online Hate Crime Hub (Metropolitan Police)

Five dedicated Met police officers, led by a Detective Inspector, make up the Online Hate Crime Hub, which aims to improve the police response to online hate by gathering intelligence, improving understanding and testing new investigation methods.

The Commission for Countering Extremism

The Commission for Countering Extremism supports society to fight all forms of extremism. It advises the government on new policies to deal with extremism, including the need for any new powers. It was created under Prime Minister Theresa May in response to the 2017 Manchester Arena bombing.

True Vision, Report It

True Vision provides information about illegal online hate content and advice on reporting hate crimes.

The Law Commission

The Law Commission is a statutory independent body that reviews the law in England and Wales. Since 2018 it has conducted a wide-ranging review into hate crime to explore how to make current legislation more effective and consider if there should be additional protected characteristics such as misogyny and age.

The Crown Prosecution Service

The Crown Prosecution Service prosecutes criminal cases that have been investigated by the police and other investigative organisations in England and Wales. This includes hate crime. In 2018, it published updated guidelines on prosecuting cases involving communications sent via social media.

Counter Terrorism Internet Referral Unit (Metropolitan Police)

The Counter-Terrorism Internet Referral Unit (CTIRU) was set up in 2010 by ACPO (and is run by the Metropolitan Police) to remove unlawful terrorist material from the Internet, with a focus on UK-based material. CTIRU works with internet platforms to identify content which breaches their terms of service and requests that they remove it on a voluntary basis. CTIRU also compiles a list of URLs for material hosted outside the UK, which are blocked on networks of the public estate.

The National Counter Terrorism Security Office

The National Counter Terrorism Security Office (NaCTSO) is a police unit that supports the ‘protect and prepare’ strands of the government’s counter terrorism strategy.

Extremism Analysis Unit (Home Office)

The EAU has a remit to analyse extremism in the UK and abroad where it has a direct impact on the UK and/or UK interests. The EAU is a cross-government resource, with government departments able to commission research and analysis. The EAU does not have any executive or police powers or any operational role; it does not take operational decisions or determine policy or strategy. It provides independent analysis to policy and operational colleagues, who are responsible for such decisions.

The Research, Information and Communications Unit (Home Office)

The Research, Information and Communications Unit (RICU) is a specialist unit within the Home Office, responsible for the development and implementation of evidence-based communication campaigns designed to measurably increase public resilience to, and deter involvement in, serious and organised crime.

Joint Extremism Unit (Home Office, HM Prison and Probation Service)

This specialist taskforce analyses intelligence compiled by about 100 counter-terrorism experts working across the country to assess the threat posed by radicalisation in prisons. It advises prisons in England and Wales on how to deal with specific threats, and instructs and trains prison and probation staff on how best to deter offenders from being lured into extremism.

Joint Terrorism Analysis Centre (MI5)

The Joint Terrorism Analysis Centre (JTAC) analyses and assesses all intelligence relating to international terrorism, at home and overseas. It sets threat levels and issues warnings of threats and other terrorist-related subjects for customers from a wide range of government departments and agencies, as well as producing more in-depth reports on trends, terrorist networks and capabilities.


Government, regulation and the law (International)



The EU code of conduct on countering illegal hate speech online

The EU Code of Conduct on countering illegal hate speech online aims to prevent and counter the spread of illegal hate speech online. Most major platforms have signed up to it, including Facebook, Twitter and Google.

The EU Commission, Racism and Xenophobia

The EU Commission on Racism and Xenophobia aims to monitor and combat different forms of online prejudice, racism and intolerance as they are incompatible with the values and principles upon which the EU is founded.

The European Commission against Racism and Intolerance

The European Commission against Racism and Intolerance (ECRI) is a unique human rights monitoring body which specialises in questions relating to the fight against racism, discrimination, xenophobia, antisemitism and intolerance in Europe.

Germany (NetzDG)

The Network Enforcement Act is a German law which aims to combat agitation and fake news in social networks.

The Organization for Security and Co-operation in Europe (Hate Crime)

The Organization for Security and Co-operation in Europe’s (OSCE) Office for Democratic Institutions and Human Rights (ODIHR) monitors and investigates hate crime abuses across the globe.


UNESCO, Countering Online Hate Speech

A report from UNESCO about countering online hate speech.

Home Affairs Select Committee

A cross-party committee of MPs responsible for scrutinising the work of the Home Office. In 2017, the committee produced a widely discussed report on online abuse, extremism and hate.


Civil society



Center for Countering Digital Hate

The Center for Countering Digital Hate's primary goal is to disrupt alliances between hate actors and political actors in digital spaces. The Center was established in 2019 to deal with the increasing use of racial and religious intolerance, sexism, homophobia, and other forms of identity-based hate to polarise societies and undermine democracy.

Hope Not Hate

Hope Not Hate was founded in 2004 to provide a positive antidote to the politics of hate. It conducts original research, campaigns and raises awareness about hate and the far right.

Stop Funding Hate

Stop Funding Hate is an activist group which campaigns against divisive media hate by persuading advertisers to pull their support for certain venues and publications.

Amnesty International, The Troll Patrol

Amnesty International’s Troll Patrol project uses crowdsourcing, data science and machine learning to measure violence and abuse against women on Twitter.

PeaceTech Lab

PeaceTech Lab works to reduce violent conflict using technology, media, and data to accelerate and scale peacebuilding efforts.

Network Contagion Research Institute

The mission of the Network Contagion Research institute is to track, expose, and combat misinformation, deception, manipulation, and hate across social media channels.

Counter Extremism Project

The Counter Extremism Project is a not-for-profit, non-partisan, international policy organization formed to combat the growing threat from extremist ideologies.


Project Someone

Project Someone works to build awareness, create spaces for pluralistic dialogues, and combat online hate. Their multimedia materials, training curricula and programs aim to prevent hate speech and build resilience towards radicalization that leads to violent extremism.

The Cybersmile Foundation

The Cybersmile Foundation is a multi-award-winning nonprofit organization committed to digital wellbeing and tackling all forms of bullying and abuse online.

Life After Hate

Life After Hate is a charity that helps people leave the violent far-right to connect with humanity and lead compassionate lives.


SELMA

SELMA (Social and Emotional Learning for Mutual Awareness) is a two-year project co-funded by the European Commission which aims to tackle the problem of online hate speech by promoting mutual awareness, tolerance, and respect.


Tell MAMA

Tell MAMA is a secure and reliable service that allows people from across England to report any form of anti-Muslim abuse.

Community Security Trust (CST)

The Community Security Trust is a charity that protects British Jews from antisemitism and related threats.


Galop

Galop is an LGBT+ anti-violence charity.


Stonewall

Stonewall campaigns for the equality of lesbian, gay, bi and trans people across Britain.

Fix the Glitch

Fix the Glitch works to end online abuse by hosting workshops across the country on Digital Citizenship and Digital Self-Care, by working with other organisations to highlight the impact of online abuse, and by campaigning for decision makers to implement policies that strengthen digital citizenship and end online abuse.

Stop Hate UK

Stop Hate UK is one of the leading national organisations working to challenge all forms of Hate Crime and discrimination, based on any aspect of an individual’s identity. Stop Hate UK provides independent, confidential and accessible reporting and support for victims, witnesses and third parties.

Get The Trolls Out

A project and campaign to combat discrimination and intolerance based on religious grounds in Europe. Led by the Media Diversity Institute with 6 partners, it aims to harness the power of social media to disseminate innovative media outputs and generate dialogue in order to deliver a powerful counter-narrative against hate speech.

Hate Is a Virus

Hate Is a Virus aims to combat anti-Asian racism fueled by Covid-19 and to fundraise up to $1 million to support the Asian American community.

Shared Endeavour Network (ISD and Mayor of London)

The Shared Endeavour Network connects businesses, cultural, sport and grassroots organisations with public sector and civil society actors to jointly stand up to hate, intolerance and extremism across the capital.

Cach Partnership

A campaigning body to tackle hate crime in the UK.

Inclusion London

A group which supports Deaf and Disabled people’s organisations in London and campaigns for equality for Deaf and Disabled people.

Hate Crime Unit

A London-based student-led project dedicated to addressing the pervasive problem of hate crime through sustainable social justice.

Antisemitism Policy Trust

A charity associated with the All-Party Parliamentary Group Against Antisemitism. Its mission is to educate and empower parliamentarians, policy makers and opinion formers to address antisemitism.

Think tanks and other research centres



Demos, Centre for the Analysis of Social Media

Demos’ Centre for the Analysis of Social Media (CASM) is a digital research hub that provides unique insights and expertise across tech policy and its impact on society, economy and democracy. CASM is a joint venture with the University of Sussex and has conducted numerous projects into various forms of online hate speech and abuse.

The Institute for Strategic Dialogue

The Institute for Strategic Dialogue (ISD) is a London-based think tank that aims to counter extremism. It has conducted extensive research into online hate speech and the far right as well as numerous policy reports.

Royal United Services Institute, The Global Research Network on Terrorism and Technology

The Royal United Services Institute (RUSI) is the world's oldest independent think tank on international defence and security. The Global Research Network on Terrorism and Technology is a consortium of academic institutions and think tanks that conducts research and shares views on terrorist content online, exploring the recruiting tactics terrorists use online; the ethics and laws surrounding terrorist content moderation; public-private partnerships to address the issue; and the resources tech companies need to adequately and responsibly remove terrorist content from their platforms.

Chatham House

Chatham House, also known as the Royal Institute of International Affairs, is a not-for-profit and non-governmental organisation based in London whose mission is to analyse and promote the understanding of major international issues and current affairs. Their technology governance research examines the development and application of shared principles, norms, rules, decision-making procedures, and programmes that shape the use of information technology and the internet worldwide.

Runnymede Trust

Runnymede is the UK's leading independent race equality think tank. Runnymede generates intelligence to challenge race inequality in Britain through research, network building, leading debate, and policy engagement.

Media Diversity Institute

Media Diversity Institute (MDI) works internationally to encourage accurate and nuanced reporting on race, religion, ethnic, class, age, disability, gender and sexual identity issues in media landscapes around the world.

Private sector



Moonshot Countering Violent Extremism

Moonshot CVE is a London-based technology social enterprise that aims to understand and counter online extremism. It developed the "Redirect Method" in 2017, which involves delivering targeted advertising to Google and social media users searching for extremism-linked keywords. Advertisements then link to neutral or counter narratives provided by trusted figures, citizen journalists and defectors from the searched group on YouTube playlists and other sites.

Jigsaw (Perspective)

Jigsaw is a unit within Google that forecasts and confronts emerging threats, developing cutting-edge research and technology to counter threats that destabilise the internet and society, with the aim of creating future-defining technology for a safe world.

Faculty AI

Faculty AI is an AI company that has developed state-of-the-art products to detect, measure and counter online abuse, hate, harassment and terrorist content.

Zinc Network

Zinc Network is a communications agency that works with governments and civil society organisations to drive positive social change. They have worked on countering fake news and building community resilience to violence and extremism.


Factmata is an online content moderation company which has developed products for fact checking and online toxicity and abuse detection.


Crisp

Crisp provides 24/7/365 early-warning risk intelligence as a service for leading brands, global enterprises and social media platforms. Risks include activist attacks, hate speech, threats, fake news, false rumours, illegal content, compliance failures, and more.