Digital platforms and internet-enabled technologies have had a transformative effect on our social and political lives—how we communicate, form communities, work, shop, organise politically, travel, consume information and news, socialise and entertain ourselves. However, it is increasingly apparent that these data-intensive platforms can also threaten the integrity of democratic processes, jeopardise people’s safety and security, and work against social values such as equity, fairness and transparency.
In The Turing’s Hate speech: measures and counter-measures project, we are working to address one of the most serious social hazards that we face online: the problem of abusive online content, from harassment to hate speech. In our new policy briefing we address one of the most fundamental questions in this research area: how much online abuse is there?
Over the last six months we have reviewed five sources of evidence: (1) Government statistics on criminal online abuse, (2) reports by civil society charities, (3) the transparency reports of the major social media platforms, (4) measurement studies by academics, and (5) survey data, including previously unreleased analyses from the 2019 Oxford Internet Survey. Our findings are available in our six-page executive summary and corresponding full report, and summarised as follows:
- The prevalence of illegal online abuse in the UK is extremely low. In Figure 1 we show the number of online hate crimes by target group for 2016/2017 and 2017/2018, which totalled 1,171 and 1,784 respectively. We don’t have figures for 2018/2019: in its most recent hate crime report, the Home Office gave no figures for online hate due to concerns about the quality of the statistics. This is a relatively new area for Government to monitor, and analysis of online hate statistics is undoubtedly complex. That said, the lack of available figures from the Home Office leaves a considerable and important gap in our understanding of online hate.
- Only four of the major platforms provide sufficiently detailed ‘transparency’ reports to assess how much abusive content they host. Notably, Snapchat, Instagram, LinkedIn, WhatsApp and Pinterest do not make statistics on online abuse available. From our analysis of Facebook, YouTube, Twitter and Reddit we estimate that the level of abuse is very low, possibly around 0.001% of all content, but it is hard to be sure given that none of the platforms publicly share the total amount of content that they host (a short illustration of this uncertainty follows this list).
- Most of the evidence we reviewed suggests that the prevalence of online abuse on social media platforms is low. Academic studies somewhat complicate this picture by showing that on ‘niche’ platforms like 8chan and Gab, prevalence is higher. They also suggest that at certain times the level of abuse peaks, for example following a terrorist attack. But, overall, the evidence indicates that most people most of the time will not encounter abuse when they go online.
- However, in stark contrast, a large number of people report having been exposed to online abuse at some point. Using survey data, including previously unseen analyses from the Oxford Internet Survey (OxIS), we find that 30-40% of people in the UK have seen online abuse. We also find that 10-20% of people in the UK have personally been targeted by abusive content. And our analysis of OxIS shows that experiences of online abuse vary considerably across demographics:
- Ethnicity: Black people and those of ‘other’ ethnicities are far more likely to be targeted by, and exposed to, online abuse than White and Asian people. Differences in experiences of online abuse according to ethnicity are shown in Figure 2.
- Age: Younger people are more likely to be targeted by, and exposed to, online abuse. People aged 18-30 are at least twice as likely as people aged over 52 to observe cruel/hateful content online.
- Gender: Surprisingly, our analysis of OxIS did not identify a substantial difference according to gender. However, we advise caution as other survey data suggests that gender plays an important role in shaping people’s experiences of online abuse. To fully understand this issue we need more research and, potentially, to develop new methods of measurement.
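The denominator problem mentioned above is easy to see with a little arithmetic. The sketch below uses entirely hypothetical figures (no platform reports these numbers) to show how the assumed total content volume moves the prevalence estimate by orders of magnitude:

```python
# Minimal sketch: how a prevalence estimate derived from transparency
# reports depends on the unknown denominator. All figures below are
# hypothetical placeholders, not values reported by any platform.

def prevalence(abusive_items_actioned: int, total_items_hosted: int) -> float:
    """Share of hosted content that is abusive, as a percentage."""
    return 100 * abusive_items_actioned / total_items_hosted

# Suppose a platform reports acting on 10 million abusive items in a quarter.
actioned = 10_000_000

# Because no platform publishes its total content volume, we must assume it.
# Varying that assumption by an order of magnitude in each direction moves
# the estimate from ~0.01% to ~0.0001%, which is why a figure like 0.001%
# can only ever be a rough midpoint.
for assumed_total in (100_000_000_000, 1_000_000_000_000, 10_000_000_000_000):
    estimate = prevalence(actioned, assumed_total)
    print(f"assumed total = {assumed_total:>17,} -> prevalence = {estimate:.4f}%")
```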
Conclusion
Our review of the available evidence on the prevalence of online abuse is far from complete. Indeed, one of our key findings is that we need to build better datasets and encourage more data sharing between tech companies, government and academia if we are to fully understand and tackle this issue. So, what can we do to make this happen? Our three key recommendations are:
- A representative survey dedicated to understanding the experiences of online abuse among people in the UK should be administered each year, rather than as a subsection of other surveys.
- Government statistics on different types of illegal online abuse, including both hate speech and harassment, need to be centrally collated and published in a single bulletin. Efforts should be made to improve the coverage, comparability and quality of Government statistics – and online hate crime should be reinstated as part of the Home Office’s reporting.
- A publicly accessible monitoring platform should be established to provide real-time insight into the prevalence of online abuse. Whilst we recognise the limitations of computational tools, and of relying on ‘big’ rather than high quality datasets, efforts should be made to leverage recent computational advances (see the sketch after this list for the kind of estimate such a platform could publish).
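As a rough illustration of that last recommendation, the sketch below classifies a sample of posts and reports prevalence with a confidence interval. It is a minimal sketch only: the keyword matcher is a deliberately crude stand-in for a real abuse classifier, the ABUSIVE_MARKERS vocabulary and the sample are invented, and a production system would need far more careful sampling and measurement.

```python
# Minimal sketch of the kind of estimate a monitoring platform could publish:
# classify a random sample of posts and report prevalence with a confidence
# interval. The keyword matcher below is a placeholder, not a real classifier.
import math

ABUSIVE_MARKERS = {"slur_a", "slur_b"}  # placeholder vocabulary, not a real lexicon

def is_abusive(post: str) -> bool:
    """Crude stand-in for an abuse classifier: flag posts containing a marker."""
    return any(marker in post.lower() for marker in ABUSIVE_MARKERS)

def wilson_interval(hits: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a proportion estimated from n samples."""
    p = hits / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - margin, centre + margin

# Invented sample purely for illustration.
sample = ["have a nice day", "this contains slur_a", "lovely weather"] * 1000
hits = sum(is_abusive(post) for post in sample)
low, high = wilson_interval(hits, len(sample))
print(f"estimated prevalence: {hits / len(sample):.2%} (95% CI {low:.2%}-{high:.2%})")
```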
Online abuse threatens to reinforce existing inequalities and discrimination in social and public life. It could deepen divisions within and across our communities, and even discourage a whole generation of young women from public life, as has recently been reported.
Internet-enabled technologies have had positive, transformative effects across society. But Tim Berners-Lee, the inventor of the World Wide Web, recently voiced his concern that more must be done to tackle the web’s “downward plunge to a dysfunctional future”. If we are to have a fair, open and accessible internet then hazards such as online abuse must be dealt with – without, at the same time, infringing on freedom of speech and open expression.
To this end, The Turing’s Hate speech: measures and counter-measures project will create more resources for all researchers, policymakers, civil society actors and industry practitioners over the coming months.