Abstract
In light of current policy conversations around online safety, we sought to understand experiences of online harms and attitudes towards their mitigation amongst the British public. To do so, we asked a nationally representative sample of over four thousand people the extent to which they had experienced content which they consider to be harmful online (such as hate speech, misinformation, bullying or violence), as well as what they thought social media platforms and the government should do to tackle harmful content online. Our findings show that exposure to online harms amongst the British public is high and demonstrate that people strongly welcome action to tackle such content. These findings come at a time of heightened national attention to a myriad of topics concerning the next phase of internet regulation, and highlight the importance of efforts from researchers, practitioners and policy-makers in working towards a safer online environment.
- Our results suggest that exposure to online harms amongst the British public is high. Two thirds (66%) of all adults in the sample reported that they had witnessed harmful content online before, whilst for participants aged 18-34 this was almost 9 in 10 (86%). Participants in the youngest age bracket reported the highest exposure to harm, with 41% of 18-24 year olds indicating that they had been exposed to harmful content many times.
- Participants across all demographic groups strongly welcomed action from social media platforms to tackle online harms. Almost 80% of respondents thought that social media platforms should ban or suspend users who create harmful content, and almost 75% thought that platforms should remove harmful content. This was consistent across age, gender, educational background, income and political ideology.
- The majority of respondents support increased action from the government to tackle online harms. More than 70% of respondents said that the government should be able to issue large fines to platforms that fail to deal with harmful content online, while 66% thought that legal action should be taken against such platforms.
Full report (PDF): Tracking experiences of online harms and attitudes towards online safety interventions
The Alan Turing Institute’s public policy programme
The public policy programme works alongside policy makers to explore how data-driven public service provision and policy innovation might solve long-running policy problems, and to develop the ethical foundations for the use of data science and artificial intelligence in policy-making. Our aim is to contribute to the Institute's mission – to make great leaps in data science and artificial intelligence research in order to change the world for the better – by developing research, tools and techniques that have a positive impact on the lives of as many people as possible.
Funding
This work was supported by funding from the Engineering and Physical Sciences Research Council (EPSRC) and the Arts and Humanities Research Council (AHRC).
Additional information
If you have questions about this report or would like more information about The Alan Turing Institute’s research, please contact Florence Enock ([email protected]).