Almost 90% of young people exposed to harmful content on social media

Monday 20 Mar 2023

Nearly 90% of people aged between 18 and 34 have witnessed or received harmful content online at least once, according to early findings from a national survey of public attitudes towards AI and data-driven technologies, published today by The Alan Turing Institute and The Ada Lovelace Institute. 

The results come at a time when debate around the Government’s Online Safety Bill – a new set of laws to protect children and adults online – is intensifying.  

The Bill, which is being finalised, aims to make social media companies more responsible for their users’ safety on their platforms, forcing social media giants like Twitter and Facebook to quickly remove illegal content or face large penalties. 

The survey results also showed that two thirds of all adults in the UK have seen or received harmful content online, including hate speech, false information, fake images and bullying, at least once.  

And more than 40% of people in the youngest age group in the study – those aged between 18 and 24 – have been exposed to this kind of harmful content online many times.  

When survey respondents were asked what social media platforms and the government could do to tackle harmful content online, 80% thought that creators of harmful content should be banned or suspended from social media.  

And more than 70% said that the government should issue large fines for platforms that fail to deal with harmful content online. A similar proportion said that legal action should be taken against platforms that fail to deal with harmful content online.  

Dr Florence Enock, study lead and Research Associate at The Alan Turing Institute, said: “This research is really important because it highlights just how prevalent harmful online content is across the UK. We’re very concerned to see that so many people have been impacted. The Online Safety Bill is coming at a crucial time and it’s really important that the Government has the means to protect people from harm online.” 

Professor Helen Margetts, Director of Public Policy at The Alan Turing Institute and Principal Investigator, said: “It’s worrying to see that so many young people have been subjected to harmful content online. We’re continuing to develop new ways to better understand online hate and other forms of harm, helping Government to implement and police laws, like those that will be brought in under the Online Safety Bill. These results show clearly that people welcome action by both government and platforms to tackle online harm.” 

These results come from a major new survey of public attitudes to AI and data-driven technologies conducted by The Alan Turing Institute and The Ada Lovelace Institute. 

A representative sample of more than 4,000 UK adults was surveyed. As well as being asked about harmful content, participants were asked about a range of specific uses of AI, from facial recognition and medical diagnostics to driverless cars and credit scoring. Further findings from the survey will be published in summer 2023.

Top image credit: Camilo Jiminez, Unsplash