Misinformation is broadly understood as content that makes false or misleading claims. It can take many forms, from fabricated headlines and news stories to audio recordings, images and videos presented out of context or in otherwise misleading ways. Increasingly, it includes content that has been manipulated or created artificially, including the synthetic videos and audio recordings sometimes known as ‘deepfakes’.
With more voters heading to the polls in 2024 than ever before, concern about online misinformation is high. Many initiatives and interventions to combat its spread have been developed, including by social media platforms. And though their efficacy has been tested through research, far less is known about public perception and use of these tools.
In our research into online safety with the Turing’s public policy programme, we use data science and artificial intelligence to measure, understand and mitigate online harms. Our work ranges from investigating online hate speech and countermeasures, to understanding different experiences of online harms by gender.
As part of this work, we conducted a nationally representative survey of 2,000 adults living in the UK to find out what people are currently doing to protect themselves against the threat of misinformation online. We examined people’s awareness, attitudes and engagement in relation to a range of interventions, along with their general susceptibility to misinformation and trust in different institutions.
How do people feel about misinformation on social media?
Our survey results demonstrated that both exposure to misinformation and concern about it are high.
Almost all respondents report having witnessed misinformation on social media: only 3% of social media users say they have never encountered it, and 86% report being concerned about it.
We also asked questions about public trust in institutions, including mainstream news organisations, government and academia. Our results show that concern about misinformation is reflected in low trust in a range of these institutions.
What tools do we have to guard against online misinformation?
Misinformation is clearly a concern, so what are online platforms doing to mitigate these harms and how are social media users protecting themselves?
We explored three types of interventions that platforms and the public can use in the face of online misinformation.
Behind-the-scenes interventions
These are interventions that platforms use 'behind the scenes' to combat the spread of misinformation. We asked participants about four of these:
- Demonetisation: ensuring publishers of misinformation can no longer make money from it, for example, through adverts.
- Downranking: using algorithms to make the content appear less frequently in people's newsfeeds or be shown to fewer users (a simple illustrative sketch follows this list).
- Early moderation: preventing certain types of content from being uploaded.
- Deplatforming: removing a user or group from a platform.
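To make the downranking idea concrete, here is a minimal, hypothetical sketch of how a feed-ranking score might be penalised for content flagged as likely misinformation. The data structure, field names and penalty factor are illustrative assumptions, not any platform's actual system.

```python
# Hypothetical illustration of downranking: posts flagged as likely
# misinformation have their ranking score reduced, so they surface
# lower in a user's feed. Not any platform's real algorithm.

def rank_feed(posts, downrank_factor=0.2):
    """Sort posts by engagement score, penalising flagged content."""
    def score(post):
        base = post["engagement_score"]
        # Apply a penalty if the post has been flagged as likely misinformation
        return base * downrank_factor if post["flagged_as_misinformation"] else base
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": 1, "engagement_score": 0.9, "flagged_as_misinformation": True},
    {"id": 2, "engagement_score": 0.6, "flagged_as_misinformation": False},
    {"id": 3, "engagement_score": 0.4, "flagged_as_misinformation": False},
]

# The flagged post drops from first place to last, despite its high engagement
print([post["id"] for post in rank_feed(posts)])  # [2, 3, 1]
```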
The high overall concern around misinformation that we found in the population is reflected in support for platform-initiated interventions.
While, on average, about half of respondents had heard of the four behind-the-scenes interventions we asked about, a large majority were comfortable with their implementation (72% or above for all four). This supports research elsewhere finding that people overwhelmingly support action from both government and social media platforms to tackle online harms.
Publicly presented interventions
Platforms can also choose to intervene in more publicly visible ways. We asked respondents about the following methods:
- Public awareness campaigns: raise awareness about the prevalence of misinformation and the harm that believing in and sharing such content may cause.
- Accuracy prompts: encourage people to pause before liking or sharing content in order to consider its truthfulness.
- Fact-check labels: partially or fully overlay content and usually warn users that claims made in the content have been disputed by third-party fact-checkers, sometimes offering links to more information.
- Debunking campaigns: aim to correct false beliefs by countering claims made in misinformation with detailed factual explanations.
For most of these interventions, responses are promising when people do encounter them. However, only about half of the population have seen public awareness campaigns, fact-check labels or debunking campaigns, and only about a quarter have seen accuracy prompts. This is far fewer than report having seen misinformation online.
Participatory interventions
Participatory interventions are interventions that the public might actively seek out to equip themselves against misinformation. We asked about three types:
- Media literacy courses: aim to equip people with skills to help them critically evaluate content, recognise content that may be misinformation, and reduce susceptibility to believing and sharing such content.
- Inoculation games: short online games that people of all ages can play to help them learn to spot common signs of misinformation.
- Self-help resources: aim to allow individuals to investigate the truthfulness of a claim or gather additional context relating to something they have seen online.
We found that the majority of the public are unaware of participatory interventions, and fewer than 8% had used any of them - as low as 3% for some.
We also found that simply increasing awareness was not a solution to increasing uptake. Even when people had been told about the interventions, only a small proportion indicated they would take a media literacy course (14%) or play an inoculation game (18%) in the future, while for self-help resources the proportion was higher (33%) but still a minority.
Looking forward
With the increasing availability of technologies that can quickly and convincingly create and spread false content online, it is critical that the public are equipped with the right tools to protect themselves against the spread of misinformation and its adverse effects.
No matter how effective various interventions are shown to be, they will only be useful if the public is supportive of and engaged with their implementation. Knowing that the public are open to these interventions is a great first step, but we’d like to see online platforms capitalise on this public support, raising awareness of tools like media literacy courses, and ultimately enabling us all to play our part in improving the quality of the information environment.
Read the full paper and survey results here
Top image: pressmaster
Other images: Enock et al (2024)