There is no evidence that AI-enabled misinformation meaningfully impacted recent UK or European election results, according to research published today by The Alan Turing Institute. However, concerns remain about disinformation damaging the integrity of the democratic system and about new risks posed by parody or pornographic deepfakes.
Researchers from the Centre for Emerging Technology and Security (CETaS) at the Turing identified just 16 confirmed viral cases of AI disinformation or deepfakes during the UK general election, while only 11 viral cases were identified in the EU and French elections combined.
Despite these reassuring findings about the impact of AI on election results, which are in line with previous Turing research, there are emerging concerns about realistic parody or satire deepfakes which, while intended as humour, can include misleading election claims that some voters interpret as factual.
This poses new challenges for regulators in striking a careful balance between countering disinformation and protecting free speech, while also recognising the benefits of satire in political discourse.
And politicians, particularly women, were targeted with deepfake pornography smears, harming their wellbeing and posing risks to their professional reputations.
The researchers also found evidence that voters confused legitimate political content with AI-generated material, which could erode public confidence in online information more widely, beyond just the election context.
Alongside these potential risks, CETaS researchers also highlighted examples from recent elections where AI could benefit the democratic process, including:
- Amplification of environmental issues by climate campaigners through clearly labelled AI-generated parodies
- Experiments which connect politicians and voters in new ways like ‘AI Steve’
- Fact-checkers like Full Fact using AI to scrutinise political claims much more quickly than through human review alone
Sam Stockwell, lead author and Research Associate at The Alan Turing Institute, said: “Echoing our previous report, there remains no evidence AI has impacted the result of an election, but we remain concerned about the persistent erosion of confidence in what is real and what is fake across our online spaces.”
“It’s right that people are sceptical about the information they see, but by ensuring we use reliable sources, and cross-reference them with others, we can all be more confident about getting the information we need to inform our decisions at the ballot box.”
Researchers highlighted that the creation and circulation of deepfakes and other forms of AI-enabled disinformation in recent elections can be attributed to both domestic and state-sponsored groups.
This includes members of the public who shared fake content, albeit with no explicit intention of undermining the election.
There is some evidence of political candidates sharing deepfakes online; while researchers identified only one such instance in the UK, the practice was more prevalent in the EU and French elections.
And in all three elections, researchers found signs of interference from people or groups linked to Russia, although the interference did not have a meaningful impact on any of the results.
AI-Enabled Influence Operations: Threat Analysis of the 2024 UK and European Elections is the second in a series of three reports looking at the impact of AI on election security. The final report, which will be published in November, will examine AI threats during the US election and contain recommendations for strengthening future election resilience.
Top image credit: Jenny on the moon via Adobe Stock