Since the start of 2024, my colleagues and I at the Turing’s Centre for Emerging Technology and Security (CETaS) have been studying the impact of AI on the security of our elections, and on democracy more generally.
For researchers in this field, the chance to build a new evidence base has been unprecedented, with more than two billion people voting in at least 50 countries.
Looking back to the start of the year, there were significant concerns about the proliferation of new generative AI models, and their potential to create large volumes of harmful disinformation to disrupt elections.
And as the year comes to a close, we can see that many challenges did materialise, though their impacts were not always clear-cut. AI was used in malicious ways in most major elections, yet there is a lack of evidence that it measurably affected any election result.
Yet this is no cause for complacency: both the technology itself and the hype surrounding the threat it poses are polluting our information environment and undermining trust in the wider democratic system.
This year we’ve seen AI-generated content designed to damage the reputation of political candidates, AI bot farms mimicking voters, and fabricated celebrity endorsements.
We even saw incidents of female politicians being targeted with deepfake pornography smears, harming their wellbeing and underscoring the discriminatory, gendered dimension of some of these threats.
Taking action to secure our elections
Whilst we should be reassured by the lack of clear impact on election results, we nevertheless need to take action to safeguard the integrity of future voting processes and, crucially, to give voters confidence that elections can be secure in the age of AI.
CETaS has just published the final report in this year's series of three. It takes stock of evidence from the recent US presidential election and provides recommendations for the future, with a focus on how UK institutions can counter the malicious use of AI in elections.
The solutions we advocate have been informed by an extensive literature review and workshops with cross-sector experts, and centre on four strategic objectives, each targeting a different aspect of the online disinformation process.
1. Curtailing generation
The ideal situation is that we can put up a range of barriers to deter people with malicious intent from creating online disinformation in the first place, and there are a variety of ways to tackle this.
Interventions recommended in our report include strengthening the authenticity of credible information sources so these become harder to fake, for example by automatically embedding provenance records at the point of origin in digital content produced by the UK government and other sectors. This way, we can reliably track and trust information about the creation, modification and ownership of digital content, ensuring it comes from verified sources.
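To make this concrete, the sketch below shows in simplified form what such a provenance record might contain: a content hash, publisher, timestamp and a signature over the whole claim. It illustrates the general idea rather than any specific standard (real schemes such as C2PA use asymmetric signatures and much richer manifests), and the key, publisher name and field names are all hypothetical.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical signing key for illustration only; a real publisher would
# use asymmetric keys held in secure infrastructure, not a shared secret.
SIGNING_KEY = b"example-publisher-signing-key"

def make_provenance_record(content: bytes, publisher: str) -> dict:
    """Bind content to its origin: hash it, record who published it and
    when, then sign the claim so later tampering is detectable."""
    claim = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "publisher": publisher,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_provenance_record(content: bytes, record: dict) -> bool:
    """Check the content still matches the record and the signature holds."""
    claim = {k: v for k, v in record.items() if k != "signature"}
    if hashlib.sha256(content).hexdigest() != claim.get("content_sha256"):
        return False
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

document = b"Official election guidance, version 1"
record = make_provenance_record(document, "example.gov.uk")
print(verify_provenance_record(document, record))         # True: untampered
print(verify_provenance_record(document + b"!", record))  # False: modified
```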
We would also like to see a review to understand weaknesses in existing legislation (including defamation, privacy and electoral laws) which could be exploited with malicious AI-generated content either targeting political candidates or designed to undermine election integrity. This will help to strengthen legal deterrence against anyone who might want to create this content.
2. Constraining dissemination
It is inevitable that some disinformation will be created, so we also need measures that reduce its effectiveness, including measures that limit how such content spreads on social media platforms.
In the report we suggest a range of interventions including the development of standardised benchmarks and guidance for deepfake detection tools, providing minimum quality assurances for those using them.
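As a rough illustration of what a standardised benchmark might measure, the sketch below scores a hypothetical detector on a fixed labelled test set and applies an example minimum-quality gate. The metric choices and thresholds are assumptions made for illustration, not figures from our report.

```python
from dataclasses import dataclass

@dataclass
class BenchmarkResult:
    true_positive_rate: float   # share of deepfakes correctly flagged
    false_positive_rate: float  # share of authentic media wrongly flagged

def evaluate_detector(predictions: list[bool], labels: list[bool]) -> BenchmarkResult:
    """Score a detector on a labelled test set, where True means 'deepfake'.
    A standardised benchmark would fix the test set and require every tool
    to report the same metrics."""
    tp = sum(p and l for p, l in zip(predictions, labels))
    fp = sum(p and not l for p, l in zip(predictions, labels))
    positives = sum(labels)
    negatives = len(labels) - positives
    return BenchmarkResult(
        true_positive_rate=tp / positives if positives else 0.0,
        false_positive_rate=fp / negatives if negatives else 0.0,
    )

def meets_minimum_standard(result: BenchmarkResult) -> bool:
    """Example pass/fail gate; these thresholds are purely illustrative."""
    return result.true_positive_rate >= 0.90 and result.false_positive_rate <= 0.05

# Toy run: four items, the detector gets three of them right.
result = evaluate_detector(
    predictions=[True, True, False, True],
    labels=[True, False, False, True],
)
print(result, meets_minimum_standard(result))
```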
We would also like to see Ofcom create a new Code of Conduct aimed at systematically targeting online disinformation. Drawing inspiration from the EU’s Code of Practice on Disinformation, the new code would set out self-regulatory standards: demonetising disinformation content creators, defining impermissible manipulative behaviours associated with bot accounts, providing tools that empower users against disinformation, and requiring transparent incident reporting.
And the Electoral Commission should expand its existing guidance for UK political parties to cover both the appropriate use of AI tools and clear red lines on misuse. In turn, political parties should incorporate this guidance into their codes of conduct to create accountability for candidates and campaigners.
3. Counteracting engagement
There are also risks from users interacting with disinformation on digital platforms once it has been disseminated. Our report therefore sets out how the malicious influence of this content could be reduced, adding further barriers for those who want to use AI tools to politically manipulate individuals.
Accessible fact-checking apps, and greater investment by social media platforms in decentralised fact-checking initiatives, are important for verifying the sheer volume of content on user feeds. We also need robust ways of dealing with significant disinformation incidents that sow public doubt over the integrity of elections.
For instance, we recommend that the government establishes a UK ‘Critical Election Incident Public Protocol’ comprising a range of senior government experts. This body would inform the public when concerning election threats emerge, helping to restore trust in the validity of our democratic processes.
The way the media reports on major disinformation incidents is also crucial, and we would like to see revised media guidance drawing on insights from journalists and fact-checkers. Such guidance could include refraining from linking to the original source content in online articles, to avoid more users sharing it, and framing impact in a way that does not exaggerate the threat of these activities.
4. Empowering society
Ensuring all citizens, and the bodies that protect us, have the right skills and capabilities can build long-term resilience against those who want to spread deceptive election information.
This includes understanding potential gaps in the regulatory powers and remit of the Electoral Commission and Ofcom so they are able to effectively tackle these threats.
We also believe that researchers monitoring online disinformation must be given trusted access to social media platform data, to assess and mitigate the most serious malicious voter-targeting activities. Without this access, researchers will never gain a full picture of the problems we face and how to address them.
Digital literacy and critical thinking initiatives also show a lot of promise but are not yet widely adopted. We would like to see the government introduce mandatory programmes in primary and secondary schools, along with more accessible materials for adults, covering issues like deepfakes, how to verify content, and how AI algorithms work.
Overall, our report contains 15 actions, including recommendations for DSIT, the Ministry of Justice, Ofcom, the Electoral Commission, the Cabinet Office, the Independent Press Standards Organisation and the Department for Education.
This is not an issue we should defer until the next general election. We have a golden window of opportunity to take action now, learning the lessons of 2024 and putting the systems in place which will deter malicious interference and reassure the public that vital democratic processes are protected in the future.
Please take a look at the report ‘AI-Enabled Influence Operations: Safeguarding Future Elections’, published 13 November.
You can also read the previous reports in the series: ‘AI-Enabled Influence Operations: Threat Analysis of the 2024 UK and European Elections’ (published September 2024) and ‘AI-Enabled Influence Operations: The Threat to the UK General Election’ (published May 2024).
Top image: LanaSham via Adobe Stock