Everyone’s talking about AI and elections, perhaps unsurprisingly with nearly half of the world’s population eligible to vote this year.
At the same time, it is becoming increasingly apparent that new AI tools can generate realistic text, images, audio and video for deceptive purposes.
Threats to elections are not new, but AI could heighten the risks: examples are emerging of AI-generated fake news and deepfakes that mislead voters, and there are concerns about AI enabling cyber-attacks on election infrastructure through sophisticated phishing emails.
However, we shouldn’t overplay the idea that our elections are no longer secure, particularly as evidence from elections worldwide to date, when set against polling data, shows no clear sign that AI-enabled interference has significantly affected election results.
As we look towards the UK general election on 4 July, our new briefing AI-Enabled Influence Operations: The Threat to the UK General Election uncovers the types of AI threats being seen around the world and where they could be deployed in the election cycle, alongside recommendations on how organisations responsible for election security can mitigate risks.

What are the election threats AI could enable?
Most election threats, such as phishing emails, polling disinformation and fake news sources, are not new, but they have the potential to be enhanced by AI. Through our research we’ve identified three categories of current election security threats:
- Election campaign threats: These are designed to manipulate the behaviour or attitudes of voters towards specific political candidates or particular views on political issues. Some of these threats may originate from hostile foreign countries, while others originate from political parties themselves. Examples include automated bot accounts on social media platforms, or highly realistic AI-generated content deceiving voters about a political candidate’s appearance, endorsements or activities.
- Election information threats: These are designed to undermine the quality of the information environment surrounding elections, to confuse voters and damage the integrity of electoral outcomes. Examples could include AI-supported content that misinforms voters about the time, manner and place of voting.
- Election infrastructure threats: These are designed to target the systems and individuals responsible for securing the integrity of election processes, with the aim of manipulating election outcomes or eroding confidence in election results. For instance, AI systems could make it easier to infiltrate election voter databases.
Whilst threats like these may not have changed the results of an election so far, there is evidence of second-order impacts damaging the democratic system, such as confusion among voters over whether AI-generated content is real, deepfakes inciting online hate against political figures, and politicians exploiting AI disinformation for potential electoral gain. Added together, these issues could lead to a concerning erosion of trust in our democratic processes beyond the election cycle.
When might we see election threats emerge?
Through analysis of when AI threats have materialised in recent elections around the world, and insights from the academic literature, it is possible to anticipate when different threats could emerge and what they are intended to achieve.
Threats could emerge up to a year in advance, initially focusing on undermining the reputation of targeted political candidates or shaping voter attitudes on specific campaign issues.
Activities much closer to polling day focus on polluting and congesting the information space, to confuse voters over elements of the election campaign or the voting process itself.
And after the polls close, operations are designed to erode confidence in the integrity of the election outcome, for instance through allegations of electoral fraud. This also undermines longer-term public trust in democratic processes.
Protecting our elections from AI-enabled threats
Though there is concern about AI undermining the upcoming UK general election in July, proactive action can be taken to enhance our resilience and protect our democratic process.
While effective election security in the long term requires a multi-stakeholder approach and wider societal initiatives, such as media literacy campaigns, our current work is focused on regulators and government departments specifically tasked with election security oversight. Owing to the very limited window of opportunity, we believe action is urgently needed.
Our recommendations are set out in full in the briefing but are summarised here:
- The Electoral Commission and Ofcom should jointly set out clear ‘fair use guidelines’ and request voluntary agreements for the use of AI by political parties for election campaigning.
- The Electoral Commission should work with Ofcom and the Independent Press Standards Organisation (IPSO) to publish new guidance for media reporting on content which is either alleged or confirmed to be AI-generated.
- Clarification statements should be made in relation to Section Six of Ofcom’s Broadcasting Code and IPSO’s Editors’ Code of Practice on AI incident reporting during polling day.
- The Electoral Commission should ensure any forthcoming voter information includes guidance for how individuals can remain vigilant to AI-based election threats.
- The Ministry of Justice should publish guidance for political parties and the general public on the use of AI to create fabricated election endorsements from individuals, and how this may engage existing defamation legislation.
- The Electoral Commission should require political parties to officially register all legitimate party-affiliated websites and to use content provenance techniques to sign their digital materials, making it easier for political candidates to expose AI-generated content.
- The Government’s Defending Democracy Task Force (DDTF) and the Joint Election Security and Preparedness Unit (JESP) should coordinate cross-government red-teaming exercises with local election officials, media outlets and social media platforms on AI-enabled election interference.
- To improve the evidence base and prepare for the upcoming general election, the DDTF should create a live repository of AI-generated material from recent and upcoming elections – including the local UK elections in May.
The potential of AI to undermine democracy is a complex issue, and we must remember that just because new AI systems could enable electoral interference, this does not necessarily mean the public will be more susceptible to such manipulation.
We should also remember that these technologies can be used to enhance democracy, rather than being purely a threat to it. For instance, AI chatbots could help to improve the connection between political candidates and voters, including translating manifesto content into a variety of languages.
Please engage with our research in AI-Enabled Influence Operations: The Threat to the UK General Election and look out for our final report in September which will provide longer-term policy and technical recommendations for protecting the integrity of democratic processes.