Time rapidly running out for regulators to counter AI threats before July general election

Wednesday 29 May 2024

Regulators must urgently tackle the threats posed by AI ahead of July’s general election to preserve trust in the democratic system, according to new research by The Alan Turing Institute’s Centre for Emerging Technology and Security (CETaS) published today.

The researchers are urging Ofcom and the Electoral Commission to use a rapidly diminishing window of opportunity to address the use of AI to mislead the public and erode confidence in the integrity of the electoral process.  

Recent advances in AI have raised widespread concerns that the technology could be used to spread disinformation, influence voters and disrupt electoral processes, whether to manipulate the outcome of elections or to erode trust in democracy.

In this new study, the researchers caution against overstating fears that AI will directly sway election results, noting that, to date, there is little evidence of AI changing the outcome of an election from the expected result. Of 112 national elections that have taken place since January 2023 or are forthcoming in 2024, their research found examples of AI-enabled interference in just 19.

However, there are early signs of damage to the broader democratic system. This includes confusion among the electorate over whether AI-generated content is real, which damages the integrity of online sources; deepfakes inciting online hate against political figures, which threatens their personal safety; and politicians exploiting AI disinformation for potential electoral gain.

The study also found that ambiguity in current electoral law around AI could lead to its misuse in the upcoming general election. For example, people could use generative AI systems such as ChatGPT to create fake campaign endorsements, damaging the reputations of the individuals implicated and undermining trust in the information environment.

The authors make several recommendations outlining what could be done to mitigate potential threats to the UK’s election process.  

This includes urging the Electoral Commission and Ofcom to create guidelines and seek voluntary agreements with political parties setting out how they should use AI in campaigning, including a requirement that AI-generated election material be clearly labelled as such. The authors also say these organisations should work with the Independent Press Standards Organisation (IPSO) to publish new guidance for media reporting on content that is alleged or confirmed to be AI-generated, particularly on polling day in light of broadcasting restrictions.

The researchers believe that the Electoral Commission should ensure any forthcoming voter information contains guidance for how individuals can remain vigilant to AI-based election threats (such as attempts to cause confusion over the time and place of voting).  

They also recommend that the UK Government’s Defending Democracy Task Force (DDTF) and the Joint Election Security and Preparedness Unit (JESP) coordinate exercises with local election officials, media outlets and social media platforms, simulating possible deepfakes of political candidates and AI-driven voter suppression efforts so that these organisations are prepared to respond when such situations arise. They say the DDTF should also create a live repository of AI-generated material from recent and upcoming elections, so that trends can be analysed to inform future public information campaigns.

The researchers created a timeline of how AI threats develop in the lead-up to an election. In the months, weeks and hours before polls open, AI could be used to undermine the reputation of political candidates, falsely claim that candidates have withdrawn, shape voter attitudes on a particular issue or create deceptive political ads.

During the polling period, deepfake attacks, polling disinformation and AI-generated knowledge sources (such as fake news articles) are likely to circulate and create confusion over how, where and when to vote. After the election, the most likely threats are candidates being falsely declared the winner before results have been announced, as well as deepfakes and AI bots alleging election fraud in order to undermine confidence in the result.

Sam Stockwell, Research Associate at The Alan Turing Institute and lead author, said: “With a general election just weeks away, political parties are already in the midst of a busy campaigning period. Right now, there is no clear guidance or expectations for preventing AI being used to create false or misleading electoral information. That’s why it’s so important for regulators to act quickly before it’s too late.”

Dr Alexander Babuta, Director of CETaS at The Alan Turing Institute, said: “While we shouldn’t overplay the idea that our elections are no longer secure, particularly as worldwide evidence demonstrates no clear evidence of a result being changed by AI, we nevertheless must use this moment to act and make our elections resilient to the threats we face. Regulators can do more to help the public distinguish fact from fiction and ensure voters don’t lose faith in the democratic process.”
