Beyond the deepfake: five Turing experts look at how AI is impacting democracy

How might emerging technology affect future elections?

Monday 03 Jun 2024

Distinguishing between what is true and false, biased and unbiased, is becoming increasingly difficult. The pace of technological change and the rapid spread of misinformation have raised significant concerns, particularly in a year when billions of people around the world go to the polls.

But while much of the discourse has focused on the threat of deepfakes, AI could impact our democracies in many ways. We asked five Turing experts to share their take on how AI is influencing the way we think about and process information.

We should be wary about swapping search engines for LLMs to get our politics information

Jonathan Bright (Head of AI for Public Services and Head of Online Safety)

The internet has already radically shifted the landscape of democratic communication. Traditional information ‘gatekeepers’ such as news organisations have been pushed aside, or at least placed on a par with blogs, substacks, WhatsApp channels, Discord servers, social media, and other ways of getting messages out.

Large language models (LLMs) such as ChatGPT and Gemini seem like the next big shift in this regard. As they become more integrated into our daily lives, it seems inevitable that we’ll use them for political information, much as we turn to search engines for answers. But where a search engine returns a list of websites, an LLM provides the answer itself, written up by the model. Research casts doubt on their ability to get key answers right, such as critical information about how and where to vote, something which in the UK can trip up even the most politically engaged constituents.

Most tech companies creating these tools would claim to be politically neutral, but ensuring the models themselves are unbiased is challenging, and there is still much work to be done on better ways to train LLMs to adopt neutral viewpoints. Perhaps the biggest challenge of all is that these effects are being felt, while the technology is still in its infancy, in what is arguably the most important election year in democratic history.

We must avoid adult assumptions about how children experience misinformation

Mhairi Aitken (Ethics Research Fellow)

Children today are the first generation to grow up with generative AI, and in their search for accurate information they must navigate misinformation like no generation before them.

Yet assumptions about how children experience misinformation are often based on adult perspectives, which risks producing policies or safeguards that are irrelevant or ineffective. No adult today has experienced growing up with generative AI, so children really are the experts here, and their voices and experiences should be at the heart of policy and decision-making.

Our research shows that children are enthusiastic about discussing the role of AI in their lives and capable of doing so with real nuance. They offer insightful ideas about how AI could be used to create genuine value and about the safeguards that need to be put in place. Importantly, these ideas are grounded in children’s unique understanding of how technology shapes their lives. In the age of generative AI, we need far more processes like these to bring children’s experiences, views and ideas into policy-making and governance. Only by doing so can we develop effective policies and safeguards that reflect the actual experiences, interests and needs of children.

We should be alert to the risks of cyber-attacks and hacking on elections around the world

Ardi Janjeva (Research Associate, Centre for Emerging Technology and Security)

Though there’s much focus on the ease of creating fake videos, audio or text, we should remember that people with malicious intent can also use technologies like AI to target the infrastructure responsible for elections. At worst, this could assist in the manipulation of election outcomes or erode confidence in election results. The transition to digital electoral infrastructure in many countries, while bringing benefits in terms of accessibility, also risks making those countries more attractive targets for those who want to cause harm.

While evidence of AI being used to facilitate election-related cyber intrusions is currently limited, cyber-attacks and leaks of voter databases are increasingly common globally: in 2021-22 the UK’s Electoral Commission suffered a breach which led to names and addresses of voters registered between 2014 and 2022 being made public.

However, such intrusions may not be motivated by an explicit desire to undermine an election: financial incentives may often play a more important role. Recognising the different actors and intentions at play will be central to developing effective and proportionate strategies to mitigate these threats.

We can protect ourselves through enhanced media literacy

Florence Enock (Senior Research Associate, Online Safety)

Online misinformation remains a global issue, with 86% of people worldwide reporting exposure to false information; a recent Turing survey found that this figure rises to 97% among people using social media in the UK. Correspondingly, people are concerned about the spread of such content, with 85% of people globally worried about false information online. While awareness of the problem is an important first step towards tackling it, high levels of concern mean that people may start to lose trust in all information, rather than reserving scepticism for false content. This problem can be made worse by some of the very solutions designed to tackle misinformation: research shows that exposure to general warnings and awareness campaigns about the prevalence of false content can lead to disbelief in true headlines as well as false ones. This demonstrates the need to equip people with the right tools to discern true from false information.

Despite the development of many interventions over the past decade, such as media literacy courses, the challenge lies in encouraging people to actively engage with them. While people are generally supportive of initiatives designed to tackle misinformation, most do not use the resources available to them, even though those resources are effective. No matter how impactful an intervention is shown to be, it will only be useful if people take part. As it becomes easier for false content to spread quickly, it is more important than ever to engage people with the right tools to protect themselves against online misinformation and its adverse effects.

We can take heart: AI will enhance and improve democracy

Sam Stockwell (Research Associate, Centre for Emerging Technology and Security)

Despite concerns being raised over the potential threat AI could pose to elections, it is also important to recognise the benefits that these systems offer for democratic processes.

For voters, chatbots could help to improve connections with political candidates. This includes being able to receive instant, tailored responses to queries about a candidate’s manifesto, with those responses translated into the user’s preferred language. The same systems could also help to summarise voter registration requirements, though voters must bear in mind the risk of inaccurate information, given that AI models may not be updated with the latest data.

For resource-constrained political parties, generative AI tools can enable the inexpensive creation of professional and personalised campaign material at scale, helping to make election contests fairer. Finally, new AI systems could add an extra layer of election security, for example by assisting human reviewers in proofreading election materials to ensure legal compliance (e.g. with mail-in ballots). With a number of key elections taking place in 2024, we need to adopt a balanced perspective on the role AI can play.


Top image: roibu