Close to nine in ten people (87.4%) in the UK are concerned about deepfakes affecting election results, according to new research published by The Alan Turing Institute today.
A similar proportion (91.8%) are also concerned about the broader spread of deepfakes, with particular concerns about their potential role in creating online child sexual abuse material, increasing distrust in information and manipulating public opinion, based on a nationally representative survey of 1,403 people living in the UK.
The survey is one of the first of its kind since recent improvements in deepfake technology and the rise of political deepfakes online. Researchers believe that high profile deepfakes of public figures, including celebrities such as Taylor Swift, may have created a heightened awareness of this type of content.
The survey results also showed that nearly half of all respondents (49.3%) reported seeing non-harmful video deepfakes created for educational or entertainment purposes. By contrast, on average, 15% of people have been exposed to harmful deepfakes, including deepfake pornography, fraud and scams, as well as other potentially harmful content such as health or religious misinformation or propaganda.
When the researchers specifically asked about people’s exposure to common targets of deepfakes, they found that 50.2% had seen a deepfake of a celebrity online, compared with 34.1% having seen one featuring a politician.
They also found that while men were more likely to report having seen or heard a deepfake online, women were more likely to report being worried about becoming a target of harmful deepfakes.
Despite high awareness of deepfakes among the UK population, most people reported a lack of confidence in their ability to detect them. Even so, nearly 70% of people said they do still trust the genuineness of audio and visual content online.
And when asked about their experience of creating deepfakes, fewer than one in 10 (8%) said they have used tools to create them.
Tvesha Sippy, researcher at The Alan Turing Institute and lead author, said: “In just a few years, it’s clear that deepfakes have become a significant concern for the British public. Their rapid progress and improving believability have the potential to erode trust in online information.
“It’s vital that there is a concerted effort to find constructive ways to tackle the increase in the spread of fake content. The focus now should be on raising awareness of how to identify fake content and on improving media literacy.”
Respondents also had the opportunity to choose their preferred solutions to tackle the issue. Most commonly, they chose banning or suspending users who create harmful content, and requiring platforms to make it easier for people to report harmful deepfakes and request their removal.
Dr Jonathan Bright, head of online safety at The Alan Turing Institute, said: “It’s clear there’s a lot of concern about the use of deepfakes and their implications – not only in our electoral processes but also in their ability to spread false information quickly, and in a convincing way. People should always remember to check their sources when they see information online and be conscious that audio and video content they come across on social media might not always be authentic.”
In a recent study by the Turing, researchers recommended that regulators must urgently tackle the threats posed by AI ahead of July’s general election. While the researchers found limited evidence to date that AI has changed the outcome of an election relative to the expected result, they did find signs of early damage to the broader democratic system.