Behind the Deepfake: 8% Create; 90% Concerned

Surveying public exposure to and perceptions of deepfakes in the UK

Abstract

This article examines public exposure to and perceptions of deepfakes based on insights from a nationally representative survey of 1,403 UK adults. The survey is one of the first of its kind conducted since recent improvements in deepfake technology and the widespread circulation of political deepfakes. The findings reveal three key insights.

First, on average, 15% of people report exposure to harmful deepfakes, including deepfake pornography, deepfake frauds/scams and other potentially harmful deepfakes, such as those that spread health or religious misinformation or propaganda. In terms of common targets, 50.2% of respondents reported exposure to deepfakes featuring celebrities, while 34.1% reported exposure to deepfakes featuring politicians. Analysing gender and age effects, we find that men and younger people were more likely to report being exposed to deepfakes, and 5.7% of respondents recalled exposure to a selection of high-profile political deepfakes in the UK. Second, while exposure to harmful deepfakes was relatively low, awareness of and fears about deepfakes were high (and women were significantly more likely than men to report experiencing such fears). As with fears, general concern about the spread of deepfakes was also high: 90.4% of respondents were either very concerned or somewhat concerned about this issue. Specifically, most respondents (at least 91.8%) were concerned that deepfakes could add to online child sexual abuse material, increase distrust in information and manipulate public opinion. Third, while awareness of deepfakes was high, usage of deepfake tools was relatively low (8%), and most respondents were not confident in their ability to detect deepfakes. At the same time, most people were trusting of audiovisual content online.

Our work highlights how the problem of deepfakes has become embedded in public consciousness in just a few years; it also underscores the need for media literacy programmes and other policy interventions to address the spread of harmful deepfakes.