The term ‘deepfake’ usually refers to an AI-generated video, image or piece of audio content that is designed to mimic a real-life person or scene. The content might be created from scratch, or pre-existing content may have been manipulated. Deepfakes are often created with the intention to deceive or entertain viewers. Not all deepfakes are convincing, but as the technology improves, they are becoming more realistic and harder to detect.
Why can deepfakes be a problem?
As deepfakes become more realistic and widespread, it is becoming harder to tell what is real and what is fake, risking an erosion of public trust in all kinds of information sources. There is particular concern about the spread of deepfakes that perpetuate disinformation for electoral gain and deepfakes that mimic public figures.
The manipulation of content, such as photography or audio, also raises ethical issues around consent. This is particularly the case with explicit content such as deepfake pornography that is based on a real-life subject.
However, there are some potential positive uses. For example, video deepfakes could make visual effects more accessible to amateur filmmakers, or help human rights activists create content that maintains human expression while protecting their identities.
How are deepfakes made?
There are two main groups of methods used to create image and video deepfakes: generative adversarial networks (GANs) and diffusion models.
Generative adversarial networks
A GAN is composed of two models that play a game against each other. The first model, the generator, creates fake images or videos. The second model, the discriminator, is shown either real content or the generator’s fakes, and must decide which is which. The generator wins the game if the discriminator can’t tell that the generated content is fake. Playing this game over and over trains the generator to produce increasingly realistic content, whilst also improving the discriminator’s ability to judge correctly whether content is real.
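To make the adversarial game concrete, here is a minimal sketch of a GAN training loop in Python using PyTorch. It works on toy one-dimensional data rather than images, and the network sizes, learning rates and step counts are arbitrary choices for illustration, not settings from any real deepfake system.

```python
import torch
import torch.nn as nn

# Toy 'real' data: samples from a 1D Gaussian that the generator must learn to imitate.
def real_batch(n):
    return torch.randn(n, 1) * 0.5 + 2.0

# Generator: maps random noise to a fake sample.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: outputs the probability that a sample is real.
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # Train the discriminator: real data is labelled 1, generated data 0.
    real = real_batch(64)
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Train the generator: try to make the discriminator label its fakes as real.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Each round of this loop is one turn of the game described above: the discriminator gets better at spotting fakes, which in turn forces the generator to produce more convincing ones.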
Diffusion models
A diffusion model is trained to restore an image or video to its original state after visual ‘noise’ (random variations in brightness or colour that reduce an image’s quality) has been added. Some diffusion models are trained with guidance, such as text prompts encouraging them to generate particular images, whilst others learn to decide what the likeliest output will be on their own. The resultant models can ‘inpaint’ missing patches in an image, filling in the gaps with something plausible. Stable Diffusion and DALL-E 2 are both examples of diffusion models that take text prompts as part of their input. Diffusion models are newer than GANs and likely to become more prominent in deepfake generation, and there is some reason to believe they may be easier to train.
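The core training idea can be sketched in a few lines of Python with PyTorch: repeatedly blend clean data with random noise, and train a small network to predict the noise that was added so it can later be removed. This is a highly simplified, assumption-laden toy (one-dimensional data, a tiny network, arbitrary hyperparameters); real diffusion models use large image networks and many refinements.

```python
import torch
import torch.nn as nn

T = 100                                   # number of noising steps
betas = torch.linspace(1e-4, 0.02, T)     # how much noise each step adds
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

# Toy 'images': 1D points drawn from a simple distribution.
def clean_batch(n):
    return torch.randn(n, 1) * 0.3 + 1.0

# A tiny denoiser that predicts the noise added to a sample at step t.
denoiser = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

for step in range(3000):
    x0 = clean_batch(64)
    t = torch.randint(0, T, (64,))
    noise = torch.randn_like(x0)
    a = alphas_bar[t].unsqueeze(1)
    # Forward process: blend the clean sample with Gaussian noise.
    xt = a.sqrt() * x0 + (1 - a).sqrt() * noise
    # The network sees the noisy sample plus the step index, and predicts the noise.
    pred = denoiser(torch.cat([xt, t.unsqueeze(1).float() / T], dim=1))
    loss = ((pred - noise) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

Once trained, a model like this can be run in reverse, starting from pure noise and removing a little of it at each step until a plausible sample emerges, which is how diffusion models generate new content and fill in missing patches.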
How can we detect deepfakes?
Detecting deepfakes is becoming increasingly challenging as content becomes more widespread and realistic. Detectors can struggle to keep up with emerging generative AI methods for creating deepfakes, whilst the sheer volume of synthetic content online and in the media makes it difficult to sift through it all. However, there are several ways that deepfakes can be detected.
Some deepfaked images contain clear spatial and visual inconsistencies, such as differences in noise patterns, or colour differences between edited and unedited portions. Video and audio deepfakes, meanwhile, can be given away by time-based inconsistencies, such as mismatches between speech and mouth movements. Deepfake generation methods such as GANs and diffusion models can also leave detectable ‘fingerprints’ within the pixels of images or videos.
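One simple way researchers look for such fingerprints is in the frequency domain, since generated images often show unusual periodic structure there. The sketch below, in Python with NumPy and Pillow, computes an image’s spectrum and a crude high-frequency energy ratio; the filename is a placeholder and the ratio is only an illustrative signal for flagging images for closer inspection, not a reliable detector on its own.

```python
import numpy as np
from PIL import Image

# Load a suspect image as a greyscale array ('suspect.png' is a placeholder filename).
img = np.asarray(Image.open("suspect.png").convert("L"), dtype=np.float32)

# Generation artefacts often appear as periodic patterns in the frequency domain,
# so a common first step is to inspect the image's 2D Fourier spectrum.
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
log_spectrum = np.log1p(spectrum)

# Compare the energy outside the central (low-frequency) region with the total;
# an unusual ratio can be one signal worth investigating further.
h, w = log_spectrum.shape
centre = log_spectrum[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4]
high_freq_ratio = (log_spectrum.sum() - centre.sum()) / log_spectrum.sum()
print(f"High-frequency energy ratio: {high_freq_ratio:.3f}")
```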
Another way to detect deepfakes is through their distribution channels. For instance, when used for malicious purposes, deepfakes are often circulated on social media by bot and troll accounts. These accounts can partly be detected through their metadata (data describing the account itself, such as when it was created and how often it posts) and behaviour, avoiding the need to directly detect the deepfakes themselves.
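As a rough illustration of this account-level approach, the Python sketch below scores an account using a few hypothetical metadata fields and hand-picked thresholds. The fields, thresholds and weights are invented for illustration; real bot-detection systems combine far richer signals, often with machine learning.

```python
from dataclasses import dataclass

@dataclass
class Account:
    # Hypothetical metadata fields a platform might expose.
    age_days: int          # how long the account has existed
    posts_per_day: float   # average posting rate
    followers: int
    following: int

def bot_likeness(acc: Account) -> float:
    """Very rough heuristic score in [0, 1]; higher means more bot-like."""
    score = 0.0
    if acc.age_days < 30:
        score += 0.4                          # newly created account
    if acc.posts_per_day > 50:
        score += 0.4                          # implausibly high posting rate
    if acc.following > 10 * max(acc.followers, 1):
        score += 0.2                          # follows far more accounts than follow it
    return min(score, 1.0)

# Example: a week-old account posting 120 times a day looks highly suspicious.
print(bot_likeness(Account(age_days=5, posts_per_day=120, followers=3, following=800)))
```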
How realistic are deepfakes going to get?
Deepfakes used to be easier to spot. For example, some older deepfake videos contained people who didn’t blink. However, once a lack of blinking became a telltale sign of fake content, deepfakes featuring people who did blink soon appeared. In general, it is likely that deepfakes will continue to become more realistic, as deepfake creators respond to constant innovation in detection methods by improving their generation techniques.
Here at The Alan Turing Institute, we are involved in a number of projects aiming to tackle some of the issues raised by deepfakes. Researchers at the Turing’s Centre for Emerging Technology and Security, for example, have been exploring how we can stop AI threats damaging democracy, whilst work within the Turing’s online safety team focuses on how people can protect themselves against online misinformation.
The Turing’s Applied Research Centre for Defence and Security recently supported the UK government’s Home Office to launch the ‘Deepfake Detection Challenge’. The event brought together participants from government, policing, technology companies and academia to discuss real-life case studies highlighting the challenges that deepfakes pose across different government sectors.
Deepfakes are here to stay, but work at the Turing and research centres around the globe seeks to limit their harms and make the online world a safer, more trustworthy space for all.
Top image: terovesalainen