Turing Lecture: Regulating unreality

Thursday 11 Jul 2019
Time: 18:00 - 20:00

Event type

Lecture
£4.60

Event series

The Turing Lectures

Introduction

In collaboration with the Barbican. Please note that bookings for this lecture are through the Barbican.

About the event


“Deepfakes”, the use of AI to convincingly simulate or synthesise content, voice, images or video for malicious purposes (Nesta, 2018), have become prominent recently, perhaps most obviously as a means of creating realistic but fake pornography involving celebrities or particular victims.

As such, there has already been some discussion regarding whether these “unreal” products constitute criminal material such as revenge porn (better referred to as non-consensual sharing of private or sexual images) or images of child sexual abuse (“child pornography”) (Chesney and Citron, 2018).

Its implications are, however, far greater. Techniques for generating deepfakes are evolving in response to a parallel arms race with detection techniques, and may eventually produce a world where the problems currently experienced with “fake news” extend to everything we see, hear and experience, not just the news we read. Obvious areas where this may impact on the law include the law of evidence; the law of intellectual property, primarily copyright; and fraudulent misrepresentation and anti-consumer scams; but these will be only the start of a deluge once our experienced reality becomes, inscrutably, a constructed and manipulated text.

We suggest that a number of governance approaches should be proactively examined to try to forestall the impact of deepfakes on reality. These include:

  • The “Scottish revenge porn” paradigm (legal fictions to allow for uncertainty of what is real)
  • The “best evidence” or certification paradigm (corroboration as a rule of evidence; certification and photo DNA as ways of establishing what is real)
  • The spam paradigm (technological, rather than legal, solutions as the best governance approach)
  • The Data Protection paradigm (manipulation of *personal* data is already adequately governed)
  • The “fake news” paradigm (marketing/political manipulation as the main target of governance, not privacy, criminal justice or evidence)
  • The ethical or “Black Mirror” paradigm (even if legal prohibition is not desirable or practical, should there be ethical constraints? For example, should we let people market synthesised voices of the dead for private or sentimental use?)

Finally, in the light of work on transparency as to whether robots/bots are persons (EPSRC Principles of Robotics; California Senate Bill No. 1001, 2018), we consider whether a duty to certify or separate reality from unreality, or to filter unreality out, to the standard of comprehension of a reasonably perceptive person, should be imposed on service providers and platforms. Should the right to know what is real be a new human right?


Agenda

18:00 - 18:30  Registration

18:30 - 18:35  Introduction

18:35 - 19:35  Regulating unreality, Professor Lilian Edwards (Newcastle Law School, UK)

19:35 - 19:55  Q&A


Location

Barbican Centre (Frobisher Auditorium)
Silk Street
London
EC2Y 8DS
