Driving data futures: AI explainability with a human face

Wednesday 18 Sep 2019
Time: 17:15 - 19:00

Event series: Driving data futures


In the lecture series 'Driving data futures', the public policy programme of The Alan Turing Institute invites audiences to learn about and critically engage with new research at the intersection of emerging technologies, public policy, and ethics.

In this second event of the series, we will focus on the human dimension and the limitations of AI explainability. Our goal is to foster a conversation about the ethical stakes of algorithmically supported decision-making by placing AI explanations in the real-world contexts of the lives and communities they affect.

Event schedule


17:15 – Doors open

17:30 – 17:35 – Introduction

17:35 – 17:45 – Dr Reuben Binns – Explaining generalisation and individual justice

17:45 – 17:55 – Dr Alison Powell – The limits of explainability

17:55 – 18:05 – Dr David Leslie – AI explanation and the content lifecycle

18:05 – 18:30 – Speaker discussion and comments on each other’s talks

18:30 – 19:00 – Q&A with the audience

About the event

To consider the human face of AI explainability, this event will serve as a platform for open and interdisciplinary discussion about the importance, limits, and implications of explainability.

Dr Reuben Binns will talk about explainability in the context of individual justice. Many machine learning applications rely on large datasets – information from other people – on the basis of which they make statistical inferences regarding individual decision subjects. Dr Binns will discuss the challenges and limitations that justice in the individual case presents for explanation and justification of data-driven decisions.

Dr Alison Powell will speak about the limits of explainability, reflecting on the technical and societal contexts in which explanations can occur. Looking beyond current explainability narratives, Dr Powell will explore how a limited view of explainability as a practice could continue to reiterate the interests of a narrow set of actors within the AI industry.

Dr David Leslie will speak about the central role that human interpretation and evaluation should play in the explanation of AI-supported decisions. Dr Leslie will argue for the importance of translating the technical machinery of AI systems - their variables, inferences, and functional rationale - back into the everyday language of the socially relevant meanings that informed the purposes and objectives of their design in the first place. He will claim that only by undertaking this critical and holistic task of re-translation will implementers be able to justifiably apply algorithmically generated results to the concrete contexts of the human lives they impact.

Professor David Leslie

Director of Ethics and Responsible Innovation Research at The Alan Turing Institute and Professor of Ethics, Technology and Society, Queen Mary University of London

Dr Reuben Binns

Postdoctoral Research Fellow in Artificial Intelligence with the Information Commissioner’s Office (ICO)