Introduction
In the lecture series 'Driving data futures', the public policy programme of The Alan Turing Institute invites audiences to learn about and critically engage with new research at the intersection of emerging technologies, public policy, and ethics.
In this second event of the series, we will focus on the human dimension and limitations of AI explainability. Our goal is to open a conversation about the ethical stakes of algorithmically supported decision-making by placing AI explanations in the real-world contexts of the lives and communities they affect.
About the event
Agenda:
17:15 – Doors open
17:30 – 17:35 – Introduction
17:35 – 17:45 – Dr Reuben Binns – Explaining generalisation and individual justice
17:45 – 17:55 – Dr Alison Powell – The limits of explainability
17:55 – 18:05 – Dr David Leslie – AI explanation and the content lifecycle
18:05 – 18:30 – Speakers discuss and comment on each other's talks
18:30 – 19:00 – Q&A with the audience
To consider the human face of AI explainability, this event will serve as a platform for open, interdisciplinary discussion of its importance, limits, and implications.
Dr Reuben Binns will talk about explainability in the context of individual justice. Many machine learning applications rely on large datasets – information drawn from other people – on the basis of which they make statistical inferences about individual decision subjects. Dr Binns will discuss the challenges and limitations that justice in the individual case poses for the explanation and justification of data-driven decisions.
Dr Alison Powell will speak about the limits of explainability, reflecting on the technical and societal contexts in which explanations can occur. Looking beyond current explainability narratives, Dr Powell will explore how a limited view of explainability as a practice could continue to entrench the interests of a narrow set of actors within the AI industry.
Dr David Leslie will speak about the central role that human interpretation and evaluation should play in the explanation of AI-supported decisions. Dr Leslie will argue for the importance of translating the technical machinery of AI systems – their variables, inferences, and functional rationale – back into the everyday language of the socially relevant meanings that informed the purposes and objectives of their design in the first place. He will claim that only by undertaking this critical and holistic task of re-translation will implementers be able to justifiably apply algorithmically generated results to the concrete contexts of the human lives they impact.