Introduction

The ability to interpret a machine learning model's predictions engenders user trust and supports understanding of the underlying processes being modeled. In many application domains, such as medicine, insurance and criminal justice, model interpretability and explainability can be a crucial requirement for the deployment of machine learning, since a model's predictions inform critical decision-making. Unfortunately, most state-of-the-art models, such as ensemble models, kernel methods, and neural networks, are perceived as complex "black boxes" whose predictions are too difficult to interpret.

About the event

In this seminar, we will outline the challenges in achieving machine learning model interpretability, explainability and trustability. We will then present research progress in turning "black-box" models into "white-box" models. We will also introduce key ideas on how to develop more interpretable algorithms for risk prediction, time-series prediction and treatment effect estimation, as well as how to test and communicate whether the goals of interpretability, explainability and trustability have been achieved. We will conclude by defining the research agenda that lies ahead.

Agenda

18:00-18:30 - Registration

18:30-19:30 - Machine learning interpretability, explainability and trustability - Mihaela van der Schaar and Sir Alan Wilson

19:30-19:50 - Q&A

19:50-20:30 - Close

Speakers

Mihaela van der Schaar
Sir Alan Wilson

Organisers

Location

1 Wimpole St

London, W1G 0LZ
