Introduction
The ability to interpret a machine learning model’s predictions builds user trust and supports understanding of the underlying processes being modeled. In many application domains, such as medicine, insurance and criminal justice, model interpretability and explainability can be a crucial requirement for deploying machine learning, since a model’s predictions inform critical decision-making. Unfortunately, most state-of-the-art models, including ensemble models, kernel methods and neural networks, are perceived as complex “black boxes” whose predictions are difficult to interpret.
About the event
In this seminar, we will outline the challenges in achieving machine learning model interpretability, explainability and trustability. We will then present research progress in turning “black-box” models into “white-box” models. We will also introduce key ideas on developing more interpretable algorithms for risk prediction, time-series prediction and treatment effect estimation, as well as on testing and communicating whether the goals of interpretability, explainability and trustability have been achieved. We will conclude by defining the research agenda that lies ahead.
Agenda
18:00 - 18:30 - Registration, tea and coffee
18:30 - 18:35 - Introduction
18:35 - 19:25 - Machine learning interpretability, explainability and trustability
19:25 - 19:40 - Q&A
19:40 - 20:30 - Networking reception