About the event
Increasingly, algorithms are shaping the way we see the world. They are being deployed to make decisions about sensitive parts of our lives, from our eligibility for a loan to the length of our sentence if we commit a serious crime. But how does algorithmic decision-making work, how can we know how decisions are reached, and how can we tell whether they are fair?
The demand for transparency, validation and explainability in automated advice systems is not new. Back in the 1980s, extensive debates took place between proponents of rule-based systems and proponents of systems based on statistical analysis, partly over which approach was more transparent and how such systems should be evaluated. More recently, Onora O'Neill's emphasis on demonstrating trustworthiness, and her idea of 'intelligent transparency', has focused attention on the ability of algorithms to show their workings when required.
In this talk, Professor Spiegelhalter will argue that we should ideally be able to check (a) the basis for the algorithm, (b) its past performance, (c) the reasoning behind its current claim, and (d) its uncertainty around that claim, and (e) that these explanations should be accessible at different levels of expertise. These ideas will be illustrated by the Predict system for women choosing follow-up treatment after surgery for breast cancer, which offers four levels of explanation of its conclusions.
David Spiegelhalter is Winton Professor for the Public Understanding of Risk at Cambridge University. He works to improve the way in which risk and statistical evidence are taught and discussed in society, and makes frequent media appearances. In 2017-2018 he was President of the Royal Statistical Society, and in 2011 he came 7th in an episode of Winter Wipeout.