Launching guidance from Project ExplAIn

At the cutting edge of practice-centred guidance on explainable AI

Monday 02 Dec 2019

Introduction

AI and machine learning technologies are helping people do remarkable things. From assisting doctors in the early detection of diseases and supporting scientists who are wrestling with climate change to bringing together diverse groups from around the globe through real-time speech-to-speech translation, AI systems are enabling humans to successfully confront an ever-widening range of societal challenges.

This progress has, however, brought with it a new set of difficulties. Many machine learning applications, such as those in natural language processing and computer vision, complete their assigned tasks by identifying subtle patterns in large datasets, linking together hundreds, thousands, or even millions of data points at a time. Humans do not reason this way, and so have difficulty understanding and explaining how these sorts of AI systems reach their results.

This gap in AI explainability becomes crucial when the outcomes of AI-assisted decisions have a significant impact on affected individuals and their communities. If an AI system is opaque, there is no way to ensure that its data processing is robust, reliable and safe. Similarly, where social or demographic data are used as inputs to AI decision-support systems (for instance, in domains such as criminal justice, social care or job recruitment), the use of 'black box' models leaves designers and deployers with no way to properly safeguard against lurking biases that may produce inequitable or discriminatory results.

Over the last year, The Alan Turing Institute and the Information Commissioner’s Office (ICO) have been working together to discover ways to tackle these difficult issues. The ultimate product of this joint endeavour—the most comprehensive practical guidance on AI explanation produced anywhere to date—has now been released for consultation. The consultation runs until 24 January 2020, with the final guidance due to be released later in the year.

Where did this come from?

The project underpinning this work, Project ExplAIn, came about as a result of Professor Dame Wendy Hall and Jérôme Pesenti’s 2017 independent review on growing the AI industry in the UK. This was followed in 2018 by the Government’s AI Sector Deal, which tasked the ICO and the Turing to “…work together to develop guidance to assist in explaining AI decisions.”

In February 2019, two five-day citizens' juries on AI explanation were staged in Coventry and Manchester. These were designed to elicit public preferences about what people expect from explanations of AI-assisted decisions. The juries used a deliberative format with the assistance of expert witnesses, who provided jurors with background information about the technical, legal and ethical dimensions of AI explainability. The juries were followed by three roundtables, where the citizens' feedback was presented to, and then discussed by, a range of academic and industry stakeholders, from data scientists and researchers to data protection officers, C-suite executives and lawyers. The results of these public engagement activities, together with extensive desk research, have provided the basis for the guidance.

Why is this guidance necessary?

Increasingly, organisations are using AI to help them make decisions. Where they process personal data to do this, they must comply with certain parts of the General Data Protection Regulation. Moreover, where their AI-assisted decisions raise the possibility of discrimination on the basis of protected characteristics such as age, disability or race, organisations must also comply with the Equality Act 2010.

But beyond this, an organisation's capacity to explain its AI-assisted decisions to those affected by them builds trust with the public. It also improves the transparency and accountability of internal governance processes, since an informed workforce can maintain oversight of what these systems do and why. Society benefits too: prioritising the design of explainable AI models can improve their reliability, safety and robustness, and can help surface potential issues of bias in these systems and in the data they use, which can then be addressed and possibly mitigated.

How will it help?

Wherever organisations use personal data to make AI-assisted decisions, they should be able to explain those decisions to the people affected by them. The guidance we have produced provides an accessible overview of the key principles, concepts and tools that can help organisations provide explanations in practice.

What’s in the guidance?

At the heart of the guidance is a series of related questions: What makes for a good explanation of decisions supported by AI systems? How can such explanations be reliably extracted and made understandable to a non-technical audience? How should organisations go about providing meaningful explanations of the AI-supported decisions they make? What do the people affected by these decisions deserve, desire and need to know?

The main focus of the guidance is the need to tailor explanations to the context in which AI systems are used for decision-making. This vital contextual aspect includes the domain or sector in which an organisation operates, and the individual circumstances of the person receiving the decision.

The guidance also stresses a principles-based approach to the governance of AI explanations. We present four principles of explainability that provide ethical underpinnings for the guidance and that steer the practical recommendations contained in it:

  • Be transparent: Be open and candid regarding how and where your organisation uses AI decision-support systems and provide meaningful explanations of their results.
  • Be accountable: Ensure appropriate oversight of AI decision-support systems and be answerable to others in your organisation, to external bodies, and to the individuals affected by AI-assisted decisions.
  • Consider context: Choose AI models and explanations that are appropriate to the settings and potential impacts of their use-cases, and tailor governance processes to the structures and management processes of your organisation.
  • Reflect on impacts: Weigh up the ethical purposes and objectives of your AI project at the initial stages of formulating the problem and defining the outcome, and think about how the system may affect the wellbeing of individuals and wider society.

Building on these principles, we identify a number of explanation types, each covering a different facet of an explanation; they will often be used in concert with one another:

  • Responsibility: who is involved in the development and management of an AI system, and who to contact for a human review of a decision.
  • Rationale: the reasons that led to a decision, delivered in an accessible way.
  • Fairness: steps taken to ensure that AI decisions are generally unbiased and fair, and whether or not an individual has been treated equitably.
  • Safety and performance: steps taken to maximise the accuracy, reliability, security and robustness of the decisions the AI system helps to make.
  • Impact: the effect that the AI system has on an individual, and on wider society.
  • Data: what data has been used in a particular decision, and what data has been used to train and test the AI model.

For organisations, the emphasis is on how to set up and govern the use of AI systems so that they are suitably transparent and accountable, and on prioritising, where appropriate, inherently explainable AI models before turning to less interpretable 'black box' systems. We outline the art of the possible in these considerations, to help the governance and technical teams in organisations think about how to extract explanations from their AI systems; a brief illustrative sketch follows below.
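To make the idea of an 'inherently explainable' model concrete, here is a minimal sketch, not taken from the guidance itself, of how a technical team might surface a rationale explanation from an interpretable model. The library (scikit-learn), the feature names and the synthetic data are illustrative assumptions only.

```python
# A minimal sketch (not from the guidance itself) of favouring an inherently
# interpretable model and reading a rationale explanation from it.
# Assumes scikit-learn is available; the feature names and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features for an AI-assisted lending decision.
feature_names = ["income", "years_employed", "existing_debt"]
X = rng.normal(size=(500, 3))
# Synthetic outcomes: approvals driven by income and employment, penalised by debt.
y = (1.2 * X[:, 0] + 0.8 * X[:, 1] - 1.5 * X[:, 2]
     + rng.normal(scale=0.5, size=500)) > 0

# Logistic regression is inherently interpretable: each coefficient states how a
# feature pushes the decision, which supports a 'rationale' explanation.
model = LogisticRegression().fit(X, y)

applicant = np.array([[0.2, -1.0, 1.5]])       # one hypothetical applicant
decision = model.predict(applicant)[0]
contributions = model.coef_[0] * applicant[0]  # per-feature contribution to the score

print(f"Decision (1 = approve): {int(decision)}")
for name, weight, contrib in zip(feature_names, model.coef_[0], contributions):
    print(f"{name:>15}: weight {weight:+.2f}, contribution {contrib:+.2f}")
```

In practice, a deployed system would translate such weights into plain-language reasons, layered and tailored to the contextual factors described next.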

When delivering an explanation to the affected individual, a number of contextual factors will inform what they should be told first and what information should be made available separately. We call this approach 'layering' explanations, and it is designed to avoid information overload. These contextual factors are:

  • Domain: the setting or sector in which the AI system is deployed to help make decisions about people. What people want to know in the health sector will be very different to the explanation they will want in the criminal justice domain.
  • Impact: the effect an AI-assisted decision can have on an individual. Varying levels of severity and different types of impact can change what explanations people will find useful, and the purpose the explanation serves.
  • Data: the data used to train and test an AI model, and the input data used for a particular decision. The type of data used can influence an individual’s willingness to accept or contest an AI-assisted decision, and the actions they take as a result of it.
  • Urgency: the importance of receiving, or acting upon, the outcome of a decision within a short timeframe.
  • Audience: the individuals the explanation is being given to will influence what type(s) of explanation will be useful.

How has the Turing been involved?

Our Ethics Fellow David Leslie and the public policy programme have incorporated the state of the art in the responsible design and implementation of interpretable AI systems into this guidance. They have also drawn from the results of the Project ExplAIn citizens’ juries and existing frameworks such as the Turing’s own Understanding Artificial Intelligence Ethics and Safety to provide the strong ethical foundations that underpin it.

How can I feed in to the ICO’s consultation?

The guidance is intended to be a useful and inclusive tool, so the ICO and the Turing welcome comments from members of the public, experts, and practitioners who are developing and deploying AI systems. You can find details on responding to the consultation here.


About the authors: David Leslie is the Ethics Fellow in the public policy programme at The Alan Turing Institute. Helena Quinn is a Senior Policy Officer at the Information Commissioner's Office, currently on secondment from The Alan Turing Institute.


Explaining decisions made with AI

Co-badged guidance from the Information Commissioner's Office and The Alan Turing Institute

This infographic video, produced by Fable Studios, is an introduction to 'Explaining decisions made with AI', guidance co-produced by the Information Commissioner's Office and The Alan Turing Institute. The video explains why explainable AI matters, introduces the four principles of AI explainability, and describes the six explanation types that are meant to help organisations deliver understandable explanations to relevant stakeholders. Its purpose is to provide an accessible entry point to the guidance and to direct viewers to the full version, where they can learn more about how to implement it in practice.

Funders

UKRI