AI Ethics and Governance in Practice: AI Explainability in Practice

The seventh workbook in the AI Ethics and Governance in Practice Programme.

Abstract

In 2021, the UK's National AI Strategy recommended that the UK Government's official Public Sector Guidance on AI Ethics and Safety be transformed into a series of practice-based workbooks. The result is the AI Ethics and Governance in Practice Programme. This series of eight workbooks provides end-to-end guidance on how to apply principles of AI ethics and safety to the design, development, deployment, and maintenance of AI systems. It provides public sector organisations with a Process Based Governance (PBG) Framework designed to assist AI project teams in ensuring that the AI technologies they build, procure, or use are ethical, safe, and responsible.

This is the seventh workbook in the series. Its purpose is to introduce participants to the principle of AI Explainability. Understanding how, why, and when explanations of AI-supported or -generated outcomes need to be provided, and what impacted people's expectations are about what these explanations should include, is crucial to fostering responsible and ethical practices within your AI projects.

To guide you through this process, we will address two essential questions: What do we need to explain? And who do we need to explain this to? This workbook offers practical insights and tools to facilitate your exploration of AI Explainability. By providing actionable approaches, we aim to equip you and your team with the means to identify when and how to employ various types of explanations effectively.

Citation information

Leslie, D., Rincón, C., Briggs, M., Perini, A., Jayadeva, S., Borda, A., Bennett, SJ., Burr, C., Aitken, M., Mahomed, S., Wong, J., Waller, M., and Fischer, C. (2024). AI Explainability in Practice. The Alan Turing Institute.

Turing affiliated authors

SJ Bennett

Research Associate, Data Justice and Global Ethical Futures

Dr Christopher Burr

Innovation and Impact Hub Lead (TRIC-DT), Senior Researcher in Trustworthy Systems (Tools, Practices and Systems)