Overview
Transparency in the responsible development of AI systems concerns two aspects: (1) the justifiable design of the AI system and (2) the ability of the AI system to explain its decisions. This transparency contributes to establishing mutual trust in human-machine collaboration and places the design and development of transparent AI in the interdisciplinary context of social interaction. From the machine learning point of view, some promising approaches to transparent AI now use natural language to explain their decisions, which helps increase the trust that human users place in the machine. However, these approaches have not yet been widely adopted by industry.
This course aims to familiarise participants with the main techniques for designing explainable and transparent AI systems, and to enable them to apply these techniques in practice. It will also enable AI practitioners to build NLP classification models that explain their decisions in natural language.
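To illustrate the kind of model the course targets, the sketch below shows one simple way a text classifier can justify a prediction in natural language: attributing the decision of a linear model to its most influential words. This is a minimal, hypothetical example assuming scikit-learn and toy data, not course material, and it stands in for the richer explanation-generation techniques the course covers.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training data; a real exercise would use a proper labelled corpus.
texts = ["great service and friendly staff",
         "terrible food, rude waiter",
         "lovely atmosphere, will return",
         "awful experience, never again"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

def explain(text, top_k=3):
    """Predict a label and explain it by citing the most influential words."""
    x = vectorizer.transform([text])
    pred = clf.predict(x)[0]
    # Contribution of each word = its TF-IDF weight times the model coefficient.
    contrib = x.toarray()[0] * clf.coef_[0]
    order = np.argsort(contrib)
    if pred == 1:
        order = order[::-1]  # largest positive contributions first
    vocab = vectorizer.get_feature_names_out()
    # Keep only words whose contribution actually supports the prediction.
    words = [vocab[i] for i in order[:top_k]
             if contrib[i] != 0 and (contrib[i] > 0) == (pred == 1)]
    sentiment = "positive" if pred == 1 else "negative"
    return f"Predicted {sentiment}, mainly because of: {', '.join(words)}."

print(explain("friendly staff but awful food"))

For a linear model, the product of a feature's value and its coefficient is an exact decomposition of the decision score, which is why this simple attribution is faithful; for non-linear models, the course's more general explanation techniques are needed.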
Relevance
The course focuses on informing research students, system designers and AI practitioners about the key principles of explainable AI system design for human-in-the-loop applications. This grounding will better enable them to develop their systems responsibly, taking into account a range of factors: the stakeholders involved (users, technical experts, management, clients), the work setting, and effective teamwork.