Fairness and Responsibility in Human-AI Interaction in Medical Settings

Preparing the healthcare workforce for AI-augmented clinical decision making.

Duration

11–20 hours

Level

Learner

Course overview

AI has transformative potential in healthcare: it can deliver consistent, high-quality care for patients while relieving workload pressures on healthcare systems. Realising this potential has been challenged by issues of algorithmic trustworthiness, confidence and fairness around the use of AI for augmented clinical reasoning and decision making (CRDM). The UK's NHS AI Lab, in partnership with Health Education England (HEE), has recently published a report highlighting the challenges around model bias, transparency, explainability and cognitive bias in human-AI interaction for AI-augmented CRDM. Education on these issues is generally lacking amongst healthcare professionals, leading to reduced confidence and increased susceptibility to cognitive biases, which can exacerbate inequalities and reduce fairness.

This course has been commissioned as part of our open funding call for Responsible AI courses, with funding from Accenture and the Alan Turing Institute.

Who is this course for?

Clinicians and other stakeholders in clinical settings who use AI for augmented clinical reasoning and decision making.

Learning outcomes

By the end of this course, learners will be able to:

  • Understand the basic components of the lifecycle of machine learning systems 
  • Understand key ethical concepts (e.g. fairness, justice, beneficence) with an inclusive approach 
  • Evaluate the ethical risks and benefits of using AI in healthcare settings 
  • Apply examples and models of widespread human decision-making biases to their own decisions in a healthcare setting 
  • Weigh the benefits and risks of employing AI recommendation systems in their clinical decisions 
  • Understand the potential for AI tools to affect human biases in clinical decision making, and their impact on fairness and equality 
  • Use AI to reach more accurate and more ethical decisions in their medical practice

Course details

The course is divided into two main modules.

Module 1 is designed to provide a solid understanding of the central issues around AI and its ethical challenges, particularly for clinicians and stakeholders in clinical settings.

  • Lesson 1.1 Understanding the basic components of the lifecycle of machine learning systems 
  • Lesson 1.2 Ethical risks in AI 
  • Lesson 1.3 Biases in AI 
  • Lesson 1.4 Ethical risks of AI in clinical and medical settings

Module 2 is divided into six scenarios on human-AI interactions in medical settings:

  • Triage tool for ER chest pain assessment
  • Urgent care radiological detection of stroke
  • Oncology treatment strategy recommendation engine
  • CT lung nodule detection for radiology
  • Automated dermatology reporting
  • Chest X-ray diagnostic assistance algorithm

The scenarios are designed to challenge and test the learner's behaviour against a set of metrics that describe fundamental aspects of the decision-making process in specific clinical settings. The aim is not to provide a rigid right/wrong assessment, but to produce a rigorous quantitative description of the learner's choices.
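The course page does not specify these metrics. Purely as an illustrative sketch, where the `Decision` record, the `summarise` helper and the three metrics are all hypothetical rather than taken from the course materials, a scenario's quantitative summary of a learner's choices might resemble the following:

```python
# Hypothetical sketch only: the course does not publish its metrics.
# It illustrates the kind of quantitative description a scenario could
# produce from a learner's recorded choices, rather than a right/wrong score.
from dataclasses import dataclass

@dataclass
class Decision:
    learner_choice: str      # action the learner took, e.g. "admit"
    ai_recommendation: str   # action the AI tool suggested
    reference_answer: str    # panel-agreed best action for the case

def summarise(decisions: list[Decision]) -> dict[str, float]:
    """Return simple behavioural metrics over a set of scenario decisions."""
    n = len(decisions)
    agreement = sum(d.learner_choice == d.ai_recommendation for d in decisions) / n
    accuracy = sum(d.learner_choice == d.reference_answer for d in decisions) / n
    # Over-reliance: following the AI even where it departs from the reference.
    over_reliance = sum(
        d.learner_choice == d.ai_recommendation != d.reference_answer
        for d in decisions
    ) / n
    return {"agreement": agreement, "accuracy": accuracy, "over_reliance": over_reliance}

print(summarise([
    Decision("admit", "admit", "admit"),
    Decision("discharge", "discharge", "admit"),
    Decision("admit", "discharge", "admit"),
]))
# -> agreement 2/3, accuracy 2/3, over_reliance 1/3
```

A profile like this describes how a learner interacts with an AI recommendation (for example, how often they follow it when it is wrong) without reducing the exercise to a single pass/fail score.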

License

This course is released under a CC BY 4.0 license.
Materials can also be found on GitHub.

Image by Alan Warburton / © BBC / Better Images of AI / Medicine / CC-BY 4.0

Instructors