Assessing and mitigating bias and discrimination in AI: Beyond binary classification

A guide to evaluating and mitigating bias in AI systems, going beyond binary classification tasks.

Duration

21–30 hours

Level

Learner

Course overview

Artificial Intelligence is widely used in sensitive domains such as healthcare, insurance, recruitment and credit scoring. In these settings, it is imperative that algorithms work fairly for all target users, without discriminating against certain groups. The responsible AI community has been active in creating tools and techniques for measuring and mitigating bias, but most of the literature so far has focused on binary classification tasks. This course extends beyond binary classification: it covers how to measure and mitigate bias in a variety of tasks, including multiclass classification, regression, recommender systems and clustering.
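To give a flavour of what "beyond binary classification" means in practice, here is a minimal sketch (illustrative only, not course material; the function name and metric choices are assumptions) of one common regression fairness check: comparing average predictions and average errors across two demographic groups, analogous to demographic-parity and error-rate gaps in the binary setting.

```python
import numpy as np

def groupwise_gaps(y_true, y_pred, group):
    """Compare mean prediction and mean absolute error across two groups.

    A large gap in either quantity is one simple signal of bias in a
    regression model: the first is a regression analogue of demographic
    parity, the second of error-rate (sufficiency-style) gaps.
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    pred_a, pred_b = y_pred[group == 0], y_pred[group == 1]
    mae_a = np.abs(y_true[group == 0] - pred_a).mean()
    mae_b = np.abs(y_true[group == 1] - pred_b).mean()
    return {
        "mean_prediction_gap": float(abs(pred_a.mean() - pred_b.mean())),
        "mae_gap": float(abs(mae_a - mae_b)),
    }

# Toy example: the model systematically under-predicts for group 1.
y_true = np.array([10.0, 12.0, 11.0, 10.0, 12.0, 11.0])
y_pred = np.array([10.0, 12.0, 11.0,  8.0, 10.0,  9.0])
group  = np.array([0, 0, 0, 1, 1, 1])
gaps = groupwise_gaps(y_true, y_pred, group)
# gaps == {"mean_prediction_gap": 2.0, "mae_gap": 2.0}
```

The course covers richer definitions and mitigation techniques; this sketch only illustrates that the familiar "compare groups" idea carries over once predictions are continuous rather than binary.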

This course has been commissioned as part of our open funding call for Responsible AI courses, with funding from Accenture and the Alan Turing Institute.

Who is this course for?

The course is designed for a technical audience: data science and machine learning practitioners or researchers who are concerned about the fairness of their algorithms. Learners are expected to know basic linear algebra, machine learning and programming. The programming language used for the exercises is Python.

Additionally, the course is built as an extension of a previous course on "Assessing and Mitigating Bias and Discrimination in AI". We therefore expect learners to be familiar with basic concepts of fairness, and how to measure and mitigate bias in binary classification tasks.

Learning outcomes

By the end of the course, students will be able to:

  • explain why fairness is an issue in regression, multiclass classification, recommender systems and clustering tasks
  • describe the different definitions of fairness that apply to these tasks
  • measure and mitigate bias in regression, multiclass classification, recommender systems and clustering tasks
  • apply these techniques to a range of case studies and practical programming exercises
  • explain how robustness, privacy and explainability interact with fairness in regression, multiclass classification, recommender systems and clustering tasks

License

This course is released under a CC BY 4.0 license.
Materials can also be found on GitHub.

 

You may also be interested in:

Article: The Best Technical Resources for Bias Mitigation

Course: Assessing and Mitigating Bias and Discrimination in AI 

Details

1. Bias and Fairness in Regression

Section 1.1: Introduction to Regression
Section 1.2: Fairness in Regression
Section 1.3: Measuring Bias in Regression

2. Bias and Fairness in Multiclass Classification

Section 2.1: Introduction to Multiclass Classification
Section 2.2: Fairness in Multiclass Classification
Section 2.3: Measuring Bias in Multiclass Classification

3. Bias and Fairness in Recommender Systems

Section 3.1: Introduction to Recommender Systems
Section 3.2: Fairness in Recommender Systems
Section 3.3: Measuring Bias in Recommender Systems
Section 3.4: Mitigating Bias in Recommender Systems

4. Bias and Fairness in Clustering

Section 4.1: Introduction to Clustering
Section 4.2: Fairness in Clustering
Section 4.3: Measuring Bias in Clustering
Section 4.4: Mitigating Bias in Clustering

5. Trade-offs of Bias with Other Verticals of Trustworthy AI

Section 5.1: Trade-offs of Bias with Other Verticals (Regression and Multiclass Classification)
Section 5.2: Trade-offs of Bias with Other Verticals (Clustering)
Section 5.3: Trade-offs of Bias with Other Verticals (Recommender Systems)

6. Case Study Exercises

Case Study Exercise 1: Regression
Case Study Exercise 2: Multiclass Classification
Case Study Exercise 3: Recommender Systems
Case Study Exercise 4: Clustering
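As a taste of the kind of metric the clustering exercises involve, here is a minimal sketch (illustrative, not course code; the function name is an assumption) of the "balance" notion commonly used in fair clustering: a cluster is well balanced when it contains demographic groups in similar proportions.

```python
from collections import Counter

def cluster_balance(labels, groups):
    """Balance of a clustering for two demographic groups (0 and 1).

    For each cluster, take min(n0/n1, n1/n0), where n0 and n1 are the
    counts of each group in that cluster; the clustering's balance is
    the minimum over clusters. 1.0 means every cluster is perfectly
    mixed; 0.0 means some cluster contains only one group.
    """
    balances = []
    for c in set(labels):
        counts = Counter(g for l, g in zip(labels, groups) if l == c)
        n0, n1 = counts.get(0, 0), counts.get(1, 0)
        if n0 == 0 or n1 == 0:
            balances.append(0.0)
        else:
            balances.append(min(n0 / n1, n1 / n0))
    return min(balances)

# Toy example with two clusters of four points each.
labels = [0, 0, 0, 0, 1, 1, 1, 1]
groups = [0, 0, 1, 1, 0, 1, 1, 1]
# Cluster 0 has two points from each group (balance 1.0);
# cluster 1 has one from group 0 and three from group 1 (balance 1/3).
b = cluster_balance(labels, groups)  # -> 1/3
```

Fair clustering algorithms covered in module 4 aim to keep this kind of quantity high while preserving clustering quality.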

Instructors