In 2019, Apple launched its first credit card. The fanfare ended abruptly when the card became the centre of a scandal: the ML algorithm behind it was accused of assigning gender-biased credit limits. The controversy arose in part because Apple did not understand the algorithm it employed, instead treating it as a “black box”. That this happened to one of the largest companies in the world highlights the need for transparency and fairness when deploying ML algorithms. This course aims to raise participants' awareness of common pitfalls and misconceptions in the use of AI, and to provide the tools to avoid those pitfalls in future work. The course is split into three parts. The first part is aimed at a general audience and highlights the key issues that practitioners need to be aware of. The second part is aimed at a more technical audience, using practical examples to illustrate specific misrepresentations that can naturally emerge. The final part is again aimed at the general audience and places these issues of data misrepresentation in the broader context of responsible AI.
The heart of the course addresses an important, often overlooked, technical component of responsible AI: avoiding misrepresentations in data analytics. Much is known about how such misrepresentations arise, so responsible AI practice should explicitly guard against the common issues. Through this course we aim not only to impart the skills needed, but also to frame this technical component within the broader context of responsible AI, including its place in an ethical AI audit process for certification.