What does it mean to ensure automated decision-making systems act fairly? How can we ensure that ‘black box’ algorithms perform their functions transparently? And how can personal data be protected securely? These are the types of questions that members of this interest group seek to answer.
Explaining the science
Turing Research Fellow Dr. Brent Mittelstadt's research addresses the ethics of algorithms, machine learning, artificial intelligence and data analytics. Over the past five years his focus has broadly been on the ethics and governance of emerging information technologies, including a special interest in medical applications.
Recent research in machine learning has produced several competing measures of algorithmic fairness – the different ways a predictive algorithm's outcomes can be considered fair. In this talk, Suchana Seth explores what these measures of fairness imply for technology policy and regulation, and where the challenges in implementing them lie. The goal is to use these definitions of fairness to hold predictive algorithms accountable.
George Danezis is a Reader in Security and Privacy Engineering at the Department of Computer Science of UCL, and Head of the Information Security Research Group. He has been working on anonymous communications, privacy enhancing technologies (PET), and traffic analysis since 2000.
Every day seems to bring news of another major breakthrough in the fields of data science and artificial intelligence, whether in the context of winning games, driving cars, or diagnosing disease. Yet many of these innovations also create novel risks by amplifying existing biases and discrimination in data, enhancing existing inequality, or increasing vulnerability to malfunction or manipulation.
There is also a growing number of examples where data collection and analysis risk oversharing personal information, or producing unwelcome decisions without explanation or recourse.
The Turing is committed to ensuring that the benefits of data science and AI are enjoyed by society as a whole, and that the risks are mitigated so as not to disproportionately burden certain people or groups. This interest group plays an important role in this mission by exploring technical solutions to protecting fairness, accountability, and privacy, as increasingly sophisticated AI technologies are designed and deployed.
Opening ‘black box’ systems to improve comprehension and explanation of algorithmic decision-making
Challenges: Algorithmic opacity, lack of public understanding, proprietary knowledge
Examples: Counterfactual explanation, local interpretable model-agnostic explanations (LIME)
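The core idea behind LIME can be sketched in a few lines: sample perturbations around the instance being explained, query the black-box model, and fit a proximity-weighted linear surrogate whose coefficients serve as a local explanation. The sketch below is illustrative only; the black-box model, kernel width, and sampling scale are all assumptions, not part of any particular deployed system.

```python
import numpy as np

# Hypothetical black-box model: a nonlinear "risk score" over two features.
# (Stands in for any opaque classifier we want to explain locally.)
def black_box(X):
    return 1 / (1 + np.exp(-(2.0 * X[:, 0] - 1.5 * X[:, 1] + X[:, 0] * X[:, 1])))

rng = np.random.default_rng(0)
x0 = np.array([0.5, -0.2])  # the single instance whose prediction we explain

# LIME-style local surrogate: perturb around x0, query the model,
# and weight each sample by its proximity to x0.
Z = x0 + rng.normal(scale=0.3, size=(500, 2))
y = black_box(Z)
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.25)  # Gaussian proximity kernel

# Weighted least squares fit of a linear model (with intercept).
A = np.hstack([Z, np.ones((len(Z), 1))])
AtW = A.T * w
coef = np.linalg.solve(AtW @ A, AtW @ y)

print("local feature weights:", coef[:2])
```

Near `x0` the model's score rises with the first feature and falls with the second, so the surrogate's coefficients recover that sign pattern even though the underlying model is nonlinear.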
Protecting people from discrimination based on protected characteristics like gender and ethnicity in automated systems
Challenges: Encoding human values into algorithmic systems, anticipating and mitigating potential harms
Examples: Mathematically provable methods to ensure those with protected characteristics are not discriminated against
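One common starting point for such methods is simply measuring disparity: comparing a model's selection rates across groups (demographic parity) and flagging gaps beyond a threshold. The snippet below is a minimal audit sketch on synthetic data; the dataset, rates, and the "80% rule" threshold are illustrative assumptions, not a complete fairness guarantee.

```python
import numpy as np

# Hypothetical audit data: binary model decisions plus a protected attribute.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=1000)  # 0/1 protected characteristic
# Synthetic decisions deliberately skewed: group 1 selected more often.
decision = (rng.random(1000) < np.where(group == 1, 0.55, 0.45)).astype(int)

# Demographic parity compares P(decision=1 | group=0) with P(decision=1 | group=1).
rate0 = decision[group == 0].mean()
rate1 = decision[group == 1].mean()
gap = abs(rate1 - rate0)

# The "80% rule" heuristic: flag if one group's selection rate is
# less than 80% of the other's.
flagged = min(rate0, rate1) / max(rate0, rate1) < 0.8

print(f"selection rates: {rate0:.3f} vs {rate1:.3f}, parity gap: {gap:.3f}")
```

Demographic parity is only one of several competing fairness definitions (others condition on true outcomes, as in equalised odds), and the provable methods referenced above typically enforce such constraints during training rather than merely auditing after the fact.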
Balancing innovation with privacy in analysis of personal data
Challenges: Ensuring that sensitive personal data remains private, while enabling the value of this data to be extracted on an aggregate basis
Examples: Differential privacy, privacy-preserving machine learning
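The canonical building block of differential privacy is the Laplace mechanism: a counting query changes by at most 1 when any one person is added or removed (sensitivity 1), so adding Laplace noise with scale 1/ε yields an ε-differentially-private release. The dataset and query below are hypothetical, chosen only to make the sketch self-contained.

```python
import numpy as np

def private_count(values, predicate, epsilon, seed=None):
    """Release a count under epsilon-differential privacy (Laplace mechanism).

    A counting query has sensitivity 1, so Laplace noise with
    scale 1/epsilon provides epsilon-DP.
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.default_rng(seed).laplace(scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical dataset of ages; aggregate query: how many people are over 40?
ages = [23, 45, 67, 34, 52, 41, 29, 38, 60, 48]
noisy = private_count(ages, lambda a: a > 40, epsilon=1.0, seed=42)
print(f"noisy count: {noisy:.2f}")
```

Smaller values of `epsilon` give stronger privacy at the cost of noisier answers, which is exactly the innovation-versus-privacy balance described above: aggregate value is preserved while any individual's contribution is masked.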
How to get involved
To join us, please email [email protected]
Interest group leader and Turing Fellow Adrian Weller is also an Executive Fellow at the Leverhulme Centre for the Future of Intelligence (CFI), where he leads their project on Trust and Transparency. The CFI's mission, to bring interdisciplinary researchers together to ensure that humanity makes the best of the opportunities of artificial intelligence as it develops over the coming decades, closely aligns with that of this interest group and the wider institute.