Fairness, transparency, privacy

What can we do to ensure the decisions made by machines do not discriminate, are transparent, and preserve privacy?




What does it mean to ensure automated decision-making systems act fairly? How can we ensure that ‘black box’ algorithms perform their functions transparently? And how can personal data be protected securely? These are the types of questions that members of this interest group seek to answer.

Explaining the science

Past events

‘Is differential privacy a silver bullet for machine learning?’ – 8 July

Watch the recording

A hybrid presentation by Nicolas Papernot, Assistant Professor of Computer Engineering and Computer Science at the University of Toronto, who also holds a Canada CIFAR AI Chair at the Vector Institute. The event was chaired by Ali Shahin Shamsabadi (Research Associate, the Turing’s AI Programme).

Some machine learning applications involve training data that is sensitive, such as the medical histories of patients in a clinical trial. A model may inadvertently and implicitly store some of its training data; careful analysis of the model may therefore reveal sensitive information. To address this problem, algorithms for private machine learning have been proposed. In this talk, we first showed that training neural networks with rigorous privacy guarantees like differential privacy requires rethinking their architectures with the goals of privacy-preserving gradient descent in mind. Second, we explored how private aggregation surfaces the synergies between privacy and generalization in machine learning. Third, we presented recent work towards a form of collaborative machine learning that is both privacy-preserving in the sense of differential privacy, and confidentiality-preserving in the sense of the cryptographic community. We motivated the need for this new approach by showing how existing paradigms like federated learning fail to preserve privacy in these settings.
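The first point above, rethinking gradient descent with privacy in mind, is often illustrated with differentially private SGD: clip each per-example gradient, average, and add calibrated Gaussian noise. The following is a minimal sketch under assumed parameter values, not code from the talk:

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.1,
                rng=np.random.default_rng(0)):
    """One differentially private gradient step: clip each per-example
    gradient to clip_norm, average, then add Gaussian noise scaled to
    the clipping bound (hypothetical hyperparameter values)."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    noise = rng.normal(0.0, sigma, size=mean_grad.shape)
    return params - lr * (mean_grad + noise)

# Toy usage: two parameters, four per-example gradients
params = np.zeros(2)
grads = [np.array([3.0, 4.0]), np.array([0.1, 0.2]),
         np.array([-1.0, 0.5]), np.array([0.3, -0.3])]
new_params = dp_sgd_step(params, grads)
```

Clipping bounds each individual's influence on the update, which is what lets the added noise translate into a formal differential privacy guarantee.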

Polygraphs and deception detection: past, present and AI-powered future? – 7 July

Watch the recording

In this event, we introduced the history of the polygraph (or ‘lie detector’ as it is commonly known), its current use, and the issues surrounding technological approaches for the detection of deception, such as ‘iBorderCtrl’ and emotion AI. We heard from author and journalist, Amit Katwala, including lessons from his recent book about the polygraph’s early US history. Expert commentary was provided by Emeritus Professor Don Grubin, forensic psychiatrist and polygraph test expert and provider, and Professor Jennifer Brown, chartered forensic and chartered occupational psychologist, Mannheim Centre for Criminology, LSE. The event was chaired by Marion Oswald, Senior Research Associate with The Alan Turing Institute's AI programme, and Associate Professor of Law, Northumbria University. Marion has published articles on the history of the polygraph and analogies to AI, and with Kotsoglou on the use of the polygraph in the criminal justice system in England and Wales.

‘Regulating AI: discussion on the why, when and how’ – 5 May

Watch the recording

The event was chaired by Professor Lilian Edwards, Turing Fellow and Professor of Law, Innovation and Society at Newcastle University and panellists were: Professor Simon Chesterman, Dean of the National University of Singapore Faculty of Law and Senior Director of AI Governance at AI Singapore, Dr James Stewart, University of Edinburgh Lecturer in the School of Social and Political Science and Janis Wong, Research Associate at the Turing. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including the European Union which is developing its AI Act, the first law on AI by a major regulator anywhere. From self-driving cars and high-speed trading to algorithmic decision-making, the way we live, work, and play is increasingly dependent on AI systems that operate with diminishing human intervention. These fast, autonomous, and opaque machines offer great benefits but also pose significant risks. The discussion explored how our laws are dealing with AI, as well as what additional rules and institutions are needed, including the role that AI might play in regulating itself.

‘A multidisciplinary study of predictive artificial intelligence technologies in criminal justice systems’ – 25 March

Watch the recording

A presentation by Pamela Ugwudike (Associate Professor in Criminology, Director of Research) and Age Chapman (Professor of Computer Science, Co-Director of the Centre for Health Technologies). In this talk, Pamela and Age described the findings of their Turing-funded project, which was implemented by a multidisciplinary team of researchers based at the University of Southampton. The project explored a classic predictive policing algorithm to investigate conduits of bias. Several police services across western and non-western jurisdictions are using spatiotemporal algorithms to forecast crime risks. Known collectively as predictive policing algorithms, these systems direct police dispatch to predicted crime risk locations. While many studies on real data have shown that the algorithm creates biased feedback loops, few studies have systematically explored whether this is the result of legacy data or the algorithmic model itself. To advance the empirical literature, this project designed a framework for testing predictive models for biases. With the framework, the project created and tested: (1) a computational model that replicates the published version of a predictive policing algorithm, and (2) statistically representative, biased and unbiased synthetic crime datasets, which were used to run large-scale tests of the computational model. The study found evidence of self-reinforcing properties: systematics in the model generated feedback loops which repeatedly predicted higher crime risks in the same locations. The study also found that multidisciplinary analysis of such systems is vital for uncovering these issues, and it shows that any study of equitable AI should involve a systematic and holistic analysis of such systems’ design rationalities.
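The feedback-loop mechanism the project tested can be illustrated with a toy simulation (not the project's actual model): every location has the same true crime rate, but crime is only recorded where patrols are sent, and predicted risk is updated from recorded counts, so early patrol choices become self-reinforcing. All parameters here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical grid of locations with IDENTICAL true crime rates
n_cells, true_rate, n_rounds, patrols = 10, 0.3, 50, 3
predicted = np.full(n_cells, 0.5)   # uniform prior risk
counts = np.zeros(n_cells)          # recorded crime per cell

for _ in range(n_rounds):
    # Dispatch patrols to the highest-predicted cells
    watched = np.argsort(predicted)[-patrols:]
    # Crime is only *recorded* where police are present
    observed = np.zeros(n_cells)
    observed[watched] = rng.binomial(1, true_rate, size=patrols)
    counts += observed
    # Naive update: predicted risk tracks recorded crime counts
    predicted = (counts + 1) / (counts.sum() + n_cells)

# Recorded crime can only accumulate in patrolled cells, even though
# the underlying rates are equal everywhere: a self-reinforcing loop.
```

Running biased and unbiased synthetic data through such a model, as the project's framework does at scale, separates bias introduced by the data from bias generated by the model's own dynamics.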

‘The new machinery of government: using machine technology in administrative decision-making’ – 8 March

Watch the recording

A presentation by Paul Miller, NSW (Australia) Ombudsman, on his office’s recent report of the same name. Paul reflected on the increasing use of machine technologies in government decision-making processes in NSW, where agencies are known to be using such technologies in the areas of traffic and fines enforcement, policing, and assessment of child protection risk. The presentation illustrates why it is important to engage early with other disciplines, such as lawyers, when automating processes that include human decisions with legal implications.


In this paper, Bender and her co-authors take stock of the recent trend towards ever larger language models (especially for English), which the field of natural language processing has been using to extend the state of the art on a wide array of tasks as measured by leaderboards on specific benchmarks.

AI research is routinely criticized for its real and potential impacts on society, and we lack adequate institutional responses to this criticism and to the responsibility that it reflects. AI research often falls outside the purview of existing feedback mechanisms such as the Institutional Review Board (IRB), which are designed to evaluate harms to human subjects rather than harms to human society. In response, we have developed the Ethics and Society Review board (ESR), a feedback panel that works with researchers to mitigate negative ethical and societal aspects of AI research.

Turing Research Fellow Dr. Brent Mittelstadt's research addresses the ethics of algorithms, machine learning, artificial intelligence and data analytics. Over the past five years his focus has broadly been on the ethics and governance of emerging information technologies, including a special interest in medical applications.

Recent research in machine learning has thrown up some interesting measures of algorithmic fairness – the different ways that a predictive algorithm can be fair in its outcome. In this talk, Suchana Seth explores what these measures of fairness imply for technology policy and regulation, and where challenges in implementing them lie. The goal is to use these definitions of fairness to hold predictive algorithms accountable.
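Two of the most common such measures, demographic parity and equal opportunity, can be computed directly from a classifier's predictions. This sketch uses hypothetical function names and toy data, and is not necessarily the set of measures discussed in the talk:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between two groups
    (0 means the classifier satisfies demographic parity)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in true-positive rates between two groups
    (0 means qualified people are found at equal rates)."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

# Toy outcomes and predictions for eight people in two groups
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(y_pred, group))          # 0.5
print(equal_opportunity_gap(y_true, y_pred, group))   # 0.5
```

The two definitions can conflict on the same predictions, which is precisely why choosing among them is a policy question rather than a purely technical one.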

George Danezis is a Reader in Security and Privacy Engineering at the Department of Computer Science of UCL, and Head of the Information Security Research Group. He has been working on anonymous communications, privacy enhancing technologies (PET), and traffic analysis since 2000.


Every day seems to bring news of another major breakthrough in the fields of data science and artificial intelligence, whether in the context of winning games, driving cars, or diagnosing disease. Yet many of these innovations also create novel risks by amplifying existing biases and discrimination in data, enhancing existing inequality, or increasing vulnerability to malfunction or manipulation.

There are also increasingly many examples where data collection and analysis risk oversharing personal information or delivering unwelcome decisions without explanation or recourse.

The Turing is committed to ensuring that the benefits of data science and AI are enjoyed by society as a whole, and that the risks are mitigated so as not to disproportionately burden certain people or groups. This interest group plays an important role in this mission by exploring technical solutions to protecting fairness, accountability, and privacy, as increasingly sophisticated AI technologies are designed and deployed.

Talking points

Opening ‘black box’ systems to improve comprehension and explanation of algorithmic decision-making

Challenges: Algorithmic opacity, lack of public understanding, proprietary knowledge

Examples: Counterfactual explanation, local interpretable model-agnostic explanations (LIME)

Preventing discrimination on protected characteristics like gender and ethnicity in automated systems

Challenges: Encoding human values into algorithmic systems, anticipating and mitigating potential harms

Examples: Mathematically provable methods to ensure those with protected characteristics are not discriminated against

Balancing innovation with privacy in analysis of personal data

Challenges: Ensuring that sensitive personal data remains private, while enabling the value of this data to be extracted on an aggregate basis

Examples: Differential privacy, privacy-preserving machine learning
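The canonical example of differential privacy in this setting is the Laplace mechanism: releasing an aggregate statistic with noise calibrated to how much one individual can change it. A minimal sketch with hypothetical data:

```python
import numpy as np

def laplace_count(true_count, epsilon, rng=np.random.default_rng()):
    """Release a count with epsilon-differential privacy via the
    Laplace mechanism: a counting query has sensitivity 1 (one person
    changes it by at most 1), so noise is Laplace(0, 1/epsilon)."""
    return true_count + rng.laplace(0.0, 1.0 / epsilon)

# Aggregate query over a hypothetical sensitive dataset
ages = np.array([34, 29, 41, 56, 23, 38, 61, 45])
true_count = int((ages > 40).sum())        # 4 people over 40
noisy = laplace_count(true_count, epsilon=0.5)
```

Analysts still get a useful aggregate answer, while the noise masks any single person's contribution, which is the balance this talking point describes.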

How to get involved

Click here to request sign-up and join

Recent updates

Interest group leader and Turing Fellow Adrian Weller is also an Executive Fellow at the Leverhulme Centre for the Future of Intelligence (CFI), where he leads their project on Trust and Transparency. The CFI’s mission, for interdisciplinary researchers to work together to ensure that humanity makes the best of the opportunities of artificial intelligence as it develops over the coming decades, closely aligns with that of this interest group and the wider Institute.



Contact info

[email protected]