Introduction

What does it mean to ensure automated decision-making systems act fairly? How can we ensure that ‘black box’ algorithms perform their functions transparently? And how can personal data be protected securely? These are the types of questions that members of this interest group seek to answer.

Explaining the science

In this paper, Bender and her co-authors take stock of the recent trend towards ever larger language models (especially for English), which the field of natural language processing has been using to extend the state of the art on a wide array of tasks as measured by leaderboards on specific benchmarks.


AI research is routinely criticized for its real and potential impacts on society, and we lack adequate institutional responses to this criticism and to the responsibility that it reflects. AI research often falls outside the purview of existing feedback mechanisms such as the Institutional Review Board (IRB), which are designed to evaluate harms to human subjects rather than harms to human society. In response, we have developed the Ethics and Society Review board (ESR), a feedback panel that works with researchers to mitigate negative ethical and societal aspects of AI research.


Turing Research Fellow Dr. Brent Mittelstadt's research addresses the ethics of algorithms, machine learning, artificial intelligence and data analytics. Over the past five years his focus has broadly been on the ethics and governance of emerging information technologies, including a special interest in medical applications.


Recent research in machine learning has produced a number of measures of algorithmic fairness – the different ways in which a predictive algorithm can be fair in its outcomes. In this talk, Suchana Seth explores what these measures of fairness imply for technology policy and regulation, and where the challenges in implementing them lie. The goal is to use these definitions of fairness to hold predictive algorithms accountable.


George Danezis is a Reader in Security and Privacy Engineering at the Department of Computer Science of UCL, and Head of the Information Security Research Group. He has been working on anonymous communications, privacy enhancing technologies (PET), and traffic analysis since 2000.

Aims

Every day seems to bring news of another major breakthrough in the fields of data science and artificial intelligence, whether in the context of winning games, driving cars, or diagnosing disease. Yet many of these innovations also create novel risks by amplifying existing biases and discrimination in data, enhancing existing inequality, or increasing vulnerability to malfunction or manipulation.

There is also a growing number of cases in which data collection and analysis risk exposing personal information, or deliver unwelcome decisions without explanation or recourse.

The Turing is committed to ensuring that the benefits of data science and AI are enjoyed by society as a whole, and that the risks are mitigated so as not to disproportionately burden certain people or groups. This interest group plays an important role in this mission by exploring technical solutions to protecting fairness, accountability, and privacy, as increasingly sophisticated AI technologies are designed and deployed.

Talking points

Opening ‘black box’ systems to improve comprehension and explanation of algorithmic decision-making

Challenges: Algorithmic opacity, lack of public understanding, proprietary knowledge

Examples: Counterfactual explanation, local interpretable model-agnostic explanations (LIME)
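To give a flavour of how such explanation methods work, here is a minimal sketch of a LIME-style local surrogate in Python. It is illustrative only: the function name and parameters are invented for this page and are not the API of the published LIME library. The idea is to perturb an instance, query the black-box model, weight the perturbed points by proximity, and fit a simple linear model whose coefficients serve as a local explanation.

```python
# Minimal sketch of a LIME-style local explanation (illustrative helper,
# not the official `lime` package).
import numpy as np
from sklearn.linear_model import Ridge

def explain_locally(predict_proba, x, n_samples=5000, scale=0.5, rng=None):
    """Return per-feature weights approximating the black-box model around x."""
    rng = rng or np.random.default_rng(0)
    # Perturb the instance with Gaussian noise to probe its local neighbourhood
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    y = predict_proba(Z)[:, 1]                 # black-box scores for the positive class
    # Weight perturbed points by their proximity to the original instance
    dists = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dists ** 2) / (2 * scale ** 2))
    # Fit an interpretable (linear) surrogate on the weighted neighbourhood
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(Z, y, sample_weight=weights)
    return surrogate.coef_                     # local feature importances
```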

Preventing discrimination on the basis of protected characteristics such as gender and ethnicity in automated systems

Challenges: Encoding human values into algorithmic systems, anticipating and mitigating potential harms

Examples: Mathematically provable methods to ensure those with protected characteristics are not discriminated against
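One of the simplest formal fairness criteria is demographic parity: positive decisions should be issued at similar rates across groups defined by a protected characteristic. The sketch below shows how such a check might be computed; the function name, data, and tolerance are illustrative assumptions rather than a prescribed method.

```python
# Minimal sketch of a demographic-parity check (illustrative only).
import numpy as np

def demographic_parity_gap(decisions, group):
    """Absolute difference in positive-decision rates between two groups."""
    decisions, group = np.asarray(decisions), np.asarray(group)
    rate_a = decisions[group == 0].mean()
    rate_b = decisions[group == 1].mean()
    return abs(rate_a - rate_b)

# Example: flag the system if the gap exceeds a chosen tolerance
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # 1 = favourable decision
group     = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # protected-attribute group
print(demographic_parity_gap(decisions, group))   # 0.5 here; compare to a policy threshold
```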

Balancing innovation with privacy in analysis of personal data

Challenges: Ensuring that sensitive personal data remains private, while enabling the value of this data to be extracted on an aggregate basis

Examples: Differential privacy, privacy-preserving machine learning
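As a concrete example of the privacy techniques named above, the following sketch applies the Laplace mechanism from differential privacy to a simple count query: calibrated noise is added so that any one individual's record has only a bounded effect on the released statistic. The parameter names are illustrative, and a real deployment would need careful accounting of sensitivity and the privacy budget.

```python
# Minimal sketch of the Laplace mechanism for a differentially private count
# (illustrative assumptions; not a production implementation).
import numpy as np

def private_count(records, epsilon, rng=None):
    """Release a count with Laplace noise of scale sensitivity / epsilon."""
    rng = rng or np.random.default_rng()
    sensitivity = 1.0  # adding or removing one record changes the count by at most 1
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return len(records) + noise

# Example: a smaller epsilon means stronger privacy and noisier answers
print(private_count(range(10_000), epsilon=0.1))
print(private_count(range(10_000), epsilon=1.0))
```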

How to get involved

Click here to request sign-up and join the interest group.

Recent updates

Interest group leader and Turing Fellow Adrian Weller is also an Executive Fellow at the Leverhulme Centre for the Future of Intelligence (CFI), where he leads their project on Trust and Transparency. The CFI's mission – to bring interdisciplinary researchers together to ensure that humanity makes the best of the opportunities of artificial intelligence as it develops over the coming decades – closely aligns with that of this interest group and the wider Institute.

Organisers

Researchers

Contact info

[email protected]