Introduction

What does it mean to ensure automated decision-making systems act fairly? How can we ensure that ‘black box’ algorithms perform their functions transparently? And how can personal data be protected securely? These are the types of questions that members of this interest group seek to answer.

Upcoming events

Polygraphs and deception detection: past, present and AI-powered future? – Thursday 7 July, 14:00-15:15

We would like to invite you to a Zoom webinar ‘Polygraphs and deception detection: past, present and AI-powered future?’ on Thursday 7 July, 14:00-15:15. In this event, we will introduce the history of the polygraph (or ‘lie detector’ as it is commonly known), its current use, and the issues surrounding technological approaches for the detection of deception, such as ‘iBorderCtrl’ and emotion AI. We will hear from author and journalist Amit Katwala, including lessons from his recent book about the polygraph’s early US history. Expert commentary will be provided by Emeritus Professor Don Grubin, forensic psychiatrist and polygraph test expert and provider, and Professor Jennifer Brown, chartered forensic and chartered occupational psychologist, Mannheim Centre for Criminology, LSE.

The event will be chaired by Marion Oswald, Senior Research Associate with The Alan Turing Institute's AI programme, and Associate Professor of Law, Northumbria University. Marion has published articles on the history of the polygraph and analogies to AI, and with Kotsoglou on the use of the polygraph in the criminal justice system in England and Wales.

Find out more and register here. You will receive a confirmation email containing information on how to add the meeting into your calendar.  

Speakers   


Amit Katwala is the author of Tremors in the Blood: Murder, Obsession and the Birth of the Lie Detector, which was published by HarperCollins in April 2022. He is a Senior Writer at WIRED magazine, and has also written for The Guardian, The Times and many others.


Don Grubin is the Emeritus Professor of Forensic Psychiatry at Newcastle University in northeast England. His work led to the introduction of mandatory polygraph testing in England and Wales of high-risk individuals on parole licence with convictions for sexual, domestic abuse and terrorism-related offences, as well as to a number of police forces making use of polygraphy in their management of registered sex offenders. He and his colleagues train and provide quality control for all probation and police examiners in the UK. He has served on a range of UK government advisory committees, and is psychiatric advisor to the National Health Service’s Offender Personality Disorder Pathway.


Jennifer Brown: After completing post-doctoral research on environmental risk assessments, Jennifer joined the Hampshire Constabulary as one of the first civilian research managers to be employed by the Police Service in England and Wales. Whilst working for the police she was involved in establishing offender profiling as a professional activity to assist the police in hard-to-solve rape and murder cases. She returned to the HE sector in 1994, first at the University of Portsmouth, working on a degree programme for police officers, and then at the University of Surrey, where she set up a Masters course in Forensic Psychology. She joined LSE’s Mannheim Centre in 2010 and was invited to be Deputy Chair of Lord Stevens’ Independent Commission of Enquiry into the Future of Policing; she is currently Deputy Chair of the London Policing Ethics Panel. She remains an active researcher into police occupational culture, has written and published papers on evidence-based practice in policing, and most recently co-edited the second edition of the Cambridge Handbook of Forensic Psychology.

____________________________________________________________________________________________________________

‘Is Differential Privacy a Silver Bullet for Machine Learning?’ – Friday 8 July, 13:00-14:00

We would like to invite you to a hybrid presentation by Nicolas Papernot, ‘Is Differential Privacy a Silver Bullet for Machine Learning?’, on Friday 8 July, 13:00-14:00, in the Enigma room, British Library, London, or online.

Nicolas Papernot is an Assistant Professor of Computer Engineering and Computer Science at the University of Toronto. He also holds a Canada CIFAR AI Chair at the Vector Institute. This event will be chaired by Adrian Weller (Turing’s AI programme director) and Ali Shahin (Turing’s AI programme research associate).

Some machine learning applications involve training data that is sensitive, such as the medical histories of patients in a clinical trial. A model may inadvertently and implicitly store some of its training data; careful analysis of the model may therefore reveal sensitive information. To address this problem, algorithms for private machine learning have been proposed. In this talk, we first show that training neural networks with rigorous privacy guarantees like differential privacy requires rethinking their architectures with the goals of privacy-preserving gradient descent in mind. Second, we explore how private aggregation surfaces the synergies between privacy and generalization in machine learning. Third, we present recent work towards a form of collaborative machine learning that is both privacy preserving in the sense of differential privacy, and confidentiality-preserving in the sense of the cryptographic community. We motivate the need for this new approach by showing how existing paradigms like federated learning fail to preserve privacy in these settings.
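As a rough illustration of the first point, the sketch below shows the core recipe behind differentially private gradient descent: clip each example’s gradient so that no individual can influence the update too much, then add calibrated Gaussian noise before applying the averaged step. This is a minimal toy version, not Nicolas’s implementation; the logistic-regression setup, parameter values and function names are all invented for illustration.

```python
# Minimal sketch of one differentially private gradient step (DP-SGD style):
# per-example gradients are clipped to bound each individual's influence,
# then Gaussian noise is added before the averaged update is applied.
import numpy as np

def dp_gradient_step(weights, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.1):
    """One noisy SGD step for logistic regression on a batch (X, y)."""
    preds = 1.0 / (1.0 + np.exp(-X @ weights))       # sigmoid predictions
    per_example_grads = (preds - y)[:, None] * X     # one gradient row per example
    # Clip each example's gradient to L2 norm <= clip_norm
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip_norm)
    # Sum the clipped gradients, add calibrated Gaussian noise, then average
    noisy_sum = clipped.sum(axis=0) + np.random.normal(
        scale=noise_multiplier * clip_norm, size=weights.shape)
    return weights - lr * noisy_sum / len(X)

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 5))
y = (X[:, 0] > 0).astype(float)
w = np.zeros(5)
for _ in range(100):
    w = dp_gradient_step(w, X, y)
```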

Find out more and register here. You will receive a confirmation email containing information on how to add the meeting into your calendar. After registering, please email [email protected] if you would like to attend in person. Limited spaces are available, so please register early to avoid disappointment.  A light lunch will be available before the event and refreshments afterwards to facilitate networking opportunities. 

Speaker


Nicolas Papernot is an Assistant Professor of Computer Engineering and Computer Science at the University of Toronto. He also holds a Canada CIFAR AI Chair at the Vector Institute. His research interests span the security and privacy of machine learning. Some of his group’s recent projects include proof-of-learning, collaborative learning beyond federation, dataset inference, and machine unlearning. Nicolas is an Alfred P. Sloan Research Fellow in Computer Science. His work on differentially private machine learning was awarded an outstanding paper at ICLR 2022 and a best paper at ICLR 2017. He serves as an associate chair of the IEEE Symposium on Security and Privacy (Oakland) and an area chair of NeurIPS. He co-created and will co-chair the first IEEE Conference on Secure and Trustworthy Machine Learning (SaTML) in 2023. Nicolas earned his Ph.D. at the Pennsylvania State University, working with Prof. Patrick McDaniel and supported by a Google PhD Fellowship. Upon graduating, he spent a year at Google Brain where he still spends some of his time.

Explaining the science

Watch the recording of ‘Regulating AI: discussion on the Why, When and How’ – a hybrid discussion held on 5 May.

The event was chaired by Professor Lilian Edwards, Turing Fellow and Professor of Law, Innovation and Society at Newcastle University and panellists were: Professor Simon Chesterman, Dean of the National University of Singapore Faculty of Law and Senior Director of AI Governance at AI Singapore, Dr James Stewart, University of Edinburgh Lecturer in the School of Social and Political Science and Janis Wong, Research Associate at the Turing. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including the European Union which is developing its AI Act, the first law on AI by a major regulator anywhere. From self-driving cars and high-speed trading to algorithmic decision-making, the way we live, work, and play is increasingly dependent on AI systems that operate with diminishing human intervention. These fast, autonomous, and opaque machines offer great benefits but also pose significant risks. The discussion explored how our laws are dealing with AI, as well as what additional rules and institutions are needed, including the role that AI might play in regulating itself.


‘A Multidisciplinary Study of Predictive Artificial Intelligence Technologies in Criminal Justice Systems’, a Friday 25 March presentation by Pamela Ugwudike (Associate Professor in Criminology, Director of Research) and Age Chapman (Professor of Computer Science, Co-Director of the Centre for Health Technologies). In this talk, Pamela and Age described the findings of their Turing-funded project, which was implemented by a multidisciplinary team of researchers based at the University of Southampton. The project explored a classic predictive policing algorithm to investigate conduits of bias. Several police services across western and non-western jurisdictions are using spatiotemporal algorithms to forecast crime risks. Known collectively as predictive policing algorithms, the systems direct police dispatch to predicted crime risk locations. While many studies on real data have shown that the algorithm creates biased feedback loops, few studies have systematically explored whether this is the result of legacy data or of the algorithmic model itself. To advance the empirical literature, the project designed a framework for testing predictive models for biases. With the framework, the project created and tested: (1) a computational model that replicates the published version of a predictive policing algorithm, and (2) statistically representative, biased and unbiased synthetic crime datasets, which were used to run large-scale tests of the computational model. The study found evidence of self-reinforcing properties: systematics in the model generated feedback loops which repeatedly predicted higher crime risks in the same locations. The study also found that multidisciplinary analysis of such systems is vital for uncovering these issues, and shows that any study of equitable AI should involve a systematic and holistic analysis of such systems’ design rationalities.
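To make the feedback-loop mechanism concrete, here is a deliberately simple toy simulation. It is not the project’s computational model or its synthetic datasets: the grid size, detection rates and dispatch rule are invented for illustration. Every cell has the same true crime rate, but patrols are sent to the cells with the most recorded crime, and patrolled cells record a larger share of what occurs, so the record increasingly concentrates on a few locations.

```python
# Toy illustration of a predictive-policing feedback loop (not the project's model):
# the underlying crime rate is identical everywhere, but recorded crime depends on
# where patrols go, and patrols go where recorded crime is highest.
import numpy as np

rng = np.random.default_rng(1)
n_cells, true_rate = 10, 5.0
detect_patrolled, detect_unpatrolled = 0.9, 0.3
recorded = np.ones(n_cells)                        # flat prior of recorded incidents

for day in range(200):
    patrolled = np.argsort(recorded)[-2:]          # dispatch to the 2 "hottest" cells
    crimes = rng.poisson(true_rate, n_cells)       # same true rate in every cell
    detect = np.full(n_cells, detect_unpatrolled)
    detect[patrolled] = detect_patrolled           # patrols record more of what occurs
    recorded += rng.binomial(crimes, detect)       # only detected crimes enter the data

print(np.round(recorded / recorded.sum(), 2))      # a few cells dominate the record
```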

Please click here to watch the presentation given on 8 March by Paul Miller, the NSW (Australia) Ombudsman, about his office’s recent report ‘The new machinery of government: using machine technology in administrative decision-making’. Paul reflected on the increasing use of machine technologies in government decision-making processes in NSW, where agencies are known to be using such technologies in the areas of traffic and fines enforcement, policing, and assessment of child protection risk. The presentation illustrates why it is important to engage early with other disciplines, such as lawyers, when automating processes that include human decisions with legal implications.

 

In ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’, Bender and her co-authors take stock of the recent trend towards ever larger language models (especially for English), which the field of natural language processing has been using to extend the state of the art on a wide array of tasks as measured by leaderboards on specific benchmarks.


AI research is routinely criticized for its real and potential impacts on society, and we lack adequate institutional responses to this criticism and to the responsibility that it reflects. AI research often falls outside the purview of existing feedback mechanisms such as the Institutional Review Board (IRB), which are designed to evaluate harms to human subjects rather than harms to human society. In response, we have developed the Ethics and Society Review board (ESR), a feedback panel that works with researchers to mitigate negative ethical and societal aspects of AI research.


Turing Research Fellow Dr. Brent Mittelstadt's research addresses the ethics of algorithms, machine learning, artificial intelligence and data analytics. Over the past five years his focus has broadly been on the ethics and governance of emerging information technologies, including a special interest in medical applications.


Recent research in machine learning has thrown up some interesting measures of algorithmic fairness – the different ways that a predictive algorithm can be fair in its outcome. In this talk, Suchana Seth explores what these measures of fairness imply for technology policy and regulation, and where challenges in implementing them lie. The goal is to use these definitions of fairness to hold predictive algorithms accountable.
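For readers unfamiliar with these measures, the short sketch below computes two widely used group-fairness quantities, the demographic parity gap (difference in positive prediction rates between groups) and the equal opportunity gap (difference in true positive rates). The predictions, labels and group membership are made up for illustration and do not come from the talk.

```python
# Illustrative computation of two common group-fairness measures on toy data.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # ground-truth outcomes
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])   # model decisions
group  = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

def rates(g):
    mask = group == g
    positive_rate = y_pred[mask].mean()              # how often the group gets a "yes"
    tpr = y_pred[mask & (y_true == 1)].mean()        # true positive rate for the group
    return positive_rate, tpr

(pr_a, tpr_a), (pr_b, tpr_b) = rates("a"), rates("b")
print("demographic parity gap:", abs(pr_a - pr_b))
print("equal opportunity gap:", abs(tpr_a - tpr_b))
```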


George Danezis is a Reader in Security and Privacy Engineering at the Department of Computer Science of UCL, and Head of the Information Security Research Group. He has been working on anonymous communications, privacy enhancing technologies (PET), and traffic analysis since 2000.

Aims

Every day seems to bring news of another major breakthrough in the fields of data science and artificial intelligence, whether in the context of winning games, driving cars, or diagnosing disease. Yet many of these innovations also create novel risks by amplifying existing biases and discrimination in data, enhancing existing inequality, or increasing vulnerability to malfunction or manipulation.

There are also increasingly many examples where data collection and analysis risk oversharing personal information or producing unwelcome decisions without explanation or recourse.

The Turing is committed to ensuring that the benefits of data science and AI are enjoyed by society as a whole, and that the risks are mitigated so as not to disproportionately burden certain people or groups. This interest group plays an important role in this mission by exploring technical solutions to protecting fairness, accountability, and privacy, as increasingly sophisticated AI technologies are designed and deployed.

Talking points

Opening ‘black box’ systems to improve comprehension and explanation of algorithmic decision-making

Challenges: Algorithmic opacity, lack of public understanding, proprietary knowledge

Examples: Counterfactual explanations, local interpretable model-agnostic explanations (LIME); a minimal counterfactual sketch follows below
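The sketch below illustrates the counterfactual-explanation idea: search for the smallest change to an input that flips a model’s decision, which can then be reported to the person affected ("if your debt were 1.8 lower, you would have been accepted"). The two-feature ‘credit’ model, thresholds and search range are invented for illustration rather than taken from any real system.

```python
# Minimal counterfactual-explanation sketch: find a small change to one input
# feature that flips a toy credit model's decision from "reject" to "accept".
import numpy as np

def model(x):                        # toy scoring rule standing in for a black box
    income, debt = x
    return 1 if 0.6 * income - 0.8 * debt > 2.0 else 0

applicant = np.array([5.0, 3.0])     # currently rejected: 0.6*5 - 0.8*3 = 0.6
best = None
for feature in range(2):             # brute-force search over one feature at a time
    for step in np.linspace(-4, 4, 801):
        candidate = applicant.copy()
        candidate[feature] += step
        if model(candidate) == 1 and (best is None or abs(step) < best[2]):
            best = (feature, candidate, abs(step))

feature, counterfactual, distance = best
print(f"change feature {feature} by {distance:.2f} -> accepted: {counterfactual}")
```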

Protecting against discrimination on the basis of protected characteristics like gender and ethnicity in automated systems

Challenges: Encoding human values into algorithmic systems, anticipating and mitigating potential harms

Examples: Mathematically provable methods to ensure those with protected characteristics are not discriminated against

Balancing innovation with privacy in analysis of personal data

Challenges: Ensuring that sensitive personal data remains private, while enabling the value of this data to be extracted on an aggregate basis

Examples: Differential privacy, privacy-preserving machine learning; a minimal sketch of the Laplace mechanism follows below
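For a flavour of how differential privacy lets aggregate value be extracted while limiting what is revealed about any one person, here is a minimal sketch of the Laplace mechanism applied to a counting query. The dataset, predicate and epsilon value are purely illustrative.

```python
# Minimal Laplace-mechanism sketch: release a noisy count so that adding or
# removing any single person changes the output distribution only slightly
# (epsilon-differential privacy for a counting query, which has sensitivity 1).
import numpy as np

rng = np.random.default_rng()

def private_count(records, predicate, epsilon=0.5):
    """Release a count with Laplace noise scaled to sensitivity / epsilon."""
    true_count = sum(predicate(r) for r in records)
    return true_count + rng.laplace(scale=1.0 / epsilon)   # count sensitivity = 1

ages = [23, 35, 41, 29, 52, 61, 38, 47]
print(private_count(ages, lambda a: a >= 40))   # noisy number of people aged 40 or over
```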

How to get involved

Click here to request sign-up and join

Recent updates

Interest group leader and Turing Fellow Adrian Weller is also an Executive Fellow at the Leverhulme Centre for the Future of Intelligence (CFI), where he leads their project on Trust and Transparency. The CFI’s mission, which is for interdisciplinary researchers to work together to ensure that humanity makes the best of the opportunities of artificial intelligence as it develops over the coming decades, closely aligns with that of this interest group and the wider Institute.

Organisers

Dr Adrian Weller

Programme Director for Artificial Intelligence, Turing Fellow and Turing AI Acceleration Fellow

Researchers

Contact info

[email protected]