Introduction
What does it mean to ensure automated decision-making systems act fairly? How can we ensure that ‘black box’ algorithms perform their functions transparently? And how can personal data be protected securely? These are the types of questions that members of this interest group seek to answer.
Explaining the science
Events
Deepfakes, Disinformation and the Year of Elections: An Interim Scorecard
Date and time: Wednesday 4th September 2024, 11:00 – 13:00
Location: The Alan Turing Institute, in-person only
We are pleased to announce this upcoming panel event, featuring a keynote by Prof. Rasmus Nielsen, EU expert on media disinformation, Professor at the University of Copenhagen and formerly Professor of Political Communication at the University of Oxford and Director of the Reuters Institute for the Study of Journalism. The event will be chaired by Prof. Lilian Edwards, expert in internet, AI and privacy law and Professor of Law, Innovation and Society at Newcastle Law School. Places are restricted and registration is required.
2024 has been described as “the year of elections”. Already this year, we have seen elections in multiple countries, including Mexico, France, the United Kingdom, India and the EU Parliament. While “fake news” is by now a longstanding phenomenon that regulators and platforms have tried to curtail, with limited success but growing awareness, deepfake images, audio and video are a newer and less well guard-railed phenomenon, especially as generative AI has improved them beyond anything imagined before around 2022.
Have deepfakes turned out to be the electoral game changer that was feared, or, as some research seems to show, are voters cannier than we thought and/or more resistant to change? Is the real threat, as before, targeted political ads or grassroots disinformation? What evidence have we gathered in this time? What regulatory and self-regulatory solutions have worked, and which are still needed? And should we allow, or even encourage, candidates to campaign as AI avatars?
A group of panellists including Sam Stockwell (CETaS, The Alan Turing Institute), Dr Jonathan Bright (AI for Public Safety and Online Safety, The Alan Turing Institute) and Dr Ben Collier (Digital Methods, University of Edinburgh) will engage with these and other questions, and will share their expert perspectives on whether deepfakes will have had the extreme impact on elections that is being widely suggested.
Limited spots are available: Register to attend
Agenda
11:00 – 11:15 – Welcome and introduction, Professor Lilian Edwards
11:15 – 11:45 – Keynote: Professor Rasmus Nielsen, University of Copenhagen, formerly Director of the Reuters Institute for the Study of Journalism, University of Oxford
11:45 – 12:35 – Panel discussion
12:35 – 12:55 – Q&A
12:55 – 13:00 – Closing remarks
13:00 – End of event and light lunch
Past events
‘Data Ethics Committees in Policing’ on Tuesday 12th March
This workshop was jointly hosted by Northumbria University, The Alan Turing Institute and the Royal Statistical Society’s Data Ethics and Governance Section.
This event explored the benefits and challenges of police data ethics committees and their effect on the validity, legitimacy and effectiveness of data-driven technologies in policing.
"The Unlearning Problem(s)" by Anvith Thudi’ on Friday 30th June
Anvith Thudi is a Computer Science Ph.D. student at the University of Toronto, advised by Nicolas Papernot and Chris Maddison. His research interests span Trustworthy Machine Learning, with a particular interest in unlearning and privacy and their connections to the performance of models. Anvith Thudi is supported by a Vanier Canada Graduate Scholarship in the Natural Sciences and Engineering and is currently an intern at Microsoft Research Cambridge. For more information see www.anvith.com
This event was chaired by Ali Shahin Shamsabadi of the Turing’s AI programme.
Abstract: The talk presents challenges facing the study of machine unlearning. The need for machine unlearning, i.e., obtaining a model one would get without training on a subset of data, arises from privacy legislation and as a potential solution to data poisoning. The first part of the talk discusses approximate unlearning and the metrics one might want to study. We highlight methods for two desirable (though often disparate) notions of approximate unlearning. The second part departs from this line of work by asking if we can verify unlearning. Here we show how an entity can claim plausible deniability, and conclude that at the level of model weights, being unlearnt is not always a well-defined property.
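To make the definition concrete, here is a minimal sketch of the exact-unlearning baseline the abstract alludes to: retrain from scratch on the remaining data and compare against the originally trained model. The toy dataset, the ridge-regression model and the comparison are illustrative assumptions for this page, not Thudi’s method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data; the last 10 rows stand in for a deletion request.
X = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=100)
forget = np.arange(90, 100)                   # indices we are asked to unlearn
keep = np.setdiff1d(np.arange(100), forget)

def ridge_fit(X, y, lam=1e-2):
    """Closed-form ridge regression: (X^T X + lam*I)^-1 X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_full = ridge_fit(X, y)                      # model trained on everything
w_retrained = ridge_fit(X[keep], y[keep])     # the "gold standard" unlearnt model

# Exact unlearning means matching w_retrained; approximate unlearning is judged
# by how close a cheaper update gets to it (here we simply measure the gap).
print("parameter gap after deleting 10 points:",
      np.linalg.norm(w_full - w_retrained))
```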
'How to regulate foundation models: can we do better than the EU AI Act?' on 24th April: a presentation by Lilian Edwards followed by a panel session.
Read a short summary of the event at How to Regulate Foundation Models where you can also watch the recording.
Lilian Edwards is a leading academic in internet law and an author of Law, Policy and the Internet. Lilian discussed how large or “foundation” models could be governed, identifying their unique features and the legal issues in their deployment, as well as the EU AI Act and its risk-based approach. The presentation was followed by a panel chaired by Adrian Weller. The panellists were:
- Arnav Joshi (Head of Clifford Chance’s working group on digital ethics)
- Carlos Muñoz Ferrandis (Chair of BigScience’s legal & ethical working group)
- William Isaac (Senior Research Scientist on DeepMind’s Ethics and Society Team focusing on fairness and governance of AI systems)
‘Is differential privacy a silver bullet for machine learning?’ – 8 July
A hybrid presentation by Nicolas Papernot, Assistant Professor of Computer Engineering and Computer Science at the University of Toronto, who also holds a Canada CIFAR AI Chair at the Vector Institute. The event was chaired by Ali Shahin Shamsabadi (Research Associate, the Turing’s AI Programme).
Some machine learning applications involve training data that is sensitive, such as the medical histories of patients in a clinical trial. A model may inadvertently and implicitly store some of its training data; careful analysis of the model may therefore reveal sensitive information. To address this problem, algorithms for private machine learning have been proposed. In this talk, we first showed that training neural networks with rigorous privacy guarantees like differential privacy requires rethinking their architectures with the goals of privacy-preserving gradient descent in mind. Second, we explored how private aggregation surfaces the synergies between privacy and generalization in machine learning. Third, we presented recent work towards a form of collaborative machine learning that is both privacy-preserving in the sense of differential privacy, and confidentiality-preserving in the sense of the cryptographic community. We motivated the need for this new approach by showing how existing paradigms like federated learning fail to preserve privacy in these settings.
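For readers unfamiliar with privacy-preserving gradient descent, the sketch below shows the two ingredients at its core: clipping each example’s gradient and adding Gaussian noise before the update. The toy logistic-regression model, clipping norm and noise multiplier are assumptions made for illustration, not the speaker’s implementation, and no formal privacy accounting is done here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy logistic-regression data standing in for sensitive records.
X = rng.normal(size=(256, 10))
y = (X @ rng.normal(size=10) > 0).astype(float)
w = np.zeros(10)

clip_norm = 1.0      # per-example gradient norm bound C
noise_mult = 1.1     # noise standard deviation as a multiple of C
lr, steps, batch = 0.1, 200, 64

for _ in range(steps):
    idx = rng.choice(len(X), size=batch, replace=False)
    # Per-example gradients of the logistic loss: (p - y) * x.
    p = 1.0 / (1.0 + np.exp(-X[idx] @ w))
    grads = (p - y[idx])[:, None] * X[idx]            # shape (batch, d)
    # Clip each example's gradient to norm <= C, then add Gaussian noise.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / clip_norm)
    noisy_sum = grads.sum(axis=0) + rng.normal(scale=noise_mult * clip_norm,
                                               size=w.shape)
    w -= lr * noisy_sum / batch

# The achievable privacy guarantee depends on noise_mult, batch size and steps.
print("trained weights:", w.round(2))
```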
Polygraphs and deception detection: past, present and AI-powered future? – 7 July
In this event, we introduced the history of the polygraph (or ‘lie detector’ as it is commonly known), its current use, and the issues surrounding technological approaches for the detection of deception, such as ‘iBorderCtrl’ and emotion AI. We heard from author and journalist, Amit Katwala, including lessons from his recent book about the polygraph’s early US history. Expert commentary was provided by Emeritus Professor Don Grubin, forensic psychiatrist and polygraph test expert and provider, and Professor Jennifer Brown, chartered forensic and chartered occupational psychologist, Mannheim Centre for Criminology, LSE. The event was chaired by Marion Oswald, Senior Research Associate with The Alan Turing Institute's AI programme, and Associate Professor of Law, Northumbria University. Marion has published articles on the history of the polygraph and analogies to AI, and, with Kotsoglou, on the use of the polygraph in the criminal justice system in England and Wales.
‘Regulating AI: discussion on the why, when and how’ – 5 May
The event was chaired by Professor Lilian Edwards, Turing Fellow and Professor of Law, Innovation and Society at Newcastle University. The panellists were: Professor Simon Chesterman, Dean of the National University of Singapore Faculty of Law and Senior Director of AI Governance at AI Singapore; Dr James Stewart, Lecturer in the School of Social and Political Science, University of Edinburgh; and Janis Wong, Research Associate at the Turing. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally, including the European Union, which is developing its AI Act, the first law on AI by a major regulator anywhere. From self-driving cars and high-speed trading to algorithmic decision-making, the way we live, work and play is increasingly dependent on AI systems that operate with diminishing human intervention. These fast, autonomous and opaque machines offer great benefits but also pose significant risks. The discussion explored how our laws are dealing with AI, as well as what additional rules and institutions are needed, including the role that AI might play in regulating itself.
‘A multidisciplinary study of predictive artificial intelligence technologies in criminal justice systems’ – 25 March
A presentation by Pamela Ugwudike (Associate Professor in Criminology, Director of Research) and Age Chapman (Professor of Computer Science, Co-Director of the Centre for Health Technologies). In this talk, Pamela and Age described the findings of their Turing-funded project, which was carried out by a multidisciplinary team of researchers based at the University of Southampton. The project explored a classic predictive policing algorithm to investigate conduits of bias. Several police services across western and non-western jurisdictions are using spatiotemporal algorithms to forecast crime risks. Known collectively as predictive policing algorithms, these systems direct police dispatch to predicted crime risk locations. While many studies on real data have shown that such algorithms create biased feedback loops, few have systematically explored whether this is the result of legacy data or of the algorithmic model itself. To advance the empirical literature, this project designed a framework for testing predictive models for biases. With the framework, the project created and tested: (1) a computational model that replicates the published version of a predictive policing algorithm, and (2) statistically representative, biased and unbiased synthetic crime datasets, which were used to run large-scale tests of the computational model. The study found evidence of self-reinforcing properties: systematics in the model generated feedback loops which repeatedly predicted higher crime risks in the same locations. The study also found that multidisciplinary analysis of such systems is vital for uncovering these issues, and that any study of equitable AI should involve a systematic and holistic analysis of the systems’ design rationalities.
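In the spirit of the project’s synthetic-data experiments, the minimal simulation below illustrates how dispatching patrols to the areas with the highest recorded crime can create a self-reinforcing feedback loop even when the underlying crime rates are identical everywhere. The uniform “true” rates and the simple discovery model are assumptions made for this sketch; they are not the project’s framework or findings.

```python
import numpy as np

rng = np.random.default_rng(2)

n_areas = 10
true_rate = np.full(n_areas, 5.0)            # identical underlying crime rates
recorded = rng.poisson(true_rate).astype(float)

for day in range(200):
    # The "predictive" step: send more patrols to areas with higher records.
    patrol_share = recorded / recorded.sum()
    # More patrols -> more of the (identical) true crime is observed and recorded.
    discovered = rng.poisson(true_rate * (0.2 + 0.8 * patrol_share * n_areas))
    recorded += discovered

print("recorded crime by area:", recorded.round(0))
print("despite identical true rates, the most-recorded area has "
      f"{recorded.max() / recorded.min():.1f}x the record of the least-recorded one")
```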
Watch the presentation given on 8th March by Paul Miller, the NSW (Australia) Ombudsman, about their recent report ‘The new machinery of government: using machine technology in administrative decision-making’. Paul reflected on the increasing use of machine technologies in government decision-making processes in NSW, where agencies are known to be using such technologies in the areas of traffic and fines enforcement, policing, and assessment of child protection risk. The presentation illustrates why it is important to engage early with other disciplines, such as lawyers, when automating processes that include human decisions with legal implications.
In this paper, Bender and her co-authors take stock of the recent trend towards ever larger language models (especially for English), which the field of natural language processing has been using to extend the state of the art on a wide array of tasks as measured by leaderboards on specific benchmarks.
AI research is routinely criticized for its real and potential impacts on society, and we lack adequate institutional responses to this criticism and to the responsibility that it reflects. AI research often falls outside the purview of existing feedback mechanisms such as the Institutional Review Board (IRB), which are designed to evaluate harms to human subjects rather than harms to human society. In response, we have developed the Ethics and Society Review board (ESR), a feedback panel that works with researchers to mitigate negative ethical and societal aspects of AI research.
Turing Research Fellow Dr. Brent Mittelstadt's research addresses the ethics of algorithms, machine learning, artificial intelligence and data analytics. Over the past five years his focus has broadly been on the ethics and governance of emerging information technologies, including a special interest in medical applications.
George Danezis is a Reader in Security and Privacy Engineering at the Department of Computer Science of UCL, and Head of the Information Security Research Group. He has been working on anonymous communications, privacy enhancing technologies (PET), and traffic analysis since 2000.
Aims
Every day seems to bring news of another major breakthrough in the fields of data science and artificial intelligence, whether in the context of winning games, driving cars, or diagnosing disease. Yet many of these innovations also create novel risks by amplifying existing biases and discrimination in data, enhancing existing inequality, or increasing vulnerability to malfunction or manipulation.
There are also increasingly many examples where data collection and analysis risk exposing personal information or producing unwelcome decisions without explanation or recourse.
The Turing is committed to ensuring that the benefits of data science and AI are enjoyed by society as a whole, and that the risks are mitigated so as not to disproportionately burden certain people or groups. This interest group plays an important role in this mission by exploring technical solutions to protecting fairness, accountability, and privacy, as increasingly sophisticated AI technologies are designed and deployed.
Talking points
Opening ‘black box’ systems to improve comprehension and explanation of algorithmic decision-making
Challenges: Algorithmic opacity, lack of public understanding, proprietary knowledge
Examples: Counterfactual explanation, local interpretable model-agnostic explanations (LIME)
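As one concrete illustration of the examples above, the sketch below computes a counterfactual explanation for a hypothetical linear credit-scoring model: the smallest change to the feature vector (in the L2 sense, which has a closed form for linear models) that flips the decision. The model, its weights and the applicant are invented for the sketch.

```python
import numpy as np

# A hypothetical credit-scoring model: approve if the linear score is positive.
w = np.array([0.8, -0.5, 0.3])            # weights for income, debt, years_employed
b = -1.0
applicant = np.array([1.0, 2.0, 1.0])

score = w @ applicant + b
print("approved?", score > 0)

# Counterfactual explanation: the smallest feature change that flips the decision,
# obtained by moving just past the decision boundary along the weight vector.
delta = (1e-6 - score) * w / (w @ w)
counterfactual = applicant + delta
print("counterfactual applicant:", counterfactual.round(3))
print("new decision:", w @ counterfactual + b > 0)
```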
Protecting against discrimination on the basis of protected characteristics like gender and ethnicity in automated systems
Challenges: Encoding human values into algorithmic systems, anticipating and mitigating potential harms
Examples: Mathematically provable methods to ensure those with protected characteristics are not discriminated against
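One way such guarantees are made checkable in practice is through formal fairness metrics. The sketch below computes the demographic parity gap, the difference in positive-outcome rates between groups, for a hypothetical model’s decisions; the simulated decisions and group labels are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical model decisions and a binary protected attribute.
decisions = rng.binomial(1, 0.5, size=1000)   # 1 = positive outcome
group = rng.binomial(1, 0.4, size=1000)       # 1 = protected group

rate_protected = decisions[group == 1].mean()
rate_other = decisions[group == 0].mean()

# Demographic parity gap: one simple, auditable non-discrimination criterion.
print(f"positive rate (protected group): {rate_protected:.3f}")
print(f"positive rate (other group):     {rate_other:.3f}")
print(f"demographic parity gap:          {abs(rate_protected - rate_other):.3f}")
```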
Balancing innovation with privacy in analysis of personal data
Challenges: Ensuring that sensitive personal data remains private, while enabling the value of this data to be extracted on an aggregate basis
Examples: Differential privacy, privacy-preserving machine learning
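To show what differential privacy looks like in its simplest form, the sketch below releases a count over a sensitive dataset via the Laplace mechanism, adding noise scaled to the query’s sensitivity divided by the privacy budget. The synthetic ages and the choice of epsilon are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)

# A sensitive dataset: ages of individuals. We want the count of over-65s
# without revealing whether any one person is in the data.
ages = rng.integers(18, 95, size=500)
true_count = int((ages > 65).sum())

epsilon = 0.5        # privacy budget: smaller = more noise = stronger privacy
sensitivity = 1.0    # adding or removing one person changes the count by at most 1

# Laplace mechanism: add noise with scale sensitivity / epsilon.
noisy_count = true_count + rng.laplace(scale=sensitivity / epsilon)

print("true count:", true_count)
print("differentially private count:", round(noisy_count))
```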
How to get involved
Recent updates
Interest group leader and Turing Fellow Adrian Weller is also an Executive Fellow at the Leverhulme Centre for the Future of Intelligence (CFI), where he leads their project on Trust and Transparency. The CFI’s mission, to bring interdisciplinary researchers together to ensure that humanity makes the best of the opportunities of artificial intelligence as it develops over the coming decades, closely aligns with that of this interest group and the wider Institute.