Artificial intelligence (Safe and ethical)

Building the technical foundations for the safe and ethical deployment of algorithmic systems, working in partnership with other disciplines and stakeholders

Vision

A world-leading centre for the technical underpinnings of safe and ethical AI, trusted to support responsible innovation and cutting-edge research breakthroughs. Our work connects broadly with interdisciplinary experts, industry, government, regulators, civil society and other stakeholders to ensure that we build the right tools for society.

Aims

The key aim of this strategic challenge of the Turing’s AI programme is to establish a centre of excellence for the study of the technical aspects of safe and ethical AI, in line with the government’s Industrial Strategy and in step with global demand for research and guidance in this domain. This will be achieved by conducting deep theoretical research, seeking rigorous, quantifiable and verifiable guarantees, and pushing the frontiers of the state of the art, to enable trustworthy deployment. Key themes include:

  • Advancing appropriate AI transparency and explainability
  • Improving fairness of algorithmic systems, including ways to measure and mitigate bias
  • Developing robust systems that adapt well to new environments, are secure from attack and respect privacy
  • Developing systems that work effectively together with humans, maintaining appropriate human control and preventing undue influence

To achieve this, it will be vital to interface well with other disciplines, policy makers, industry and the public.

Programme challenges

AI systems are rapidly being developed and deployed across society. This creates tremendous opportunities, but also raises a pressing need to ensure that these systems are safe and ethical in order to function properly, grow public trust and avoid a potential backlash.

What is missing in the UK landscape is a major effort to build the necessary technical foundations for this endeavour. Knowing how to regulate in a way that fosters innovation, while providing optimal guard rails for society, will not be possible without this effort. We propose to establish a centre of excellence for the study of technical aspects of safe and ethical AI at the Turing, to build the technical underpinnings for trustworthy deployment, responsible innovation and appropriate governance.

This is not a one-off endeavour: the work will require continual upgrading as the technology evolves. There are trade-offs between desirable goals for society (eg privacy vs transparency, individual vs societal benefit). Our aim is to enable the best possible frontier across these goals, and to communicate with policy makers and the public to help ensure that the right point on that frontier is enforced.
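
The shape of such a frontier can be made concrete with a toy experiment. The sketch below is our own illustration rather than a Turing tool: it perturbs training features with increasing amounts of random noise (a crude stand-in for privacy protection) and records how predictive accuracy falls, tracing one possible trade-off curve. All data is synthetic.

```python
# Toy privacy-utility frontier: more feature noise (a crude stand-in
# for privacy protection) generally means lower predictive accuracy.
# All data is synthetic and the set-up is purely illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
for noise_scale in [0.0, 0.5, 1.0, 2.0, 4.0]:
    noise = rng.laplace(scale=noise_scale, size=X_train.shape) if noise_scale > 0 else 0.0
    model = LogisticRegression(max_iter=1000).fit(X_train + noise, y_train)
    print(f"noise scale {noise_scale}: test accuracy {model.score(X_test, y_test):.3f}")
```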

Impact

  • Cutting-edge research and knowledge generation disseminated widely and accessibly for maximum reach across research and user communities (industry, government, third sector and the public).
  • Development of engaged research and business leaders to stimulate long-term culture and adoption of positive AI technologies.
  • Upskilling of existing business and technical leaders to accelerate positive transformation.
  • Collaborative delivery of scalable open-source prototypes and demonstrators, which in turn are used to build trust, engage users, and facilitate adoption.
  • Informed policy makers who demonstrably refer to content generated by the new centre.

Highlights

Evaluating the use of facial recognition in UK policing by Malak Sadek, Sam Stockwell and Marion Oswald

This report covers a workshop jointly hosted by the Alan Turing Institute and the Metropolitan Police on 25 October 2023, exploring the aims, methods and challenges of evaluating the use of facial recognition (‘FR’) systems by UK police forces. Participants noted that facial recognition serves many different purposes, and thus presents a variety of potential benefits, risks and issues. The workshop brought together different perspectives on the evaluation of facial recognition in the policing context and aimed to raise awareness of, and provoke engagement with, the ongoing debates in this area.

Executive summary

  • Participants identified several potential benefits of FR for the effectiveness of police deployments and activities.
  • The National Physical Laboratory (NPL) study commissioned by the Metropolitan Police to evaluate their FR system is a positive step towards a better understanding of FR and ensuring that it is equitable and effective from a technical perspective. 
  • The study also allowed for a deeper understanding of how different factors (environmental, historical, geographical, etc.) can affect the effectiveness of FR, and especially Live FR (LFR).
  • However, additional considerations are needed regarding the wider ecosystem surrounding FR, including any potential biases that may be introduced by humans-in-the-loop, the accessibility of the topic to the wider public and the indirect impacts of FR on society regarding surveillance, trust in policing, etc. 
  • The assessment of proportionality cannot be limited to a technical evaluation or to a purely legal one. More work is needed to bring these aspects together within policies, procedures – including those relating to police decision-making – and evaluation mechanisms.
  • To identify these risks and limitations, other entities – including civil society organisations, legal experts and academics – must support the police, while the police must also actively seek the input and involvement of these groups.
  • The Structured Framework for Assessing Proportionality of Privacy Intrusion of Automated Analytics is one example of how relevant factors can be identified and assessed in a systematic way, encouraging consideration of aspects that are not only technical (such as the data used and the algorithms designed), but also socio-technical (such as human inspection and data management practices).

Read the full report

The use of AI in sentencing and the management of offenders

A workshop on 27 February 2023 jointly hosted by The Alan Turing Institute, Northumbria University’s Centre for Evidence and Criminal Justice Studies, and the Sentencing Academy explored the role that artificial intelligence plays—and could play—in the sentencing and management of offenders.

Read the full summary

Data protection, AI, and fairness: What are the risks and how can we mitigate them?

From November 2021 to January 2022, The Alan Turing Institute and the Information Commissioner’s Office (ICO) hosted a series of workshops about fairness in AI. The aim of the series was to assist the ICO in updating the fairness component of its existing non-statutory guidance on AI and data protection, by convening key stakeholders from industry, policy and academia to identify the key issues and challenges around fairness in AI and to discuss possible solutions. The guidance is part of ICO25’s Action Plan for 2022-2023. The updated guidance has now been published, and the ICO will ensure that any further updates reflect upcoming changes in AI regulation and data protection.

Read more here

Assessing the compatibility of fairness metrics used by the EU Court of Justice

The paper published by Sandra Wachter, Brent Mittelstadt and Chris Russell in March 2020, 'Why fairness cannot be automated: Bridging the gap between EU non-discrimination law and AI', explains which parts of AI fairness can, cannot and should not be automated and suggests ideas for legally compliant algorithmic bias audits.

Read the full paper
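
The paper proposes conditional demographic disparity (CDD) as a baseline statistical measure. The sketch below is our own reading of the definition, with hypothetical column names and data: for each stratum of a ‘legitimate’ conditioning attribute, compare a protected group’s share among rejected cases with its share among accepted ones, then average across strata weighted by stratum size.

```python
# Minimal sketch of conditional demographic disparity (CDD), as we read
# the paper's definition. Column names and data are hypothetical.
import pandas as pd

def demographic_disparity(df, group_col, group, outcome_col):
    """DD = P(group | rejected) - P(group | accepted); positive values
    mean the group is over-represented among rejections."""
    rejected = df[df[outcome_col] == 0]
    accepted = df[df[outcome_col] == 1]
    return ((rejected[group_col] == group).mean()
            - (accepted[group_col] == group).mean())

def conditional_demographic_disparity(df, group_col, group, outcome_col, strata_col):
    """Average DD over strata of a legitimate attribute, weighted by stratum size."""
    total = len(df)
    return sum(
        (len(stratum) / total)
        * demographic_disparity(stratum, group_col, group, outcome_col)
        for _, stratum in df.groupby(strata_col)
    )

# Hypothetical loan-decision data: decision 1 = accepted, 0 = rejected.
df = pd.DataFrame({
    "gender":      ["f", "f", "f", "m", "m", "m", "f", "m"],
    "decision":    [0,    1,   0,   1,   1,   0,   1,   1],
    "income_band": ["low", "low", "high", "high", "low", "low", "high", "high"],
})
print(conditional_demographic_disparity(df, "gender", "f", "decision", "income_band"))
```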

Forging new collaborative projects with global research communities

A workshop at the Turing, 'CIFAR-UKRI-CNRS AI and Society: From Principles to Practice', provided a forum for AI ethics researchers in the UK, France and Canada to meet and exchange notes on their recent work through a series of brief presentations. It drew on the Turing’s convening power to bolster the creation of cross-country research teams that can apply for collaborative funding, while raising the Institute’s profile in AI ethics across all three countries.

Find out more about the workshop

Researching counterfactual explanations for Google tools

Google's 'What-If Tool' is part of its TensorFlow™ ecosystem; TensorFlow is an open-source software library for high-performance computation and a widely used deep learning framework. The tool enables non-technical users to understand what a machine learning model is doing: it lets the user edit examples from a dataset and shows how the model’s predictions change as any single feature is changed.

Read the Google blog to find out more

View the code and references to the Turing researcher's work
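
The underlying idea, a counterfactual explanation, can be sketched independently of the tool. The code below is our own toy illustration (not the What-If Tool’s implementation, and the model weights are hypothetical): it searches for a small change to an input that moves a model’s prediction towards a target, following the general recipe of minimising a prediction loss plus a distance penalty.

```python
# Toy counterfactual search for a hypothetical logistic model: minimise
# lam * (f(x') - target)^2 + ||x' - x||_1 by gradient descent, so the
# answer is a nearby input whose prediction is pushed towards the target.
import numpy as np

w = np.array([1.5, -2.0, 0.8])   # hypothetical trained weights
b = -0.3

def predict(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))   # P(class = 1)

def counterfactual(x, target=0.9, lam=25.0, lr=0.05, steps=2000):
    x_cf = x.copy()
    for _ in range(steps):
        p = predict(x_cf)
        grad_pred = 2 * lam * (p - target) * p * (1 - p) * w  # chain rule
        grad_dist = np.sign(x_cf - x)                         # L1 subgradient
        x_cf = x_cf - lr * (grad_pred + grad_dist)
    return x_cf

x = np.array([0.2, 0.9, -0.1])
x_cf = counterfactual(x)
print("original prediction:      ", round(float(predict(x)), 3))
print("counterfactual prediction:", round(float(predict(x_cf)), 3))
print("feature changes:", np.round(x_cf - x, 3))
```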

Eliminating race and gender discrimination from automated systems

Turing researchers from diverse fields have produced a new way of approaching fairness in algorithm-led decisions, by examining the causes behind the factors that can often result in biased decision-making.

Read the impact story
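
As a toy illustration of this causal viewpoint (our own sketch, not the researchers’ method), the code below builds a tiny structural ‘world’ in which a sensitive attribute influences a proxy feature, then tests whether flipping the attribute, holding everything else fixed, changes a decision rule’s output for the same individuals.

```python
# Toy causal-fairness check: does intervening on a sensitive attribute,
# with individual background noise held fixed, change the decision?
# All structure here is hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def generate(a, noise):
    """Proxy feature depends on sensitive attribute a and individual noise."""
    return 2.0 * a + noise

def decision(feature):
    return feature > 1.0   # hypothetical decision rule trained on proxy data

noise = rng.normal(size=1000)
a = rng.integers(0, 2, size=1000)

factual = decision(generate(a, noise))
flipped = decision(generate(1 - a, noise))   # intervene: flip the attribute

changed = np.mean(factual != flipped)
print(f"decisions changed by the intervention: {changed:.1%}")
# A nonzero rate shows the proxy feature transmits the sensitive
# attribute's influence into the decision, even though the rule
# never sees the attribute directly.
```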

Advice from Turing researchers, arguing that individuals should have a legally binding right to an explanation of automated decisions made about them, is helping shape how the new EU data protection regulations will be implemented.

Read the impact story

Turing researchers have also been developing methods to train machine learning models that do not discriminate on the basis of gender or race, published in 'Blind Justice: Fairness with Encrypted Sensitive Attributes'.

Read the full paper
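
In the ordinary, unencrypted setting, the core mechanism of fairness-aware training can be sketched as a penalised logistic regression. The code below is our own minimal sketch with synthetic data, not the paper’s method: the paper’s contribution is performing such training while the sensitive attributes remain encrypted under secure multi-party computation, which this sketch omits entirely.

```python
# Minimal sketch of fairness-penalised training in the plain (non-
# encrypted) setting: logistic regression whose loss adds a penalty on
# the gap in mean predicted score between two groups (a demographic-
# parity-style constraint). Data are synthetic; the paper's secure
# multi-party computation layer is omitted entirely.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 5
X = rng.normal(size=(n, d))
group = rng.integers(0, 2, size=n)           # sensitive attribute (0/1)
X[:, 1] += group                             # proxy feature correlated with the group
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(float)

w = np.zeros(d)
lam, lr = 2.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    grad_loss = X.T @ (p - y) / n            # ordinary logistic-loss gradient
    gap = p[group == 1].mean() - p[group == 0].mean()
    s = p * (1 - p)                          # sigmoid derivative, for the chain rule
    d_gap = ((X[group == 1] * s[group == 1, None]).mean(axis=0)
             - (X[group == 0] * s[group == 0, None]).mean(axis=0))
    w -= lr * (grad_loss + 2 * lam * gap * d_gap)   # penalty: lam * gap**2

p = 1.0 / (1.0 + np.exp(-X @ w))
print("accuracy:", round(((p > 0.5) == y).mean(), 3))
print("score gap between groups:", round(p[group == 1].mean() - p[group == 0].mean(), 3))
```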

Promoting innovation and pioneering research

Yarin Gal, Group Leader on the AI programme and Turing AI Fellow at the University of Oxford, has been named on MIT Technology Review's Innovators Under 35, Europe 2019 list in the 'pioneers' category. Professor Sandra Wachter, Turing Fellow and AI Programme Project Lead based at the University of Oxford, has been featured in the Financial Times, reporting on her recent paper on online discrimination by association.

Read 'Algorithms drive online discrimination, academic warns'

Contact info

[email protected]