Turing AI Fellows

Exceptional researchers in artificial intelligence in the UK


As part of an ambitious skills and talent package set out by the UK Government, aimed at attracting and retaining the best talent in artificial intelligence, the Turing AI Fellowships initiative was created in collaboration with The Alan Turing Institute. More information about the Fellowship programme can be found here.

For details of the current Turing AI Fellows and their research projects, see below. Also see the press release about the Fellows here.

Current Turing AI Fellows and projects

Neil Lawrence

Neil Lawrence, University of Cambridge, Senior Turing AI Fellow

Innovation to Deployment: Machine Learning Systems Design

The AI systems this project develops and deploys are built from interconnected machine learning components. The project focuses on AI-assisted design and monitoring of these systems to ensure they perform robustly, safely and accurately in their deployed environment. It addresses the entire pipeline of AI system development, from data acquisition to decision making, and proposes an ecosystem that monitors deployed systems for performance, interpretability and fairness, placing these ideas in a wider context that also considers the availability, quality and ethics of data.
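To make the idea of monitoring a deployed machine learning component concrete, here is a minimal sketch (my illustration, not the project's actual tooling) of one common drift check: the Population Stability Index (PSI), which compares the distribution a feature had at training time against the distribution seen in deployment. The feature names and thresholds are invented for illustration.

```python
import numpy as np

def psi(expected, observed, bins=10):
    """Population Stability Index between a reference (training-time)
    sample and a live (deployment-time) sample of one feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    o_frac = np.histogram(observed, bins=edges)[0] / len(observed)
    # Floor the bin fractions to avoid log(0) in empty bins.
    e_frac = np.clip(e_frac, 1e-6, None)
    o_frac = np.clip(o_frac, 1e-6, None)
    return float(np.sum((o_frac - e_frac) * np.log(o_frac / e_frac)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 5000)          # feature as seen during training
live_ok = rng.normal(0.0, 1.0, 5000)        # deployment data, same distribution
live_shifted = rng.normal(1.0, 1.0, 5000)   # deployment data after drift

print(psi(train, live_ok))       # small value: no meaningful shift
print(psi(train, live_shifted))  # large value: flag for investigation
```

A monitoring ecosystem of the kind the project describes would run checks like this continuously per feature and per component, with alerts feeding back into the design loop.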


Maria Liakata

Maria Liakata, Queen Mary University of London, Turing AI Fellow

Creating time sensitive sensors from language & heterogeneous user generated content

Widespread use of digital technology has made it possible to obtain language data (e.g. social media, SMS) as well as heterogeneous data (e.g. mobile phone use, sensors) from users over time. Such data can provide useful behavioural cues both at the level of the individual and the wider population, enabling the creation of longitudinal digital phenotypes. While language use and expression are an important part of our digital behaviour, with many psychometric tests being language-based, automated language analysis does not currently feature in work on digital phenotyping.

Current methods in natural language processing (NLP) are not well suited to time-sensitive data, to sparse and missing data collected over time, or to personalised models of language use. More specifically, the challenges include: (a) how best to represent and combine asynchronous language, multi-modal and heterogeneous data, and how to create models that capture subtle changes in language use over time; (b) addressing data sparsity and privacy issues in real-world heterogeneous data; (c) defining meaningful tasks and the methods to support them, since most NLP work does not involve predictions over time; (d) establishing appropriate evaluation settings for systems deployable in a real-world setting. The proposed work will address these challenges within NLP. It also provides an important application of AI to mental health: generic apps geared to mental health and wellbeing fall short of meeting an individual's needs, as different people suffer for a variety of reasons and recovery is unique to each person.
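As a toy illustration of what a "language sensor" over time might measure (my sketch, not the project's method), one crude signal of change in an individual's language use is the Jensen–Shannon divergence between the word distributions of two consecutive time windows of their posts. The example posts below are invented.

```python
import math
from collections import Counter

def js_divergence(counts_a, counts_b):
    """Jensen–Shannon divergence (base 2, so in [0, 1]) between two
    word-count distributions."""
    vocab = set(counts_a) | set(counts_b)
    na, nb = sum(counts_a.values()), sum(counts_b.values())
    p = {w: counts_a[w] / na for w in vocab}   # Counter gives 0 for missing words
    q = {w: counts_b[w] / nb for w in vocab}
    m = {w: 0.5 * (p[w] + q[w]) for w in vocab}
    def kl(x, y):
        return sum(x[w] * math.log2(x[w] / y[w]) for w in vocab if x[w] > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical timestamped posts from one user, bucketed into two windows.
window_1 = "went for a run this morning feeling great great day".split()
window_2 = "cannot sleep again everything feels heavy tired tired".split()
score = js_divergence(Counter(window_1), Counter(window_2))
# A score near 1 indicates a sharp shift in vocabulary between windows.
```

A real system would of course need the richer representations, asynchronous bucketing and personalised baselines that challenges (a)–(d) above describe; this only shows the shape of a longitudinal comparison.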

Major outputs of this project are novel tools for personalised monitoring of behaviour through language use and user-generated content over time (language sensors), and the co-creation with experts of new cost-effective tests to support monitoring and diagnosis based on the language sensors. To achieve these goals, the project will engage experts in NLP, statistics, mathematics, psychology, psychiatry and data ethics.


Tim Dodwell

Timothy Dodwell, University of Exeter, Turing AI Fellow

Intelligent Virtual Test Pyramids for High Value Manufacturing

There is a paradox in aerospace manufacturing. In this safety-critical industry, the aim is to design an aircraft that has a very small probability of failing. Yet to remain commercially viable, a manufacturer can afford only a few tests of the fully assembled system. How can engineers confidently predict such a low-probability failure event from only a handful of tests? The engineering solution is to construct a hugely costly experimental test pyramid, which pulls together thousands of small-scale tests, hundreds of tests at the intermediate scale and a handful of tests of the full system.

Yet this approach comes with significant uncertainty, so in practice ad hoc ‘engineering safety factors’ have to be applied at all length scales. As we seek a more sustainable aviation industry, the compound effect of these safety factors leads to significant overdesign of aerospace structures, severely limiting the efficiency of modern aircraft. The aim of Tim’s fellowship is to develop novel Artificial Intelligence (AI) methods which fuse high-performance mathematical simulations and traditional experimental data to build a virtual test pyramid. This virtual test pyramid will increase our confidence in making the ultimate engineering decision – ‘Is this plane safe to fly?’ – allowing the aerospace industry to build faster, lighter, more sustainable aircraft for the future.
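One way to picture the statistical idea of fusing abundant simulation evidence with a handful of expensive physical tests (a deliberately simplified sketch, not the fellowship's actual methods) is a conjugate Bayesian update: simulations inform a prior belief about a failure probability, and each full-scale test updates it. All numbers below are invented for illustration.

```python
def beta_update(alpha, beta, n_tests, n_failures):
    """Conjugate Bayesian update of a Beta(alpha, beta) belief about a
    failure probability after observing n_tests with n_failures."""
    return alpha + n_failures, beta + (n_tests - n_failures)

def beta_mean(alpha, beta):
    """Expected failure probability under a Beta(alpha, beta) belief."""
    return alpha / (alpha + beta)

# Prior encoding (hypothetical) simulation evidence: failure probability
# around 1%, worth roughly 200 virtual tests.
alpha0, beta0 = 2.0, 198.0

# A handful of expensive full-scale tests, all passed.
alpha1, beta1 = beta_update(alpha0, beta0, n_tests=5, n_failures=0)

prior_mean = beta_mean(alpha0, beta0)      # 0.01
posterior_mean = beta_mean(alpha1, beta1)  # pulled slightly below 0.01
```

The point of the virtual pyramid is exactly this leverage: a few physical tests move a simulation-informed belief only slightly, but they validate it, so the costly safety margins applied at every scale can be tightened with quantified confidence.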

Based between Exeter and the Data-Centric Engineering Programme at The Alan Turing Institute, Tim’s fellowship spans traditional research boundaries between applied mathematics and statistics to drive national research in AI forward. Tim’s vision is supported by significant matched investment and partnerships, including The Alan Turing Institute, GKN Aerospace, Boeing, University of Exeter, the Henry Royce Institute, the National Composites Centre, Simpleware, the Universities of Bristol, Cambridge, Florida and Heidelberg, MIT, and the Brilliant Club, a schools outreach programme.


Anna Scaife

Anna Scaife, University of Manchester, Turing AI Fellow

AI4Astro: AI for Discovery in Data Intensive Astrophysics

As the data rates from modern scientific facilities become larger and larger, it is no longer possible for individual scientists to extract scientifically valuable information from those data by hand. This is particularly important for the Square Kilometre Array (SKA) telescopes, where the expected data rates are so large that the raw data cannot be stored and even using the compressed data products will require a supercomputer. Consequently, in this era of big data astrophysics the use of machine learning to extract scientific information is essential to realise a timely scientific return from facilities such as the SKA. However, a number of issues must be addressed in order to translate machine learning techniques from other disciplines into astrophysics.

In this fellowship, Anna will look at how existing techniques can be adapted and extended in order to achieve the key science goals of the SKA telescope. This project will target the development of new machine learning approaches which address particular aspects of SKA scientific processing, such as: how do we deal efficiently with such large data volumes and in real time? How do we incorporate information from archival astronomical data into our machine learning? How can we ensure the correct statistical treatment of biases introduced into our data by observational and astrophysical selection effects? How can we ensure that the rarest and most extreme astrophysical objects are not discarded or missed in our processing?
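To illustrate the selection-effects question above with a toy example (my sketch, not the fellowship's methods): if brighter sources are more likely to be detected, a naive average over the detected sample is biased, but when the detection probability of each source can be modelled, inverse-probability weighting recovers the population value. The population and selection function below are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical source population: standardised log-luminosities, true mean 0.
lum = rng.normal(0.0, 1.0, 200_000)

# Selection effect: brighter sources are more likely to be detected.
p_detect = 1.0 / (1.0 + np.exp(-2.0 * lum))     # logistic in luminosity
detected = rng.random(lum.size) < p_detect

naive_mean = lum[detected].mean()               # biased high

# Inverse-probability weighting: each detected source stands in for
# 1 / p_detect sources like it in the underlying population.
w = 1.0 / p_detect[detected]
corrected_mean = np.sum(w * lum[detected]) / np.sum(w)
# corrected_mean is close to the true population mean of 0.
```

Real survey selection functions depend on many observational and astrophysical effects and must themselves be inferred, which is part of what makes the statistical treatment a research question rather than a routine correction.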


Yarin Gal

Yarin Gal, University of Oxford, Turing AI Fellow

Democratizing Safe and Robust AI through Public Challenges in Bayesian Deep Learning

Probabilistic approaches to deep learning AI, such as Bayesian Deep Learning, are already in use in industry and academia. In medical applications they are used to solve problems of AI safety and robustness by identifying when an AI system is guessing at random. But major obstacles stand in the way of wide adoption: expert knowledge in AI is required for practitioners to build safe and robust AI tools into their applications; further, expert knowledge in downstream applications themselves is required for AI researchers to identify gaps in current methodology.
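A minimal sketch of how a Bayesian deep learning method can flag "guessing at random": Monte Carlo dropout keeps dropout active at prediction time and reads the spread of the sampled outputs as a rough uncertainty signal. The tiny untrained network and all numbers below are placeholders for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network; the weights are random placeholders,
# standing in for a trained model.
W1 = rng.normal(0, 1, (1, 64))
W2 = rng.normal(0, 1, (64, 1))

def mc_dropout_predict(x, n_samples=200, p_drop=0.5):
    """Run the network many times with dropout left on, and summarise
    the sampled predictions by their mean and spread."""
    preds = []
    for _ in range(n_samples):
        h = np.maximum(x @ W1, 0.0)               # ReLU hidden layer
        mask = rng.random(h.shape) > p_drop       # random dropout mask
        h = h * mask / (1.0 - p_drop)             # inverted dropout scaling
        preds.append((h @ W2).item())
    preds = np.array(preds)
    return preds.mean(), preds.std()

mean, std = mc_dropout_predict(np.array([[0.5]]))
# A large std relative to the mean suggests the model is "guessing":
# a safe system can defer such inputs to a human expert.
```

Building checks like this correctly into a medical imaging pipeline is exactly the kind of expert knowledge the proposal aims to package into well-tested, open-source tools.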

To solve these problems, Yarin proposes to build new AI challenges that specifically assess safety and robustness, derived from real-world applications of AI in industry. As the community competes on these public challenges, contributed models will form an open-source toolbox of well-tested safe AI tools for practitioners to use, reducing the domain-expertise barriers to using safe and robust AI in industry. The challenges identified will set the course for a community-driven effort leading to a self-sustained ecosystem: practitioners will use open-source tools, tested for safety and robustness with metrics derived from downstream industry applications such as medical imaging; AI researchers will publish new results on benchmarks the project builds from these same metrics, comparing new AI tools to old ones, and will contribute their new tools as baselines to the toolbox for other researchers to compare against and for industry to use.

The public challenges will form a bridge between practitioners and AI researchers, which in turn will open up new research opportunities for the AI community, pushing research forward to develop new safe and robust AI tools available to all. This effort to democratize safe and robust AI, in alignment with the UK's strategic plan set out by Hall and Pesenti, will put the UK at the forefront of AI on the world stage.