What is your role at the Turing?
I have the pleasure of leading a research group focused on AI security in support of the UK’s Laboratory for AI Security Research (LASR). Announced in November 2024, this partnership between government, industry and academia has the primary aim of supporting the secure adoption of AI technologies in line with national security objectives. In my varied role as lead researcher, I work across technical research, external engagement and team leadership.
Tell us about your journey before you joined the Turing…
I have taken a winding path throughout my career. I started with a BA in Digital Art and Technology at the University of Plymouth where I learnt to come up with interesting ways to solve problems. I then decided to have a crack at creating a start-up (no millions for me unfortunately) where I learnt how to fail.
Following this, I moved into the defence and security sector, initially within government (the Ministry of Defence and the Defence Science and Technology Laboratory). I then progressed into industry, conducting national security-focused research for Roke and spending a short stint at a start-up before joining the Turing in 2024!
What are you currently working on?
I’m supporting my team to explore three key areas:
- What are the security implications of the design and deployment choices we make with AI?
There are countless ways to approach AI design and deployment, yet little research has focused on identifying best practices for balancing security with performance.
- What can we learn from existing security research and approaches?
Security has been heavily researched in other domains, and we believe AI systems can benefit from these insights. At the same time, we also want to pinpoint what's truly unique about AI security.
- How can interpretability techniques help us address questions about AI security?
So far, research on AI interpretability (i.e. techniques for understanding and explaining how AI models make decisions) has generally been geared towards explaining outputs to end users, such as why an image of a cat is classified as a cat. Our goal is to apply these approaches specifically to security-related questions.
Our ultimate aim is not only to determine whether an AI model is vulnerable to threats, but also to explain why it is vulnerable and how to go about securing it. For example, we could answer "Is my AI chatbot vulnerable to a prompt injection attack?" with reasons for its vulnerability and advice on protecting the system. This is a tall order, but we're looking forward to the challenge!
Alongside this work, I also write a weekly newsletter called Top 5 Security and AI Reads where I spotlight five research papers in this area and provide commentary on why they’re good reads and who may be interested. The paper topics are wide-ranging, so there should be something to suit most tastes!
What motivates your work?
I am driven by the prospect of helping government and wider industry to adopt AI. AI has huge potential, but we need to ensure it is secure.
What has been your highlight at the Turing so far?
The people. I am continually surprised by the variety of research happening at the Turing.
When not working, what can you be found doing?
You will usually find me spending time with my young family – either making Lego, playing a Lego-themed game or standing on Lego. I also enjoy writing, playing first-person shooters, and am studying part-time for a PhD.