In a respected UK university, a senior academic develops an AI tool that can accurately identify and replicate a person’s handwriting. As part of her funding agreement, she is required to publish her training data and source code in a public repository on GitHub.
A year later, she sees a story on her news app about a large-scale, targeted attack on UK government staff, who have had their personal details stolen and sold on the dark web. Sources say it is likely that a sophisticated AI tool was used to forge their signatures to access personal documents…
This is a fictional scenario, but it illustrates one of the ways in which academic AI research could be vulnerable to state threats – activity that falls short of direct armed conflict but which nevertheless harms or threatens our national security.
The UK government has signalled its intent to harness AI as a tool to grow the economy and improve public services, and the UK is already a key player in AI research and development (R&D).
But as nations race to develop their AI capabilities, MI5 and the Federal Bureau of Investigation have issued warnings to universities that state actors are using espionage, intellectual property theft and duplicitous collaboration to keep pace with R&D and undermine UK security.
As a ‘dual-use technology’, i.e. one that can be used for both civilian and military purposes, AI is especially vulnerable to being repurposed for tasks it was never originally intended to perform. The potential misuses of dual-use technology are difficult to predict, so AI researchers should be mindful that any of their work could be put to harmful purposes.
Tools designed to counter misuse of AI systems can also be used by attackers to help them evade detection, while datasets used in the development and deployment of AI models are often sensitive and of high value to malicious actors.
Calling for culture change
In a new report published by the Turing’s Centre for Emerging Technology and Security (CETaS), we call for an urgent, coordinated response from the UK government and the higher education sector to secure our AI research ecosystem against state threats.
While we recognise that openness and collaboration are an important part of R&D, we need to balance academic freedom with research security. It is important that research security is seen as something that empowers rather than hinders high-quality research.
Currently, several barriers stand in the way of this view. Researchers can face huge professional pressure to make their work publicly available, and there is frustration at the administrative burden of carrying out time-consuming due diligence processes. There is also often little incentive for researchers to follow existing government guidance on research security, and they frequently have to make personal judgements about the risks of their work – difficult when the nature of these threats is in constant flux.
In our report, we outline a series of recommendations for the UK government, including that the Department for Science, Innovation and Technology, with support from the National Protective Security Authority (NPSA), should provide research-intensive universities with regularly updated guidance on international institutions deemed high-risk for funding agreements and collaborations.
There should also be dedicated funding to grow the Research Collaboration Advice Team – a key conduit of information between the government and academia – which should be empowered to further support academic due diligence. In addition, the NPSA should declassify and publish case studies of relevant threats that have been intercepted or disrupted.
What can academic institutions do?
Importantly, the UK’s academic community also has a central role to play in building and maintaining a sector-wide culture of risk awareness and security-mindedness.
In particular, we recommend that:
- Academic institutions should deliver mandatory research security training (based on Trusted Research guidance) to new staff and postgraduate research students as a prerequisite of grant funding. This training should be accredited by the NPSA.
- The academic sector should develop a centralised due diligence repository to document risks and inform decision-making on AI research partnerships and collaboration. This repository should be hosted by a trusted partner, such as Universities UK or UK Research and Innovation.
- Research-intensive universities should set up research security committees to help academics conduct risk assessments of their work on AI (and other critical technologies).
- Major AI journals and academic publishing houses should standardise pre-publication risk assessment for AI research, in line with existing processes for reviewing research ethics.
We have also developed a tool aimed at helping academic institutions and AI researchers to assess the maturity of their risk mitigation measures in response to the threat landscape. The higher the level of maturity, the better the institution’s resilience to research theft, acquisition and interference.

AI innovation is moving at breakneck pace, so we can waste no time. We need to ensure that the UK’s research ecosystem grows in a resilient way, so that our good work does not fall into the wrong hands.
Read the full report here.
Top image: okskaz