Urgent action is needed to secure the UK’s AI research ecosystem against hostile state threats

Friday 07 Mar 2025

Urgent action is needed to secure the UK’s AI research ecosystem against hostile state threats such as espionage, theft and duplicitous collaboration, according to a report published today by the Alan Turing Institute’s Centre for Emerging Technology and Security (CETaS).

The research is the first study of its kind to focus on AI, amid growing fears that the UK’s world-leading AI research is a particularly high-priority target for state threat actors seeking technological advantage.

Concern about hostile states acquiring AI research is heightened due to the use of sensitive datasets, the dual-use nature of the technology (which can be repurposed and applied to tasks that were not originally intended) and the possibility of reverse engineering (for example, tools designed to counter misuse of AI systems being converted to help attackers evade detection).

The report argues that awareness of security risks is inconsistent across the academic sector, and that researchers have few incentives to follow existing government guidance on research security.

Culture change is therefore urgently needed, including balancing the tension between research security and the pressure academics face to publish their research.

The report also highlights the difficulties academics face both in assessing the risks of their research – for example, future misuse – and in carrying out time-consuming due diligence on international research partners, all without a clear view of the current threat landscape.

Megan Hughes, Research Associate at the Alan Turing Institute and lead author of the report, said: “Furthering AI research is rightly a top priority for the UK, but the accompanying security risks cannot be ignored as the world around us grows ever more volatile. Academia and the government must commit to and support this long overdue culture change to strike the right balance between academic freedom and protecting this vital asset.”

The report argues that an urgent, coordinated response between the UK Government and the higher education sector is needed, and offers 13 recommendations to help government and academia build the resilience of the UK’s AI research ecosystem.

Recommendations for Government include regular guidance from the Department for Science, Innovation and Technology (DSIT), with support from the National Protective Security Authority (NPSA), on which international institutions are deemed high-risk for funding agreements and collaborations, and more dedicated funding to grow the Research Collaboration Advice Team to support academic due diligence.

The authors want to see the NPSA and the National Cyber Security Centre (NCSC) engage more widely with UK-based publishing houses, academic journals and other research bodies on the threats they face, and offer tailored support to develop research security-minded policies. The NPSA is also urged to declassify and publish case studies of threats that have been intercepted or disrupted.

The report also urges UK Research and Innovation (UKRI) to provide grant funding opportunities for research security activities.

Amongst the recommendations for academia, the report’s authors believe that all academic institutions should be required to deliver NPSA-accredited research security training to new staff and postgraduate research students as a prerequisite for grant funding.

The academic sector should develop a centralised due diligence repository to inform research partnerships and collaborations, hosted by a trusted partner such as Universities UK (UUK) or UKRI.

The report’s authors also want to see pre-publication risk assessment for AI research standardised across major AI journals and academic publishing houses, aligned with existing research ethics review processes.

You can read the full report here.