UK law enforcement lacks the tools to effectively tackle AI-enabled crime and must adopt a more proactive, AI-driven approach, according to a report published today by the Alan Turing Institute’s Centre for Emerging Technology and Security (CETaS), which calls for the creation of a dedicated AI Crime Taskforce within the National Crime Agency.
While criminal use of AI remains at an early stage, researchers say there is widespread emerging evidence of a substantial acceleration in AI-enabled crime, particularly in areas such as financial crime, child sexual abuse material, phishing and romance scams.
There are specific concerns that Chinese innovation in frontier AI is significantly reshaping the threat landscape, with criminals exploiting new open-weight models with fewer guardrails to carry out more advanced tasks.
The research argues that the acceleration in AI-enabled crime is being driven by AI’s ability to automate, augment and rapidly scale the volume of criminal activity, with greater diffusion of AI between the state, private sector and criminal groups leading to more criminal innovation. Because AI systems and models are easy for criminal groups to exploit, these technologies are becoming effective ‘partners’ in achieving their objectives.
The report provides the UK national security and law enforcement community with an understanding of how the proliferation of AI systems is reshaping the landscape of serious online criminality, and equips them with the tools to plan and better position themselves to respond to novel threats over the next 5 years.
It makes several recommendations for actions that government and law enforcement can take to counter AI-enabled crime, including through more effective coordination and targeting of resources, and more rapid adoption of AI itself. The authors argue for:
A new AI Crime Taskforce to be established within the National Crime Agency’s National Cyber Crime Unit to coordinate the national response to AI-enabled crime. The Taskforce should collate data from across UK law enforcement to monitor and log criminal groups’ use of AI, work with national security and industry partners on strategies to raise barriers to criminal adoption, and rapidly scale up adoption of AI tools for proactive disruption of criminal networks.
Closer cooperation with European and international law enforcement partners to ensure compatibility in approaches to deterring, disrupting and pursuing criminal groups leveraging AI, with a new working group in Europol focused on AI-enabled crime.
Law enforcement intelligence assessments to more systematically inform AI evaluation and testing, with the objective of minimising AI models’ compliance with criminal requests.
Ardi Janjeva, Senior Research Associate at the Alan Turing Institute and an author on the report said: “As AI tools continue to advance, criminals and fraudsters will exploit them, challenging law enforcement and making it even more difficult for potential victims to distinguish between what’s real and what’s fake. It’s crucial that agencies fighting crime develop effective ways to mitigate this including combatting AI with AI.”
Joe Burton, Professor of International Security at Lancaster University and author of the report said: “AI-enabled crime is already causing serious personal and social harm and big financial losses. We need to get serious about our response and give law enforcement the necessary tools to actively disrupt criminal groups. If we don’t, we’re set to see the rapid expansion of criminal use of AI technologies.”
The report is based on research conducted by the project team over a 4-month period, involving extensive consultation and interviews with government, industry, and law enforcement in the UK and Europe.
Top image credit: Getty Images