The US, UK and their allies could fall behind adversaries in their ability to predict and mitigate crises unless they maximise AI's capacity to inform the intelligence assessments that underpin strategic decision making, according to new research by the Alan Turing Institute’s Centre for Emerging Technology and Security (CETaS) and the US-based Special Competitive Studies Project.
Published today (Thursday, 27 March), the research analyses the potential for future advanced AI systems to provide the intelligence community with early warnings, against a backdrop of growing AI competition between Western democracies and nations like China, Russia, North Korea and Iran.
Intelligence gathering is becoming ever more complex, with human analysts dealing with huge quantities of information and data, prompting some early experimentation with AI within security services.
But the technology faces challenges, from scarce and inconsistent data to the difficulty of modelling the decisions of unpredictable individuals.
While noting that developing new systems would be a “costly, time-consuming and politically sensitive project”, the research authors say it could prove pivotal in maintaining advantage over aggressors in future conflicts or crises.
The report ‘Applying AI to Strategic Warning’ presents a three-phase plan which would tackle data challenges, leverage the best AI models and ultimately create an AI-enabled strategic warning system:
- In phase one, substantial efforts are needed to improve how geopolitical event data is collected and standardised, including using non-traditional data sources to make sense of complex human behaviour.
- In phase two, governments and industry must develop and refine advanced AI models to help the intelligence community analyse geopolitical events, with outputs cross-checked by human analysts and refined over time.
- Phases one and two would pave the way for phase three: the development of an AI simulation platform that can model different scenarios to predict geopolitical risks and show how conflicts might erupt.
The research sets out the “price and payoff” of this work, looking at both the costs of carrying it out and the opportunity costs of not adopting AI for strategic warning. The authors note that, given the scope of this challenge, no single government or organisation can drive the entire process alone.
While existing programmes like the US’s IARPA (Intelligence Advanced Research Projects Activity) and the UK’s ARIA (Advanced Research and Invention Agency) offer pathways for funding and collaboration, the burden is simply too large for any one nation to bear. Consequently, a partnered approach is recommended, either bilaterally between the US and UK or across the Five Eyes alliance.
Anna Knack, Senior Research Associate at the Alan Turing Institute, said: “The stakes couldn’t be higher in the face of threats from both hostile states and technologically sophisticated non-state actors, at the same time as the emergence of Chinese AI models which could be closing the gap on US dominance of this technology.
“Despite the many challenges of creating AI for strategic warning, the cost to our societies of failing to seize this opportunity could be substantial.”
Dr Nandita Balakrishnan, Director for Intelligence at the Special Competitive Studies Project, said: "Providing strategic warnings to policymakers is an essential role of the intelligence community. Just as AI has shifted the threat landscape, it will necessitate a shift in the tools in the intelligence analyst's toolbox. AI can help intelligence analysts provide crucial warnings earlier, and there are a lot of exciting developments underway in this space that the IC can leverage."