The public sector stands to benefit hugely from AI, and defence and national security departments are no different. The problem is that the AI systems they need are often the product of industry partnerships, which carry significant risks. At The Alan Turing Institute, we are exploring new ways to help security services safely navigate these risks.
Given the challenges they face, national security bodies are poised to make the most of what AI has to offer. Automated processing, for instance, can help tackle acute information overload, providing a faster way to sift through disparate, open-source and classified datasets for insights that might be crucial to public safety. Experts have identified numerous uses for AI in the sector, from defending critical national infrastructure from cyberattacks to identifying dangerous content online. Meanwhile, GCHQ’s chief data scientist has flagged several future uses for the large language models that power AI tools like ChatGPT – as soon as safety issues can be addressed.
Unfortunately, designing the latest AI systems from scratch requires time and money that the public sector simply does not have. For example, despite receiving initial funding of £100 million, the UK government’s Frontier AI Taskforce would require ten times this amount to match the annual budget of Google’s DeepMind.
The national security community recognises this problem and acknowledges that it will instead have to turn to industry to meet its technology needs. GCHQ director Jeremy Fleming acknowledged that “companies not governments have rightly led the way” when it comes to new technology, while MI6 chief Richard Moore argued that “we cannot match the scale and resources of the global tech industry, so we shouldn’t try. Instead, we should seek their help.”
How, though, can we trust industry to provide AI systems that meet the stringent requirements of security services? The risks associated with AI are already heightened in high-stakes national security contexts and decision makers must be particularly vigilant in identifying how these technologies could go wrong. For example, how might an AI system hold up to malicious attack? How can we be sure that the system is free from unacceptable bias against already disadvantaged members of the public? And how might new data processing methods affect people’s privacy, potentially exacerbating existing public concern about the normalisation of surveillance?
Answering these questions becomes even harder when the public sector customer doesn’t have easy access to the relevant information and must instead assess a “black-box” AI system designed by an unknown development team that guards its trade secrets closely and may itself have purchased data and hardware from elsewhere.
National security bodies are not short of private sector developers wanting to work with them. Time and again, however, public-private partnerships on AI have resulted in media controversy. This controversy has sometimes centred on claims that tech companies have exaggerated the capabilities of their AI products – an accusation levelled at AI start-up Rebellion Defence. In other cases, the focus has rested on what might happen to sensitive data when it is made available to the private sector. For example, in the wake of the NHS’s deal with the data analytics company Palantir, journalists asked “are we handing our health data to Big Brother?” But we are right to hold governments to high standards when it comes to outsourcing AI – especially when sensitive data is involved.
The difficulty for governments is building the skills required to effectively decipher industry pitches and ensure public trust in their procurement decisions is warranted. If security services want to harness industry AI, they must find ways to gain more robust evidence from companies about their AI products and then assess this evidence thoroughly against their requirements. It is this problem that our research at the Turing’s Centre for Emerging Technology and Security (CETaS) is designed to address.
This month, we published a new report that lays out an assurance framework to help national security bodies assess the risks associated with AI industry partners. The report calls on security services to adopt a newly structured “system card” – a template that guides users through best practice processes for documenting the ethical, legal, performance and security characteristics of AI systems. On top of this, we call on national security bodies to demand more transparency from industry during contract negotiations and to invest heavily in internal skills to effectively review evidence and identify risks early on – before AI systems are purchased, money is wasted and, at worst, systems are deployed that don’t function as they should.
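To make the idea of a system card more concrete, here is a minimal, purely illustrative sketch in Python of how such a template might be represented and checked for evidence gaps. The field names, groupings and the missing_evidence helper are hypothetical examples prompted by the characteristics mentioned above (ethical, legal, performance and security); they are not the structure defined in the CETaS report.

```python
# Purely illustrative sketch of a "system card" record. Field names are
# invented for illustration and do not reflect the CETaS report's template.
from dataclasses import dataclass, field


@dataclass
class SystemCard:
    """Hypothetical record documenting a third-party AI system's characteristics."""
    system_name: str
    supplier: str
    intended_use: str
    # Ethical characteristics, e.g. results of bias testing across groups
    bias_testing_summary: str = "not provided"
    # Legal characteristics, e.g. lawful basis for processing personal data
    data_protection_basis: str = "not provided"
    # Performance characteristics, e.g. accuracy on an agreed evaluation set
    evaluation_results: dict = field(default_factory=dict)
    # Security characteristics, e.g. robustness to adversarial inputs
    adversarial_testing_summary: str = "not provided"
    # Supply chain: where training data and hardware were sourced from
    third_party_components: list = field(default_factory=list)

    def missing_evidence(self) -> list:
        """Return the areas a reviewer still needs evidence for."""
        gaps = []
        if self.bias_testing_summary == "not provided":
            gaps.append("bias testing")
        if self.data_protection_basis == "not provided":
            gaps.append("data protection basis")
        if not self.evaluation_results:
            gaps.append("performance evaluation")
        if self.adversarial_testing_summary == "not provided":
            gaps.append("adversarial/security testing")
        return gaps


# Example: a reviewer flags gaps before contract negotiations conclude.
card = SystemCard(
    system_name="Example triage model",
    supplier="Example vendor",
    intended_use="Prioritising open-source reports for analyst review",
)
print(card.missing_evidence())
# ['bias testing', 'data protection basis', 'performance evaluation', 'adversarial/security testing']
```

The point of a structured template like this is that gaps in the evidence become visible before a contract is signed, rather than after a system has been deployed.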
If national security bodies can identify potential issues with industry-designed AI before it is too late, they will be well-positioned to harness privately developed AI systems for public good – whether through more efficient administration that saves taxpayers’ money, autonomous defence systems that can better protect critical national infrastructure from cyberattacks, or better and faster predictions that identify public safety risks earlier.
Read the report:
Assurance of third-party AI systems for UK national security
Top image: Nitiphol