Emerging technologies have the potential to create new national security challenges, as well as new opportunities. AI-based large language models (LLMs), for instance – which power chatbots such as ChatGPT – could present security risks if they are exploited by those seeking to cause harm, for example by tricking the LLM into generating malicious content (‘prompt-hacking’). Misuse of LLMs is likely to grow as this technology becomes more embedded in our lives, and is just one security risk currently being researched by the Turing’s Centre for Emerging Technology and Security (CETaS).
Launched in summer 2022, CETaS carries out research to help policy makers understand and respond to the risks and opportunities posed by AI and other emerging technologies. Other risks presented by AI include its potential to strengthen adversaries’ cyber offensive capabilities, and – as synthetic, AI-generated media becomes ever more sophisticated – its ability to exacerbate disinformation, fuelling division and conflict.
This is a challenging area for policy makers, not only because of the fast-moving nature of these complex technologies, but also because policies need to effectively balance mitigating security risks while at the same time maximising the many benefits these technologies offer for society.
The benefits of AI for the security community include opportunities to tackle emerging security risks more effectively, for example by enhancing cyber defensive capabilities, automating software development for use within the intelligence and national security community, supporting intelligence analysts by triaging vast amounts of data and identifying patterns, and collating large volumes of open-source information to provide insight to officials and decision makers.
Policy makers need reliable recommendations informed by a diverse range of perspectives, so our approach at CETaS is both multidisciplinary and evidence-based. This approach is critical for the security community to continue operating effectively in a rapidly changing technological and threat environment.
CETaS is already having real-world impact. For example, we recently developed a new assessment framework that allows national security stakeholders and oversight bodies to determine whether the use of AI by national security and law enforcement agencies is proportionate and justified. Our report’s proposed new framework was described by Lord Anderson, the independent reviewer of the Investigatory Powers Act 2016, as a “significant contribution” to the debate, and was subsequently cited in his independent review and accompanying speech in Parliament.
Producing research and guidance is only one strand of the Centre’s work. Convening and stakeholder engagement both play vital functions in informing policy. As an academic, non-government research centre working closely with government, CETaS is uniquely placed to bring together stakeholders from across sectors to explore topics of common concern, including stakeholders who may not otherwise engage with government. These interactions are crucial for policy makers to gain an understanding of different perspectives on emerging security issues.
For example, CETaS’s forthcoming report on ‘The future of privacy-by-design technology’ is based on in-depth consultation with representatives from across internet standards development organisations, civil society organisations and academia, and explores the balance between privacy and security in the development of future internet and encryption protocols: one of the most contentious topics in the technology and security landscape. CETaS approaches these difficult topics with impartiality and academic rigour, ensuring an inclusive outlook throughout all research and engagement activities.
Lastly, interdisciplinarity and diversity are core aspects of the Centre’s work. Our interdisciplinary way of working enriches our research, allowing us to have wider applicability. Our approach to diversity and inclusion is guided not only by the Turing’s EDI values, but also by the fact that better diversity ultimately leads to better policy development. This is particularly important in national security because it enables innovative thinking that allows decision makers to stay one step ahead of adversaries who intend to cause harm to our societies and democracies.
In a fast-changing world, where emerging technologies have the potential to fundamentally transform many aspects of the security landscape, the Centre’s work is helping policy makers, researchers and practitioners to navigate this complex environment in an evidence-based and inclusive way.
Ongoing and upcoming CETaS projects include the most comprehensive study to date of the security implications of generative AI systems like ChatGPT, developing a policy roadmap for future biometric technologies, and research into how to communicate the use of AI-enriched intelligence to strategic decision makers. We would welcome your engagement in our work, to help us deliver cutting-edge research that helps to keep the UK safe.
Top image: your123