Defence Artificial Intelligence Research (DARe)

The Defence AI Research Centre (DARe) provides strategic advantage for UK Defence and National Security by partnering with government, industry and academia to develop novel AI technologies.

We accelerate capability development, from concept to adoption, through co-creating tools, tactics and novel AI techniques with our partners.

Our research is varied: we train AI models on key defence datasets, develop novel, state-of-the-art algorithms, and build open-source software stacks and toolboxes to enhance the development, deployment and uptake of machine learning and AI models in defence.

Through this work, we develop a deep practical understanding of the state-of-the-art, identifying gaps in capability, challenges in deployment, and emerging technological developments at the rapidly evolving intersection of AI and defence.

Explaining the science

Our work spans the full range of defence domains: from the bottom of the oceans, where we are enhancing Arctic security by optimising sensor placement for monitoring the High North; to the air, where we are developing autonomous replanning for asymmetric swarming of drones; to space, where we are automating space object behavioural analysis by characterising spacecraft motion, cataloguing objects, managing space traffic, and inferring satellite operator intent.

Underpinning our research is an ethos of enabling the users of our tools to make responsible, trustworthy decisions in high-consequence, time-pressured scenarios. To this end, we develop explainable and interpretable AI approaches, including recommendations for which data should be prioritised for human review and what new data should be acquired to increase decision confidence. In doing so, we combine the advantages of automated AI analysis with thorough technical subject-matter expertise.
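
As a minimal illustration of this kind of prioritisation (a sketch only, not DARe tooling), model predictions can be ranked by their entropy so that the least confident cases are surfaced to a human reviewer first:

```python
# Illustrative sketch (not DARe code): rank model outputs by predictive entropy
# so the least-confident cases are prioritised for human review.
import numpy as np

def review_priority(probabilities: np.ndarray) -> np.ndarray:
    """Return sample indices ordered from least to most confident.

    probabilities: array of shape (n_samples, n_classes) with rows summing to 1.
    """
    eps = 1e-12
    entropy = -np.sum(probabilities * np.log(probabilities + eps), axis=1)
    return np.argsort(-entropy)  # highest entropy (least certain) first

# Hypothetical example: three predictions over two classes
probs = np.array([[0.55, 0.45],   # uncertain -> review first
                  [0.99, 0.01],   # confident
                  [0.70, 0.30]])
print(review_priority(probs))  # [0, 2, 1]
```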

AI for Space and Aeronautics

The strategic importance of space as an increasingly critical and contested domain has been starkly underscored by the conflict in Ukraine, which has revealed the major advantages gained through space-based superiority. To maintain strategic advantage for the UK and its allies, our AI for Space and Aeronautics team spearheads the development of capabilities for AI-enabled missions and safeguards essential space infrastructure against evolving threats. The team's work enhances mission effectiveness through AI for greater spacecraft autonomy, including autonomous replanning, optimisation of data downlink schedules, and automated analysis of satellite payloads such as cameras, sensors and communication systems.

With over 30,000 tracked objects in orbit and a growing commercial presence alongside state actors, the congested space environment generates huge volumes of data daily, making manual analysis impractical. Beyond congestion, space assets also face environmental threats: a recent geomagnetic storm, for example, deorbited 38 of 49 Starlink satellites shortly after launch, highlighting how space weather can increase atmospheric drag, disrupt communications and cause satellite anomalies.

To enhance situational awareness and security, the team is addressing these challenges by developing AI to automate the analysis of space object behaviour, which is vital for cataloguing objects, managing space traffic, understanding intent, and enabling rapid response during asset tasking. The team also advances predictive capabilities across multiple scales, from short-term predictions crucial for satellite operations and manoeuvres to long-term forecasts essential for mission planning.

Counter AI

Our Counter AI team builds understanding of how adversaries might target AI systems and how we can develop effective countermeasures, conducting essential research on sub-threshold threats and strengthening resilience against them.

We study machine learning attack vectors in a defence context, encompassing a thorough analysis of methods used by adversaries to deceive, obfuscate, deflect or disrupt AI, including physical attacks, malicious input data, model manipulation and attacks on the model’s architecture and training.
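
As one small illustration of the "malicious input data" category (a generic sketch under standard assumptions, not DARe tooling), the fast gradient sign method perturbs an input in the direction that most increases a model's loss:

```python
# Illustrative sketch (not DARe code): the fast gradient sign method (FGSM),
# a simple example of a malicious-input attack that nudges an input towards
# a wrong prediction.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, label, epsilon=0.03):
    """Return an adversarially perturbed copy of input x."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that increases the loss, then clip to a valid image range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage with a small stand-in model
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)   # a single hypothetical image in [0, 1]
label = torch.tensor([3])      # its (hypothetical) true class
x_adv = fgsm_perturb(model, x, label)
```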

AI for Arctic Security

Climate change is rapidly transforming the Arctic into a region of increasing competition for resources and territory. Our research into AI for Arctic Security explores how AI can be used to enhance security in the High North by optimising the deployment of limited resources to detect hostile activity and protect key locations and infrastructure.

Effectively protecting the interests of the UK and its allies, and upholding international law, in the Arctic requires multi-domain operations that enable interoperability between international organisations with varied expertise across the sea, air and space domains. To achieve this, our team devises methods that fuse heterogeneous and multimodal data sources into AI analyses across multiple scales and resolutions, whilst accommodating the extreme weather and interference inherent in operating in the Arctic environment.
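
A minimal sketch of the sensor-placement aspect of this work, assuming a simple coverage model and entirely hypothetical sites, is the standard greedy heuristic that repeatedly picks the location covering the most currently unmonitored sites:

```python
# Illustrative sketch (not DARe code): greedy sensor placement that maximises
# coverage of monitored sites, a common baseline for this class of problem.
def greedy_placement(candidate_coverage, budget):
    """Pick up to `budget` sensor locations, each step adding the location
    that covers the most currently uncovered sites."""
    remaining = {loc: set(sites) for loc, sites in candidate_coverage.items()}
    chosen, covered = [], set()
    for _ in range(budget):
        best = max(remaining, key=lambda loc: len(remaining[loc] - covered), default=None)
        if best is None or not (remaining[best] - covered):
            break  # nothing left that adds coverage
        chosen.append(best)
        covered |= remaining.pop(best)
    return chosen

# Hypothetical example: which sites each candidate sensor location can observe
coverage = {"A": {"s1", "s2"}, "B": {"s2", "s3", "s4"}, "C": {"s4"}}
print(greedy_placement(coverage, budget=2))  # ['B', 'A']
```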

Multi Agent and Autonomous Systems

AI is increasingly being deployed on edge devices such as drones and satellites to enable rapid, real-time acquisition and analysis of data. This use of AI is essential for automating remote sensing, monitoring system health and enabling autonomous sub-system operations.

Our Multi Agent and Autonomous Systems team is advancing capabilities for rapid, resilient and performant data processing and decision making at the edge. We are creating distributed computing approaches that optimise resource allocation and computation, enhancing the robustness and efficiency of AI systems. Our expertise in embedded hardware and software for multi-purpose, multi-sensor autonomous systems is paving the way for the deployment of next-generation edge AI.

The Electromagnetic Environment

The electromagnetic environment is critically important in defence for communication, navigation and situational awareness. The research of our Electromagnetic Environment team enables rapid and intelligent analysis of radio frequency data in both the radar and communications domains. Improving our understanding and management of the electromagnetic environment is crucial to enable communication in congested and contested environments, helping us to determine adversaries’ capabilities and intentions, predict tactical actions and detect weapons systems. 

Our team develops AI methods to understand patterns of life and to automatically detect anomalies, outliers and edge cases, providing enhanced threat assessments.
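
As an illustration only (not the team's code), anomalies in simple per-pulse features such as pulse width, carrier frequency and pulse repetition interval can be flagged with a standard detector such as an Isolation Forest; the features and values below are hypothetical:

```python
# Illustrative sketch (not DARe code): flag radio-frequency pulses that fall
# outside a learned pattern of life, using an Isolation Forest on per-pulse features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: pulse width (us), carrier frequency (MHz), pulse repetition interval (us)
baseline = rng.normal([1.0, 9_400.0, 1_000.0], [0.05, 5.0, 10.0], size=(500, 3))
new_pulses = np.vstack([
    rng.normal([1.0, 9_400.0, 1_000.0], [0.05, 5.0, 10.0], size=(5, 3)),
    [[3.5, 9_700.0, 250.0]],  # something outside the learned pattern of life
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)
print(detector.predict(new_pulses))  # -1 marks suspected anomalies
```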

Synthetic Environments

Through the development of digital twins of defence systems and the production of multi-perspective synthetic environments, we improve our AI models' performance on detection and classification tasks involving multi-modal, intermittent and noisy datasets.

The Synthetic Environments team supports our US partners in the DEVCOM ITC and ARL through projects focused on improving robustness to unseen scenarios for vehicle detection and classification in infrared data, and on improving accuracy on degraded data for human action recognition in electro-optical scenarios. We are advancing approaches for handling sparse, irregularly sampled and missing data, enhancing uncertainty quantification, and augmenting the visualisation and communication of analyses.
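
As a minimal sketch of one common way to handle irregularly sampled, partially missing readings (illustrative only, not the team's implementation), a last-observation-carried-forward fill can be paired with an explicit mask recording which values were actually observed:

```python
# Illustrative sketch (not DARe code): forward-fill missing samples and keep a mask
# so downstream models can distinguish measured values from imputed ones.
import numpy as np

def forward_fill_with_mask(values: np.ndarray):
    """values: 1D array with NaN marking missing samples."""
    observed = ~np.isnan(values)
    filled = values.copy()
    last = np.nan
    for i, v in enumerate(values):
        if np.isnan(v):
            filled[i] = last
        else:
            last = v
    return filled, observed

readings = np.array([0.8, np.nan, np.nan, 1.1, np.nan, 0.9])
print(forward_fill_with_mask(readings))
```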

Publications

A Self-Supervised Framework for Space Object Behaviour Characterisation

Here, we demonstrate how self-supervised learning can simultaneously enable anomaly detection, motion prediction, and synthetic data generation from rich representations learned in pre-training. Our work therefore supports space safety and sustainability through automated monitoring and simulation capabilities.

Learn more
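
As a rough illustration of how learned representations can support anomaly detection (a generic sketch, not the paper's method), behaviours can be scored by their embedding distance to previously seen examples, with the self-supervised encoder itself assumed to already exist:

```python
# Illustrative sketch (not the paper's method): score anomalies by mean distance
# to the k nearest training embeddings produced by a (hypothetical) pretrained encoder.
import numpy as np

def anomaly_scores(train_embeddings: np.ndarray, query_embeddings: np.ndarray, k: int = 5) -> np.ndarray:
    """Mean distance from each query embedding to its k nearest training embeddings."""
    dists = np.linalg.norm(query_embeddings[:, None, :] - train_embeddings[None, :, :], axis=-1)
    nearest = np.sort(dists, axis=1)[:, :k]
    return nearest.mean(axis=1)  # larger score = more anomalous behaviour
```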

Improving Object Detection by Modifying Synthetic Data with Explainable AI

Our proof-of-concept results pave the way for fine-grained, XAI-controlled curation of synthetic datasets tailored to improve object detection performance, whilst simultaneously reducing the burden on human operators in designing and optimising these datasets.

Learn more

An AI blue team playbook

In a fiercely competitive landscape, we are deploying AI systems faster than they can be security tested and defended. Our playbook sets out the historical context of blue teaming, the blue teaming process, lessons learned and hypothetical examples, serving as a starting point for embedding security at the heart of AI-enabled systems.

Learn more

An AI red team playbook

Complementing An AI Blue Team Playbook, this paper sets out the historical context of red teaming, the red teaming process and lessons learned, serving as a starting point for proactively identifying weaknesses and enhancing the overall performance, security and resilience of AI-enabled systems.

Learn more

Radar Pulse Deinterleaving with Transformer Based Deep Metric Learning

When receiving radar pulses, it is common for a recorded pulse train to contain interleaved pulses from many different emitters. In this paper, we define the deinterleaving problem, present metrics that can be used to measure model performance, and develop a transformer-based deep metric learning model to separate pulses by emitter.

The model achieves strong results in comparison with other deep learning models, with an adjusted mutual information score of 0.882.

Learn more
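
The adjusted mutual information score quoted above measures agreement between a predicted grouping of pulses and the true emitter labels while correcting for chance agreement; a minimal example with hypothetical labels:

```python
# Illustrative sketch: comparing a deinterleaver's cluster assignments against
# the true emitter labels with adjusted mutual information (AMI).
from sklearn.metrics import adjusted_mutual_info_score

true_emitters     = [0, 0, 1, 1, 2, 2, 2]   # which emitter each pulse came from
predicted_cluster = [1, 1, 0, 0, 2, 2, 0]   # clusters assigned by a deinterleaver
print(adjusted_mutual_info_score(true_emitters, predicted_cluster))  # 1.0 would be a perfect match
```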

2DSig-Detect: a semi-supervised framework for anomaly detection on image data using 2D-signatures

The rapid advancement of machine learning technologies raises questions about the security of machine learning models.

Using 2DSig-Detect for anomaly detection, we show both superior performance and a reduction in the computation time to detect the presence of adversarial perturbations in images.

Learn more

Artificial Intelligence in Wargaming

This report explores two possible investment pathways for AI in wargaming: 1) narrow, specialised AI applications for the near-term, and 2) high-risk, high-reward AI investments.

We conclude that AI could bring significant benefits to wargaming, but that there would be benefit in first introducing automation specifically in tactical or abductive wargames in the near term to manage risks.

Learn more

SHARDeg: A Benchmark for Skeletal Human Action Recognition in Degraded Scenarios

Computer vision (CV) models for detection, prediction or classification tasks operate on video data-streams that are often degraded in the real world, due to deployment in real-time or on resource-constrained hardware.

Here we address this issue for skeletal human action recognition (SHAR) by providing an important first data degradation benchmark on the largest and most detailed open 3D dataset, NTU-RGB+D-120, and assess the robustness of five leading SHAR models to three forms of degradation that represent real-world issues.

Learn more
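
As an illustration of the kind of degradation such a benchmark considers (a sketch only, not the benchmark's code), a reduced frame rate can be simulated by temporally downsampling a skeleton sequence of shape (frames, joints, xyz):

```python
# Illustrative sketch: simulate a lower frame rate by keeping only every n-th frame
# of a (frames, joints, 3) skeletal pose sequence.
import numpy as np

def downsample_frames(skeleton_sequence: np.ndarray, keep_every: int = 3) -> np.ndarray:
    """Retain every `keep_every`-th frame of the sequence."""
    return skeleton_sequence[::keep_every]

sequence = np.random.rand(300, 25, 3)          # 300 frames, 25 joints, xyz coordinates
degraded = downsample_frames(sequence, keep_every=3)
print(sequence.shape, "->", degraded.shape)    # (300, 25, 3) -> (100, 25, 3)
```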

On-board Mission Replanning for Adaptive Cooperative Multi-Robot Systems

Cooperative autonomous robotic systems have significant potential for executing complex multi-task missions across the space, air, ground and maritime domains. However, they commonly operate in remote, dynamic and hazardous environments, so fast, on-board replanning algorithms are needed to enhance resilience.

Here we define the Cooperative Mission Replanning Problem as a novel variant of the multiple travelling salesman problem (mTSP), with adaptations to overcome these issues, and develop a new encoder/decoder-based model using Graph Attention Networks and Attention Models to solve it effectively and efficiently. This work paves the way for increased resilience in autonomous multi-agent systems.

Learn more
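
As a point of reference for what such a learned replanner is intended to improve upon (an illustrative baseline, not the paper's model), a greedy nearest-neighbour heuristic simply assigns each remaining task to whichever agent is currently closest:

```python
# Illustrative sketch: a greedy nearest-neighbour baseline for multi-agent task
# assignment, the kind of simple heuristic a learned mTSP-style replanner is compared against.
import numpy as np

def greedy_multi_agent_routes(agent_positions, task_positions):
    """Repeatedly send whichever agent is currently closest to an unassigned task."""
    agents = np.asarray(agent_positions, dtype=float)
    remaining = [np.asarray(t, dtype=float) for t in task_positions]
    routes = [[] for _ in agents]
    while remaining:
        # Distance from every agent's current position to every remaining task
        dists = np.array([[np.linalg.norm(a - t) for t in remaining] for a in agents])
        agent_idx, task_idx = np.unravel_index(np.argmin(dists), dists.shape)
        task = remaining.pop(task_idx)
        routes[agent_idx].append(tuple(task.tolist()))  # record the visit
        agents[agent_idx] = task                        # agent moves to the task location
    return routes

# Hypothetical example: two agents, three tasks, 2D coordinates
print(greedy_multi_agent_routes([(0, 0), (10, 10)], [(1, 1), (9, 9), (2, 3)]))
```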