Research centre on AI for cyber defence

Ensuring the security and privacy of computer networks and systems through fundamental and applied advances in intelligent agents

Explaining the science

Advances in intelligent autonomous agents, such as those based on Deep Reinforcement Learning (DRL), have demonstrated super-human capabilities across a range of simulated and game-based tasks. Recent ground-breaking results include speeding up fundamental mathematical operations, mastering Stratego, with its game tree of $10^{535}$ states, and defeating world champions at the popular multiplayer real-time strategy game Dota 2.

These breakthroughs have been made possible by new developments in DRL that allow intelligent agents (IAs) to identify winning strategies despite imperfect information, highly complex action and observation spaces, and immense game trees. To date, the computer security and privacy research communities have largely focussed on conventional (un)supervised machine learning. While this type of AI excels at classification, for example identifying malicious computer binaries or anomalous network traffic, it does not natively support learning from interaction. DRL and related techniques offer a mechanism for strategic planning which, we intend to show through this project, can transform our understanding of computer systems and networks and our capacity to attack and defend them.
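The contrast with classification can be made concrete with a minimal tabular Q-learning loop. The single-host environment below is an invented illustration (its states, actions, and reward values are assumptions, not the centre's environments): the agent receives no labelled examples, only rewards from interaction, yet it learns when to restore a compromised host.

```python
import random

# A toy single-host MDP, invented for illustration: the host is either
# "safe" or "compromised". Actions: 0 = monitor (cheap), 1 = restore
# (costly downtime, but always returns the host to a safe state).

def step(state, action):
    """Return (next_state, reward) for the toy defence environment."""
    if action == 1:                       # restore: guaranteed recovery
        return "safe", -1.0               # small cost for downtime
    if state == "compromised":            # monitoring a compromised host
        return "compromised", -10.0       # the attacker persists
    if random.random() < 0.3:             # occasional break-in
        return "compromised", 0.0
    return "safe", 1.0                    # quiet network, positive reward

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning: the agent learns purely from interaction."""
    q = {(s, a): 0.0 for s in ("safe", "compromised") for a in (0, 1)}
    for _ in range(episodes):
        state = "safe"
        for _ in range(20):
            if random.random() < eps:     # epsilon-greedy exploration
                action = random.choice([0, 1])
            else:
                action = max((0, 1), key=lambda a: q[(state, a)])
            nxt, reward = step(state, action)
            best_next = max(q[(nxt, 0)], q[(nxt, 1)])
            q[(state, action)] += alpha * (reward + gamma * best_next
                                           - q[(state, action)])
            state = nxt
    return q

random.seed(0)
q = train()
# The learned policy restores compromised hosts and monitors safe ones.
assert q[("compromised", 1)] > q[("compromised", 0)]
assert q[("safe", 0)] > q[("safe", 1)]
```

Production-scale work replaces the lookup table with a deep network, but the interaction loop — observe, act, receive reward, update — is the same.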

Project aims

The centre is led by principal investigators Vasilios Mavroudis and Chris Hicks, computer security researchers seeking to fundamentally transform the way in which we secure digital systems through the development and application of cutting-edge, deep-learning-based approaches to intelligent agents. Our current focus areas are as follows:

Autonomous cyber operations and network defence

  • To what extent can a computer network be actively managed and defended by intelligent autonomous agents?

AI for systems security

  • Can your attacker model resist an autonomous adversary?

Adaptive fuzzing and state-machine learning

  • Can IAs find vulnerabilities in mainstream applications?

Cryptographic ciphers, protocols and their implementations

  • Does an RL agent make a credible cryptanalyst (or a cryptanalyst's assistant)?
  • Can IAs improve protocol fuzzing results?
  • Can RL provide a way to uncover vulnerabilities in anonymity networks?
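To make the fuzzing and state-machine questions concrete, here is a minimal sketch of novelty-driven exploration against an invented four-message protocol (the state machine, message names, and reset behaviour are hypothetical, not a real target):

```python
# A hypothetical four-message protocol with a hidden "happy path", invented
# for illustration; a real target would be a network server or parser.
TRANSITIONS = {
    ("INIT", "HELLO"): "GREETED",
    ("GREETED", "AUTH"): "AUTHED",
    ("AUTHED", "DATA"): "TRANSFER",
    ("TRANSFER", "QUIT"): "CLOSED",
}
MESSAGES = ["HELLO", "AUTH", "DATA", "QUIT"]

def explore(steps=1000):
    """Novelty-driven fuzzing: always send the least-attempted input in the
    current state; invalid inputs reset the session, like a dropped
    connection."""
    tried = {}                             # (state, message) -> attempts
    state = "INIT"
    discovered = {state}
    for _ in range(steps):
        msg = min(MESSAGES, key=lambda m: tried.get((state, m), 0))
        tried[(state, msg)] = tried.get((state, msg), 0) + 1
        state = TRANSITIONS.get((state, msg), "INIT")
        discovered.add(state)
    return discovered

states = explore()
assert "CLOSED" in states          # the full happy path is recovered
assert len(states) == 5            # every protocol state was discovered
```

Even this greedy count-based heuristic recovers the full state machine despite session resets; an IA can additionally learn which input sequences pay off, rather than merely tallying attempts.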

Although the centre is dedicated to solving security and privacy problems rather than to any single method (e.g., DRL) for doing so, we are currently researching the following DRL techniques:

  • Multi-agent approaches, including swarms of specialised agents
  • Curiosity and related techniques for self-generated reward signals
  • Meta-learning and generalisability to novel environments
  • Transformers and attention techniques for both episodic process memory and reduced action and observation spaces
  • Genetic techniques for improving RL algorithms for particular environments and tasks
  • Adversarial approaches to RL policies as well as other AI systems (MLSec)
  • Explainability (e.g., Bayesian networks)
  • Privacy-preserving RL
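As a minimal illustration of the curiosity item above, the sketch below uses an invented corridor environment with no extrinsic reward at all; a count-based bonus (a simplified stand-in for learned curiosity models) is the only signal, yet it is enough to drive exploration:

```python
import math

# Seven corridor states, invented for illustration. There is no extrinsic
# reward anywhere, so only a self-generated signal can move the agent.
# The bonus 1/sqrt(1 + visits) is a simplified count-based stand-in for
# learned curiosity models.
N = 7

def explore(use_curiosity, steps=200):
    counts = [0] * N
    s, visited = 0, {0}
    for _ in range(steps):
        def bonus(ns):
            return 1.0 / math.sqrt(1 + counts[ns]) if use_curiosity else 0.0
        candidates = {0: max(0, s - 1), 1: min(N - 1, s + 1)}  # left, right
        a = max(candidates, key=lambda act: bonus(candidates[act]))
        s = candidates[a]
        counts[s] += 1
        visited.add(s)
    return visited

# Without an intrinsic signal the agent has no reason to move and sees a
# single state; with it, novelty pulls the agent down the whole corridor.
assert len(explore(use_curiosity=False)) == 1
assert len(explore(use_curiosity=True)) == N
```

The same principle scales up: in sparse-reward security tasks, where a win signal may arrive only after a long attack or defence sequence, intrinsic rewards keep the agent exploring in the meantime.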


Selected publications

Foley, M., Wang, M., Zoe M., Hicks, C. and Mavroudis, V., 2022, October. Inroads into Autonomous Network Defence using Explained Reinforcement Learning. In Conference on Applied Machine Learning for Information Security.

Foley, M., Hicks, C., Highnam, K. and Mavroudis, V., 2022, May. Autonomous Network Defence using Reinforcement Learning. In Proceedings of the 2022 ACM on Asia Conference on Computer and Communications Security.


Contact info

[email protected]