AI for Cyber Defence Research Centre

Ensuring the security and privacy of computer networks and systems through fundamental and applied advances in autonomous defence agents.

Explaining the science

Advances in intelligent autonomous agents, such as those based on Deep Reinforcement Learning (DRL), have demonstrated super-human capabilities across a range of simulated and game-based tasks. Recent ground-breaking results include discovering faster algorithms for fundamental mathematical operations such as matrix multiplication, and defeating world-champion teams at the popular multiplayer real-time strategy game Dota 2.

These breakthroughs have been made possible by new developments in DRL that allow intelligent agents (IAs) to identify winning strategies despite imperfect information, highly complex action and observation spaces, and immense game trees. Until now, the computer security and privacy research communities have focussed largely on conventional supervised and unsupervised machine learning. While this type of AI excels at classification, for example identifying malicious computer binaries or anomalous network traffic, it does not natively support learning from interaction. DRL and related techniques offer a mechanism for strategic planning that, through this project, we intend to show can transform our understanding of, and our capacity to attack and defend, computer systems and networks.
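
To make the learning-from-interaction point concrete, here is a minimal tabular Q-learning sketch. The two-state "host" environment, its transition probabilities, and its rewards are our own hypothetical illustration, not one of the centre's environments.

```python
import random

# Hypothetical toy MDP: a defender learns, purely from interaction,
# when to patch a host. States: 0 = healthy, 1 = compromised.
# Actions: 0 = monitor (free), 1 = patch/restore (small cost).

def step(state, action, rng):
    if action == 1:                        # patching always restores the host
        return 0, -0.1
    if state == 1:                         # an unpatched compromise persists
        return 1, -1.0
    return (1, -1.0) if rng.random() < 0.3 else (0, 0.0)  # 30% compromise risk

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0], [0.0, 0.0]]           # Q[state][action]
    for _ in range(episodes):
        s = 0
        for _ in range(20):                # finite episode
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda x: q[s][x])
            s2, r = step(s, a, rng)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
best = max((0, 1), key=lambda a: q[1][a])  # learned action when compromised
```

No labelled dataset is involved: the policy (patch when compromised) emerges solely from trial-and-error feedback, which is exactly the capability that conventional (un)supervised learning lacks.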

Centre aims

The centre is led by principal investigators Vasilios Mavroudis and Chris Hicks, computer security researchers seeking to fundamentally transform the way in which we secure digital systems through the development and application of cutting-edge, deep-learning-based approaches to intelligent agents. Our current focus areas are as follows:

Autonomous cyber operations and network defence

  • To what extent can a computer network be actively managed and defended by intelligent autonomous agents?

AI for Systems Security

  • Can your attacker model resist an autonomous adversary?

Adaptive fuzzing and state-machine learning

  • Can IAs find vulnerabilities in mainstream applications?
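
One plausible shape for such an agent, sketched entirely under our own assumptions (the target function below is a stand-in, not a real application): a bandit-style learner picks among mutation operators and is rewarded whenever a mutated input reaches previously unseen coverage.

```python
import random

# Hypothetical sketch of learning-guided fuzzing: an epsilon-greedy bandit
# chooses among mutation operators, rewarded when a mutant hits new coverage.

def target(data):
    # Stand-in program under test: returns the set of branch identifiers hit.
    branches = {len(data) % 4}
    if data.startswith(b"FUZZ"):
        branches.add(10)
        if b"!" in data:
            branches.add(11)
    return branches

MUTATORS = [
    lambda d, rng: d + bytes([rng.randrange(256)]),  # append a random byte
    lambda d, rng: b"FUZZ" + d,                      # prepend a magic value
    lambda d, rng: d.replace(b"A", b"!"),            # substitute bytes
]

def fuzz(iterations=500, eps=0.2, seed=0):
    rng = random.Random(seed)
    value = [0.0] * len(MUTATORS)   # running reward estimate per mutator
    counts = [0] * len(MUTATORS)
    seen, corpus = set(), [b"AAAA"]
    for _ in range(iterations):
        i = rng.randrange(len(MUTATORS)) if rng.random() < eps \
            else max(range(len(MUTATORS)), key=lambda j: value[j])
        data = MUTATORS[i](rng.choice(corpus), rng)
        new = target(data) - seen
        counts[i] += 1
        value[i] += (float(bool(new)) - value[i]) / counts[i]  # incremental mean
        if new:                      # keep inputs that extend coverage
            seen |= new
            corpus.append(data)
    return seen

coverage = fuzz()                    # finds the guarded branches 10 and 11
```

The same reward-on-new-coverage loop generalises from bandits to full RL once mutation choices are conditioned on programme state.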

Cryptographic ciphers, protocols and their implementations

  • Does an RL agent make a credible cryptanalyst (or cryptanalyst's assistant)?
  • Can IAs improve protocol fuzzing results?
  • Can RL provide a way to surface vulnerabilities in anonymity networks?

Although the centre is dedicated to solving security and privacy problems rather than to any one method (e.g., DRL) for doing so, we are currently researching the following DRL techniques:

  • Multi-agent approaches including swarms of specialised agents
  • Curiosity and related techniques for self-generated reward signals
  • Meta-learning and generalisability to novel environments
  • Transformers and attention techniques for both episodic process memory and reduced action and observation spaces
  • Genetic techniques for improving RL algorithms for particular environments and tasks
  • Adversarial approaches to RL policies as well as other AI systems (MLSec)
  • Explainability (e.g., Bayesian Networks)
  • Privacy-preserving RL
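
To illustrate just one of the techniques above, a curiosity-style self-generated reward can be sketched with a tabular forward model whose prediction failures become the agent's intrinsic reward. This is our own minimal example, not the centre's implementation.

```python
# Minimal curiosity sketch (hypothetical): the agent models environment
# transitions and rewards itself when a transition surprises the model,
# so novel states attract exploration even with zero extrinsic reward.

class CountModel:
    """Tabular forward model: predicts the next state seen most often."""
    def __init__(self):
        self.counts = {}             # (state, action) -> {next_state: count}

    def predict(self, s, a):
        seen = self.counts.get((s, a))
        return max(seen, key=seen.get) if seen else None

    def update(self, s, a, s2):
        self.counts.setdefault((s, a), {}).setdefault(s2, 0)
        self.counts[(s, a)][s2] += 1

def intrinsic_reward(model, s, a, s2):
    # bonus of 1.0 when the transition surprises the model, else 0.0
    return 0.0 if model.predict(s, a) == s2 else 1.0

model = CountModel()
r1 = intrinsic_reward(model, 0, 1, 5)   # never seen before: surprising
model.update(0, 1, 5)
r2 = intrinsic_reward(model, 0, 1, 5)   # now predicted: no bonus
```

In deep RL the counting table is replaced by a learned dynamics network, but the principle, reward proportional to prediction error, is the same.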


Mailing List

We maintain a mailing list bringing together a community of people with an interest in AI and its applications to cyber defence.
Sign up to stay up to date with news, research publications, job openings, prototypes and demos, and event announcements!

Mailing List Sign up


Internships

Our team welcomes applications from PhD and undergraduate students. We accept applications through the Turing Internship Network (TIN). Please consult the TIN page for dates and prerequisites. 

TIN: Internship Applications


Selected publications


Entity-based Reinforcement Learning for Autonomous Cyber Defence
Symes Thompson I., Caron A., Hicks C., Mavroudis V., Workshop on Autonomous Cybersecurity (AutonomousCyber), 2024

Environment Complexity and Nash Equilibria in a Sequential Social Dilemma
Yasir M., Howes A., Mavroudis V., Hicks C., 17th European Workshop on Reinforcement Learning (EWRL), 2024

Autonomous cyber defence: Beyond games? 
Hicks C., Mavroudis V., 2024

International Scientific Report on the Safety of Advanced AI 
Bengio Y., Privitera D., Besiroglu T., Bommasani R., Casper S., Choi Y., Goldfarb D., Heidari H., Khalatbari L., Longpre S., et al., Department for Science, Innovation and Technology, 2024

Mitigating Deep Reinforcement Learning Backdoors in the Neural Activation Space
Vyas S., Hicks C., Mavroudis V., Deep Learning Security and Privacy Workshop (DLSP), 2024

Deep Reinforcement Learning for Denial-of-Service Query Discovery in GraphQL
McFadden S., Maugeri M., Hicks C., Mavroudis V., Pierazzi F., Deep Learning Security and Privacy Workshop (DLSP), 2024

Nearest Neighbour with Bandit Feedback
Pasteris S., Hicks C., Mavroudis V., Annual Conference on Neural Information Processing Systems (NeurIPS), 2023

Adaptive Webpage Fingerprinting from TLS Traces
Mavroudis V., Hayes J., 53rd Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), 2023

Reward Shaping for Happier Autonomous Cyber Security Agents
Bates E., Mavroudis V., Hicks C., 16th ACM Workshop on Artificial Intelligence and Security (AISec), 2023

Canaries and Whistles: Resilient Drone Communication Networks with (or without) Deep Reinforcement Learning
Hicks C., Mavroudis V., Foley M., Davies T., Highnam K., Watson T., 16th ACM Workshop on Artificial Intelligence and Security (AISec), 2023

Inroads into Autonomous Network Defence using Explained Reinforcement Learning
Foley M., Wang M., Zoe M., Hicks C., Mavroudis V., Conference on Applied Machine Learning for Information Security (CAMLIS), 2022

Autonomous Network Defence using Reinforcement Learning
Foley M., Hicks C., Highnam K., Mavroudis V., ACM Asia Conference on Computer and Communications Security (ASIA CCS), 2022



Funders

We gratefully acknowledge the generous support of our funders: the Defence Science and Technology Laboratory (Dstl), Security and Policing 2024.

Contact info

[email protected]