The defence and security (D&S) community – represented by GCHQ and the Ministry of Defence (Defence Science and Technology Laboratory [Dstl] and Joint Forces Command) – is collaborating with The Alan Turing Institute to deliver an ambitious programme of data science and artificial intelligence (AI) research that delivers impact in real-world scenarios.
Following the signing of a collaboration agreement and a year of knowledge exchange across 2015–2016, the D&S programme was launched in 2017 with the appointment of Dr Mark Briers as programme director. Since then, the programme team has grown to over 20 members across research, leadership, and support.
The programme’s vision is to undertake multidisciplinary data science and AI research to ensure a safe, secure, and prosperous society:
- Safe – supporting defence and national security agencies to keep societies and citizens safe
- Secure – protecting the privacy and security of citizens, institutions and industry
- Prosperous – contributing to global good by enabling societies around the world to derive benefit from strategic technological advances
The programme carries out this vision through academic research (e.g. Digital Identity: ensuring that systems are trustworthy), applied research (e.g. Applied Research Centre for Defence and Security) and thought leadership (e.g. Technical advice for the NHS COVID-19 app).
The D&S programme is tackling challenges across four key areas:
Cyber, privacy, trust and identity
- Developing novel solutions for human-machine collaboration and autonomous defence against a range of cyber incursions;
- testing and building capabilities to generate synthetic datasets;
- and releasing new secure machine learning models and protocols (including new anonymisation techniques).
- Improving the representation of uncertainty for decision makers;
- the use of complex networks and spatial interaction theory to model the effect of regional-global interactions; and
- benchmarking the efficacy of using reinforcement learning techniques for wargaming.
- Establishing a deeper understanding of how AI ethics can be applied across a range of D&S contexts;
- providing a detailed background on the risks that climate change poses to UK security; and
- understanding the spatial and temporal processes of conflict, climate change and migration, and how these lead to potential risks of exploitation.
- Better detection and classification of patterns and anomalies within datasets of cyber defence relevance;
- the development of Bayesian deep learning techniques; and
- new topological data analysis techniques.
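To make the anomaly-detection theme above concrete, the sketch below flags outliers in a stream of event counts using a simple z-score rule. This is a minimal illustration, not the programme's method: the data, threshold, and function name are all invented, and operational cyber-defence models are far richer. The underlying principle is the same, though: score each observation against a model of 'normal' behaviour and flag the extremes.

```python
import statistics

def zscore_anomalies(values, threshold=2.5):
    """Return the indices of points whose z-score exceeds the threshold.

    A deliberately simple baseline detector for one-dimensional count data.
    """
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # constant stream: nothing stands out
    return [i for i, v in enumerate(values)
            if abs(v - mean) / stdev > threshold]

# Invented example: connection attempts per minute, with one spike.
traffic = [12, 15, 11, 14, 13, 12, 95, 13, 14, 12]
print(zscore_anomalies(traffic))  # [6] -- the spike at index 6
```

Note that a single outlier in a short window drags the mean and standard deviation towards itself, which caps the achievable z-score; in practice the threshold is tuned to the data, or a robust statistic (e.g. the median absolute deviation) is used instead.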
Starting with the appointment of Dr Mark Briers as Programme Director and the identification of three core research projects, the programme’s activity rapidly expanded during 2017.
Additional research projects in our remaining two key challenge areas will begin before the end of the year, and several interdisciplinary research themes have evolved that cut across the Turing’s strategic partners.
Our activities are driven by the programme’s goals of delivering world-leading research with real-world impact.
Case Study: Data Study Groups
The defence and security programme has taken part in several Turing Data Study Groups, week-long events that bring together academics from the UK and internationally to work on data science problems provided by industry partners.
The first, in May 2017, included industry participants such as Siemens, HSBC, Samsung, and Thomson Reuters. The two challenges from the defence and security programme focused on machine learning for location prediction, and on methodologies for analysing a cyber-attack. Read our blog piece from one of the researchers involved.
The second, in December 2017, saw Dstl lead a challenge for researchers to use machine learning to help improve code quality analysis tools. Read the blog by Dstl representative John and the piece on the Government website to find out more.
A series of scientific reports generated by the groups will be published soon.
House of Lords Select Committee on Artificial Intelligence
Defence and Security Programme Director, Mark Briers, was invited to speak to the House of Lords Select Committee on AI on 27 November 2017, during the first panel of a session entitled ‘What are the dangers of artificial intelligence?’.
Questions centred on the UK's capability to protect against the impact of AI on cyber security, and on whether the law is sufficient to prosecute those who misuse AI for criminal purposes. The other panel member was Professor Christopher Hankin, Director of the Institute for Security Science and Technology, Imperial College London.
Short-term projects aim to demonstrate immediate, meaningful impact
The programme's long-term work has been bolstered by a number of shorter, strategically important projects supported by funding from GCHQ. Each up to six months in duration, these projects aim to demonstrate immediate, meaningful impact and to address the key challenges that frame the defence and security programme.
The projects are focusing on a diverse range of applications including understanding hacker communities, adversarial machine learning, encryption, modelling of civil conflict, topological data analysis, and utilising game theory in cyber security. The projects are expected to yield academic impact through publications, and real-world impact through software, which will be released for use and further development.
The Manufacturer, the UK’s premier industry publication for manufacturing news, articles, and insights, published an article about the announcement.
Mapping conflict data to explain, predict, and prevent violence
Work produced by Turing Fellow Dr Weisi Guo is aiming to understand the mechanics that cause conflict and identify multi-scale population areas that are at risk of conflict. The research utilises the latest developments in complex networks and spatial interaction theory to model the effect of multiplexed regional-global interactions.
Findings have shown that ‘crossroad’ towns and cities with few alternative routes around them correlate strongly with data on violence, including terrorism, war between states, and gang violence. The work is building an evidence base that aims to help sustainable global development of infrastructure in order to reduce conflict.
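The 'crossroads' effect described above can be illustrated with betweenness centrality, a standard network measure of how many shortest paths pass through each node. The toy road graph and town labels below are invented for illustration, and the brute-force computation is only suitable for small graphs; real road networks call for an optimised algorithm such as Brandes', as implemented in network libraries.

```python
from collections import deque

def path_counts(graph, source):
    """BFS giving, for every reachable node, its distance from source
    and the number of distinct shortest paths reaching it."""
    dist, sigma = {source: 0}, {source: 1}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for w in graph[u]:
            if w not in dist:
                dist[w], sigma[w] = dist[u] + 1, 0
                queue.append(w)
            if dist[w] == dist[u] + 1:
                sigma[w] += sigma[u]
    return dist, sigma

def betweenness(graph):
    """Brute-force betweenness: for each ordered pair (s, t), credit every
    intermediate node v with the fraction of shortest s-t paths through it."""
    score = {v: 0.0 for v in graph}
    tables = {v: path_counts(graph, v) for v in graph}
    for s in graph:
        dist_s, sigma_s = tables[s]
        for t in graph:
            if t == s or t not in dist_s:
                continue
            for v in graph:
                if v in (s, t):
                    continue
                dist_v, sigma_v = tables[v]
                # v lies on a shortest s-t path iff distances add up exactly.
                if (v in dist_s and t in dist_v
                        and dist_s[v] + dist_v[t] == dist_s[t]):
                    score[v] += sigma_s[v] * sigma_v[t] / sigma_s[t]
    return score

# Two clusters of towns joined only through the 'crossroads' town X.
roads = {
    "A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "X"],
    "X": ["C", "D"],
    "D": ["X", "E", "F"], "E": ["D", "F"], "F": ["D", "E"],
}
scores = betweenness(roads)
print(max(scores, key=scores.get))  # X -- the bridging town
```

Every route between the two clusters must pass through X, so X scores highest even though it has fewer direct connections than its neighbours. (Each unordered pair is counted in both directions here, which doubles all scores but leaves the ranking unchanged.)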
Dr Guo was interviewed by BBC News about his work, with the article going into depth on the potential ramifications of the research and the role of the Turing in it.
Retool AI to forecast and limit wars
Turing Fellow Dr Weisi Guo and Director of Special Projects Sir Alan Wilson have written a comment piece for Nature, explaining how using artificial intelligence to predict outbreaks of violence and probe their causes could save lives.
The piece details existing research being conducted into the forecasting of conflict, including the work of Guo and Wilson at the Turing. The piece identifies three things that will improve conflict forecasting: new machine-learning techniques; more information about the wider causes of conflicts and their resolution; and theoretical models that better reflect the complexity of social interactions and human decision-making. The piece goes on to propose that an international consortium be set up to develop formal methods to model the steps society takes to wage war.
For more information, please contact the programme team at [email protected]