The Alan Turing Institute’s defence and security programme, in partnership with the National Cyber Security Centre (NCSC), is inviting proposals from academic researchers for research to improve our understanding of artificial intelligence (AI) security, particularly with a view to practical risks and implications.

This call focuses on the security of AI. As AI is used to make a wider range of decisions, including those of high significance, it becomes increasingly likely that these systems will become direct targets for attackers. The ability to extract important information from, or manipulate the processes and outputs of, these intelligent tools could have serious impacts, not just in the field of security but much more widely.

Context for the research

For this call we are focussing on advances in securing artificially intelligent systems. While there is significant interest in the use of artificial intelligence, the end-to-end understanding of what is needed to ensure that AI-based tools are secure is still limited.

Secure algorithmic design is only one component of providing security for artificial intelligence. There are security risks throughout the machine learning (ML) development lifecycle, from requirements scoping and data collection through to implementation, maintenance, and decommissioning. There is a need to understand the prevalence of these risks in a real-world context and to evaluate the methods and tools currently available, as well as the future solutions that securing AI will require. Part of this is understanding where current ML systems are being exploited, and what data exists, or needs to be created, within systems to alert system owners and users that an ML system has been targeted. The long-term aim of this research is to enable further AI security research and tool development that helps anyone involved in ML deployment to mitigate these risks. Evidence showing real-world use cases and the limitations of current ML algorithms and security tooling in detecting compromise is a key output of this research.
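As a purely illustrative example of the kind of in-system data referred to above, the sketch below (Python; the ConfidenceMonitor class, its thresholds, and the synthetic inputs are all hypothetical) monitors a deployed classifier's prediction confidences and raises an alert when an unusual fraction of low-confidence predictions appears in a rolling window. This is one crude signal that a model may be being probed or that its inputs have shifted; it is a sketch under stated assumptions, not a recommended detection method.

```python
from collections import deque

import numpy as np


class ConfidenceMonitor:
    """Crude telemetry: flag windows with unusually many low-confidence predictions.

    A burst of inputs the model is unsure about can indicate a distribution shift
    or deliberate probing of the decision boundary; the thresholds here are
    hypothetical and would need tuning against a real deployment's baseline.
    """

    def __init__(self, window: int = 200, low_conf: float = 0.6, alert_fraction: float = 0.3):
        self.confidences = deque(maxlen=window)
        self.low_conf = low_conf
        self.alert_fraction = alert_fraction

    def observe(self, class_probabilities: np.ndarray) -> bool:
        """Record one prediction and return True if the current window looks anomalous."""
        self.confidences.append(float(np.max(class_probabilities)))
        if len(self.confidences) < self.confidences.maxlen:
            return False  # not enough history yet
        low = sum(c < self.low_conf for c in self.confidences)
        return low / len(self.confidences) > self.alert_fraction


# Hypothetical usage: feed the monitor the probability vector returned for each query.
monitor = ConfidenceMonitor()
rng = np.random.default_rng(0)
for _ in range(500):
    probs = rng.dirichlet(np.ones(10))  # stand-in for a real model's output
    if monitor.observe(probs):
        print("Alert: unusually many low-confidence predictions in the current window")
        break
```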

Research challenges

Research proposals should clearly address at least one of the following principal challenge areas:

  • Assessing the transferability of attacks. This could be from one ML algorithm to another, or from a larger model to a compressed version deployed at the edge or in resource-constrained environments.
  • Detecting security vulnerabilities in AI models and ML systems. What approaches and measures should be used to detect when our models/systems have been compromised? This also includes automated detection of vulnerabilities in AI models (perhaps using AI itself for the automation!).
  • Assessing the behaviour of intelligent systems and understanding how to detect degradation in their behaviour due to malicious activity. This includes identifying effective mechanisms for correcting or compensating for malicious behaviour.
  • Best practices for mitigating model inversion, including approaches to model training and deployment that minimise the insight an attacker can gain from the outputs and statistics a model returns or from the test data it classifies (a minimal illustration follows this list).
  • Best practices for building secure intelligent systems. Understanding which algorithms and decision models are most suitable for a range of problem classes and use cases, and providing guidance on selection criteria. This also includes requirements for the design of user interfaces and system architectures for deploying secure AI.
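To make the model-inversion bullet above concrete, the sketch below (Python; the harden_response function and all values are hypothetical) shows one commonly discussed deployment-side mitigation: returning only the top label with a coarsely rounded score rather than the full probability vector, so that repeated queries reveal less about the model and its training data. Whether, and by how much, this helps in a given setting is exactly the kind of question this call invites proposals to investigate.

```python
import numpy as np


def harden_response(class_probabilities: np.ndarray, top_k: int = 1, precision: int = 1):
    """Return only the top-k labels, with confidences rounded to a coarse precision.

    Restricting how many scores are exposed, and how precisely, limits the signal
    available to model-inversion and membership-inference style attacks, at some
    cost to downstream utility. The trade-off is deployment-specific.
    """
    order = np.argsort(class_probabilities)[::-1][:top_k]  # highest-probability classes first
    return [(int(label), round(float(class_probabilities[label]), precision)) for label in order]


# Hypothetical output vector from a 10-class classifier serving an external API.
full_probabilities = np.array([0.01, 0.02, 0.001, 0.62, 0.05, 0.10, 0.03, 0.009, 0.14, 0.02])

print("Raw vector an attacker could otherwise query:", full_probabilities)
print("Hardened response returned to the client:", harden_response(full_probabilities))
```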

The importance and impact of secure artificial intelligence extend beyond security applications alone. Any context where there is a genuine need to secure an intelligent system would be considered a valid use case for this research.

Who can apply

We invite researchers from any UK-based university or research institute to submit an application (please note that researchers from universities that are not Turing partners are also eligible to apply).

The lead applicant must be based in a UK university or research institute.

How to apply

Applications must be submitted via the online portal.

If you have not already done so, you must first register on the system. If you have any questions regarding the application form or the call process, please contact the Programme Manager: Alaric Williams, [email protected]

Funding available

Projects should not exceed three months in duration. Each application can request a maximum of £40,000 (not including VAT). Proposals should be scoped such that the research can be completed by mid-March 2023.

Contact

If you have queries, please contact the Programme Manager, Alaric Williams: [email protected]