Common Regulatory Capacity for AI


The use of artificial intelligence (AI) is increasing across all sectors of the economy, which raises important and pressing questions for regulators. This report presents the results of research into how regulators can meet the challenge of regulating activities transformed by AI and maximise the potential of AI for regulatory innovation. The report also investigates whether regulators perceive a need for common capacity in AI — mechanisms and structures that enable coordination, knowledge sharing, and resource pooling — to advance AI readiness across the UK’s regulatory landscape.

The study was commissioned by the Office for AI and produced by The Alan Turing Institute’s public policy programme. The Regulators and AI Working Group, convened by the Information Commissioner’s Office, provided essential input and feedback throughout all stages of the research. The report draws on interviews with staff across regulatory bodies of different sizes and sectoral remits.

Key findings

1. AI technologies are expanding in scale, scope, and complexity, resulting in a diverse range of applications with relevance to all areas of social and economic life. This has major implications for regulators along two dimensions:

  • The Regulation of AI. Regulators need to understand the nature and implications of AI uses that fall within their regulatory remit and to assess the adequacy of regulatory arrangements in relation to them. Ensuring that regulatory regimes are “fit for AI” is essential to address AI-related risks and to maintain an environment that encourages innovation. Certainty about regulatory expectations, public trust in AI technologies, and the avoidance of undue regulatory obstacles are crucial pre-conditions for the uptake of AI technologies.
  • AI for Regulation. Regulators might also turn to AI themselves, in order to make their work more innovative, effective, and efficient. International evidence illustrates the large and diverse number of AI-based innovations that can transform the ways in which regulatory bodies pursue their missions.

2. Interviews with representatives from across the UK’s regulatory landscape show that there are significant readiness gaps in both the Regulation of AI and AI for Regulation. The gaps exist at three levels: system-level readiness, organisational readiness, and participant readiness.

Despite an increase in AI-related initiatives across the regulatory landscape, many regulatory bodies are at an early stage in their “AI journey” and all face shared difficulties in making progress towards AI readiness. Common obstacles include limitations in knowledge and skills, insufficient coordination between regulators, issues of leadership and management of organisational and attitudinal change, and resource constraints.

3. The shared nature of obstacles faced by regulators calls for a joined-up approach to increasing AI readiness that enables coordination, knowledge generation and sharing, and resource pooling.

Echoing the Government’s recently published National AI Strategy and Plan for Digital Regulation, our interviews revealed an urgent need for increased and sustainable forms of coordination on AI-related questions across the regulatory landscape. Such coordination is essential for ensuring that regulatory regimes and interventions are coherent, effective, proportionate, efficient, and informed by developments at the international level. Our research findings also highlight that joined-up approaches to developing and sharing knowledge and resources can play a transformative role by enabling regulators to learn from each other and increase their collective capacities in ways that leverage synergies and efficiencies.

4. The research identified common challenges and opportunities presented by AI which show that the areas of Regulation of AI and AI for Regulation are critically linked. Any strategy to build capacity for regulation and AI should cover both.

Interviewees stressed the need to capitalise on the synergies between the Regulation of AI and AI for Regulation. They identified several shared priorities:

  • Developing a shared vocabulary in relation to AI technologies;
  • Conducting a mapping exercise to identify the uses of AI across regulators, the risks posed by the use of AI in different sectors, and any regulatory gaps;
  • Determining ways to address regulatory gaps, to anticipate future risks, and to adapt to the speed of technological change;
  • Sharing knowledge and best practice in the use and management of AI;
  • Addressing the difficulties of attracting and retaining talent, including through shared training and skills development programmes as well as shared AI tools.

5. The research highlighted the need for access to new sources of shared AI expertise. A common pool of expertise would stimulate and maintain AI readiness across regulators, while avoiding duplication in a crowded landscape.

Interview participants highlighted the significance of existing relationships and fora for collaboration and exchange between regulators, but also noted their limitations. These arrangements cater only to a subset of the needs identified, cover only parts of the UK’s regulatory landscape, and are constrained by a lack of robust and sustainable resourcing. The research pointed to the need for new sources of expertise to fill gaps and act as a catalyst for developing regulatory readiness in AI. The solution should avoid unnecessary duplication by capitalising on existing structures. It should involve strong incentives for regulatory bodies to participate but operate on a voluntary basis. It should take account of differences in requirements between larger and smaller regulators and ensure that shared resources are accessible and beneficial to regulators of all sizes and sectors. The solution should be politically independent and facilitated by a neutral yet respected, authoritative, and well-established organisation with recognised expertise in both technical and non-technical dimensions of AI.

6. The most promising avenue towards building common capacity emerged as the creation of an AI and Regulation Common Capacity Hub (ARCCH), convened by an independent and authoritative body in AI. The Hub would provide a trusted platform for the collaborative pursuit of common capacity while consolidating existing initiatives and avoiding unnecessary additional crowding of the landscape.

The proposed ARCCH represents the only approach to developing common capacity that is aligned with all the considerations raised by our interviewees. To act as a trusted partner for regulatory bodies, ARCCH would have its home at a politically independent institution, established as a centre of excellence in AI, drawing on multidisciplinary knowledge and expertise from across the national and international research community.

The newly created AI and Regulation Common Capacity Hub would:

  • Convene, facilitate, and incentivise regulatory collaborations around key AI issues;
  • Cultivate state-of-the-art knowledge on the use of AI by regulated entities;
  • Conduct risk mapping, regulatory gap analysis, and horizon scanning;
  • Provide thought leadership on regulatory solutions and innovations;
  • Develop proofs of concept and build shared AI tools for regulators;
  • Supply training and skills development;
  • Build up and facilitate sharing of human and technical resources across the regulatory landscape;
  • Act as an interface for regulators to interact with relevant stakeholders including industry and civil society.

7. Realising the full potential of common regulatory capacity for AI requires support and commitment.

Achieving common capacity will require action from across the regulatory landscape. Government will need to resource and support the establishment of the new hub, as well as other forms of cross-regulator initiatives. Regulatory bodies will need to evaluate, strengthen, and renew regulatory collaborations; commit organisational resources to engaging with ARCCH; promote strategies to increase organisational agility, adaptivity, and ingenuity; and pursue an inclusive and participatory approach that includes civil society.

Citation information

Aitken, M., Leslie, D., Ostmann, F., Pratt, J., Margetts, H., & Dorobantu, C. (2022). Common Regulatory Capacity for AI. The Alan Turing Institute.

Turing affiliated authors

Professor David Leslie

Director of Ethics and Responsible Innovation Research at The Alan Turing Institute and Professor of Ethics, Technology and Society, Queen Mary University of London