Since the launch of ChatGPT in late 2022, the meteoric rise of generative AI applications has captured the attention of the public sector, sparking increased interest in how AI technologies can help government bodies better serve members of the public. However, whilst many aspects of public administration stand to benefit from an AI-enabled technological transformation, there are also important ethical issues to consider around the production, procurement and use of AI systems in the public sector, across areas including healthcare, education, defence and law enforcement.
For several years, The Alan Turing Institute has been working steadily to drive forward the development of standards and guidelines for responsible AI. Now, as the subject of AI safety comes under the spotlight at the international AI Safety Summit, we are launching new, practical tools to help public sector bodies put existing, world-leading guidance on AI ethics and governance into practice. We hope that these tools will provide the foundations for safe, ethical, equitable and sustainable AI in the civil service and beyond.
Even in the relatively short history of generative AI applications, numerous ethical concerns have already arisen around the world – from the spread of disinformation, misinformation and propaganda, to the possibility of irresponsibly designed systems behaving destructively, toxically and abusively in the wild. Across the public sector, as elsewhere, risks abound around new, emerging and existing AI technologies. Irresponsibly designed, misused or abused systems may lead to poor quality or hazardous outcomes that harm the wellbeing of people and communities. For instance, when automated systems (like risk prediction models or facial recognition technologies) are integrated into public services without sufficient attention paid to potential adverse impacts, they may undermine individuals’ sense of autonomy, their ability to make decisions about their own lives, and the integrity of their interpersonal relationships and interactions. Meanwhile, inequitable or discriminatory patterns baked into the data used to train AI models can become entrenched when those models are deployed, leading to unfair outcomes and discrimination.
The wide spectrum of AI risks obliges public sector bodies to prioritise responsible and trustworthy AI innovation. Acting on this priority is not an easy task, but it is one that the Turing has dedicated much energy to in recent years. In 2019, in close partnership with the Office for AI and the Government Digital Service, we published the UK’s (and the world’s) first official national public sector guidance on AI ethics and safety. This guidance is now the most accessed and cited of its kind globally, and has been credited with motivating the move ‘from principles to practice’ in AI policy thinking. It has been put into practice across a wide range of public sector bodies, including the Ministry of Justice, the Ministry of Defence, the NHS, the Financial Conduct Authority and GCHQ, as well as numerous local authorities.
Reflecting the change-making role that the AI ethics guidance has played in the UK’s AI innovation ecosystem, the 2021 National AI Strategy specified that it should be updated and expanded. This mandate entailed the development of a series of practice-based workbooks building on the content in the original guidance. Turing ethics researchers undertook this work with the crucial assistance of public sector bodies, supported by funding from the Office for AI and the Engineering and Physical Sciences Research Council.
The result is the AI Ethics and Governance in Practice Programme, a series of eight workbooks and a forthcoming digital platform designed to equip the public sector with the tools, training and support it needs to apply principles of AI ethics and safety to the design, development, deployment and maintenance of its AI systems. Today, we launch the first four workbooks of the series at an AI Fringe event (AI at a Turning Point: How Can We Create Equitable AI Governance Futures?), as part of efforts to broaden the conversation around safe and responsible AI taking place at the AI Safety Summit.
The Programme provides a framework (see below) enabling public sector teams to reflectively integrate ethical values and practical principles into their innovation practices, and to demonstrate and document that they have done so.
The activities presented in the workbook series are organised around what we call a ‘process-based governance’ (PBG) framework, designed to assist AI project teams in ensuring that the AI technologies they build, procure or use are ethical, safe, responsible and trustworthy. Through this framework, we aim to ensure that the ‘SSAFE-D Principles’ (sustainability, safety, accountability, fairness, explainability and data stewardship) detailed in the original guidance are successfully operationalised and evidenced, in full, throughout the life cycles of public sector AI projects. The framework puts in place end-to-end governance mechanisms – like stakeholder impact assessments, bias self-assessments, risk management tools and data factsheets – enabling AI project teams to transparently manage the social and ethical implications of their AI systems at every stage of project delivery.
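To make the idea of process-based governance more concrete, the sketch below shows one way a project team might keep a structured log of governance evidence in code. It is purely illustrative and not part of the Programme’s materials: the lifecycle stage names, the `GovernanceRecord` structure and the `gaps` check are our own assumptions, layered on the SSAFE-D principles and governance mechanisms described above.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class Principle(Enum):
    """The SSAFE-D principles from the original guidance."""
    SUSTAINABILITY = "sustainability"
    SAFETY = "safety"
    ACCOUNTABILITY = "accountability"
    FAIRNESS = "fairness"
    EXPLAINABILITY = "explainability"
    DATA_STEWARDSHIP = "data stewardship"


class Stage(Enum):
    """Illustrative lifecycle stages (an assumption, not the Programme's own taxonomy)."""
    DESIGN = "design"
    DEVELOPMENT = "development"
    DEPLOYMENT = "deployment"
    MAINTENANCE = "maintenance"


@dataclass
class GovernanceRecord:
    """One piece of documented evidence, e.g. a completed stakeholder impact assessment."""
    stage: Stage
    principle: Principle
    mechanism: str          # e.g. "stakeholder impact assessment", "bias self-assessment"
    completed_on: date
    notes: str = ""


@dataclass
class ProjectGovernanceLog:
    """Accumulates evidence so a team can show each principle was addressed at each stage."""
    project: str
    records: list[GovernanceRecord] = field(default_factory=list)

    def add(self, record: GovernanceRecord) -> None:
        self.records.append(record)

    def gaps(self) -> list[tuple[Stage, Principle]]:
        """Return (stage, principle) pairs with no documented evidence yet."""
        covered = {(r.stage, r.principle) for r in self.records}
        return [(s, p) for s in Stage for p in Principle if (s, p) not in covered]


# Hypothetical usage: log one completed assessment, then list what remains undocumented.
log = ProjectGovernanceLog(project="risk-prediction-pilot")
log.add(GovernanceRecord(Stage.DESIGN, Principle.FAIRNESS,
                         "bias self-assessment", date(2023, 10, 1)))
print(f"{len(log.gaps())} stage/principle pairs still need evidence")
```

In practice, of course, the substance of process-based governance lies in the assessments and factsheets themselves; a record-keeping structure like this would only index that evidence so that coverage across stages and principles can be demonstrated.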
Creating responsible and trustworthy public sector AI futures is crucial for realising the immense potential of AI technologies to serve the public interest and to advance the common good. At a time when the development of advanced AI technologies could play a critical role in helping governments tackle difficult global problems like climate change, biodiversity loss, and biomedical and public health challenges, it is essential to have in place governance protocols and training regimes that lay the foundations for a culture of responsible AI innovation. The Turing’s new AI ethics programme is a small step in this challenging process.
The first four workbooks from the AI Ethics and Governance in Practice Programme can now be accessed here.
Images and video: Conor Rigby