Human Rights, Democracy, and the Rule of Law Impact Assessment for AI Systems (HUDERIA)

A risk-based approach to assessing and mitigating adverse impacts developed for the Council of Europe’s Framework Convention

Project status

Ongoing

Introduction

Although AI technologies may provide a range of opportunities for the improvement of human lives and the functioning of government, they also have the potential to negatively impact human rights, democracy, and the rule of law. Since 2020, the Turing's Ethics and Responsible Innovation team has been working alongside the Council of Europe to advance these areas as they relate to the design, development, and use of AI systems.

The team is currently collaborating with the Council of Europe’s Committee on AI to produce the first-of-its-kind Human Rights, Democracy, and the Rule of Law Impact Assessment for AI Systems (HUDERIA) to support the Framework Convention. HUDERIA offers a cohesive end-to-end process for identifying contexts and applications in which the deployment of AI systems would be likely to pose significant risks to human rights, the functioning of democracy, and the observance of the rule of law. It follows an algorithm-neutral and practice-based approach that accommodates a wide range of AI applications, helping HUDERIA remain future-proof.
 

Project aims

HUDERIA provides clear, concrete, and objective criteria for project teams to assess and mitigate impacts on human rights, democracy, and the rule of law. It adopts a holistic approach to responsible AI governance by capturing both the technical aspects of AI systems and the sociotechnical context of their development and application.

HUDERIA achieves this integrated approach through four interlinked phases. Undergirding each of these phases is the overarching Project Summary Report (PS Report), which provides the key source of documentation required for continuous accountability and deliberation.

1. Context-Based Risk Analysis (COBRA): provides a preliminary indication of the risks that an AI system could pose throughout its lifecycle, using a risk calibration mechanism.

2. Stakeholder Engagement Process (SEP): helps project teams identify salient stakeholders and facilitates meaningful stakeholder involvement throughout HUDERIA.

3. Impact Assessment (IA): the core of HUDERIA, which requires identifying potential adverse impacts on human rights, democracy, and the rule of law and assessing their severity. This phase also comprises the Impact Mitigation Plan (IMP), a series of documented actions and processes designed to prevent or mitigate the adverse impacts identified in the IA, as well as any unidentified harms that could arise once the AI system has been deployed.

4. Iterative Revisitation (IR): ensures responsive evaluation through an ongoing process of re-assessment that accounts for the shifting conditions in which the AI system is embedded.

Although these phases are presented in a seemingly linear manner, HUDERIA is in fact a highly dynamic process that requires ongoing engagement and review, as illustrated in the workflow diagram below.

Diagram illustrating the HUDERIA workflow

As a non-legally binding instrument intended to support the Council of Europe’s Framework Convention, HUDERIA is based on the assumption that domestic authorities are better placed to make relevant policy and regulatory choices, taking into account their country-specific political, economic, social, cultural, and technological contexts. This means that the exact modalities and thresholds of some components of the process are left to the discretion of the relevant authorities and project teams, provided that the main requirements outlined by HUDERIA are applied.

This first-of-its-kind methodology offers a robust model for directly integrating both rights-based and risk-based practices with AI-centred approaches to algorithmic impact assessment and the assurance of equitable AI innovation practices.

Recent updates

Collaboration Timeline

The Ethics and Responsible Innovation team’s collaboration with the Council of Europe began with a co-produced primer that introduced the main concepts and principles presented in the Feasibility Study of the Council of Europe’s Ad Hoc Committee on Artificial Intelligence (CAHAI) for a general, non-technical audience. The Feasibility Study, adopted by the CAHAI plenary in December 2020, explored options for an international legal response that would fill existing gaps in legislation and tailor the use of binding and non-binding legal instruments to the specific risks and opportunities presented by AI systems. To support the Study, the primer provided background information on the areas of AI innovation, human rights law, technology policy, and compliance mechanisms covered therein. In keeping with the Council of Europe's commitment to broad multi-stakeholder consultation, outreach, and engagement, the primer was designed to help facilitate the meaningful and informed participation of an inclusive group of stakeholders, and was translated into French and Dutch.

Following the publication of the Feasibility Study, CAHAI and its subgroups initiated efforts to formulate and draft the Possible elements of a legal framework on artificial intelligence, based on the Council of Europe’s standards on human rights, democracy, and the rule of law. This document was ultimately adopted by the CAHAI plenary in December 2021. To support this effort, the Turing undertook a programme of research that explored the governance processes and practical tools needed to operationalise the integration of human rights due diligence with the assurance of trustworthy AI innovation practices.

The resulting output, Human Rights, Democracy, and the Rule of Law Assurance Framework for AI Systems: A proposal, was completed and submitted to the Council of Europe in September 2021. It presents an end-to-end approach to the assurance of AI project lifecycles that integrates context-based risk analysis and appropriate stakeholder engagement with comprehensive impact assessment and transparent risk management, impact mitigation, and innovation assurance practices.

Since the completion of this proposal, the team has been working alongside the Secretariat of the Committee on AI to develop the first-of-its-kind Human Rights, Democracy, and the Rule of Law Impact Assessment for AI Systems (HUDERIA) to support the Framework Convention.

Organisers

Researchers and collaborators