Trustworthy and Ethical Assurance of Digital Healthcare

Meeting ethical and regulatory best practices in health research and healthcare for a range of digital and data-driven technologies.

Project status

Finished

Introduction

Assurance is a process of building trust and justified confidence in a system or technology through engagement and communication. Trustworthy and ethical assurance is a process and structured methodology for communicating how ethical goals, such as fairness and explainability, have been operationalised and implemented throughout the project lifecycle of a digital and data-driven technology, specifically one that uses some form of machine learning or artificial intelligence.

This collaborative project between the Assuring Autonomy International Programme (University of York) and The Alan Turing Institute seeks to build on and harmonise existing research and work in trustworthy and ethical assurance, including the development of open and reproducible tools that help project teams meet ethical and regulatory best practices in health research and healthcare for a range of digital and data-driven technologies.

In particular, the project has driven the development of the Trustworthy and Ethical Assurance platform (see below), which helps the creators of new digital healthcare technologies to justify and assure ethical claims about their technologies. In June 2023, the project team published a report introducing this platform and demonstrating its utility through key case studies.

Explaining the science

Argument-based assurance

Trustworthy and ethical assurance is a procedure for developing a structured argument, which provides reviewable (and contestable) assurance that a set of claims about the ethical properties of a data-driven technology is warranted given the available evidence.

This definition captures three important and interlocking components of trustworthy assurance:

  1. A structured argument comprising linked claims and evidence that collectively justify a top-level goal (e.g. explainability)
  2. A procedure for developing an assurance case, which represents the argument formally and/or visually
  3. Agreed upon standards for reviewing and evaluating the argument (e.g. model validation)

Figure 1: A schematic showing the three interlocking components that support trustworthy and ethical assurance.
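
To make these three components more concrete, the sketch below represents a small fragment of an assurance case as a data structure, loosely modelled on Goal Structuring Notation. The class names (Goal, PropertyClaim, Evidence) and the example content are purely illustrative assumptions; they are not the platform's actual data model or API.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Evidence:
    """An artefact that supports a claim (e.g. a model validation report)."""
    description: str
    reference: str  # e.g. a document identifier or URL (hypothetical field)


@dataclass
class PropertyClaim:
    """A claim about a property of the system, justified by linked evidence."""
    statement: str
    evidence: List[Evidence] = field(default_factory=list)


@dataclass
class Goal:
    """A top-level ethical goal (e.g. explainability) supported by claims."""
    statement: str
    claims: List[PropertyClaim] = field(default_factory=list)


# Example: a small fragment of an explainability argument.
explainability_goal = Goal(
    statement="Decisions supported by the system can be explained to clinicians.",
    claims=[
        PropertyClaim(
            statement="Feature attributions are reported alongside each prediction.",
            evidence=[
                Evidence(
                    description="Validation report for the interpretability method",
                    reference="docs/explainability-report.md",  # hypothetical path
                )
            ],
        )
    ],
)
```

The third component (agreed standards for review) sits outside the structure itself: reviewers evaluate whether the linked evidence genuinely warrants each claim.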

SAFE-D principles

The SAFE-D Principles are a set of ethical principles developed within the Ethics Team (Public Policy Programme) to serve as starting points for reflection and deliberation about possible harms and benefits associated with data-driven technologies. The acronym ‘SAFE-D’ stands for the following ethical principles:

  • Sustainability: requires the outputs of a project to be safe, secure, robust, and reliable. For example, for a system that supports decision making in courts, prisons, or probation, sustainability as reliability may depend on the availability, relevance, and quality of data.
  • Accountability: requires transparency of processes and associated outcomes coupled with processes of clear communication that enable relevant stakeholders to understand how a project was conducted or why a specific decision was reached (e.g. project documentation).
  • Fairness: determining whether the design, development, and deployment of data-driven technologies is fair begins with recognising the full range of rights and interests likely to be affected by a particular system or practice. First, from a legal or technical perspective, project outcomes should not create impermissible forms of discrimination (e.g. profiling of people based on protected characteristics) or give rise to other forms of adverse impact (e.g. negative effects on social equality); statistical metrics of fairness may be relevant here. Second, there are implications that fall within broader conceptions of justice, such as whether the deployment of a technology (or use of data) is viewed by impacted communities as disproportionately harmful (e.g. contributing to or exacerbating harmful stereotypes).
  • Explainability: refers to the capacity of a data-driven technology (e.g. an AI system) to support or augment an individual’s ability to explain the behaviour of the respective system. It is related to, but separate from, interpretability. For instance, whereas an ML algorithm may be more or less interpretable depending on underlying aspects of its architecture (e.g. a simple-to-understand decision tree versus a complex convolutional neural network), the ability to explain how an algorithm works depends in part on properties of the wider system in which the algorithm is deployed.
  • Data Stewardship: intended to focus an ethical gaze on the data that undergirds AI/ML projects, including consideration of ‘data quality’ (e.g. whether the contents of a dataset are relevant to and representative of the domain and use context), ‘data integrity’ (e.g. how a dataset evolves over the course of a project lifecycle), and legal obligations, including adherence to data privacy and protection law and human rights compliance.

The SAFE-D principles provide high-level normative goals, which can be specified and operationalised throughout a project’s lifecycle to support the identification and evaluation of core attributes that require assurance (find out more about the SAFE-D principles). Argument patterns can be developed for each of the SAFE-D principles to support the context-specific development of assurance cases (e.g. fair digital twins used in healthcare), as illustrated in the sketch below.
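
To illustrate what such an argument pattern might look like in practice, the sketch below expresses a fairness pattern as a simple template that a project team could instantiate for their own context. The structure, field names, and placeholder values are hypothetical assumptions, not resources taken from the platform.

```python
# Hypothetical representation of a reusable fairness argument pattern, with
# placeholders ({system}, {affected_groups}, {use_context}) to be filled in
# for a specific project.
fairness_pattern = {
    "goal": "The {system} does not create impermissible discrimination or other "
            "adverse impacts for {affected_groups}.",
    "claims": [
        {
            "statement": "Training data are relevant to and representative of {use_context}.",
            "evidence_hint": "dataset documentation and representativeness analysis",
        },
        {
            "statement": "Agreed statistical fairness metrics have been evaluated and meet thresholds.",
            "evidence_hint": "model validation report with disaggregated performance",
        },
        {
            "statement": "Impacted communities have been consulted about potential harms.",
            "evidence_hint": "records of stakeholder engagement",
        },
    ],
}

# Instantiating the pattern for a specific context, e.g. a cardiology digital twin.
goal_text = fairness_pattern["goal"].format(
    system="cardiac digital twin decision-support system",
    affected_groups="patient groups across protected characteristics",
)
print(goal_text)
```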

Trustworthy and ethical assurance platform

Developing trustworthy and ethical assurance cases can be complex. Identifying and evaluating the practical actions undertaken throughout the design, development, and deployment of a project’s lifecycle requires wide-ranging expertise and diverse stakeholder engagement.

The development of our assurance platform seeks to improve the accessibility and usability of the methodology in order to drive impact and use, especially in digital healthcare, where emerging regulation around the use of data-driven technologies can often lead to confusion about how to demonstrate and communicate that goals such as fairness have been sufficiently established.

Project aims

This project has the following objectives:

  1. Development of the existing methodology and platform: this development will occur across three interrelated work packages, including
    • the theoretical and methodological development of the trustworthy and ethical assurance framework by grounding the assurance methodology in an established notation and standardised approach, such as Goal Structuring Notation, to develop a hierarchical approach tailored to different stakeholder needs (e.g. regulators and auditors, researchers and developers, patients and patient advocacy groups);
    • validation and tool prototyping for the various components of the framework (e.g. exemplary assurance cases and argument patterns), building on software that has already been developed; and
    • incorporation and reference to existing and developing AI standards, with support from the AI Standards Hub and involvement of the standards community, for grounding assurance case claims in robust and reliable forms of evidence and best practices.
  2. Improved impact: the trustworthy and ethical assurance methodology and platform have already been evaluated and tested with diverse stakeholder groups. This project will build on this exploratory research and participatory co-design to drive further impact of the methodology and platform, focusing on the context of digital healthcare. In addition, the development of skills and training resources will help drive the use and usability of our resources, in conjunction with the AI Standards Hub and the Tools, Practices and Systems programme.
  3. User experience enhancements and validation: while testing the platform (to date) we have prioritised functionality and features over UX/UI considerations. Having additional support to improve the usability and accessibility of the platform, and interoperability with other assurance tools (including those planned by the AAIP), is important for long-term sustainability.

Applications

Key to this project will be the application of the methodology and platform to the specific context of digital healthcare. The project team will use a case study approach to identify exemplary projects to help a) develop the methodology and platform through stakeholder engagement, and b) demonstrate the benefits of the methodology.

This application will support the creation of reusable argument patterns that help researchers and developers communicate and justify how ethical goals have been established in their work, and that support the emergence of best practices for regulatory compliance.

The project team will work closely with researchers in the Turing Research and Innovation Cluster in Digital Twins and the Health programme to identify salient use cases (e.g. digital twins for cardiology).

Organisers

Dr Christopher Burr

Innovation and Impact Hub Lead (TRIC-DT), Senior Researcher in Trustworthy Systems (Tools, Practices and Systems)

Researchers and collaborators

Contact info

If you have any queries about the project or would like to get involved, please send the project team an email at [email protected]

Funders

This project is supported by the Assuring Autonomy International Programme, a partnership between Lloyd’s Register Foundation and the University of York.