Trustworthy and Ethical Assurance of Digital Twins (TEA-DT)

Through multidisciplinary research and an open-source platform, the TEA-DT project aims to empower teams and organisations to navigate ethical challenges in AI, fostering community engagement and sustainable practices.

Project status

Ongoing

Introduction

In recent years, considerable effort has gone into defining principles such as 'responsible', 'safe', and 'fair' in the context of data science research and AI innovation. Although progress has been made in translating these principles into practice, many sectors still lack the tools and capabilities for operationalising and implementing trustworthy and ethical guidance. Moreover, project teams still find it challenging to know how best to achieve ethical goals alongside existing demands or requirements, and then to communicate to other stakeholders and affected users that these goals have been realised. If ignored, these gaps could hamper efforts to build public trust, or amplify existing societal harms and inequalities caused by biased and non-transparent sociotechnical systems.

The Trustworthy and Ethical Assurance of Digital Twins (TEA-DT) project will continue the development and validation of an existing open-source platform, known as the Trustworthy and Ethical Assurance (TEA) Platform, which has been designed by researchers at the Alan Turing Institute and the University of York to help users navigate the process of addressing the aforementioned challenges. This work has also been supported by the UK's Responsible Technology Adoption Unit (Department for Science, Innovation and Technology) and the AI Standards Hub.

The TEA platform helps users and project teams define, operationalise, and implement ethical principles as goals to be assured, and also provides means for communicating how these goals have been realised. It achieves this by guiding individuals and project teams to identify the relevant set of claims and evidence that justify their chosen ethical principles, using a participatory approach that can be embedded throughout a project's lifecycle. The output of the platform—a user-generated assurance case—can be co-designed and vetted by various stakeholders, fostering trust through open, clear, and accessible communication.
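To make the goal–claims–evidence pattern described above concrete, here is a minimal sketch of how an assurance case could be represented in code. This is an illustration only, assuming the general structure described in this section; the class and field names are hypothetical and do not reflect the TEA platform's actual data model or API.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    """An artefact (e.g. a bias audit report) offered in support of a claim."""
    description: str

@dataclass
class Claim:
    """A specific, checkable statement that helps justify the top-level goal."""
    statement: str
    evidence: list[Evidence] = field(default_factory=list)

@dataclass
class AssuranceCase:
    """A top-level ethical goal, justified by claims and their evidence."""
    goal: str
    claims: list[Claim] = field(default_factory=list)

    def unsupported_claims(self) -> list[Claim]:
        # Claims without evidence mark gaps for stakeholders to review.
        return [c for c in self.claims if not c.evidence]

# Hypothetical example: an explainability goal for a health digital twin.
case = AssuranceCase(
    goal="The digital twin's outputs are explainable to clinicians",
    claims=[
        Claim("Model decisions include feature-level explanations",
              [Evidence("Explainability analysis report, v1.2")]),
        Claim("Explanations were evaluated in a clinician user study"),
    ],
)
print([c.statement for c in case.unsupported_claims()])
```

Representing a case this way makes the gaps explicit: any claim without supporting evidence is immediately visible, which is the kind of review that co-design and vetting by stakeholders relies on.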

The TEA platform consists of three main elements:

  1. an online tool for crafting well-reasoned arguments about ethical goals;
  2. user-friendly guidance to foster critical thinking among teams and organisations; and
  3. a supportive community infrastructure for sharing and discussing best practices.

Although the platform is designed for a wide range of applications, the TEA-DT project will specifically focus on digital twins—virtual duplicates that are closely coupled to their physical counterparts to enable access to data and insights that can improve and optimise the way their real-world versions operate. More specifically, the project team will carry out scoping research on the assurance of digital twins within three different contexts: health, natural environment, and infrastructure. This research will be located within and supported by the Turing's Research and Innovation Cluster in Digital Twins.

Although digital twins promise vast societal benefits in these areas, they increasingly rely on various forms of AI and often operate in safety-critical settings, meaning that several challenges must be addressed to ensure their ethical and trustworthy development. For instance, in health, questions about data privacy and ownership arise; environmental applications must tackle bias and fairness issues, complicated by global scales and differing laws; and in infrastructure, technical challenges concerning uncertainty communication give rise to additional needs for transparency and explainability.

In collaboration with key partners and stakeholders, the TEA-DT project will carry out scoping research to co-develop exemplary assurance cases and enhance the platform's features to make it more user-friendly and integrated into workflows. By committing to open research and community-building principles, championed by the Tools, Practices, and Systems Programme and the Turing Way community, the project aims to a) systematically share best practices and standards, b) make the operationalisation of ethical principles more accessible and inclusive, and c) integrate the project sustainably with existing networks and communities.

Project aims

Our vision comprises three interconnected goals:

  1. To conduct multidisciplinary scoping research that identifies how a novel RRI tool, known as the Trustworthy and Ethical Assurance (TEA) Platform, can be used by project teams to guide the development of structured arguments that demonstrate how ethical principles and practices have been assured within digital twins research and innovation.
  2. To co-create accessible and reproducible standards for assuring digital twin technologies (including ML- or AI-enabled components).
  3. To cultivate an inclusive and fit-for-purpose assurance ecosystem.

Applications

Realising these goals will lead to the following outcomes:

  • Demonstrable evidence of the TEA platform's impact and the barriers to its adoption, gleaned from scoping research and engagement with digital twin researchers and practitioners, legal experts, and wider stakeholders (e.g. policy-makers, members of the public).
  • Exemplary assurance cases that can be used to identify commonalities and gaps across different use contexts for digital twins, and show how different methods for responsible research and innovation (e.g., bias audits, algorithmic impact assessments) are used as evidence within diverse projects to justify specific claims in service of a more general goal or principle.
  • Application of these findings to:
    • improve the open-source platform's accessibility and usability, focusing on sustainable community-building; and
    • identify further research or translational opportunities, in collaboration with research application managers and partners (e.g., community bridging, integration with existing standards).
  • Collaborative efforts with partners and stakeholders who are committed to the success of the TEA-DT project (see letters of support) to establish a durable research infrastructure, ensuring the project's long-term impact.


Funders

The Trustworthy and Ethical Assurance of Digital Twins (TEA-DT) project is funded by an award from the UKRI’s Arts and Humanities Research Council to Dr Christopher Burr, as part of the BRAID programme.

UKRI - Arts and Humanities Research Council
BRAID - Bridging Responsible AI Divides
Assuring Autonomy - International Programme


Organisers

Dr Christopher Burr

Innovation and Impact Hub Lead (TRIC-DT), Senior Researcher in Trustworthy Systems (Tools, Practices and Systems)

Karen de Cesare

Research Project Manager, Turing Research and Innovation Cluster in Digital Twins (TRIC-DT)

Researchers and collaborators

Dr Sophie Arana

Research Application Manager, Turing Research and Innovation Cluster in Digital Twins (TRIC-DT)

Dr Kalle Westerling

Research Application Manager, Turing Research and Innovation Cluster in Digital Twins (TRIC-DT)

Nuala Polo

Senior Policy Advisor and AI Assurance Lead at the Responsible Technology Adoption Unit (RTA)