Exploring over-reliance on blind trust in digital IDs

Wednesday 26 May 2021

Asking someone to trust something without showing evidence of why they should is problematic. Civil society groups in Switzerland recently defeated their government’s plans to create a legal basis for digital identity. They did not contest the need for digital identity (ID); rather, they expressed a lack of trust in the proposal to develop a national digital ID programme using commercial providers, citing risks of data abuse.

Following a long history of opposition to a national ID system, the United Kingdom (UK) recently followed Australia, New Zealand and Canada in introducing a Trust Framework to define ‘what good looks like’ in privately run ID services and schemes. Such an approach acknowledges that these systems are largely provided by private companies, many of them international. The 64.4% of Swiss voters who rejected their government’s plan signal growing demand for these providers to demonstrate their trustworthiness. A new Technical Briefing published by the Turing today – Facets of Trustworthiness in Digital Identity Systems – outlines how such an assessment could be done, tackling the blind trust that people are currently being asked to place in these systems.

As digital systems become a fundamental part of identity verification, governments around the world are examining the impacts and opportunities they afford. These systems, particularly national ID systems, bring together an ecosystem of technologies, databases, networks and other infrastructure.

In its Technical Briefing, researchers behind the Turing’s Trustworthy Digital Identity Systems project detail varied features and mechanisms for determining trustworthiness assurance levels (TAL) of a digital identity system. 

The approach looks at six pillars of trustworthiness – security, privacy, robustness, ethics, reliability and resiliency – to define the aspects that determine the predictability of a system’s outputs, the appropriateness of the information it collects, and the sustainability of its design in the technological, social and economic environments in which it operates. It also identifies gaps, such as those between digital and physical infrastructure(s) that impact access to resources and services, in determining a system’s capacity for fair, unbiased and inclusive outcomes.
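To make the idea of an assurance level concrete, here is a minimal sketch of how per-pillar scores might be combined into a single trustworthiness assurance level. The weakest-link rule, thresholds and level numbers are illustrative assumptions for this sketch, not the method defined in the Turing briefing:

```python
# Hypothetical sketch: deriving a trustworthiness assurance level (TAL)
# from scores on the six pillars. The weakest-link rule and the 0.5/0.8
# thresholds are illustrative assumptions, not the briefing's method.

PILLARS = ("security", "privacy", "robustness", "ethics", "reliability", "resiliency")

def trust_assurance_level(scores: dict) -> int:
    """Map per-pillar scores (0.0-1.0) to a TAL of 1 (low) to 3 (high).

    The overall level is capped by the weakest pillar, reflecting the
    idea that a single weak facet undermines trust in the whole system.
    """
    missing = [p for p in PILLARS if p not in scores]
    if missing:
        raise ValueError(f"missing pillar scores: {missing}")
    weakest = min(scores[p] for p in PILLARS)
    if weakest >= 0.8:
        return 3
    if weakest >= 0.5:
        return 2
    return 1

example = {p: 0.9 for p in PILLARS}
example["privacy"] = 0.6  # one weak pillar drags the whole level down
print(trust_assurance_level(example))  # → 2
```

The "minimum over pillars" aggregation is one design choice among many; a weighted average would instead let a strong pillar mask a weak one, which is usually undesirable when the goal is evidence of trustworthiness.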

Defining Metrics

The Turing’s project team drew on this work to offer practical considerations in response to recent consultations on the UK’s Trust Framework. Such frameworks acknowledge that many organisations provide ID systems as they create log-ins and collect the personal attributes (date of birth, citizenship, employment or professional status, credit rating, etc.) used to verify a claimed right of access to resources and services. The UK’s principles-driven approach takes a significant step toward defining accountabilities for protecting users that go beyond the basics of securing systems and data. Defining metrics for measuring and auditing systems’ alignment with those principles would be another significant step.

The transparency principle, for example, defined in the UK proposals as “being able to understand by who, why and when a citizen’s identity data is used”, presents complex challenges. It aims to encourage development that can underpin fairness and the explainability of operations and outcomes. Technically, transparency requires that information flows be evident to the user, and proofs that any collected and retained personal attributes are processed only for a defined and consented purpose.
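The purpose-binding requirement above can be sketched in code: each attribute is recorded together with the purposes the user consented to, and every processing request is checked against that record before data is released. The class and field names here are hypothetical, chosen only to illustrate the check:

```python
# Hypothetical sketch of purpose-limited processing: attributes are
# recorded with the purposes the user consented to, and any processing
# request is checked against that record. Names are illustrative.

from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    attribute: str                               # e.g. "date_of_birth"
    purposes: set = field(default_factory=set)   # consented purposes

class ConsentRegister:
    def __init__(self):
        self._records = {}

    def grant(self, attribute: str, purpose: str) -> None:
        rec = self._records.setdefault(attribute, ConsentRecord(attribute))
        rec.purposes.add(purpose)

    def may_process(self, attribute: str, purpose: str) -> bool:
        """True only if the user consented to this attribute/purpose pair."""
        rec = self._records.get(attribute)
        return rec is not None and purpose in rec.purposes

register = ConsentRegister()
register.grant("date_of_birth", "age_verification")
print(register.may_process("date_of_birth", "age_verification"))  # True
print(register.may_process("date_of_birth", "marketing"))         # False
```

In a real system this check would need to be technically enforced at every processing point and made auditable, rather than living in a single application-level register.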

The pursuit of transparency also creates tensions with other system requirements, including security and privacy goals. Measuring aspects of each allows priorities to be set that reflect cultural considerations and the differing risk profiles of use cases, such as the distribution of aid, public services, banking or commercial storefronts.

Common approaches

ID system processes typically facilitate access to data across organisations – social records or credit card information, for example – and they serve as gateways to many organisations providing services. Machine learning technologies are enabling the automation of many processes and the seamless use of an ID across multiple services, including security-sensitive ones such as online banking and mobile money.

Such developments call for mechanisms and common approaches that let each party know its accountabilities for assuring that data is used and protected appropriately. These should go beyond adherence to the established standards cited within many trust frameworks to include specific requirements such as threat modelling, assessment and risk analysis. These measures highlight the importance of coordinated planning for the safety of information processed across different entities, supported by evidence of technically enforced, multilateral information flow control. In the management of a data breach, for example, handling indicators of threat would entail structured approaches to:

  • send and receive relevant ID data and intelligence 
  • notify all relevant parties, including the victim of any fraud 
  • share information for the detection and mitigation of threats 
  • report security incidents 
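The steps above could be sketched as a minimal incident-handling routine: receive a threat indicator, notify every relevant party (including the fraud victim), and record a shareable report. The record fields and party names here are hypothetical, for illustration only:

```python
# Hypothetical sketch of a structured breach-notification workflow
# following the steps listed above: receive an indicator, notify all
# relevant parties (including the fraud victim), and log a shareable
# report for detection and mitigation. Field names are illustrative.

from dataclasses import dataclass, field

@dataclass
class ThreatIndicator:
    incident_id: str
    description: str
    affected_attributes: list   # e.g. ["credit_rating"]
    victim: str                 # the party defrauded, if any

@dataclass
class IncidentLog:
    reports: list = field(default_factory=list)

    def handle(self, indicator: ThreatIndicator, parties: list) -> list:
        """Notify every relevant party and record a shareable report."""
        notified = sorted(set(parties + [indicator.victim]))
        self.reports.append({"incident": indicator.incident_id,
                             "notified": notified})
        return notified

log = IncidentLog()
indicator = ThreatIndicator("INC-1", "credential reuse detected",
                            ["credit_rating"], victim="user-42")
print(log.handle(indicator, ["bank", "id-provider"]))
# → ['bank', 'id-provider', 'user-42']
```

The point of the structure is that the victim is always in the notification set and every notification leaves an auditable report that other entities in the ecosystem can consume.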

In its review, the Turing team highlighted a need for clarity on how to define privacy so that metrics can be applied across the providers that make up a system. It also highlighted opportunities for applying “conditional privacy”, with safeguards afforded by privacy-enhancing and data-loss prevention technologies to prevent unauthorised exposure in the management of third-party processing.

Vaccine passports 

Current proposals for the introduction of vaccine passports or COVID-19 status certificates are surfacing many of the challenges that speak to these requirements. They are a form of ID based on the collection and processing of personal information (attributes) and have a significant role in providing access to freedoms and privileges on the route out of lockdowns. 

They are being designed for use in digitally supported environments. The World Health Organisation currently advises against their use, but has opened a consultation on interim guidance for developing smart vaccination certificates and plans a trust framework for assuring that such documents are authentic and have not been tampered with. The European Commission has also outlined a trust framework in support of its proposals for a Digital Green Certificate confirming COVID-19 status.

Their development is also fuelling varied approaches. The UK government has proposed using the NHS app, which facilitates access to health records maintained by local GPs, while some travel companies have advanced their own solutions.

All scenarios present a particular need for guidance that goes beyond the requirement of authenticity. Solutions, including the EU Digital Green Certificate, bring together personal attributes – passport number, date of birth, name, citizenship and vaccination type, together with a person’s immunity status – that have the potential to be widely collected and stored as individuals and organisations use them. Strong guidance is needed: to provide assurances that only the attributes required for the purpose will be collected and processed; to underpin protocols that optimise opportunities for preserving privacy and defend against threats of data or identity theft; and to avoid biased or inaccurate outcomes.
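The requirement that only the attributes needed for a purpose be collected can be sketched as a selective-disclosure check: the verifier declares its purpose, and only the attributes mapped to that purpose are released. The purpose-to-attribute mapping below is a hypothetical example, not a mapping defined in any of the cited proposals:

```python
# Hypothetical sketch of data minimisation for a COVID-19 status
# certificate: a verifier declares its purpose, and only the attributes
# required for that purpose are disclosed. The mapping is illustrative.

REQUIRED_ATTRIBUTES = {
    "venue_entry": {"immunity_status"},   # a yes/no status suffices
    "border_crossing": {"name", "passport_number",
                        "immunity_status", "vaccination_type"},
}

def disclose(certificate: dict, purpose: str) -> dict:
    """Return only the certificate attributes needed for the purpose."""
    needed = REQUIRED_ATTRIBUTES.get(purpose)
    if needed is None:
        raise ValueError(f"unknown purpose: {purpose}")
    return {k: v for k, v in certificate.items() if k in needed}

certificate = {
    "name": "A. Example",
    "passport_number": "X0000000",
    "date_of_birth": "1970-01-01",
    "vaccination_type": "vaccine-A",
    "immunity_status": "vaccinated",
}
print(disclose(certificate, "venue_entry"))
# a venue sees only the immunity status, not the passport number
```

Cryptographic selective-disclosure schemes go further by letting the holder prove an attribute without revealing it at all, but even this simple filtering illustrates the gap between what a certificate contains and what a given verifier needs.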

Overall, digital ID is fast progressing as a foundation for new efficiencies in public services and governance, economic growth, and the reopening of societies coping with a global pandemic. As ID systems evolve with this progress, so does the need for evidence that they function as intended. Establishing trustworthiness assurance levels offers such evidence.

The report – Facets of Trustworthiness in Digital Identity Systems – is available on the project web pages, where you can also learn more about opportunities for engagement.