Action needed to reduce bias in medical devices, review finds

Monday 11 March 2024

The Independent Review of Equity in Medical Devices has called for urgent action on artificial intelligence (AI)-enabled medical devices to reduce bias and prevent possible harm to patients, in a report published today (Monday 11 March 2024).

The review, which was set up in 2022 to establish the extent and impact of bias in the performance of medical devices used in the NHS, focused on three types of medical device where evidence suggested that the potential for harm was substantial.

These were: optical devices such as pulse oximeters, widely used during the Covid-19 pandemic to monitor blood oxygen levels; AI-enabled devices, which could help the diagnosis or treatment of diseases; and certain genomics applications, such as polygenic risk scores, which predict disease risk based on genetics.

The review identified that AI-enabled devices could lead to under-diagnosis of skin cancers in people with darker skin, as a result of the underlying models being trained predominantly on images of lighter skin.

It also found that the data used in polygenic risk scores was subject to bias against people with non-European genetic ancestry.

The report also outlines how women, ethnic minorities and people in disadvantaged socio-economic conditions are disproportionately affected by these inequities. During the testing of new medical devices, these groups are often underrepresented in clinical trials and device evaluations, which can lead to the underdiagnosis of medical conditions.

The Turing’s ethics team, led by Professor David Leslie and Dr Michael Katell, prepared a rapid review of health equity in AI-enabled medical devices that formed the evidence base for the work of the Independent Review Panel. Their report concluded that issues of AI-related bias, discrimination and health inequity need to be examined and addressed through a wider-angled, society-centred lens. The research included interviews with people across the healthcare and medical sector. The researchers argue that decentralising the technology is crucial to fully understanding health inequity in the UK’s medical device innovation ecosystem.

Professor David Leslie, Director of Ethics and Responsible Innovation Research at The Alan Turing Institute, said:

“AI systems have enormous potential to bring benefits to patients across the healthcare and medical sectors. However, if the technology is created without sufficient consideration of under-represented groups and of historical legacies of systemic bias and structural discrimination, then there is a risk that these technologies could ultimately cause harm to patients.

“It’s vital that the people and companies developing such devices ensure that these dimensions of balanced representation and historical inequity are considered from the beginning of their creation, to ensure the best outcomes for everyone.” 

The University of Liverpool’s Professor Dame Margaret Whitehead, Chair of the Review, said:

 "Our review reveals how existing biases and injustices in society can unwittingly be incorporated at every stage of the lifecycle of AI-enabled medical devices, and then magnified in algorithm development and machine learning.”

The review recommends that the Government should start preparing now for the disruption to healthcare from the next generation of AI-enabled machines if it is to minimise the risk of patient harm.

Panel member Professor Chris Holmes, formerly Programme Director for Health and Medical Sciences at The Alan Turing Institute, said: 

"We are calling on the government to appoint an expert panel including clinical, technology and healthcare leaders, patient and public representatives and industry to assess the potential unintended consequences arising from the AI revolution in healthcare. Now is the time to seize the opportunity to incorporate action on equity in medical devices into the overarching global strategies on AI safety.”

The Expert Review Panel, led by Professor Dame Margaret Whitehead, included Professors Raghib Ali (University of Cambridge), Enitan Carrol (The University of Liverpool and North West Clinical Research Network), Chris Holmes (formerly The Alan Turing Institute and University of Oxford) and Frank Kee (Queen’s University Belfast).

Read the full report and the full news story.