Exciting new collaboration between The Alan Turing Institute and the MRC Clinical Trials Unit (MRC CTU) at UCL, exploring the potential impact of statistical machine learning on the design, conduct and analysis of randomised clinical trials.
Explaining the science
Clinical trials are currently the gold standard for testing the safety and effectiveness of treatments (drugs, surgical procedures, or other health interventions) in clinical care.
These are tightly regulated, with emphasis on trial oversight to ensure the safety of participants and the quality of the data generated.
Despite their success, clinical trials are often slow and expensive. The importance of improving the performance of randomised controlled trials (RCTs) has prompted the UK government to earmark this area of medicine as one in which the UK aspires to be world-leading.
Machine learning and AI could help to improve clinical trial processes, especially the management of the large quantities of data trials generate: from participant identification, to monitoring of data quality and protocol adherence, to the discovery of subgroup effects. This would help bring new and effective treatments to the right populations faster.
The overarching aim of this project is to explore the potential impact of statistical machine learning and AI on the design, conduct and analysis of randomised clinical trials. By enhancing human expertise and making better use of data, AI can predict the risk of trial or site failure as well as clinical patient outcomes.
Two initial challenges have been identified.
1. Treatment effect heterogeneity in clinical trials
This project aims to use data from large randomised trials to explore novel machine learning methods for the identification of treatment effect heterogeneity and patient subgroups who may benefit from a particular treatment, thereby providing the basis for future confirmatory trials.
We will also build counterfactual predictive models that yield personalised treatment effect estimates. Particular attention will be paid to interpretability, reproducible research, the validation of AI methods and the assessment of false discovery rates.
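As a minimal illustration of the kind of analysis described above, the sketch below estimates heterogeneous treatment effects with a simple "T-learner" (separate outcome models per trial arm) on simulated trial data. The data-generating model, variable names and use of ordinary least squares are all illustrative assumptions, not the project's actual methodology:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated randomised trial: X is a single baseline covariate, T the
# randomised treatment indicator, and the true treatment effect grows
# with X (i.e. there is treatment effect heterogeneity).
n = 2000
X = rng.uniform(0, 1, size=(n, 1))
T = rng.integers(0, 2, size=n)
true_effect = 2.0 * X[:, 0]
Y = 1.0 + X[:, 0] + T * true_effect + rng.normal(0, 0.1, size=n)

def fit_ols(X, y):
    """Least-squares fit with an intercept; returns the coefficient vector."""
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def predict(beta, X):
    return np.column_stack([np.ones(len(X)), X]) @ beta

# T-learner: fit separate outcome models on the treated and control arms,
# then estimate the conditional average treatment effect (CATE) as the
# difference between their predictions for the same patient.
beta_treated = fit_ols(X[T == 1], Y[T == 1])
beta_control = fit_ols(X[T == 0], Y[T == 0])

X_new = np.array([[0.1], [0.9]])
cate = predict(beta_treated, X_new) - predict(beta_control, X_new)
print(cate)  # the estimated effect is larger for the high-X patient
```

In practice the outcome models would be flexible machine learning models rather than linear regressions, and the resulting subgroup findings would be treated as hypothesis-generating, to be confirmed in future trials.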
2. Clinical trial monitoring
There is a legal requirement to ensure the safety of participants and the quality of the data generated. The Clinical Trials Unit (CTU) overseeing the research must ensure that all participating trial sites adhere to all applicable ethical and regulatory requirements.
This is done by trial site monitoring, which often involves visiting sites and performing source data verification (SDV), a time-consuming and expensive process. Performance indicators are usually used to trigger monitoring visits, but choosing and validating those indicators is challenging.

The aim is to explore AI/ML approaches to identify or predict which sites within an ongoing clinical trial are under-performing or at risk of non-compliance, using centrally held patient-reported data and previous longitudinal site monitoring data. These predictions will assist in the prioritisation and planning of monitoring actions (e.g. site visits). If CTUs can make better decisions about which sites to visit and when, this could bring substantial financial and scientific benefits to the clinical trials community.
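To make the idea of triggering visits from centrally held indicators concrete, here is a deliberately simple sketch that ranks sites by a combined risk score over hypothetical performance indicators. The indicator set, the figures and the scoring rule are all illustrative assumptions; the project would explore far richer predictive models:

```python
import numpy as np

# Hypothetical centrally held performance indicators, one row per site:
# columns are missing-data rate, protocol-deviation rate, and average
# query-resolution time in days.
indicators = np.array([
    [0.02, 0.01,  5.0],
    [0.03, 0.02,  6.0],
    [0.02, 0.01,  4.0],
    [0.15, 0.09, 21.0],   # an under-performing site
    [0.03, 0.02,  7.0],
])

# Standardise each indicator across sites and average the z-scores, so a
# site that is well above the norm on any indicator gets a higher score.
z = (indicators - indicators.mean(axis=0)) / indicators.std(axis=0)
risk = z.mean(axis=1)

# Rank sites for monitoring visits, highest risk first.
priority = np.argsort(risk)[::-1]
print(priority)  # site index 3 is ranked first
```

A real monitoring model would instead learn which indicator patterns predicted non-compliance in previous longitudinal monitoring data, rather than relying on a fixed hand-built score.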
The resulting methodology and software will be of interest to researchers conducting clinical trials, whether these are publicly funded or sponsored by a pharmaceutical company. We plan to hold workshops with both academic and industry trialists to provide training and to help shape our methodology as we go, particularly with regard to adaptive trials.
Additionally, heterogeneity methods can be used in evaluations of social programmes (e.g. school vouchers, training, conditional cash transfers), as knowledge of the optimal subgroups to target can help when designing social protection and expansion programmes.
Other scientists conducting experiments may find these methods useful when exploring how treatment effects are modified by the characteristics of their experimental sample.