Bilal Mateen

Position

Clinical Data Science Fellow

Partner Institution

Bio

Bilal is a clinical academic; he splits his time between his clinical commitments at King's College Hospital (KCH) and the Wellcome Trust, where he is the Clinical Technology Lead. Bilal's work at Wellcome focuses on funding digital public goods and the software infrastructure that makes research possible, as part of the larger Data for Science and Health Priority Area. He also holds an honorary lecturer appointment at University College London's Institute of Health Informatics, where he carries out most of his clinical research.

Bilal's research largely focuses on applications of data science and machine learning in a variety of clinical settings, from the neuro-rehabilitation research he is involved in at the National Hospital for Neurology & Neurosurgery (UCL/Queen Square, and formerly Warwick Medical School) to his more recent work on diabetes and inflammatory bowel disease. Bilal regularly speaks to both technical and non-technical audiences about reproducible ML/AI and how to support innovation whilst implementing sensible regulation. He has been invited to speak at the Open Data Institute, the General Medical Council, and the National Institute for Health and Care Excellence, as well as at several clinical conferences.

Alongside his research at the Turing, Bilal was formerly the Clinical Data Science Liaison to the Data Study Groups Programme and the Turing-Warwick Data Science for Social Good (DSSG) summer programme. In this voluntary role, Bilal helped academic-led groups take full advantage of these opportunities to explore how data science and artificial intelligence could be applied to cutting-edge problems in health and social care.

Research interests

Bilal's research at the Turing focuses on creating a robust framework for reporting and assessing machine learning (ML) and artificial intelligence (AI)-based predictive modelling in medicine. This framework will allow researchers and regulatory bodies alike to ensure that all potential ML/AI-based tools are transparent, reproducible, ethical, and effective before they are allowed to materially alter the care that patients receive.