Introduction
Mathematical models, typically encoded in computationally expensive simulators, underpin much of modern scientific endeavour across diverse areas, including the physical, social, engineering and environmental sciences. However, the use of these simulators induces uncertainty from sources that include model structure, numerical approximations and unknown parameters. These uncertainties must be accounted for, together with uncertainty in real-world data, forming the multidisciplinary subject of uncertainty quantification (UQ).
Key UQ tasks include propagating input uncertainty through simulators, solving inverse problems arising from model calibration, and building surrogate models that make analysis of computationally expensive simulators feasible.
Explaining the science
Theory and computation
UQ is now a key methodology for understanding scientific models, enabling theory and intuition to be rigorously tested and then exploited to provide predictions, with associated uncertainty, of complex real-world systems. The models range from mechanistic solutions of well-known partial differential equations, for example in engineering or physics, to phenomenological systems in biology, epidemiology or healthcare, to data-based models in the social or political sciences.
UQ via surrogate models
A naive implementation of most UQ tasks would require many thousands of simulator evaluations, which typical computational costs make infeasible. Hence much UQ methodology hinges on a surrogate model: a computationally inexpensive approximation of the simulator built from a limited number of simulator evaluations, collected in well-designed computer experiments. Popular surrogates from the statistics and applied mathematics communities, respectively, include the Gaussian process emulator and the polynomial chaos approximation, both of which provide fast predictions of simulator outputs.
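As a rough illustration, the minimal sketch below (assuming scikit-learn and a toy one-dimensional stand-in for the simulator, both chosen purely for illustration) fits a Gaussian process emulator to a small budget of simulator runs and then produces fast, uncertainty-aware predictions at thousands of new inputs.

```python
# Minimal Gaussian process emulator sketch (illustrative only):
# the "simulator" and the random design below are placeholders for a
# real expensive model and a properly designed computer experiment.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel


def simulator(x):
    # Stand-in for an expensive simulator: one input, one output.
    return np.sin(3 * x) + 0.5 * x**2


rng = np.random.default_rng(0)
X_train = rng.uniform(0, 2, size=(20, 1))   # small budget of simulator runs
y_train = simulator(X_train).ravel()

kernel = ConstantKernel(1.0) * RBF(length_scale=0.5)
emulator = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
emulator.fit(X_train, y_train)

# Fast predictions, with uncertainty, at thousands of new inputs.
X_new = np.linspace(0, 2, 2000).reshape(-1, 1)
mean, std = emulator.predict(X_new, return_std=True)
```

Once fitted, the emulator's mean and standard deviation can stand in for the simulator in downstream UQ tasks such as uncertainty propagation or sensitivity analysis, at a tiny fraction of the cost.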
Model calibration
The most challenging UQ tasks often involve the solution of inverse problems, for example calibrating models to learn uncertain model parameters or identifying input conditions that lead to required or desirable outputs. Model calibration, for example using Bayesian inversion, aims to infer the distribution of model parameters that best match real-world observations whilst incorporating discrepancies between the simulator and reality. Calibration is an important step in ensuring useful and reliable predictions can be made by both simulators and surrogate models.
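The sketch below illustrates the basic idea with a deliberately simple random-walk Metropolis-Hastings sampler, assuming a toy one-parameter simulator, a Gaussian prior and known observation noise; a realistic calibration would additionally model the simulator-reality discrepancy and typically evaluate the likelihood through a surrogate rather than the simulator itself.

```python
# Minimal Bayesian calibration sketch via random-walk Metropolis-Hastings
# (illustrative only; the one-parameter "simulator", prior and noise level
# are assumptions made for this example).
import numpy as np

rng = np.random.default_rng(1)


def simulator(theta, x):
    # Stand-in for the expensive model evaluated at inputs x.
    return theta * np.sin(x)


x_obs = np.linspace(0, np.pi, 10)
theta_true, sigma_obs = 1.7, 0.1
y_obs = simulator(theta_true, x_obs) + rng.normal(0, sigma_obs, x_obs.size)


def log_posterior(theta):
    log_prior = -0.5 * (theta / 5.0) ** 2                 # N(0, 5^2) prior
    resid = y_obs - simulator(theta, x_obs)
    log_like = -0.5 * np.sum((resid / sigma_obs) ** 2)    # Gaussian noise
    return log_prior + log_like


samples, theta = [], 0.0
for _ in range(5000):
    proposal = theta + 0.1 * rng.normal()                 # random-walk step
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal
    samples.append(theta)

posterior = np.array(samples[1000:])                      # discard burn-in
print(posterior.mean(), posterior.std())
```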
Aims
This interest group will bring together Turing-affiliated researchers to form a UQ hub, pooling expertise from statistics, applied mathematics, computer science, research software engineering and domain areas.
Through discussion and collaboration, the group will facilitate the development and application of new foundations, algorithmic approaches and workflows, and the benchmarking of competing and complementary methodologies.
The group will exploit the unique environment of the Turing to adopt a fully fledged data science approach, combining theory, computation and domain expertise.
Talking points
How to scale up and generalise surrogate models to address challenges from large-scale simulators?
Challenges: Many simulators exhibit high dimensional inputs and/or outputs, complex non-stationary behaviour or are formed from chains or networks of smaller models.
Example output: Efficient methods and implemented algorithms for large-scale emulation.
How to combine simulators and data across different scales and levels to provide accurate predictions of real-world systems?
Challenges: How to specify physically meaningful discrepancies across scales? How to distribute experimental effort across different levels, and design combined experiments across levels to enable data fusion?
Example output: New design of experiments methods for estimating discrepancies across scales.
How to democratise UQ systems/platforms, making them available to individuals, organisations or SMEs, even those with few resources?
Challenges: Providing training and computational resources to enable appropriate understanding and facilitate use of complex methods.
Example output: Open source software and training materials via GitHub.
How to provide unbiased benchmarking and comparisons of different UQ methods?
Challenges: Because UQ is multidisciplinary, competing methods regularly arise in different fields, and there is an important requirement to compare these methods to understand their relative merits.
Example output: A suite of agreed test problems and methodologies.
How to get involved
Organisers
Professor Serge Guillas, Data-Centric Engineering Group Leader and Turing Fellow
Dr Jill Johnson, Lecturer in Statistics, University of Sheffield
Researchers
Dr Gihan Mudalige, Associate Professor, University of Warwick
Professor Mike Giles, Professor of Scientific Computing, University of Oxford
Dr Devaraj Gopinathan, Senior Research Associate, UCL
Dr Chris Dent, Professor of Industrial Mathematics, University of Edinburgh and Turing Fellow
Professor Jim Smith, Turing Fellow
Professor Mark Girolami, Chief Scientist, The Alan Turing Institute
Dr François-Xavier Briol, Data-Centric Engineering Group Leader
Dr Pranay Seshadri, Data-Centric Engineering Group Leader
Professor Tim Dodwell, Turing AI Fellow
Professor Peter Challenor, Turing Fellow
Dr Ruchi Choudhary, Data-Centric Engineering Group Leader
Dr Dimitra Salmanidou, Visiting Researcher
Rebecca Ward, Research Associate
Ryuichi Kanai, Visiting Researcher
Dr Monika Kreitmair, Visiting Researcher
Contact info
External researchers
Mingda Yuan, University of Cambridge
James Salter, University of Exeter
Victoria Volodina, University of Exeter
Deyu Ming, UCL
Mariya Mamajiwala, UCL
Ayao Ehara, UCL
Kaiyu Li, UCL
Alejandro Diaz de la O, UCL
Henry Wynn, LSE