Theoretical foundations of engineering digital twins

Developing theoretical foundations underpinning digital twins for complex engineering systems

Introduction

Digital twins are becoming an increasingly essential component in the design and management of assets in modern industry and engineering, with widespread use in civil engineering and the energy and aerospace sectors. Despite the diversity of application areas, they face common challenges which fall within the remit of:

  • Computational statistics and uncertainty quantification
  • Analysis and approximation of partial differential equations (PDEs)
  • AI and machine learning

These challenges relate to the fundamental question of how to systematically combine data with physics-based models in the creation of digital twins, and how then to correctly incorporate digital twins within a risk-stratified decision-making pipeline.

The overarching goal of this project is to identify fundamental challenges faced in the development of digital twins for complex engineering systems and, by leveraging recent developments in applied mathematics, computational statistics and machine learning, to develop generally applicable methodology which can be used to address these challenges.

Explaining the science

Quantification of uncertainty for complex engineering models

Uncertainty quantification (UQ) seeks to address the problems associated with incorporating real-world variability and probabilistic behaviour into engineering and systems analysis. In the context of complex engineering systems, the digital models involve complex nonlinear dynamics (e.g. computational fluid dynamics models for aerospace and energy systems, and nonlinear elasticity for structural systems). Such models tend to involve legacy and proprietary codes which are not necessarily available to the researcher or decision maker. This motivates the development of frameworks able to perform UQ for models which are black box, i.e. whose internal workings are not accessible.
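
As a concrete illustration, the sketch below propagates input uncertainty through a black-box model by plain Monte Carlo sampling, requiring only forward evaluations. The model, the input distribution and all names are illustrative assumptions standing in for a proprietary solver, not project code.

    import numpy as np

    # Hypothetical stand-in for a legacy/proprietary solver: we can only
    # evaluate the model, not inspect its internals.
    def black_box_model(x):
        return np.sin(x[0]) + 0.5 * x[1] ** 2

    rng = np.random.default_rng(0)

    # Propagate input uncertainty by plain Monte Carlo: sample inputs from
    # their (assumed) distribution, evaluate the model, summarise the outputs.
    n_samples = 10_000
    inputs = rng.normal(loc=0.0, scale=1.0, size=(n_samples, 2))
    outputs = np.array([black_box_model(x) for x in inputs])

    print(f"mean output: {outputs.mean():.3f}")
    print(f"output std:  {outputs.std():.3f}")
    print(f"95% interval: {np.percentile(outputs, [2.5, 97.5])}")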

Sensitivity analysis for complex engineering models

Sensitivity analysis is the process of apportioning the uncertainty in the outputs of a model to the uncertainty in each input variable over its entire range of interest. This analysis identifies the most sensitive components of the model, providing a measure of how strongly the state variables of greatest interest respond to individual parameters or sub-components. Employed in tandem with uncertainty quantification, sensitivity analysis is an indispensable tool for data-driven design under uncertainty through a digital twin.
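
To make the idea concrete, the following sketch estimates first-order Sobol indices, a standard variance-based sensitivity measure, using a pick-freeze (Saltelli-style) sampling scheme. The toy model is a hypothetical stand-in for an expensive engineering simulator.

    import numpy as np

    # Toy model standing in for an expensive engineering simulator.
    def model(X):
        return X[:, 0] + 2.0 * X[:, 1] ** 2 + 0.1 * X[:, 2]

    rng = np.random.default_rng(1)
    n, d = 20_000, 3

    # Pick-freeze scheme: two independent input sample matrices A and B.
    A = rng.uniform(-1.0, 1.0, size=(n, d))
    B = rng.uniform(-1.0, 1.0, size=(n, d))
    fA, fB = model(A), model(B)
    var_Y = np.var(np.concatenate([fA, fB]))

    for i in range(d):
        AB = A.copy()
        AB[:, i] = B[:, i]  # replace only the i-th input column
        fAB = model(AB)
        # First-order Sobol index: fraction of output variance explained
        # by input i alone.
        S_i = np.mean(fB * (fAB - fA)) / var_Y
        print(f"S_{i + 1} = {S_i:.3f}")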

Physics informed machine learning

Machine learning (ML) methods are playing an increasingly important role in the development of digital twins. Deep neural networks, in particular, have demonstrated promise in learning complex dynamic relationships across multiple scales without domain-dependent feature engineering. While these methods are attractive, there is no guarantee that they learn the relationships and constraints dictated by fundamental physical principles, particularly in data-limited applications. This motivates the development of machine learning methods in which physical constraints and relationships can be embedded within the model.
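
A minimal sketch of the loss-function route to embedding physics, assuming a toy ODE and PyTorch (both illustrative choices, not the project's models): the network is trained so that its output satisfies the differential equation du/dt = -u at sampled collocation points, alongside the initial condition.

    import torch

    # Network approximating the solution u(t) of du/dt = -u with u(0) = 1.
    net = torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    for step in range(2000):
        # Collocation points at which the ODE residual is penalised.
        t = torch.rand(128, 1, requires_grad=True)
        u = net(t)
        du_dt = torch.autograd.grad(u, t, grad_outputs=torch.ones_like(u),
                                    create_graph=True)[0]
        physics_loss = ((du_dt + u) ** 2).mean()      # residual of du/dt = -u
        ic_loss = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()  # u(0) = 1
        loss = physics_loss + ic_loss

        opt.zero_grad()
        loss.backward()
        opt.step()

    # After training, net(t) should approximate exp(-t) on [0, 1].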

Project aims

As part of the AI for Science and Government theme of Digital Twins for Engineering, this project aims to deliver ground-breaking research into the application of uncertainty quantification, sensitivity analysis and physics informed machine learning for digital twins. 

The activities of this project can be split into the following thematic goals:

  1. Development of novel frameworks for the integration of complex engineering models with data, with a particular focus on non-intrusive methodologies.
  2. Development of sensitivity analysis methods for complex engineering models based on:
  • Leveraging active subspace methods arising in dimension reduction (see the sketch after this list)
    • The development of robust non-intrusive methods for global sensitivity analysis
    • The integration of sensitivity analysis with probabilistic numerics algorithms, particularly in the context of digital twins involving large scale PDE models. 
  3. Development of a framework for physics informed machine learning in which physical constraints and relationships can be embedded within the model, either through the introduction of loss functions with differential equation constraints, or through ML models whose predictors are constructed to explicitly satisfy the underlying physical relationship.
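
To illustrate the active-subspace idea in goal 2, the following sketch (a toy example, not project code) estimates the matrix C = E[grad f grad f^T] by Monte Carlo and reads the dominant direction of variation off its leading eigenvector.

    import numpy as np

    # Toy model with a one-dimensional active subspace: the output varies
    # only along the direction w.
    w = np.array([0.8, 0.6, 0.0])

    def grad_f(x):
        return np.exp(w @ x) * w  # analytic gradient of f(x) = exp(w.x)

    rng = np.random.default_rng(2)
    n, d = 1_000, 3

    # Monte Carlo estimate of C = E[grad f grad f^T]; its leading
    # eigenvectors span the active subspace.
    grads = np.array([grad_f(x) for x in rng.normal(size=(n, d))])
    C = grads.T @ grads / n
    eigvals, eigvecs = np.linalg.eigh(C)

    # np.linalg.eigh returns eigenvalues in ascending order, so the last
    # eigenvector is the dominant direction; it should align with w.
    print("eigenvalues:", eigvals)
    print("leading direction:", eigvecs[:, -1])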

Applications

This work is driven by problems arising in the development of digital twins across engineering sectors: infrastructural engineering, aerospace engineering and the oil and gas industries. The project will engage with Turing projects in these application areas to provide foundational support.

Recent updates

June 2019

The paper 'Statistical Inference for Generative Models with Maximum Mean Discrepancy' has been released; it details a novel approach to non-intrusive calibration of digital twins.
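
For intuition, here is a minimal sketch of the quantity such calibration minimises: a kernel estimator of the squared maximum mean discrepancy (MMD) between observed data and simulator output. The Gaussian kernel, the toy simulator and the parameter sweep are illustrative assumptions, not the paper's setup.

    import numpy as np

    def mmd_squared(X, Y, lengthscale=1.0):
        # Biased estimator of squared MMD under a Gaussian (RBF) kernel.
        def k(A, B):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2.0 * lengthscale ** 2))
        return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

    rng = np.random.default_rng(3)
    data = rng.normal(0.5, 1.0, size=(200, 1))  # observed data

    # Calibration idea: pick the simulator parameter whose output sample
    # is closest to the data in MMD.
    for theta in (0.0, 0.25, 0.5, 0.75, 1.0):
        sims = rng.normal(theta, 1.0, size=(200, 1))  # simulator draws
        print(f"theta = {theta:.2f}  MMD^2 = {mmd_squared(data, sims):.4f}")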

May 2019

The paper 'The Statistical Finite Element Method' has been released, presenting a statistical construction of the finite element method (FEM) which, for the first time, provides the means for the coherent synthesis of data and FEM.

Organisers

Dr Andrew Duncan

Director of Science & Innovation - Fundamental Research. Department/Programme: Fundamental Research, Programme Leadership

Contact info

[email protected]

Funders