Neural-Symbolic AI for Digital Twins

Integrating symbolic reasoning with cutting-edge deep learning for trustworthy, interpretable, and explainable next-generation digital twin models.

Project status

Ongoing

Introduction

Trustworthy, interpretable models are central to supporting decision makers. For certain applications more than others (e.g. medicine, critical infrastructure), predictive models need an added level of explainability and interpretability. To that end, neuro-symbolic methods offer a state-of-the-art way to anchor model predictions in symbolic reasoning, making them more explainable and interpretable. Neuro-symbolic AI combines neural learning with symbolic reasoning. Integrating neuro-symbolic AI models within digital twins can further support stakeholders in higher-level decision making in complex settings.

Explaining the science

Neuro-symbolic AI centres on the combination of neural and symbolic approaches: merging neural network-based machine learning or deep learning models with symbolic reasoning and knowledge representation. Neural network models have permeated most industrial applications, often achieving state-of-the-art performance. Symbolic modelling approaches, by contrast, provide rigorous explanation and formal reasoning, but may not scale well to complex real-world applications with large amounts of unstructured data. Neuro-symbolic AI builds synergies between the two paradigms, leveraging the strengths of each approach while compensating for its inherent weaknesses.
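
To illustrate the general idea (this is a minimal, hypothetical sketch, not the project's implementation), the snippet below shows one common neuro-symbolic pattern: a neural model proposes labels, and a small symbolic rule base checks them against domain knowledge before they are passed to a decision maker. The names (predict_proba, RULES, the label names) and the dummy probabilities are purely illustrative.

```python
# Toy "neural learning + symbolic reasoning" pipeline (illustrative only).

RULES = {
    # Symbolic domain knowledge: a component flagged as "critical" should
    # also have been flagged as "degraded".
    "critical_requires_degraded": lambda labels: (
        not labels.get("critical", False) or labels.get("degraded", False)
    ),
}

def predict_proba(sensor_features):
    """Stand-in for a trained neural network returning label probabilities."""
    return {"degraded": 0.30, "critical": 0.85}  # dummy output

def neuro_symbolic_predict(sensor_features, threshold=0.5):
    probs = predict_proba(sensor_features)
    labels = {name: p >= threshold for name, p in probs.items()}
    violated = [name for name, rule in RULES.items() if not rule(labels)]
    # The symbolic layer does not silently overwrite the network; it flags
    # inconsistencies, keeping the prediction explainable and auditable.
    return {"probabilities": probs, "labels": labels, "violated_rules": violated}

print(neuro_symbolic_predict(sensor_features=None))
```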

Deep learning models might not provide enough explainability and transparency for certain critical applications. To tackle this limitation, we introduce logic-based symbolic reasoning into deep learning models so as to produce more explainable, trustworthy, and interpretable models. The foundation of the symbolic modelling paradigm can be built upon ontologies and knowledge graphs, which articulate domain-specific knowledge and capture the intricate web of relationships and interconnections in the data.
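
One widely used way to couple such logic-based knowledge with a deep learning model is to encode a rule as a differentiable penalty added to the training loss. The sketch below is a hedged, illustrative example only (PyTorch is used as a stand-in framework, and the rule "crack_detected implies maintenance_required" and all variable names are hypothetical), not the project's method.

```python
import torch

# A domain rule from an ontology, "crack_detected -> maintenance_required",
# expressed as a soft (fuzzy) penalty: mass assigned to (A and not B).
def logic_penalty(p_crack, p_maintenance):
    return (p_crack * (1.0 - p_maintenance)).mean()

def training_loss(logits, targets, rule_weight=0.1):
    # logits: (batch, 2) raw outputs for [crack_detected, maintenance_required]
    probs = torch.sigmoid(logits)
    data_term = torch.nn.functional.binary_cross_entropy(probs, targets)
    # The logic term nudges the network towards predictions that respect
    # the symbolic knowledge, without hard-coding the answer.
    return data_term + rule_weight * logic_penalty(probs[:, 0], probs[:, 1])

# Toy usage with random data
logits = torch.randn(8, 2, requires_grad=True)
targets = torch.randint(0, 2, (8, 2)).float()
loss = training_loss(logits, targets)
loss.backward()  # gradients flow through both the data term and the logic term
```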

Project aims

The aim is to build neuro-symbolic AI models that can be deployed within digital twins, assisting decision makers through automation anchored in both learning and reasoning. The project goals include:

  • Synergistically integrate formal symbolic reasoning with neural learning methods, leveraging the benefits of each method while mitigating its potential shortcomings.
  • Investigate different automated algorithms along the dimensions of robustness, explainability, interpretability, and trustworthiness. 
  • Deploy neuro-symbolic AI models in digital twin applications to support higher-level decisions for complex systems. 

Applications

For applications where trust and interpretability are paramount, a neuro-symbolic AI digital twin platform is essential for enhancing the explainability and trustworthiness of the predictive models used for decision making.

Our proposed neuro-symbolic AI recipe provides a high-level modelling paradigm that can be readily applied across a wide range of applications (e.g. infrastructure, environment, health). As one example, we study the application of neuro-symbolic AI to complex infrastructure problems.

Organisers

Professor David Wagg

Co-director for Infrastructure, Turing Research and Innovation Cluster in Digital Twins (TRIC-DT); Professor of Mechanical Engineering, University of Sheffield

Researchers and collaborators