In the COVID-19 pandemic, the use of models and data in decision making has been at the forefront of public debate. We have seen prime-time television briefings from the government in which modelling results have been presented, and many comment pieces in the mainstream press about the methodology of model-based decision support.

The use of models to support important decisions, and thus the importance of doing this well, is not as new as its recent public prominence. For many years, computer models have been used in industry and government to support policy and investment decision making in key areas such as climate, energy, wider environmental issues, and the overall economy.

What does good decision support modelling look like? The first point, which may sound like a truism but is far from universal practice, is that the modelling should be appropriate for the context in which it is used.

This connects to issues of communication, which is often thought of as a one-way matter of analysts presenting evidence to decision makers. However, communication in the other direction (those designing decision support modelling coming to understand the application context, and the judgments of decision makers about how competing factors should be weighed against each other) is just as important in making the modelling relevant and useful.

Proper consideration of uncertainty is also essential when modelling for decision support. We must of course consider uncertainty in what the world will throw at us in the future (e.g. what the wind power resource will look like on a particular day, or what the climate background will be in 2030), and uncertainty in the numbers that go into the analysis (for instance, early in the pandemic, the case fatality rate and R number for COVID-19).
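One simple way to propagate this kind of input uncertainty is Monte Carlo sampling: draw the uncertain input repeatedly from a plausible range, run the calculation for each draw, and report an interval rather than a single number. The sketch below is purely illustrative (the toy growth model and the input range are our assumptions, not taken from any of the studies discussed here):

```python
import random
import statistics

def projected_cases(r, generations=5, initial_cases=100):
    """Toy branching projection: cases grow by a factor r each generation."""
    return initial_cases * r ** generations

def monte_carlo_interval(n_samples=10_000, seed=0):
    """Propagate uncertainty in r through the projection by sampling."""
    rng = random.Random(seed)
    # Input uncertainty: suppose we only know r lies somewhere in [1.5, 3.5]
    samples = [projected_cases(rng.uniform(1.5, 3.5)) for _ in range(n_samples)]
    samples.sort()
    lo = samples[int(0.05 * n_samples)]   # 5th percentile
    hi = samples[int(0.95 * n_samples)]   # 95th percentile
    return lo, statistics.median(samples), hi

lo, mid, hi = monte_carlo_interval()
```

Because the projection is so sensitive to r, the resulting interval is wide, which is itself useful information for a decision maker: a single "best estimate" would hide how little the inputs pin down the outcome.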

The consequences of the difference between model assumptions and the way the real world works are also vitally important. Unless the system is very simple, or has well-defined physics-based governing equations (as in an electricity network), it is very rare to be able to regard the equations underlying a computer model as in any sense ‘the same thing’ as the real world, and to justify use of the model on that basis.

There are a number of guides available to good practice in modelling for decision support, for instance the Aqua Book, a manual on ‘producing quality analysis for government’ developed by HM Treasury. This is a very high quality document within its scope, covering governance and technical aspects as well as both decision maker and analyst perspectives. However, some technical aspects, such as how to implement its requirements for uncertainty treatment, are out of its scope.

Our work at The Alan Turing Institute centres on the consequent question of how to provide tools for uncertainty treatment that can answer the challenges laid down by the Aqua Book’s guidance. This includes the White Paper which this article accompanies, and the related Turing Institute project “Managing Uncertainty in Government Modelling”.

Methods for handling relatively simple situations (e.g. where systems are not too large, and there are ample data available for estimating model inputs) are well known in analytical professions. However, there are common issues for which knowledge of relevant methodology is less widespread, for instance: designing models and specifying their inputs based on the judgments of experts and decision makers, where data are limited or relate to a future that has not yet happened; or managing uncertainty in large-scale computer models, where one would ideally do many model runs to explore the model’s behaviour, but can only afford a limited number of runs due to finite computing resource.
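When only a handful of runs of an expensive simulator are affordable, a standard starting point is a space-filling experimental design such as a Latin hypercube, which spreads the runs evenly across the input space so that each one is informative. The sketch below is a minimal illustration of the idea, not a recommendation of any particular tool; inputs are assumed to be scaled to the unit interval:

```python
import random

def latin_hypercube(n_runs, n_inputs, seed=0):
    """Return n_runs points in [0, 1]^n_inputs, with exactly one point
    falling in each of n_runs equal-width strata along every dimension."""
    rng = random.Random(seed)
    design = []
    for _ in range(n_inputs):
        # One sample from each stratum, in shuffled order
        column = [(i + rng.random()) / n_runs for i in range(n_runs)]
        rng.shuffle(column)
        design.append(column)
    # Transpose: a list of input points rather than a list of coordinates
    return list(zip(*design))

# e.g. a budget of only 10 runs of a simulator with 3 uncertain inputs
design = latin_hypercube(n_runs=10, n_inputs=3)
```

The runs produced by such a design are then typically used to fit a fast statistical ‘emulator’ of the simulator, which stands in for it when exploring uncertainty; that step is where the more specialised methodology discussed below comes in.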

These issues are magnified further in a large organisation, where there may be a whole suite of models that are used in decision analysis individually or in combination. This raises both additional technical modelling issues and matters of documentation and model curation, which are needed to maintain corporate memory of how to use the models independently of the particular staff who developed or commissioned them.

Methods for managing uncertainty in these complex circumstances have been developed in specialist areas of academia, but there is an immediate need for recommendations on good practice for current decision support studies that can be implemented in government and industry, where such highly specialised skills are not available. This should be complemented by a research, innovation, deployment and skills agenda to drive more fundamental developments in practice over the longer term.

This reference to skills is vitally important, as by its very nature the technical side of modelling for decision support cannot be viewed in isolation from the human side (the people involved in the decision support process, and how they interact), and the role of decision support in creating benefits for an organisation and its stakeholders.

Those interested in learning more about these issues may wish to read our Turing White Paper on “The use of multiple models within an organisation” on which this piece is based, and the report that Dent and Wynn co-wrote for the Centre for Digital Built Britain in 2018 on “Methodologies for Planning Complex Infrastructure Under Uncertainty.” The authors would also be pleased to discuss how these ideas might be applied in practice, research or innovation, and can be contacted via [email protected].