COVID-19 has transformed how we travel, work and live. As we emerge from the pandemic, our transport, energy and internet patterns will again undergo a seismic shift, and so will the infrastructure systems that underlie them: our roads, railways, water supply, electrical grids and telecommunications.
To plan and optimise these systems, operators need to forecast future usage. Forecasting energy demand and renewable energy generation, for instance, can help operators to avoid unnecessary use of fossil fuels.
Artificial intelligence (AI), and more specifically machine learning (ML), can play a crucial role in making these forecasts, helping to guide the evolution of our infrastructure systems.
Planning and operating these systems involves making a sequence of decisions. Typically, an initial set of choices is made and, after time passes and new information is received, these choices are revisited. Poor decisions are likely to be unsustainable both financially and environmentally. As the world reopens, we will need to make flexible, responsive and data-driven decisions.
An especially promising AI approach is reinforcement learning (RL): a machine learning technique that uses repeated computer simulations to solve these kinds of ‘sequential decision’ problems. It works by developing an optimal strategy, over perhaps millions of simulations, through sophisticated trial and error. A key challenge is to balance the exploration of new strategies with the exploitation of what has been learned from the simulations so far.
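To make that trade-off concrete, here is a minimal Python sketch of one of the simplest exploration rules, ‘epsilon-greedy’ action selection: most of the time the learner exploits whichever candidate strategy currently looks best, but occasionally it explores one at random. The three candidate strategies and their rewards are invented purely for illustration.

```python
import random

def epsilon_greedy(estimates, epsilon=0.1):
    """Mostly exploit the best-looking strategy, occasionally explore a random one."""
    if random.random() < epsilon:
        return random.randrange(len(estimates))                      # explore
    return max(range(len(estimates)), key=lambda a: estimates[a])    # exploit

# Toy set-up: three candidate strategies with hidden average rewards.
true_rewards = [1.0, 1.5, 0.8]   # unknown to the learner
estimates = [0.0, 0.0, 0.0]      # the learner's running estimates
counts = [0, 0, 0]

for _ in range(10_000):          # real systems may need millions of simulations
    action = epsilon_greedy(estimates)
    reward = random.gauss(true_rewards[action], 0.5)    # noisy simulated outcome
    counts[action] += 1
    # Update the running average reward for the strategy that was tried
    estimates[action] += (reward - estimates[action]) / counts[action]

print("Learned estimates:", [round(v, 2) for v in estimates])
```

After enough simulated trials, the estimates converge towards the true average rewards, and the learner settles on the best strategy while still occasionally checking the alternatives.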
We solve this kind of problem every time we drive somewhere safely, by combining our previous driving experience with new information about our environment and the objects around us. With infrastructure systems, RL will combine human knowledge of the systems, collected over the past decades, with the latest data about their usage. Compared to the time-consuming handcrafting of solutions by human experts – especially difficult if pre-pandemic experience is not a reliable guide to the future – RL has the potential to vastly improve our ability to reopen, plan and operate our infrastructure.
Forecast-based decision problems are particularly complex because the forecasts themselves are constantly revised as the latest data arrive. Grappling with this difficulty is one of the focuses of the Turing’s AI for control problems project. Recently, the team launched RangL, a web-based RL competition platform on which forecast-based problems for infrastructure systems can be posed and solved. The project’s aim is to accelerate progress on problems such as ‘generation scheduling’, where continually updated forecasts for energy demand and renewable energy generation help operators to plan – and limit – fossil fuel power generation. The team is now working with the OGTC on the challenge of transitioning the North Sea energy industry to net-zero carbon emissions in an affordable manner.
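RangL defines its own challenge environments, and the sketch below is not its actual interface. It is only a toy illustration, with made-up numbers, of how a generation-scheduling problem can be framed as a sequential decision problem: at each step the agent sees the latest demand forecast, chooses how much fossil-fuel generation to schedule, and is penalised both for burning fossil fuels and for failing to meet demand.

```python
import random

class ToyGenerationScheduling:
    """Toy sequential decision problem, loosely in the style of an RL environment.
    All quantities are invented for illustration only."""

    def __init__(self, horizon=24):
        self.horizon = horizon   # e.g. 24 hourly decisions

    def reset(self):
        self.t = 0
        # A simple made-up demand forecast for each step
        self.forecast_demand = [100 + 20 * random.random() for _ in range(self.horizon)]
        return self._observe()

    def _observe(self):
        # The agent sees the current time step and the latest demand forecast
        return (self.t, self.forecast_demand[self.t])

    def step(self, fossil_output):
        demand = self.forecast_demand[self.t]
        renewables = random.uniform(20, 60)               # uncertain renewable generation
        shortfall = max(0.0, demand - renewables - fossil_output)
        # Penalise both fossil fuel use and any unmet demand
        reward = -(0.1 * fossil_output + 10.0 * shortfall)
        self.t += 1
        done = self.t >= self.horizon
        return (None if done else self._observe()), reward, done
```

An RL agent would interact with many simulated runs of an environment like this, gradually learning a scheduling policy that keeps fossil-fuel generation as low as the forecasts safely allow.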
The generality of RL means that its applications are almost endless. It is already used in driverless cars and vehicle route planning, and could even help us to rethink our bus routes. We know that people’s health and education are linked to their access to high-quality schools, hospitals and libraries. We also know that different racial and socioeconomic groups have varying access to these facilities, depending on their local transport network. So why not use RL to optimise bus routes, in order to improve social equality?
RL is at the forefront of machine learning. It has the potential to revolutionise our infrastructure in the post-pandemic world, and guide us towards a greener, cleaner and fairer future.
The authors
Prof John Moriarty leads the AI for control problems project in the Turing’s data-centric engineering programme, and is also Professor of Mathematics at Queen Mary University of London.
Dr Phil Winder leads machine learning consultancy Winder Research, which specialises in applications of reinforcement learning, and is the author of Reinforcement Learning: Industrial Applications of Intelligent Agents.
Image: Rodion Kutsaev / Unsplash