There is growing academic and commercial interest in the measurement and analysis of hardware failures in data centres and high performance computing (HPC) systems. Recent error detection and failure diagnosis frameworks, which combine resource-usage data with failure logs, have shown increased accuracy over using failure logs alone. In collaboration with Intel, this project aims to develop new frameworks to better detect, diagnose, and predict system errors and failures.

Explaining the science

Computing systems for modern-day HPC and data centres are changing rapidly as new technologies and software emerge. These systems can generate massive amounts of often unstructured data of many different types. It is therefore crucial to identify the right types of data and analyse them rapidly in order to detect, diagnose, and predict system errors and failures efficiently. This is an important and challenging task for improving the reliability and uptime of computing systems, and its importance is demonstrated by the growing body of large-scale failure analysis research being published.

A significant body of research has shown the value of failure logs for managing failures. More recently, error detection and failure diagnosis frameworks that combine resource-usage data with failure logs have shown increased accuracy over using failure logs alone.

As an example, suppose the use of system resources for computing processes and for memory allocation is correlated, and these activities coincide with memory errors; this indicates that memory allocation activity is associated with the generation of memory errors. Monitoring 'counters' for the correlated computing-process and memory-allocation activity can then be used to assess the state of memory allocation in the system, and the associated memory errors can be used to identify which applications are causing them.

Relevant publications

A. Pelaez, A. Quiroz, J. C. Browne, E. Chuah, M. Parashar, "Online Failure Prediction for HPC Resources using Decentralized Clustering", in Proceedings of the 21st IEEE International Conference on High Performance Computing (HiPC), 2014

N. Gurumdimma, A. Jhumka, M. Liakata, E. Chuah, J. C. Browne, "CRUDE: Combining Resource Usage Data and Error Logs for Accurate Error Detection in Large-Scale Distributed Systems", in Proceedings of the 35th IEEE Symposium on Reliable Distributed Systems (SRDS), 2016

Project aims

This project involves studying the nature and characteristics of system errors and failures, developing new data-processing methodologies, and implementing tools for testing on actual large cluster systems. The knowledge gained from the study can then be used to develop error recovery strategies, and the reports generated by these data-processing methodologies can support data centre systems administrators in system diagnosis (and failure prediction). In addition, the tools that will be implemented have the potential to be used in automating diagnostics workflows.

Specifically, this project is producing a framework for analysing and reporting error propagation patterns and the degrees of success and failure of error recovery protocols. The framework uses both failure logs and resource-use data in its analysis, and it has the potential to be adapted to any cluster system or supercomputer that generates resource-use data and failure logs.
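As a rough illustration of the kind of joint analysis such a framework performs, the sketch below pairs failure-log events with resource-use samples recorded shortly before them. The node names, timestamps, log fields, and the five-minute window are all assumptions for this example, not the framework's actual formats.

```python
# Hypothetical sketch: joining failure-log events with recent resource-use
# samples from the same node. Log schemas here are invented for illustration.

from datetime import datetime, timedelta

failure_log = [
    ("node03", datetime(2024, 5, 1, 10, 2), "ECC memory error"),
    ("node07", datetime(2024, 5, 1, 10, 5), "filesystem I/O error"),
]
resource_use = [
    ("node03", datetime(2024, 5, 1, 10, 0), {"mem_alloc_gb": 61.0}),
    ("node07", datetime(2024, 5, 1, 10, 4), {"io_wait_pct": 48.0}),
]

# Pair each failure with resource samples from the preceding 5 minutes
window = timedelta(minutes=5)
pairs = [
    (node, err, sample)
    for node, t_err, err in failure_log
    for n2, t_res, sample in resource_use
    if n2 == node and timedelta(0) <= t_err - t_res <= window
]
for node, err, sample in pairs:
    print(node, err, sample)
```

Chaining such pairings across nodes and time is one simple way to surface candidate error propagation paths, e.g. a memory error on one node preceding I/O errors on its neighbours.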


The framework has been applied to resource use data and failure logs on three different large HPC systems operated by the Texas Advanced Computing Center. The analyses generated by the framework have revealed many interesting insights into patterns of memory allocation and memory leaks, communication and file-system I/O errors, and chipset and memory errors.

The framework will continue to be tested on the HPC systems at the Texas Advanced Computing Center as well as at other data centres that operate HPC systems that generate resource use data and failure logs.

Published work on online failure prediction, which used a decentralized clustering approach to detect anomalies in resource usage logs, showed that using 20% of the features in resource usage data gives the best results for predicting node failures.


Researchers and collaborators

Contact info

For more information, please contact The Alan Turing Institute

[email protected]