This paper gives an overview of some ways in which our understanding of performance evaluation measures for machine-learned classifiers has improved over the last twenty years. I also highlight a range of areas where this understanding is still lacking, leading to ill-advised practices in classifier evaluation. This suggests that in order to make further progress we need to develop a proper measurement theory of machine learning. I then demonstrate by example what such a measurement theory might look like and what kinds of new results it would entail. Finally, I argue that key properties such as classification ability and dataset difficulty are unlikely to be directly observable, suggesting the need for latent-variable models and causal inference.
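As a minimal illustration of the kind of pitfall the paper is concerned with (this example is mine, not taken from the paper): two standard evaluation measures computed from a confusion matrix can tell very different stories on an imbalanced test set. Here plain accuracy rewards a classifier that simply predicts the majority class, while balanced accuracy (the average of the per-class recall rates) exposes it as no better than chance.

```python
# Illustrative sketch (not from the paper): accuracy vs. balanced
# accuracy on an imbalanced binary test set. The 95:5 split and the
# majority-class classifier below are assumed for illustration only.

def accuracy(tp, fp, fn, tn):
    """Fraction of correct predictions."""
    return (tp + tn) / (tp + fp + fn + tn)

def balanced_accuracy(tp, fp, fn, tn):
    """Average of true positive rate and true negative rate."""
    tpr = tp / (tp + fn)  # sensitivity (recall on the positive class)
    tnr = tn / (tn + fp)  # specificity (recall on the negative class)
    return (tpr + tnr) / 2

# A classifier that predicts every instance negative,
# evaluated on 5 positives and 95 negatives:
tp, fp, fn, tn = 0, 0, 5, 95

print(accuracy(tp, fp, fn, tn))           # 0.95 -- looks strong
print(balanced_accuracy(tp, fp, fn, tn))  # 0.5  -- chance level
```

The gap between 0.95 and 0.5 is one concrete instance of why the choice of measure matters, and why uncritical use of a single headline number can amount to the "ill-advised practices" the paper discusses.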

Citation information

Flach P. Performance Evaluation in Machine Learning: The Good, The Bad, The Ugly and The Way Forward (2018). 
