Professor Peter Flach

Position

Professor of Artificial Intelligence, University of Bristol and Turing Fellow

Former position

Turing Fellow

Bio

Peter Flach is Professor of Artificial Intelligence at the University of Bristol. His main research interests are mining highly structured data and evaluating and improving machine learning models using ROC analysis. He has also published on the logic and philosophy of machine learning and on the combination of logic and probability. He is the author of Simply Logical: Intelligent Reasoning by Example (John Wiley, 1994) and Machine Learning: The Art and Science of Algorithms that Make Sense of Data (Cambridge University Press, 2012).

Professor Flach is Editor-in-Chief of the Machine Learning journal, one of the two top journals in the field, which has been published for over 25 years, first by Kluwer and now by Springer Nature. He was Programme Co-Chair of the 1999 International Conference on Inductive Logic Programming, the 2001 European Conference on Machine Learning, the 2009 ACM Conference on Knowledge Discovery and Data Mining, and the 2012 European Conference on Machine Learning and Knowledge Discovery in Databases in Bristol. He is a founding board member of the European Association for Data Science (EuADS.org).

Research interests

The fundamental question of measurement theory is how to assign meaningful numbers to objects that are not themselves numbers. Issues of measurement are of particular importance in the inductive sciences, including data science and artificial intelligence, for example when one assesses the capability of models and learning algorithms to generalise beyond the observed data.
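As an illustrative sketch (not taken from the text), consider the area under the ROC curve, a standard measure of a classifier's ranking ability: it equals the probability that a randomly drawn positive example receives a higher score than a randomly drawn negative one, and it is unchanged by any monotonic rescaling of the scores. In measurement-theoretic terms it depends only on the ordering of the scores, not on their numeric values:

```python
def roc_auc(scores, labels):
    """Area under the ROC curve, computed directly as the
    probability that a random positive outranks a random
    negative (ties count as half a win)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: three positives, two negatives.
scores = [0.9, 0.8, 0.4, 0.35, 0.1]
labels = [1, 1, 0, 1, 0]
auc = roc_auc(scores, labels)

# Any monotonic transform of the scores leaves the AUC unchanged,
# since only the ordering of scores enters the computation.
rescaled = roc_auc([10 * s for s in scores], labels)
```

Here one positive (score 0.35) is outranked by one negative (score 0.4), so five of the six positive-negative pairs are correctly ordered and the AUC is 5/6.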

Nevertheless, measurement concepts are underdeveloped in data science and AI, in at least the following senses: (i) a widespread under-appreciation of the importance and effects of measurement scales; and (ii) the fact that in most cases the quantity of interest is latent, i.e. not directly observable. This Pilot Project will seek to advance understanding of the capabilities and skills of models and algorithms in data science and AI, and of how to measure those capabilities and skills. Just as psychometrics has developed tools to model the skills of a human learner and to develop standardised tests such as the SAT, there is a need for similar tools to model the skills of learning machines, and for standardised benchmarks that allow skill assessment with only a few well-chosen test sets.
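The psychometric analogy can be made concrete with a Rasch model, a standard item-response model in which the probability that learner i answers item j correctly is sigmoid(theta_i - b_j), with latent ability theta and latent item difficulty b. The sketch below (an illustration under assumed toy data, not part of the project) fits such a model to a matrix of benchmark outcomes by simple joint maximum-likelihood gradient ascent, treating machine-learning models as the "learners" and test sets as the "items":

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fit_rasch(responses, lr=0.1, steps=2000):
    """Joint maximum-likelihood fit of a Rasch model.

    responses[i][j] is 1 if model i answered benchmark item j
    correctly, else 0.  Returns (abilities, difficulties), where
    a higher ability means a more skilled model and a higher
    difficulty means a harder item.
    """
    n_models = len(responses)
    n_items = len(responses[0])
    theta = [0.0] * n_models   # latent model skill
    b = [0.0] * n_items        # latent item difficulty
    for _ in range(steps):
        # Gradient ascent on the log-likelihood, alternating
        # between abilities and difficulties.
        for i in range(n_models):
            grad = sum(responses[i][j] - sigmoid(theta[i] - b[j])
                       for j in range(n_items))
            theta[i] += lr * grad
        for j in range(n_items):
            grad = sum(sigmoid(theta[i] - b[j]) - responses[i][j]
                       for i in range(n_models))
            b[j] += lr * grad
        # Anchor the scale (the model is invariant to a common
        # shift of all parameters): difficulties sum to zero.
        mean_b = sum(b) / n_items
        b = [x - mean_b for x in b]
        theta = [x - mean_b for x in theta]
    return theta, b

# Hypothetical toy data: four "learning machines" on four items.
resp = [[1, 1, 1, 0],
        [1, 0, 1, 0],
        [0, 1, 0, 1],
        [1, 0, 0, 0]]
skill, difficulty = fit_rasch(resp)
```

Because the row and column totals are sufficient statistics in the Rasch model, the model with the most correct answers ends up with the highest estimated skill, and the item solved least often with the highest estimated difficulty; unlike a raw accuracy score, the fitted skills are adjusted for which items each model happened to face.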