Researchers in AI and data science from around the globe headed to Long Beach, California this week to attend NIPS 2017, one of the world’s biggest annual conferences for machine learning.
This year’s NIPS (which stands for ‘neural information processing systems’) received a record-breaking number of submissions, so we were delighted to see a strong cohort of Turing researchers going along to present papers and posters and to share their research.
A group of four Research Fellows from the Turing – Joshua Loftus, Ricardo Silva, Matt Kusner and Chris Russell – were invited to give an oral presentation at NIPS on their research in ‘counterfactual fairness’. Only 40 of the 3,000+ papers submitted to NIPS were selected for this kind of presentation.
Explaining their paper, Matt said:
“Machine learning is beginning to make many life-changing decisions, such as whom to admit to law school, where to send police officers and whether to give out loans. If the data used to train these algorithms contains societal biases against certain races, genders or other minority groups, then so will the algorithm.
“In our paper we propose a method to model unfair biases which allows us to interrogate the causes of unfairness in the data underlying algorithmic decision-making. We use real-world data taken from US law school students to show how to design causal models, learn fair classifiers, and evaluate the fairness of any algorithm.”
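The core idea, judging fairness relative to a causal model rather than relative to raw correlations, can be illustrated with a toy simulation. The sketch below is plain numpy, not the authors' code, and all variable names are hypothetical: it builds a simple causal model in which a protected attribute A shifts an observed feature X, then compares a naive regression on X with one that first strips out A's influence.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Toy causal model: protected attribute A influences observed feature X;
# a latent "merit" variable U influences both X and the outcome Y.
a = rng.integers(0, 2, size=n)          # protected attribute (0/1)
u = rng.normal(size=n)                  # latent factor, independent of A
x = 2.0 * a + u + 0.1 * rng.normal(size=n)
y = u + 0.1 * rng.normal(size=n)

# Naive predictor: least-squares regression of Y on X directly.
beta_naive = np.polyfit(x, y, 1)[0]

# "Fairer" predictor: first remove the part of X explained by A
# (subtract group means), then regress Y on the residual, which
# acts as a crude estimate of the latent factor U.
group_means = np.array([x[a == g].mean() for g in (0, 1)])
x_resid = x - group_means[a]
beta_fair = np.polyfit(x_resid, y, 1)[0]

# The naive fit lets A leak into predictions; the residual fit does not.
pred_naive = beta_naive * x
pred_fair = beta_fair * x_resid
gap_naive = abs(pred_naive[a == 1].mean() - pred_naive[a == 0].mean())
gap_fair = abs(pred_fair[a == 1].mean() - pred_fair[a == 0].mean())
print(gap_naive, gap_fair)  # gap_fair should be near zero
```

Here the second predictor scores the two groups identically on average because it only ever sees the residual variation in X, a crude stand-in for the latent variables a full causal model would infer.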
Chris Oates, a researcher working in the Turing’s data-centric engineering programme, attended NIPS this year to present the Turing’s work on modelling the human heart.
Chris says his paper gives “a statistical perspective on the problem of assessing the quality of computational cardiac models, which could eventually be used in the clinical context to predict a patient’s response to treatment.
“This is a difficult task since these multi-scale, multi-physics models typically require hundreds of computing hours to produce just one output, yet multiple runs are needed to understand the sensitivity of the output to even small changes in each of the model’s inputs.
“This line of research, called probabilistic numerics, is an emerging field, but it has the potential to influence both theory and practice in scientific computation. We’re delighted to see several papers on this topic appearing at NIPS 2017, suggesting its growing importance in machine learning.”
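One way to see why statistics helps here: when each simulator run is expensive, a cheap probabilistic surrogate (an emulator) can be fitted to a handful of runs and then interrogated in place of the model itself. The sketch below is a minimal illustration in plain numpy, with a deliberately cheap stand-in for the simulator, and is not the project's actual cardiac code: it fits a Gaussian-process emulator to eight "runs" and checks how closely it reproduces the true response.

```python
import numpy as np

def rbf(a, b, length=0.3):
    """Squared-exponential kernel between two sets of 1-D inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

# Stand-in for an expensive simulator: in practice each evaluation
# might take hours of computing time; here it is a cheap test function.
def simulator(theta):
    return np.sin(3 * theta) + 0.5 * theta

# A small number of runs of the "expensive" model...
theta_train = np.linspace(0, 1, 8)
y_train = simulator(theta_train)

# ...are used to fit a Gaussian-process emulator (posterior mean).
noise = 1e-8  # jitter for numerical stability
K = rbf(theta_train, theta_train) + noise * np.eye(len(theta_train))
alpha = np.linalg.solve(K, y_train)

def emulator(theta):
    return rbf(theta, theta_train) @ alpha

# Sensitivity questions can now be asked of the cheap emulator
# instead of the simulator itself.
theta_test = np.linspace(0, 1, 100)
err = np.max(np.abs(emulator(theta_test) - simulator(theta_test)))
print(err)
```

For a smooth response like this one, a handful of runs already pins the emulator down tightly; the probabilistic-numerics viewpoint goes further by also quantifying the uncertainty the emulator carries between runs.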
While at NIPS, Chris also received an ‘Outstanding Reviewer Award’.
Turing Fellows Aretha Teckentrup and Mark Girolami were excited to see their paper, ‘How Deep Are Deep Gaussian Processes?’, cited in a NIPS tutorial from Neil Lawrence, Director of Machine Learning at Amazon.
Aretha Teckentrup told us:
“Our paper looks at popular techniques and algorithms in deep learning. Deep learning refers to the cutting-edge tools and techniques that allow computer systems to learn from examples, data and experiments. It is the state of the art in machine learning, inspired by the complex and “deep” structures of biological neural networks such as the brain. Deep learning is already used by industry giants such as Google, Netflix and Amazon, for example in voice and image recognition.
“Little is still known about the mathematical properties of deep learning techniques. In the paper we provide researchers with a general framework for constructing a particular type of deep learning technique, called deep Gaussian processes, bringing together cutting-edge research from the last decade.”
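To give a flavour of the construction (a minimal sketch, not the paper's framework), a deep Gaussian process composes layers of GPs, feeding the output of one layer in as the input of the next. The snippet below draws one sample path from a three-layer deep GP prior in plain numpy; the RBF kernel and jitter value are illustrative choices.

```python
import numpy as np

def sample_gp(x, length=1.0, rng=None):
    """Draw one sample from a zero-mean GP with an RBF kernel at inputs x."""
    d = x[:, None] - x[None, :]
    K = np.exp(-0.5 * (d / length) ** 2) + 1e-6 * np.eye(len(x))  # jitter
    return np.linalg.cholesky(K) @ rng.normal(size=len(x))

rng = np.random.default_rng(1)
x = np.linspace(-2, 2, 200)

# A deep GP composes layers: the output of one GP becomes the
# input of the next, f(x) = f3(f2(f1(x))).
h = x
for _ in range(3):
    h = sample_gp(h, length=1.0, rng=rng)

# h is now one sample path from a three-layer deep GP prior.
print(h.shape)
```

How such compositions behave as the number of layers grows, and whether they remain usefully expressive, is exactly the kind of mathematical question the paper's title asks.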
Many other Turing researchers are featured at NIPS 2017 – view the full programme and list of accepted papers on the NIPS website.