AI for Social Good

Date: 12 February 2018

Time: 10:00 – 19:00

Venue: The Royal Society

Register now


A symposium exploring recent advances in AI and how we can achieve positive outcomes for society.

AI applications are entering everyday life at an accelerating pace, with great potential to benefit society. These applications also raise new challenges around ethics and robust design. This symposium brings together key speakers from the world of AI research to discuss recent work and to consider how we can work towards technical solutions that ensure AI best serves society.

The symposium will feature a full day of talks followed by a wine reception. It will be held at The Royal Society, London.

This symposium is generously supported by Toyota Mobility Europe.


SPEAKERS

Roberto Cipolla (Professor of Information Engineering, University of Cambridge, UK)

Professor Cipolla has been a Professor of Information Engineering at the University of Cambridge since 2000. Previously he worked as a Toshiba Fellow and engineer at the Toshiba Corporation Research and Development Centre in Kawasaki, Japan, and was awarded a D.Phil. (Computer Vision) from the University of Oxford in 1991.

Roberto’s research interests are in computer vision and robotics and include the recovery of motion and 3D shape of visible surfaces from image sequences; object detection and recognition; novel man-machine interfaces using hand, face and body gestures; real-time visual tracking for localisation and robot guidance; and applications of computer vision in mobile phones, visual inspection, image retrieval and video search.

 

Thomas G Dietterich (Professor of Computer Science, Oregon State University, USA)

Professor Dietterich is Distinguished Professor (Emeritus) and Director of Intelligent Systems at Oregon State University. He is widely celebrated as one of the founders of machine learning. His research contributions include the invention of error-correcting output coding for multi-class classification, the formalisation of the multiple-instance problem, the MAXQ framework for hierarchical reinforcement learning, and the development of methods for integrating non-parametric regression trees into probabilistic graphical models.

Talk title: Steps Toward Robust Artificial Intelligence

Synopsis: AI technologies are being integrated into high-stakes applications such as self-driving cars, robotic surgeons, hedge funds, control of the power grid, and weapons systems. These applications need to be robust to many threats, including cyberattack, user error, incorrect models, and unmodelled phenomena. This talk will survey some of the methods that the AI research community is developing to address two general kinds of threat: the “known unknowns” and the “unknown unknowns”. For the known unknowns, methods from probabilistic inference and robust optimisation can provide robustness guarantees. For the unknown unknowns, the talk will discuss three approaches: detecting model failures (e.g., via anomaly detection and predictive checks), employing causal models, and constructing algorithm portfolios and ensembles. For one particular instance of model failure, the problem of open category classification in which test queries may involve objects belonging to novel categories, the talk will include recent work with Alan Fern and his students on providing probabilistic guarantees.
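One of the “unknown unknowns” defences mentioned in the synopsis, detecting model failures via anomaly detection, can be sketched in a few lines of Python. The sketch below is purely illustrative and is not taken from the talk; the synthetic data, the choice of IsolationForest and the abstain rule are all assumptions.

    # Illustrative sketch: flag possibly-novel inputs with an anomaly detector
    # before trusting a classifier's prediction. Not material from the talk.
    import numpy as np
    from sklearn.ensemble import IsolationForest
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Training data drawn from two known categories.
    X_train = np.vstack([rng.normal(0.0, 1.0, (200, 2)),
                         rng.normal(4.0, 1.0, (200, 2))])
    y_train = np.array([0] * 200 + [1] * 200)

    clf = LogisticRegression().fit(X_train, y_train)

    # An anomaly detector fitted on the same training inputs flags queries
    # that fall outside the regions the classifier has seen.
    detector = IsolationForest(random_state=0).fit(X_train)

    # One familiar query and one query from a novel region of input space.
    X_test = np.array([[0.1, -0.2], [12.0, 12.0]])

    for x, label, score in zip(X_test, clf.predict(X_test),
                               detector.decision_function(X_test)):
        if score < 0:  # negative scores indicate likely outliers
            print(x, "-> flagged as novel; abstain or escalate")
        else:
            print(x, "-> predicted class", label)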

 

Sharon Goldwater (Reader, University of Edinburgh, UK)

Dr Goldwater is a Reader in the Institute for Language, Cognition and Computation at the University of Edinburgh’s School of Informatics. She received her PhD in 2007 from Brown University, supervised by Mark Johnson, and spent two years as a postdoctoral researcher at Stanford University before moving to Edinburgh. Her research interests include unsupervised learning for natural language processing, computational modelling of language acquisition in children, and computational studies of language use. Dr Goldwater holds a Scholar Award from the James S McDonnell Foundation for her work on “Understanding synergies in language acquisition through computational modelling” and is the 2016 recipient of the Roger Needham Award from the British Computer Society for “distinguished research contribution in computer science by a UK-based researcher who has completed up to 10 years of post-doctoral research.”

Talk title: Language learning in humans and machines: making connections to make progress

Synopsis: Current language processing methods are resource-intensive and available for only a tiny fraction of the world’s 5000 or more languages, mainly those spoken in large rich countries. This talk will argue that in order to solve this problem, we need a better understanding of how humans learn and represent language in our minds, and we need to consider how human-like learning biases can be built into computational systems. Sharon Goldwater will illustrate these ideas using examples from her own research. She will discuss why language is such a difficult problem, what we know about human language learning, and then show how her own work has taken inspiration from that to develop better methods for computational language learning.

 

Matt Kusner (Research Fellow, The Alan Turing Institute, UK)

Dr Kusner is a Research Fellow at The Alan Turing Institute. He was previously a visiting researcher at Cornell University, under the supervision of Kilian Q Weinberger, and received his PhD in Machine Learning from Washington University in St Louis. His research is in the areas of counterfactual fairness, privacy, budgeted learning, model compression and Bayesian optimisation.

Talk title: Counterfactual Fairness

Synopsis: Machine learning can have legal or ethical consequences for people when it is used to automate decisions in areas such as insurance, lending, hiring, and predictive policing. In many of these scenarios, previous decisions have been made that are unfairly biased against certain subpopulations, for example those of a particular race, gender, or sexual orientation. Since this past data may be biased, machine learning predictors must account for this to avoid perpetuating or creating discriminatory practices. Matt will present a framework for modelling fairness using tools from causal inference. This definition of counterfactual fairness captures the intuition that a decision is fair towards an individual if it is the same in (a) the actual world and (b) a counterfactual world in which the individual belonged to a different demographic group. The framework is demonstrated on a real-world problem of fair prediction of success in law school.
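For readers who want the formal statement behind this intuition, the criterion from Kusner and colleagues’ work on counterfactual fairness can be written roughly as below, where A denotes the protected attribute, X the remaining features, U the latent background variables of the assumed causal model, and Ŷ the predictor. The notation is a sketch based on that paper, not material provided for the talk:

    P(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x, A = a) = P(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x, A = a)

for every outcome y and every alternative value a' that the protected attribute could have taken. In words, changing only the protected attribute in the causal model (and propagating its effects) should not change the distribution of the prediction.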

 

Mihaela van der Schaar (Turing Fellow and Man Professor, University of Oxford, UK) 

Professor van der Schaar is Man Professor in the Oxford–Man Institute of Quantitative Finance (OMI) and the Department of Engineering Science at Oxford, a Fellow of Christ Church, Oxford, and a Fellow of the Alan Turing Institute.

Mihaela’s research interests and expertise are in machine learning, data science and decisions for a better planet. In particular, she is interested in developing machine learning, data science and AI theory, methods and systems for personalised medicine and personalised education.

Talk title: Machine learning and data science for medicine: a vision, some progress and opportunities

 

Max Welling (Research Chair in Machine Learning, University of Amsterdam, Netherlands)

Professor Welling is a research chair in Machine Learning at the University of Amsterdam and a Vice President of Technologies at Qualcomm. He has a secondary appointment at the Canadian Institute for Advanced Research (CIFAR). He is a co-founder of “Scyfer BV”, a university spin-off in deep learning that was acquired by Qualcomm. Max Welling served as associate editor-in-chief of IEEE TPAMI from 2011 to 2015, and has been program chair for AISTATS (’09), NIPS (’13) and ECCV (’16) and general chair of NIPS in 2014. He received an NSF CAREER grant in 2005 and the ECCV Koenderink Prize in 2010. Welling’s machine learning research labs are AMLAB, the Qualcomm QUVA Lab and the Bosch Delta Lab.

Talk title: Artificial Intelligence per Kilowatt-hour

Synopsis: The successes of deep learning are spectacular. But modern deep learning architectures with hundreds of layers and millions of parameters require an extraordinary amount of computation and data to train and execute. At the same time, more compute is moving to the edge. We predict that the next battle in AI is over how much intelligence can be squeezed out of every kilowatt-hour of energy.

Max will discuss a number of ideas for making progress on this problem, including compressing deep neural nets, computing with low bit precision, and spiking neural networks.
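As a concrete, hedged illustration of the “low bit precision” idea, the sketch below quantises a tensor of weights to 4-bit signed integers with a single per-tensor scale, so the weights can be stored and moved far more cheaply than 32-bit floats. The scheme and numbers are assumptions for illustration only, not the methods presented in the talk.

    # Illustrative uniform low-bit weight quantisation (not from the talk).
    import numpy as np

    def quantise(weights, num_bits=4):
        """Map float weights to signed num_bits-bit integers plus one scale."""
        qmax = 2 ** (num_bits - 1) - 1            # e.g. 7 for 4-bit signed values
        scale = np.max(np.abs(weights)) / qmax    # one scale per tensor
        q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int8)
        return q, scale                           # integers are cheap to store and move

    def dequantise(q, scale):
        """Recover approximate float weights from the quantised representation."""
        return q.astype(np.float32) * scale

    w = np.random.randn(1000).astype(np.float32)
    q, s = quantise(w, num_bits=4)
    print("mean absolute reconstruction error:", np.mean(np.abs(w - dequantise(q, s))))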