AI for Social Good
Date: 12 February 2018
Time: 10:00 – 19:00
Venue: The Royal Society
Registration for this event is now closed, but you can watch live online.
A symposium exploring recent advances in AI and how we can achieve positive outcomes for society.
AI applications are entering everyday life at an accelerating pace, with great potential to benefit society. These applications also raise new challenges around ethics and robust design. This symposium brings together key speakers from the world of AI research to discuss recent work and to consider how we can work towards technical solutions that ensure AI best serves society.
The symposium will feature a full day of talks followed by a wine reception. It will be held at The Royal Society, London.
This symposium is generously supported by Toyota Motor Europe.
Roberto Cipolla (Professor of Information Engineering, University of Cambridge, UK)
Professor Cipolla has been a Professor of Information Engineering at the University of Cambridge since 2000. Previously he worked as a Toshiba Fellow and engineer at the Toshiba Corporation Research and Development Centre in Kawasaki, Japan, and was awarded a D.Phil. (Computer Vision) from the University of Oxford in 1991.
Roberto’s research interests are in computer vision and robotics and include the recovery of motion and 3D shape of visible surfaces from image sequences; object detection and recognition; novel man–machine interfaces using hand, face and body gestures; real-time visual tracking for localisation and robot guidance; and applications of computer vision in mobile phones, visual inspection, image retrieval and video search.
Talk title: Making machines that see: Geometry, Uncertainty and Deep Learning
Synopsis: The last decade has seen a revolution in the theory and application of computer vision and machine learning. I will begin with a brief review of some of the fundamentals with a few examples of the reconstruction, registration and recognition of three-dimensional objects and their translation into novel commercial applications.
I will then introduce some recent results from real-time deep learning systems that exploit geometry and compute model uncertainty. Understanding what a model does not know is a critical part of safe machine learning systems. New tools, such as Bayesian deep learning, provide a framework for understanding uncertainty in deep learning models, aiding interpretability and safety of such systems. Additionally, knowledge of geometry is an important consideration in designing effective algorithms. In particular, we will explore the use of geometry to help design networks that can be trained with unlabelled data for stereo and for human body pose and shape recovery.
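As a back-of-the-envelope illustration of the uncertainty idea (a sketch only, not code from the talk): Monte Carlo dropout, one standard tool in Bayesian deep learning, keeps dropout active at prediction time and uses the spread across stochastic forward passes as an uncertainty estimate. The one-layer "model", its weights and its input below are invented for illustration.

```python
import random
import statistics

# Hypothetical single-layer model: these weights and input are
# illustrative only, not from any trained network.
WEIGHTS = [0.5, -1.2, 0.8, 0.3]
X = [1.0, 0.5, -0.5, 2.0]

def forward(weights, x, drop_p=0.5, rng=random):
    """One stochastic forward pass: drop each weight with probability
    drop_p, rescaling survivors so the expected output is unchanged."""
    out = 0.0
    for w, xi in zip(weights, x):
        if rng.random() >= drop_p:  # this unit survives
            out += (w / (1.0 - drop_p)) * xi
    return out

def mc_dropout_predict(weights, x, n_samples=1000, seed=0):
    """Average many stochastic passes; the spread across passes serves
    as a crude estimate of what the model does not know."""
    rng = random.Random(seed)
    samples = [forward(weights, x, rng=rng) for _ in range(n_samples)]
    return statistics.mean(samples), statistics.stdev(samples)

mean, std = mc_dropout_predict(WEIGHTS, X)
# A large std signals an input the model is unsure about.
```

A real system applies the same recipe to a deep network and uses the predictive spread to decide when a prediction should not be trusted.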
Thomas G Dietterich (Professor of Computer Science, Oregon State University, USA)
Professor Dietterich is Distinguished Professor (Emeritus) and Director of Intelligent Systems at Oregon State University. He is widely celebrated as one of the founders of machine learning. His research contributions include the invention of error-correcting output codes for multi-class classification, the formalization of the multiple-instance problem, the MAXQ framework for hierarchical reinforcement learning, and the development of methods for integrating non-parametric regression trees into probabilistic graphical models.
Talk title: Steps Toward Robust Artificial Intelligence
Synopsis: AI technologies are being integrated into high-stakes applications such as self-driving cars, robotic surgeons, hedge funds, control of the power grid, and weapons systems. These applications need to be robust to many threats including cyberattack, user error, incorrect models, and unmodeled phenomena. This talk will survey some of the methods that the AI research community is developing to address two general kinds of threats: The “known unknowns” and the “unknown unknowns”. For the known unknowns, methods from probabilistic inference and robust optimization can provide robustness guarantees. For the unknown unknowns, the talk will discuss three approaches: detecting model failures (e.g., via anomaly detection and predictive checks), employing causal models, and constructing algorithm portfolios and ensembles. For one particular instance of model failure—the problem of open category classification where test queries may involve objects belonging to novel categories—the talk will include recent work with Alan Fern and my students on providing probabilistic guarantees.
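To make the "unknown unknowns" concrete, here is a deliberately tiny sketch (the 2-D points and the distance threshold are invented assumptions) of the anomaly-detection flavour of open-category handling: refuse to classify queries that lie far from everything seen in training.

```python
import math

# Invented in-distribution training points (2-D for illustration).
TRAIN = [(0.0, 0.0), (1.0, 0.2), (0.5, -0.3), (0.2, 0.8)]

def nearest_distance(x, train=TRAIN):
    """Distance from a query to its nearest training point."""
    return min(math.dist(x, t) for t in train)

def flag_unknown(x, threshold=2.0):
    """Abstain on queries far from anything seen in training — a
    minimal stand-in for open-category / anomaly detection."""
    return nearest_distance(x) > threshold

in_dist = flag_unknown((0.4, 0.1))   # False: near the training data
novel = flag_unknown((8.0, 9.0))     # True: unlike anything seen
```

Methods that come with probabilistic guarantees refine this nearest-neighbour heuristic considerably, but the abstain-on-novelty structure is the core idea.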
Sharon Goldwater (Reader, University of Edinburgh, UK)
Dr Goldwater is a Reader in the Institute for Language, Cognition and Computation at the University of Edinburgh’s School of Informatics. She received her PhD in 2007 from Brown University, supervised by Mark Johnson, and spent two years as a postdoctoral researcher at Stanford University before moving to Edinburgh. Her research interests include unsupervised learning for natural language processing, computer modelling of language acquisition in children, and computational studies of language use. Dr Goldwater holds a Scholar Award from the James S McDonnell Foundation for her work on “Understanding synergies in language acquisition through computational modelling” and is the 2016 recipient of the Roger Needham Award from the British Computer Society for “distinguished research contribution in computer science by a UK-based researcher who has completed up to 10 years of post-doctoral research.”
Talk title: Language learning in humans and machines: making connections to make progress
Synopsis: Current language processing methods are resource-intensive and available for only a tiny fraction of the world’s 5000 or more languages, mainly those spoken in large rich countries. This talk will argue that in order to solve this problem, we need a better understanding of how humans learn and represent language in our minds, and we need to consider how human-like learning biases can be built into computational systems. Sharon Goldwater will illustrate these ideas using examples from her own research. She will discuss why language is such a difficult problem, what we know about human language learning, and then show how her own work has taken inspiration from that to develop better methods for computational language learning.
Thore Graepel (Research Scientist, DeepMind, and Professor of Computer Science, UCL, UK)
Professor Graepel leads the multi-agent team at DeepMind and holds a chair of machine learning at University College London. Before coming to DeepMind, Thore worked in probabilistic machine learning at Microsoft, where major applications of his work include Xbox Live’s TrueSkill system for ranking and matchmaking and the AdPredictor framework for click-through rate prediction in Bing. More recently, Thore’s work on the predictability of private attributes from digital records of human behaviour has been the subject of intense discussion among privacy experts and the general public. At DeepMind, Thore has returned to his original passion of understanding and creating intelligence, and recently contributed to creating AlphaGo, the first computer program to defeat a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.
Talk title: From Competition to Cooperation: Multi-Agent Learning for Artificial Intelligence
Synopsis: Human intelligence evolved in societies where interaction with other humans was an important driving factor. I therefore argue that in order to create ever more competent artificial agents, we need to create multi-agent learning environments and train the agents together. To substantiate this idea, I will discuss two examples, one with competitive and one with (emergent) cooperative dynamics.
For the competitive case, I will discuss how the latest version of AlphaGo uses the dynamics of self-play to learn to play the game of Go better than any human player. The role of the competitive self-play procedure is to produce training data that starts with very weak play and develops towards expert level play as learning proceeds, thus creating a curriculum of progressively harder learning problems. The system’s games and behaviours are now eagerly studied by Go players across the world.
For the cooperative case, I will show how we can use advances in deep reinforcement learning to study the age-old question of how cooperation arises among self-interested agents, even in problematic situations such as social dilemmas. In particular, we can go beyond purely individual rewards by judging the outcome of multi-agent interaction using social metrics such as equality, peace, and sustainability.
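As one concrete example of such a social metric (a sketch under the assumption that each agent finishes an episode with a non-negative return): equality can be scored as one minus the Gini coefficient of the agents' returns.

```python
def gini(returns):
    """Gini coefficient of non-negative agent returns:
    0 means perfect equality; values near 1 mean one agent takes all."""
    n, total = len(returns), sum(returns)
    if total == 0:
        return 0.0
    pairwise = sum(abs(a - b) for a in returns for b in returns)
    return pairwise / (2 * n * total)

def equality(returns):
    return 1.0 - gini(returns)

even = equality([10, 10, 10, 10])   # 1.0: all agents earned the same
skewed = equality([40, 0, 0, 0])    # 0.25: one agent captured everything
```

Scoring multi-agent outcomes with metrics like this, rather than with individual reward alone, is what makes emergent cooperation measurable.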
Matt Kusner (Research Fellow, The Alan Turing Institute, UK)
Dr Kusner is a Research Fellow at The Alan Turing Institute. He was previously a visiting researcher at Cornell University, under the supervision of Kilian Q Weinberger, and received his PhD in Machine Learning from Washington University in St Louis. His research is in the areas of counterfactual fairness, privacy, budgeted learning, model compression and Bayesian optimisation.
Talk title: Counterfactual Fairness
Synopsis: Machine learning can impact people with legal or ethical consequences when it is used to automate decisions in areas such as insurance, lending, hiring, and predictive policing. In many of these scenarios, previous decisions have been made that are unfairly biased against certain subpopulations, for example those of a particular race, gender, or sexual orientation. Since this past data may be biased, machine learning predictors must account for this to avoid perpetuating or creating discriminatory practices. Matt will present a framework for modelling fairness using tools from causal inference. This definition of counterfactual fairness captures the intuition that a decision is fair towards an individual if it is the same in (a) the actual world and (b) a counterfactual world where the individual belonged to a different demographic group. The framework is demonstrated on a real-world problem of fair prediction of success in law school.
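The intuition can be made concrete with a toy structural causal model (every equation and value below is invented for illustration): a latent ability U causes an observed score X, but X is also distorted by the protected attribute A. A predictor that reads X changes across the counterfactual; one that reads only U does not.

```python
# Toy structural causal model (all equations hypothetical):
#   U: latent, A-independent cause (e.g. ability)
#   A: protected attribute
#   X: observed score, distorted by A (historical bias)
def observed_score(u, a):
    return u + (2.0 if a == "group_1" else 0.0)

def unfair_predictor(x):
    return x        # reads X, so it inherits A's influence

def fair_predictor(u):
    return u        # reads only the latent cause, independent of A

u = 5.0
worlds = {}
for a in ("group_0", "group_1"):        # actual vs counterfactual world
    x = observed_score(u, a)
    worlds[a] = (unfair_predictor(x), fair_predictor(u))
# The unfair prediction shifts across the counterfactual (5.0 vs 7.0);
# the fair one is identical in both worlds — the defining property of
# counterfactual fairness.
```

In practice U is not observed and must be inferred from a causal model, which is exactly what makes the framework non-trivial.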
Mihaela van der Schaar (Turing Fellow and Man Professor, University of Oxford, UK)
Professor van der Schaar is Man Professor in the Oxford–Man Institute of Quantitative Finance (OMI) and the Department of Engineering Science at Oxford, Fellow of Christ Church College and Fellow of the Alan Turing Institute.
Mihaela’s research interests and expertise are in machine learning, data science and decisions for a better planet. In particular, she is interested in developing machine learning, data science and AI theory, methods and systems for personalised medicine and personalised education.
Talk title: Machine learning and data science for medicine: a vision, some progress and opportunities
Synopsis: Mihaela’s work uses data science and machine learning to create models that assist diagnosis and prognosis. Existing models suffer from two kinds of problems. Statistical models that are driven by theory/hypotheses are easy to apply and interpret, but they make many assumptions and often have inferior predictive accuracy. Machine learning models can be crafted to the data and often have superior predictive accuracy, but they are often hard to interpret and must be crafted for each disease … and there are a lot of diseases. In this talk she presents a method (AutoPrognosis) that makes machine learning itself do both the crafting and interpreting. For medicine, this is a complicated problem because missing data must be imputed, relevant features/covariates must be selected, and the most appropriate classifier(s) must be chosen. Moreover, there is no one “best” imputation algorithm, feature processing algorithm or classification algorithm; some imputation algorithms will work better with a particular feature processing algorithm and a particular classifier in a particular setting. To deal with these complications, we need an entire pipeline, and because there are many possible pipelines we need a machine learning method for choosing among them. This is exactly what AutoPrognosis is: an automated process for creating a particular pipeline for each particular setting. Using a variety of medical datasets, she shows that AutoPrognosis achieves performance significantly superior to existing clinical approaches and to standard statistical and machine learning methods.
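The pipeline-search idea can be sketched in miniature (everything here — the data, the two imputers, the threshold classifier, even the pipeline names — is invented for illustration; AutoPrognosis itself searches a far richer space):

```python
import statistics

# One feature with a missing value; data and labels are invented.
DATA = [[1.0], [2.0], [None], [4.0]]
LABELS = [0, 0, 1, 1]

def impute_mean(rows):
    vals = [r[0] for r in rows if r[0] is not None]
    m = statistics.mean(vals)
    return [[m if r[0] is None else r[0]] for r in rows]

def impute_zero(rows):
    return [[0.0 if r[0] is None else r[0]] for r in rows]

def threshold_accuracy(rows, labels):
    """Fit a trivial classifier (threshold at the midpoint of the two
    class means) and return its training accuracy."""
    m0 = statistics.mean(r[0] for r, y in zip(rows, labels) if y == 0)
    m1 = statistics.mean(r[0] for r, y in zip(rows, labels) if y == 1)
    cut = (m0 + m1) / 2
    preds = [int(r[0] > cut) for r in rows]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def search_pipelines(rows, labels):
    """Score each imputer + classifier pipeline and keep the best —
    the essence of automated pipeline construction."""
    pipelines = {"mean-impute": impute_mean, "zero-impute": impute_zero}
    scores = {name: threshold_accuracy(f(rows), labels)
              for name, f in pipelines.items()}
    return max(scores, key=scores.get), scores

best, scores = search_pipelines(DATA, LABELS)
```

Here zero-imputation drags the missing class-1 value toward class 0 and hurts accuracy, so the search settles on mean imputation; the real system makes the analogous choice per disease and per dataset.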
Max Welling (Research Chair in Machine Learning, University of Amsterdam, Netherlands)
Professor Welling is a research chair in Machine Learning at the University of Amsterdam and a Vice President Technologies at Qualcomm. He has a secondary appointment at the Canadian Institute for Advanced Research (CIFAR). He is a co-founder of Scyfer BV, a university spin-off in deep learning that was acquired by Qualcomm. Max Welling served as associate editor-in-chief of IEEE TPAMI from 2011 to 2015 and has been program chair for AISTATS (2009), NIPS (2013) and ECCV (2016), and general chair of NIPS in 2014. He received an NSF CAREER grant in 2005 and is a recipient of the 2010 ECCV Koenderink Prize. Welling’s machine learning research labs are AMLAB, the Qualcomm QUVA Lab and the Bosch Delta Lab.
Talk title: Artificial Intelligence per Kilowatt-hour
Synopsis: The successes of deep learning are spectacular. But modern deep learning architectures with hundreds of layers and millions of parameters require an extraordinary amount of computation and data to train and execute. At the same time, more compute is moving to the edge. We predict that the next battle in AI is over how much intelligence can be squeezed out of every kilowatt-hour of energy.
Max will discuss a number of ideas for making progress on this problem, including compressing deep neural nets, computing with low bit precision, and spiking neural networks.
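A minimal sketch of one of these directions (post-training weight quantisation; the weights and the single-scale scheme below are illustrative assumptions, not any particular production method): map float weights to 8-bit integers with one shared scale, and dequantise at inference time.

```python
# Invented float weights for illustration.
WEIGHTS = [0.62, -1.30, 0.05, 0.98, -0.44]

def quantize(weights, bits=8):
    """Symmetric quantisation: one scale maps floats onto signed
    integers in [-(2**(bits-1) - 1), 2**(bits-1) - 1]."""
    scale = max(abs(w) for w in weights) / (2 ** (bits - 1) - 1)
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    return [qi * scale for qi in q]

q, scale = quantize(WEIGHTS)
recovered = dequantize(q, scale)
max_err = max(abs(w - r) for w, r in zip(WEIGHTS, recovered))
# Rounding error is bounded by half a quantisation step (scale / 2),
# while storage drops from 32 bits per weight to 8.
```

The energy win comes from cheaper memory traffic and integer arithmetic; the research question is how far the bit-width can fall before accuracy does.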