Stuart Russell received his BA with first-class honours in physics from Oxford University in 1982 and his PhD in computer science from Stanford in 1986. He then joined the faculty of the University of California, Berkeley, where he is Professor (and formerly Chair) of Electrical Engineering and Computer Sciences, holder of the Smith-Zadeh Chair in Engineering, and Director of the Center for Human-Compatible AI. He has served as an Adjunct Professor of Neurological Surgery at UC San Francisco and as Vice-Chair of the World Economic Forum's Council on AI and Robotics.

His book "Artificial Intelligence: A Modern Approach" (with Peter Norvig) is the standard text in AI; it has been translated into 14 languages and is used in over 1,400 universities in 128 countries.

His current concerns include the threat of autonomous weapons and the long-term future of artificial intelligence and its relation to humanity.

About the event

Is it reasonable to expect that AI capabilities will eventually exceed those of humans across a range of real-world decision-making scenarios? Should this be a cause for concern, as Elon Musk, Stephen Hawking, and others have suggested? While some in the mainstream AI community dismiss the issue, Professor Russell will argue instead that a fundamental reorientation of the field is required. Instead of building systems that optimise arbitrary objectives, we need to learn how to build systems that will, in fact, be beneficial for us.

In this talk, he will show that it is useful to imbue systems with explicit uncertainty concerning the true objectives of the humans they are designed to help. This uncertainty causes machine and human behaviour to be inextricably (and game-theoretically) linked, while opening up many new avenues for research. The ideas in this talk are described in more detail in his new book, "Human Compatible: AI and the Problem of Control" (Viking/Penguin, 2019).