Hear from AI experts and Turing AI World-Leading Researcher Fellows as we explore the latest research 'from the labs'.
Dustin Carlino, Research Associate at The Alan Turing Institute, will be discussing: A digital twin to design, analyse, and visualise low-traffic neighbourhoods:
Imagine a London with vehicle traffic restricted to a network of major roads, with no through-traffic anywhere else. To combat rising levels of drivers cutting through residential streets, local authorities across the UK have been creating low-traffic neighbourhoods (LTNs). Modal filters in the street physically prevent drivers from crossing, but allow pedestrians and cyclists through. When filters are strategically placed to prevent all through-traffic, people in the area are likely to enjoy better air quality, less noise pollution, and higher levels of physical activity. To help design and predict the effects of LTNs, the A/B Street project has created a new open source tool, ltn.abstreet.org. In a web browser, anyone can analyse vehicle shortcuts through an area, create new modal filters, and predict any "spillover" effects onto nearby streets. Users can also check how changes affect their own personal car journeys by comparing routes before and after a proposed LTN. This talk will demonstrate the software, then dive into the algorithmic challenges of building it, such as partitioning space between major roads, calculating cell connectivity, predicting cut-throughs, and heuristics for automated filter placement.
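One of the algorithmic challenges mentioned above, calculating cell connectivity, can be illustrated with a short sketch. The idea (as described in the abstract, not the tool's actual implementation) is that modal filters split a neighbourhood's minor-street graph into "cells" of streets that remain mutually reachable by car. The data structures here, `streets` as an adjacency map and `filters` as a set of blocked edges, are assumptions for illustration:

```python
from collections import deque

def compute_cells(streets, filters):
    """Partition a neighbourhood's minor-street graph into cells.

    streets: dict mapping a junction to the set of adjacent junctions
             (minor roads only; major roads form the boundary).
    filters: set of frozenset({a, b}) edges blocked by a modal filter.
    Returns a list of cells, each a set of junctions mutually
    reachable by car once the filters are in place.
    """
    unvisited = set(streets)
    cells = []
    while unvisited:
        start = unvisited.pop()
        cell = {start}
        queue = deque([start])
        # Breadth-first search, skipping any street segment a filter blocks.
        while queue:
            node = queue.popleft()
            for nbr in streets[node]:
                if frozenset({node, nbr}) in filters:
                    continue  # cars cannot pass a modal filter
                if nbr in unvisited:
                    unvisited.discard(nbr)
                    cell.add(nbr)
                    queue.append(nbr)
        cells.append(cell)
    return cells
```

For example, filtering the single street between junctions B and C in a chain A-B-C-D splits the neighbourhood into two cells, {A, B} and {C, D}, so no car can cut through from A to D.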
Mirella Lapata, Professor of Natural Language Processing in the School of Informatics at The University of Edinburgh, will be discussing: Movie Summarization as a Testbed for Machine Reasoning:
The success of neural networks in a variety of applications and the creation of large-scale datasets have played a critical role in advancing machine understanding of natural language, on its own or together with other modalities. In this talk, we will argue that long narratives, as exemplified by movies, can be used to approximate real-world understanding of language and the complex reasoning associated with it. So what are the reasoning steps involved when we are watching a movie? As the movie progresses, we become familiar with the basic plot, the main characters, the important actions, the story, and the film's main themes. In the end, we may be able to produce a summary of what happened and who did what to whom. We will approximate some of these steps with a model that segments movies into thematic units, called turning points, and illustrate how they can facilitate the analysis of complex narratives, such as screenplays. We will further formalise the generation of a shorter version of a movie as the problem of identifying scenes with turning points, and present a graph neural network model for this task based on linguistic and audiovisual information. Finally, we will discuss why the representation of screenplays as (sparse) graphs allows us to model various reasoning steps required for movie understanding, is interpretable, and reveals the morphology of different movie genres.
Philip Torr, Professor of Engineering Science at The University of Oxford, will be discussing: Towards Robustness for Deep Learning:
Computer vision research has entered a new golden age, with applications of computer vision beginning to enter all of our lives via mobile phones, autonomous cars, identifying malignant tumours, and more. The startling progress over just the past few years has been enabled by a happy confluence of the availability of "big data", cheap memory, and cheap processing in the form of GPUs, leading to the re-emergence of neural networks as the dominant paradigm: the "deep learning revolution". Deep learning now sets the state of the art in nearly every area of classical computer vision, including tracking, action recognition, segmentation, object recognition, depth recovery, and image synthesis, and looks to be the cornerstone of high-performing methods for some time to come. However, despite this performance, deep nets have also been shown to be vulnerable to adversarial examples: images that are classified incorrectly (often with high confidence) yet that a human would find indistinguishable from correctly labelled training images. Indeed, current deep learning based computer vision algorithms turn out to be surprisingly fragile to perturbation. The existence of adversarial examples challenges trust in computer vision systems, and the EU's High-Level Expert Group on AI has highlighted that deployed AI should be robust. Recent computer vision related deaths involving autonomous cars underline this need. Within this talk, I will give an outline of the problem, and some directions for its solution, which should be accessible to a general audience.
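To make the notion of an adversarial example concrete, here is a minimal sketch of one well-known attack, the fast gradient sign method, applied to a toy linear classifier. This is an illustration of the general phenomenon the abstract describes, not a method from the talk; the classifier and all parameter names are assumptions:

```python
def fgsm_perturb(image, weights, bias, true_label, epsilon=0.1):
    """Fast Gradient Sign Method against a toy linear classifier.

    Classifier: score = sum(w * x) + bias; positive score means class 1.
    For a linear model, the gradient of the loss with respect to the
    input is proportional to +/- weights, so the attack nudges every
    pixel by epsilon in the direction that most hurts the true class.
    image, weights: lists of floats; true_label: 0 or 1.
    """
    direction = 1.0 if true_label == 1 else -1.0
    sign = lambda w: 1.0 if w > 0 else (-1.0 if w < 0 else 0.0)
    # x_adv = x + epsilon * sign(gradient of loss w.r.t. x)
    return [x - epsilon * direction * sign(w)
            for x, w in zip(image, weights)]

def score(image, weights, bias):
    """Raw classifier score; positive means class 1, negative class 0."""
    return sum(w * x for w, x in zip(weights, image)) + bias
```

With weights [2, -1], bias 0, and an image [0.1, 0.1] correctly scored as class 1, a perturbation of epsilon = 0.1 per pixel, tiny relative to the image, flips the classifier's decision, illustrating the fragility the talk addresses in deep networks.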