Deep learning for object tracking over occlusion

Project goal

Using deep learning to discover occluded objects in an image.

People

Interns

Mario Parreño Centeno and Ricardo Sánchez Matilla

Project Supervisors

Vaishak Belle, Turing Fellow, University of Edinburgh
Chris Russell, Turing Fellow, University of Surrey
Brooks Paige, Turing Research Fellow, University of Cambridge

Project detail

Numerous applications in data science require us to parse unstructured data in an automated fashion. However, the models used to do so are often not human-interpretable. Given the increasing need for explainable machine learning, an inherent challenge is whether interpretable representations can be learned from data.

Consider the application of object tracking. Classically, algorithms simply track the changing positions of objects across frames. But in many complex applications, ranging from robotics to satellite imagery to security, objects become occluded and thus disappear from view. The first task here is then to learn semantic representations for concepts such as “inside”, “behind” and “contained in”.

The first supervisor (V. Belle) has written a number of papers on using probabilistic programming languages to define such occlusion models, instantiating them as graphical models, and on applying that construction to particle filtering (PF) and decision-theoretic planning problems.
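As a rough illustration of what a hand-specified occlusion model looks like in this setting, the sketch below runs a bootstrap particle filter on a 1-D track where detections vanish inside a fixed occluder region. This is not the supervisors' implementation; the occluder interval, noise parameters and all function names are illustrative assumptions.

```python
# Minimal sketch: a bootstrap particle filter whose observation model includes a
# hand-crafted occlusion term. When a particle lies inside the known occluder, a
# missing detection is expected rather than penalised. All values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

N_PARTICLES = 500
MOTION_STD = 0.5          # std of the random-walk motion model
OBS_STD = 1.0             # std of the detection noise
P_DETECT_VISIBLE = 0.9    # detection probability when the object is visible
P_DETECT_OCCLUDED = 0.05  # detection probability when the object is occluded

# Hand-defined occluder: an interval on a 1-D track the object passes "behind".
OCCLUDER = (10.0, 15.0)


def is_occluded(x):
    """Hand-crafted occlusion model: the object is hidden inside the occluder interval."""
    return (x >= OCCLUDER[0]) & (x <= OCCLUDER[1])


def likelihood(particles, detection):
    """Observation likelihood with an explicit occlusion term."""
    p_detect = np.where(is_occluded(particles), P_DETECT_OCCLUDED, P_DETECT_VISIBLE)
    if detection is None:
        # No detection: most likely if the particle is inside the occluder.
        return 1.0 - p_detect
    # Detection present: Gaussian measurement noise around the particle position.
    gauss = np.exp(-0.5 * ((detection - particles) / OBS_STD) ** 2)
    return p_detect * gauss


def step(particles, detection):
    """One predict-update-resample cycle of the bootstrap filter."""
    particles = particles + 1.0 + rng.normal(0.0, MOTION_STD, size=particles.shape)
    weights = likelihood(particles, detection)
    weights = weights / weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]


# Simulated track: the object moves right and is undetected while inside the occluder.
particles = rng.normal(0.0, 1.0, size=N_PARTICLES)
true_x = 0.0
for t in range(25):
    true_x += 1.0
    occluded = is_occluded(np.array([true_x]))[0]
    detection = None if occluded else true_x + rng.normal(0.0, OBS_STD)
    particles = step(particles, detection)
    print(f"t={t:2d}  true={true_x:5.1f}  estimate={particles.mean():5.1f}  "
          f"detected={'no' if detection is None else 'yes'}")
```

The point of the toy example is that the occlusion knowledge lives entirely in the hand-written is_occluded() rule; the filter itself never learns where objects disappear.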

However, the main barrier to success was that these occlusion models had to be carefully defined by hand, which makes them difficult to deploy in new contexts. The main challenge of this internship is to take steps towards automating the learning of these occlusion models directly from data.
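The sketch below illustrates the intended direction on the same toy setup: a small network is trained on (position, detected-or-not) pairs to learn where detections go missing, so that the learned detection probability could stand in for the hand-written occlusion rule inside the filter's likelihood. The synthetic data, architecture and training details are assumptions for illustration only, not the project's method.

```python
# Minimal sketch: learn an occlusion/detection model from data instead of
# hand-coding it. A tiny network maps a 1-D position to the probability that a
# detection is observed there. All data and hyperparameters are illustrative.
import numpy as np
import torch
from torch import nn

rng = np.random.default_rng(0)

# Synthetic training data: positions on the track and whether a detection was
# observed (detections vanish inside the hidden occluder [10, 15]), with label noise.
positions = rng.uniform(0.0, 25.0, size=(2000, 1)).astype(np.float32)
detected = ((positions < 10.0) | (positions > 15.0)).astype(np.float32)
flip = rng.uniform(size=detected.shape) >= 0.95
detected = np.where(flip, 1.0 - detected, detected).astype(np.float32)

model = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

x = torch.from_numpy(positions)
y = torch.from_numpy(detected)
for epoch in range(300):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# The learned detection probability can now replace the hand-crafted
# occlusion term in the particle filter's observation likelihood.
with torch.no_grad():
    for pos in [5.0, 12.5, 20.0]:
        p = torch.sigmoid(model(torch.tensor([[pos]]))).item()
        print(f"position {pos:5.1f}: learned P(detected) = {p:.2f}")
```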