Siddharth Narayanaswamy (Sid) is a Reader in Explainable AI in the School of Informatics at the University of Edinburgh and a Safe & Ethical AI Research Fellow at the Alan Turing Institute. Previously, he was a Senior Researcher in Engineering at the University of Oxford and a Postdoctoral Scholar in Psychology at Stanford. He obtained his PhD in Electrical and Computer Engineering from Purdue University, with a dissertation on compositionality in vision and language.
His research sits at the confluence of machine learning, computer vision, natural-language processing, cognitive science, robotics, and elements of cognitive neuroscience. Its central goal is to better understand perception and cognition, with a view to enabling human-intelligible machine intelligence.
Sid's research focus is on building computational systems for perception and reasoning that are human-interpretable by default. His approach advances probabilistic inference in hybrid (discrete + continuous) models through probabilistic programming and amortised structured deep generative models, in order to better capture unsupervised (or weakly supervised) interpretable representations of perceptual data. He is also interested in how such interpretability can enable the fair and ethical use of machine intelligence systems.