Dr Subramanian Ramamoorthy is Professor and Chair of Robot Learning and Autonomy in the School of Informatics at the University of Edinburgh, where he has been on the faculty since 2007. He is an Executive Committee Member of the Edinburgh Centre for Robotics and of the Bayes Centre. He received his PhD in Electrical and Computer Engineering from The University of Texas at Austin in 2007. He is an elected Member of the Young Academy of Scotland at the Royal Society of Edinburgh, and has been a Visiting Professor at Stanford University and the University of Rome 'La Sapienza'.
He serves as Vice President - Prediction and Planning at FiveAI, a UK-based startup company focussed on developing a technology stack for autonomous vehicles. His research concerns robot learning and decision-making under uncertainty, with particular emphasis on achieving safety and robustness in artificially intelligent systems.
In recent years, AI has enjoyed a sustained period of widespread adoption, with AI-based systems being deployed in domains ranging from information retrieval to home entertainment. While many early applications were in domains where errors were tolerable to an extent, e.g., advertisement placement, we now see AI being deployed in safety-critical applications involving physical interaction between humans and machines. This raises several new challenges that must be addressed if these technologies are to realise their potential benefits. A first challenge is the need for robust decision-making despite noisy sensing, and for assurances regarding the closed-loop behaviour of these systems, obtained through novel tools for introspection and interrogation of models.
One approach that will be investigated in detail is program induction: reinterpreting or analysing complex models by casting them in a compositional, programmatic form that is compatible with tools for analysis and safety verification (including tools from control theory and formal methods). A second challenge pertains to ambiguity in models and specifications, requiring techniques that use dialogue with the human user to iteratively expand the task and model specification, in order to better approximate the intended meaning and behaviour. We will develop the paradigm of programming by discussion, wherein the target of the dialogue is the model or reward function that is then used in decision-making. These new tools will enable progress towards the larger goal of safety-critical AI, in the context of experimental efforts within the domain of healthcare, where the Co-I's involvement allows us to make significant inroads into the emerging area of surgical assistance.
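To make the "programming by discussion" idea concrete, the following is a minimal sketch, not the project's actual method: it assumes a linear reward over trajectory features and treats each round of dialogue as a preference query ("do you prefer behaviour a or b?") that prunes a set of candidate reward hypotheses. All function names and parameters here are illustrative inventions.

```python
import random


def dot(w, f):
    """Inner product of a weight vector and a trajectory feature vector."""
    return sum(wi * fi for wi, fi in zip(w, f))


def infer_reward(hypotheses, trajectories, prefer, n_rounds=50, seed=0):
    """Iteratively narrow a set of candidate reward weight vectors.

    `prefer(a, b)` stands in for one round of dialogue with the user:
    it returns True if the user prefers trajectory a over trajectory b.
    Each answer eliminates every hypothesis that disagrees with it.
    """
    rng = random.Random(seed)
    live = list(hypotheses)
    for _ in range(n_rounds):
        if len(live) <= 1:
            break  # the dialogue has resolved the ambiguity
        a, b = rng.sample(trajectories, 2)
        if prefer(a, b):
            live = [w for w in live if dot(w, a) >= dot(w, b)]
        else:
            live = [w for w in live if dot(w, b) >= dot(w, a)]
    return live


# Toy usage: a hidden "intended" reward answers the preference queries,
# and the surviving hypotheses approximate it.
true_w = (1.0, -0.5)
trajectories = [(1, 0), (0, 1), (2, 1), (1, 2), (3, 0)]
hypotheses = [(a / 2, b / 2) for a in range(-2, 5) for b in range(-4, 3)]
survivors = infer_reward(hypotheses, trajectories,
                         prefer=lambda a, b: dot(true_w, a) > dot(true_w, b))
```

The design point this sketch illustrates is that the dialogue's target is the reward function itself, not the policy: the surviving hypotheses always include the intended weights, and the decision-maker can then plan against that narrowed set.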