Vaishak Belle

Turing Fellow Vaishak Belle’s career was initially inspired by science fiction – now he works to make machine learning interpretable and to understand how responsible decision-making could be codified

In a nutshell, tell me about your research.

I work on AI and machine learning. Broadly, I’m motivated by the autonomous agency of machines – in the sense of how we can make them learn faster and model elements of the world more accurately, and what this might tell us about our own cognition. Quite a bit of my work involves thinking about models of space, time, dynamics and beliefs, and then finding ways in which these models could be partially learnt from data. Having said that, my work has recently focused on making machine learning interpretable and on understanding how ethical behaviour and responsible decision-making could be codified.

What got you interested in your field of research? 

I was initially inspired, as I’m sure many others are, by science fiction. When I first entered the field, however, I was surprised: on the one hand, there weren’t many rigorously defined models of human-level cognition, and on the other, much of the mathematically principled work in the field was concerned with narrow problems around predictions and labels based on collected data. Of course, there were many impactful applications of this work, but I was interested in the more basic problems of artificial cognition.

This got me interested in mathematical logic. In this area there were attempts to tackle problems around cognition; however, there was very little work on acquisition and learning from data. I then dabbled in machine learning, before ultimately looking at how the best of both worlds (logic and learning) could be combined.

What do you hope is the impact of your research?

The main goal is to have a principled and rigorous account of artificial cognition, one that is not defined solely by the ability to classify images, such as those of cats and dogs, but in terms of a commonsensical understanding of the world. This means, among other things, the ability to communicate, to reason, to manipulate, and to contextualise decision-making in a way that is open-ended or unrestricted.

Of course, since all of this is partly inspired by human cognition but doesn’t necessarily replicate the architecture/functionality in a biologically plausible way, the question of how insights from the two fields can influence each other remains open.

Can you give us a taster of what you'll be discussing at AI UK?

I will mainly focus on interpretability in machine learning. Although the topic of explainability has interesting connections to fields such as the social sciences and human-computer interaction, there is a more urgent need to inspect and scrutinise the boundaries of decision making in machine learning models. I’ll be focusing mostly on how we might approach this, including some recent work we’ve done to categorise these approaches, and future steps involving causality and human-readable model learning.

What advice would you give to your younger self? 

Not to be so worried about doing your own thing – go where your interests take you, even if you find yourself on unsure footing!

Finally, when not working, what can you be found doing?

I enjoy travelling and backpacking, running, literature and, of late, looking after my infant daughter.