Matthew Wicker

Matthew Wicker develops theoretical methods and practical tools to help demonstrate the safety, fairness and robustness of machine learning models

In a nutshell, tell us about what you do.

My work focuses on the safety, explainability and fairness of machine learning methods, particularly supervised learning and deep learning. I aim to explore and develop both theoretical techniques and practical tools that offset the negative externalities (or drawbacks) of applying machine learning methods.

Can you give an example of what these negative externalities might be?

A big one is bias. Say, for instance, you’ve trained a neural network to perform credit risk assessments on people applying for loans. You want to make sure that it doesn’t give different predictions to individuals who differ only in a sensitive attribute – like race, gender, or age – which has nothing to do with their creditworthiness in the first place. The trouble is that even if you don’t train a neural network to consider these sensitive attributes, unfairness can still creep in through correlated features. For example, a person’s postcode can be highly correlated with their race.

What I aim to do with my research is provide guarantees, in the form of concrete mathematical evidence, that changing sensitive attributes or correlated features will not change the model's prediction. In this way, we can be more certain that a neural network isn't just accurate, but fair.
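To make the idea concrete, here is a minimal sketch of my own (an illustration of the general principle, not Matthew Wicker's actual method or tooling): for a simple linear credit-scoring model, the maximum amount the score can move when only the sensitive attribute changes can be bounded exactly, and that bound can be checked against the model's decision margin for a given applicant. All weights, feature names and ranges below are hypothetical.

```python
# Toy individual-fairness certificate for a linear scoring model.
# Decision rule: accept if score > 0, reject otherwise.

def score(weights, bias, features):
    """Linear credit score: bias + w . x."""
    return bias + sum(w * f for w, f in zip(weights, features))

def certified_fair(weights, bias, features, sensitive_idx, attr_range):
    """True if no change of the sensitive attribute (anywhere within
    attr_range) can flip this applicant's accept/reject decision.
    For a linear model the worst-case score shift from changing one
    feature is |w_sensitive| * (range width), so comparing it to the
    decision margin gives a sound guarantee."""
    s = score(weights, bias, features)
    margin = abs(s)  # distance from the decision threshold at 0
    lo, hi = attr_range
    max_shift = abs(weights[sensitive_idx]) * (hi - lo)
    return max_shift < margin

# Hypothetical model: features = [income, debt_ratio, sensitive_attr].
weights = [0.8, -1.2, 0.01]   # near-zero weight on the sensitive attribute
bias = -0.3
applicant = [1.5, 0.4, 1.0]
print(certified_fair(weights, bias, applicant,
                     sensitive_idx=2, attr_range=(0.0, 1.0)))
```

Note that this only certifies the direct effect of the sensitive attribute; handling correlated proxies like postcode requires bounding changes to those features too, which is where the heavier theoretical machinery comes in.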

What are some potential applications for this?

All the techniques I work on are general-purpose, so they could be applied in any context where sensitive information is used or where safety is a primary concern. Ultimately, we want to provide a formal guarantee that minor changes to a neural network's input won't cause major changes to its output. And that could be useful for showing external parties like regulators or users themselves that a neural network has met certain criteria for fairness, safety, or robustness.
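One standard way such guarantees are computed in the certification literature is interval bound propagation: push the whole box of perturbed inputs [x - ε, x + ε] through the network and check that every point in the resulting output range gets the same decision. The sketch below is my own illustration of that idea on a hypothetical hand-weighted two-layer ReLU network, not the author's code.

```python
# Interval bound propagation through a tiny ReLU network.
# If the worst-case output over the input box stays on one side of the
# decision threshold, every perturbation within eps is provably safe.

def affine_bounds(lo, hi, W, b):
    """Propagate interval bounds through y = W x + b (row-wise)."""
    new_lo, new_hi = [], []
    for row, bias in zip(W, b):
        # Positive weights pull from the lower bound for the minimum,
        # negative weights pull from the upper bound, and vice versa.
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        h = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        new_lo.append(l)
        new_hi.append(h)
    return new_lo, new_hi

def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to endpoints."""
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

def certify(x, eps, layers):
    """True if every input within +/- eps of x gets a positive score."""
    lo = [xi - eps for xi in x]
    hi = [xi + eps for xi in x]
    for i, (W, b) in enumerate(layers):
        lo, hi = affine_bounds(lo, hi, W, b)
        if i < len(layers) - 1:  # ReLU on all hidden layers
            lo, hi = relu_bounds(lo, hi)
    return lo[0] > 0.0  # worst-case score still positive => certified

# Hypothetical 2-2-1 network with hand-picked weights.
layers = [
    ([[1.0, -0.5], [0.5, 1.0]], [0.1, -0.2]),
    ([[1.0, -1.0]], [0.05]),
]
x = [1.0, 0.5]
print(certify(x, 0.01, layers))  # small perturbations: certified
print(certify(x, 5.0, layers))   # huge perturbations: certificate fails
```

Because the bounds are sound but conservative, a failed certificate does not prove an attack exists; it only means this method could not rule one out, which is why tighter (and more expensive) verification methods are an active research area.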

Your work at the Turing is part of an ongoing collaboration with Accenture – can you tell us more about that?

The partnership between the Turing and Accenture has allowed me to combine both theory and practical application, in a way that ultimately makes my work more useful and impactful. While my work at the Turing focuses on developing these theoretical guarantees for neural networks, with Accenture, I get to develop tools for actually applying those theories. It’s been really fruitful to take things that we’ve developed on a theoretical level and look at them from a more practical angle, and get feedback on what needs to be done to translate that theory into a real use case.

And finally, when not working, what can you be found doing?

I love live music, so you can often find me at a concert! I also enjoy photography as a hobby. I really like taking Polaroid photos and, lately, I’ve been hunting down old cameras from the 70s and fixing them up.