Fairer algorithm-led decisions

Turing researchers from diverse fields have produced a new way of approaching fairness in algorithm-led decisions, by examining the causal relationships behind the factors that can lead to biased decision-making

Last updated
Friday 27 Apr 2018

Introduction

Algorithms are increasingly assisting in life-changing decisions, such as in parole hearings, loan applications, and university admissions. However, if the data used to train an algorithm contains societal biases against certain races, genders, or other demographic groups, then the algorithm will reproduce those biases.

Four researchers, a mix of Turing Fellows and Research Fellows who met at the Institute, have developed a framework that aims to ensure fairness in algorithm-led decision-making systems by taking into account different social biases and compensating for them effectively. The research stems from the idea that a decision is fair towards an individual if the outcome is the same in the actual world as it would be in a ‘counterfactual’ world, in which the individual belongs to a different demographic.
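In the more formal terms of the team's 'Counterfactual Fairness' paper, with $A$ the sensitive attribute, $X$ the remaining observed features, $U$ the latent background variables and $\hat{Y}$ the prediction, the requirement can be written (roughly) as

\[
P\big(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x, A = a\big) \;=\; P\big(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x, A = a\big)
\]

for every outcome $y$ and every alternative value $a'$ of the sensitive attribute: intervening to change only the sensitive attribute, while holding an individual's background circumstances fixed, should leave the distribution of the prediction unchanged.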

The framework calls for decision-making algorithms to be designed with the input of expert knowledge about the situations in which the algorithms are used: knowledge about the relationships between the key factors and attributes relating to the people and processes involved.

The work is providing practitioners with customised techniques for solving a wide array of problems, in applications from policy-making to policing.

How did it start?

The project was born of the unique environment offered by the Turing. The researchers, with interests across statistics, mathematics, and machine learning, were unified not just by their shared interests, but by their shared lunches in the Institute’s kitchen.

Turing Fellow Chris Russell, on the collaboration’s early days: “I remember a lot of talking over lunch and Josh [Loftus, former Turing Research Fellow] saying ‘we need to do something about these algorithms.’” Matt Kusner, Turing Research Fellow, continues, “We were asking ‘how do you make sure algorithms don’t discriminate against certain groups?’ Then Josh mentioned we could think about using causal inference.” Loftus adds: “I had this idea that fairness and causal inference were kind of dual problems. In causal inference, you try and figure out the effect of a certain variable, when it might be confounded with [i.e. both be affected by and affect] other things. In fairness, you’re trying to make a certain variable not have any effect, when it too might potentially be confounded with other things.”

This notion led the trio to enlist the help of Turing Fellow Ricardo Silva. Loftus explains, “Ricardo helped us find what specific causal methods and language we could use to make our thoughts more rigorous, clear, and organised.”

What happened?

Finding what’s fair

The next few months were spent exploring various techniques and refining illustrative examples that showed how causal relationships between variables are important for figuring out what’s fair and what’s not fair. This led to the team publishing the paper ‘Counterfactual Fairness’ in March 2017. One hypothetical example presented in the paper was for car insurance:

Imagine a car insurance company wants to price insurance for car owners by predicting their accident rate with an algorithm. The company assumes that aggressive driving is linked both to drivers being more likely to have accidents and a preference for red cars.

Whilst it would seem that the company could use red car preference as a good way of predicting who will cause accidents, there could be other variables at play: for example, what if individuals of a particular race are more likely to drive red cars, but are no more likely to drive aggressively or have accidents? If, in an attempt to be fair, the company removed or ignored race from the decision-making process, they might only have directly observable factors, like red car preference, from which to make a decision, thereby including, rather than removing, hidden racial biases.
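To see how this can happen in practice, here is a minimal, hypothetical simulation of the red car scenario (the variable names, effect sizes and simple logistic model below are illustrative assumptions made for this sketch, not the team's actual model). Race is removed from the predictor, yet the predictor still scores the two groups differently, because red car preference acts as a proxy for race:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical causal structure for the red car example:
#   latent aggressiveness -> red car preference and accidents
#   race                  -> red car preference only (never accidents)
race = rng.integers(0, 2, size=n)                 # two groups, coded 0 and 1
aggressiveness = rng.normal(size=n)               # unobserved confounder
red_car = (aggressiveness + 1.5 * race + rng.normal(size=n) > 1).astype(int)
accident = (aggressiveness + rng.normal(size=n) > 1).astype(int)

# "Fair" attempt: drop race and predict accidents from red car preference alone
model = LogisticRegression().fit(red_car.reshape(-1, 1), accident)
risk = model.predict_proba(red_car.reshape(-1, 1))[:, 1]

# The predictor still penalises group 1, even though the two groups have the
# same true accident rate, because red car preference carries information
# about race as well as about aggressiveness.
for group in (0, 1):
    print(f"group {group}: mean predicted risk {risk[race == group].mean():.3f}, "
          f"true accident rate {accident[race == group].mean():.3f}")
```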

Loftus believes this example illustrates an important point, “If we start leaving sensitive variables, like people’s race, out of our dataset, which some laws actually require, we can become unable to understand what the causes of certain observations are, and therefore unable to prevent bias against those sensitive attributes.”

“Fairness is a subtle and hard problem; if we can get people to understand that, a lot of the way forward flows from that”

Joshua Loftus, NYU Stern

Instead, when an algorithm is being designed to predict an outcome or make a decision, the variables used to make that prediction or decision need to be carefully assessed. Any differences in these variables identified as being caused by sensitive factors, like race, then need to be cancelled out. This is the basis for ensuring counterfactual fairness in algorithms using causal models. As Kusner explains, “The causal model allows you to reimagine any individual as a different race or gender, or any different attribute, and make a prediction on that imagined person. Then you can see if it’s the same as the prediction that would have happened originally.”
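Continuing the same toy world, a minimal sketch of that counterfactual check might look like the following (again purely illustrative: the structural equations, noise terms and naive scoring rule are assumptions made for this sketch, not the team's implementation). Each individual's unobserved background is held fixed, their race is switched, the observed variables are regenerated, and the two predictions are compared:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical structural equations with explicit noise terms, so that an
# individual's counterfactual can be computed by holding their noise fixed.
race = rng.integers(0, 2, size=n)
u_aggr = rng.normal(size=n)          # latent aggressiveness (not caused by race)
e_car = rng.normal(size=n)           # individual-level noise on car choice
red_car = (u_aggr + 1.5 * race + e_car > 1).astype(int)

def predict_risk(red_car):
    """A naive score that treats red car preference as a proxy for risk."""
    return 0.2 + 0.5 * red_car

# Counterfactual step: keep (u_aggr, e_car) fixed, flip race, and regenerate
# what would have been observed in that counterfactual world.
race_cf = 1 - race
red_car_cf = (u_aggr + 1.5 * race_cf + e_car > 1).astype(int)

actual = predict_risk(red_car)
counterfactual = predict_risk(red_car_cf)

# A counterfactually fair predictor would give identical scores in both worlds;
# here the score changes for everyone whose car choice depends on their race.
print("share of people whose score changes:",
      np.mean(np.abs(actual - counterfactual) > 1e-9))
```

In this toy world, a counterfactually fair alternative would score people on (an estimate of) the latent aggressiveness itself, which is not causally downstream of race, so the actual and counterfactual scores would coincide for every individual.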

The team also emphasises the need for algorithms to be designed with the input of expert, situational knowledge. “You need somebody who understands the process at hand,” Ricardo Silva explains. “The data just tells you the raw results, it doesn’t tell you what the processes that led you to get a certain result are.” For example, if a bank was looking at loan recommendations, someone either at the bank or at a regulatory agency would need to provide their context-specific knowledge about what factors they believe contribute to someone getting a loan, and how these are influenced by other factors, such as demographic background.
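As a rough illustration of what encoding that expert knowledge might look like, the sketch below writes one possible causal graph for a loan setting as a small directed graph (the node names, edges and the use of the networkx library are all assumptions made for this example, not a model endorsed by the team or any bank):

```python
import networkx as nx

# Hypothetical causal graph a domain expert at a bank or regulator might
# specify for loan decisions; every node and edge here is illustrative.
loan_graph = nx.DiGraph([
    ("demographic_group", "postcode"),
    ("demographic_group", "employment_history"),
    ("postcode", "credit_score"),
    ("employment_history", "credit_score"),
    ("credit_score", "loan_decision"),
    ("employment_history", "loan_decision"),
])

# Variables causally downstream of the sensitive attribute: feeding these
# directly into a predictor risks smuggling the attribute's influence back in.
downstream = nx.descendants(loan_graph, "demographic_group")
print("downstream of the sensitive attribute:", sorted(downstream))
```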

Research Fellow Matt Kusner presenting ‘Counterfactual Fairness’ at the world’s premier machine learning conference NIPS in Dec 2017

Counterfactual fairness as a tool

In the team’s paper they detail a set of technical guidelines for ensuring fairness for those utilising automated decision-making. These guidelines give practitioners an idea of how they should structure their problem, what is implied by their assumptions, and how they could evaluate it all using a causal model.

Silva does offer some caution, however: “You’re never going to have a fool-proof solution. You just have to do the best you can with the knowledge that you have and be open that you might be wrong, and iterate to a better solution.”

Sharing with the machine learning community

The team presented their work in December 2017 at the Neural Information Processing Systems (NIPS) conference in California, one of the world’s premier machine learning conferences.

In a year of record-breaking submissions for the conference, the team were invited to give an oral presentation of their research: one of only 40 of the 3,000-plus submitted papers selected for this kind of presentation.

“This work is the first to show us how we might use causality to uncover injustices in algorithms”

Simon DeDeo, Assistant Professor in Social and Decision Sciences, Carnegie Mellon University

“The talk was absolutely massive,” Loftus enthuses, “The poster session also; we were just surrounded by a massive crowd for three hours non-stop answering questions.”

Simon DeDeo, Assistant Professor in Social and Decision Sciences at Carnegie Mellon University and previously a Visiting Researcher at the Turing, was at the conference and said: “A crucial part of how we make moral judgements is by talking about causes. The counterfactual fairness work is the first to show us how we might use this to uncover injustices in algorithms”.

What does the future hold?

The team are looking at various applications for their work and are actively reaching out to and engaging with policy-makers and lawyers. One example of where the team’s work could be valuable is in identifying racial biases in an algorithm used by judges and parole officers in the US for scoring criminal defendants’ likelihood of reoffending. ProPublica, an American non-profit investigative newsroom, collected and made publicly accessible a large dataset from the use of one such algorithm, called COMPAS. The Turing team have run their methods on the dataset to confirm that systematic racial bias could be identified and removed with their approach, and that the same approach could be used in the future for similar algorithmic processes.

The team recognises that refinements can be made to the methods they have produced to date. However, they are not the only ones working on these issues, as Russell suggests, “We were lucky enough to be first and now there’s a bunch of people who have got excited by our work and gone ‘we’re doing causal methods as well’, which has been really nice to see.”

Counterfactual fairness has the potential to have a wide-reaching, meaningful effect on algorithms that are becoming an increasingly prevalent part of modern life, but there are significant challenges that will need to be solved. Loftus concludes, “Understanding how decision-making technology impacts people in the real world is a subtle and hard problem. Fairness is a very hard problem. I think if we can get people to understand that, a lot of the way forward flows from that.”
