Machine learning and AI are increasingly used across the financial industry.
One successful application is lenders using machine learning algorithms to predict whether borrowers will repay their loans. At least two fundamental questions arise in this context.
Does AI fundamentally change how the financial industry serves the economy?
A natural way to tackle this question is to ask who will benefit from the adoption of machine learning in finance.
Better technology almost certainly makes lenders’ business more profitable. Specifically, algorithms allow lenders to reduce false positives (accepting people for credit who are likely to default) and false negatives (denying credit to people who are not likely to default), both of which would otherwise be a drag on profits.
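To make the two error types concrete, here is a minimal Python sketch of how a lender might count them against realised outcomes. The data are simulated and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated, purely illustrative data: one row per applicant.
defaulted = rng.binomial(1, 0.10, size=1_000)  # 1 = borrower would default
approved = rng.binomial(1, 0.85, size=1_000)   # 1 = lender approves the loan

# False positive: approving an applicant who goes on to default (a costly loss).
false_positives = int(np.sum((approved == 1) & (defaulted == 1)))

# False negative: rejecting an applicant who would have repaid (a missed profit).
false_negatives = int(np.sum((approved == 0) & (defaulted == 0)))

print(f"False positives (approved defaulters): {false_positives}")
print(f"False negatives (rejected repayers):   {false_negatives}")
```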
It is less clear whether all borrowers will benefit from new technology. On one hand, algorithms may single out borrowers who are already disadvantaged as bad credit risks, thereby exacerbating existing inequality. On the other hand, lenders may be able to provide loans to disadvantaged people if (and only if) they can accurately price credit risk. This could particularly matter for borrowers on low incomes, who are less likely to be approved for credit. These borrowers often turn to alternative providers such as payday lenders, and end up paying much higher interest rates.
In recent research conducted at Imperial College and the Federal Reserve Bank of New York, we evaluate these trade-offs using administrative data on US mortgages. An especially important question in the US context is whether disadvantaged racial groups—such as Black or Hispanic borrowers—will face less favourable terms when lenders use better algorithms.
[Figure: cumulative share of borrowers by change in perceived credit risk when moving from a logit to a random forest classifier, shown separately for each racial group]
The above figure, taken from our research paper, shows some of the key results. Our measure of perceived credit risk is the predicted probability of default (PD) produced by different statistical technologies. On the horizontal axis is the change in perceived credit risk as lenders move from traditional predictive technology (a “logit” classifier) to machine learning technology (a “random forest” classifier). On the vertical axis is the cumulative share of borrowers from each racial group that experience a given level of change.
Borrowers to the left of the solid vertical line represent “winners,” who are classed as less risky by the more sophisticated algorithm than by the traditional model. Reading off the cumulative shares at this line, we see that about 65% of White Non-Hispanic and Asian borrowers win, compared with about 50% of Black and Hispanic borrowers. In short, we find that the gains from new technology are skewed in favour of racial groups that already enjoy an advantage. Disadvantaged groups are less likely to benefit in this dataset.
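To make the construction behind the figure concrete, the Python sketch below shows the kind of calculation involved: fit both classifiers on the same non-race features, take the difference in predicted PDs, and compute the share of “winners” within each group. The file and column names are placeholders rather than the paper’s actual data.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder mortgage data; file and column names are illustrative, not the paper's.
df = pd.read_csv("mortgages.csv")
X = df[["income", "fico", "ltv"]]   # observables only; race is never used as a predictor
y = df["defaulted"]                 # 1 if the loan defaulted

# Perceived credit risk = predicted probability of default (PD) under each technology.
logit = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# Change in perceived risk when moving from logit to random forest
# (for brevity this predicts in-sample; a careful analysis would predict out-of-sample).
df["delta_pd"] = forest.predict_proba(X)[:, 1] - logit.predict_proba(X)[:, 1]

# "Winners": borrowers whose perceived PD falls under the new technology.
print(df.groupby("race")["delta_pd"].apply(lambda d: (d < 0).mean()))
```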
We stress that this does not constitute evidence of unlawful discrimination. Lenders in our setup are using algorithms to the best of their ability and in line with the letter of the current US law. In particular, they do not use sensitive variables such as borrowers’ race for prediction, which would be in breach of equal opportunities law. Rather, the unequal effects of new technology are driven by lenders’ use of other variables such as borrowers’ income, credit scores and loan-to-value ratios. It would not be sensible to prevent lenders from considering these variables when making loans. This leads to the next key question:
Are current financial regulations adequate for overseeing an AI-driven industry?
A worrying scenario would be one where machine learning algorithms “triangulate” each borrower’s race, effectively inferring race from other observable characteristics. Existing equal opportunities law would be powerless in this case.
In our research, we ask how much of the unequal impact of new technology is explained by triangulation. The answer is: not very much (depending on the measure used, between 2% and 8%).
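As a rough illustration of how one might put a number on triangulation (a hedged sketch, not necessarily the decomposition used in the paper): predict race from the non-race observables, then measure how much of the change in perceived PD tracks those inferred race probabilities.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Same placeholder data and columns as in the earlier sketch.
df = pd.read_csv("mortgages.csv")
X, y = df[["income", "fico", "ltv"]], df["defaulted"]

# Change in perceived PD when moving from logit to random forest.
logit = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
delta_pd = forest.predict_proba(X)[:, 1] - logit.predict_proba(X)[:, 1]

# How well can race be inferred ("triangulated") from the non-race observables?
race_probs = cross_val_predict(
    RandomForestClassifier(n_estimators=300, random_state=0),
    X, df["race"], cv=5, method="predict_proba",
)

# One rough gauge: the share of variation in PD changes explained by the proxy.
r2 = LinearRegression().fit(race_probs, delta_pd).score(race_probs, delta_pd)
print(f"Variation in PD changes explained by a race proxy: {r2:.1%}")
```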
We argue that unequal effects are instead driven by the flexibility of the new technology. Machine learning models are able to pinpoint precise combinations of observable variables – e.g., income below $80,000 per year combined with a FICO credit score below 700 – that are particularly risky from the lender’s perspective.
Empirically, it turns out that disadvantaged minority borrowers are much more likely to exhibit these “problematic” combinations than other borrowers. And since machine learning algorithms are flexible enough to uncover these combinations, these minority borrowers lose out.
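The mechanism is easy to reproduce in a toy simulation. In the sketch below, default risk jumps only for the combination of low income and low FICO score; an additive logit smooths over this pocket, while a random forest isolates it. All numbers are made up for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 20_000
income = rng.normal(90_000, 25_000, n)
fico = rng.normal(720, 60, n)

# Ground truth: risk jumps only for the *combination* of low income and low FICO.
risky_combo = (income < 80_000) & (fico < 700)
defaulted = rng.binomial(1, np.where(risky_combo, 0.30, 0.05))

X = pd.DataFrame({"income": income, "fico": fico})
logit = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, defaulted)
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, defaulted)

# Average predicted PD inside the risky pocket (true PD there is 0.30).
print(f"Logit:         {logit.predict_proba(X[risky_combo])[:, 1].mean():.2f}")
print(f"Random forest: {forest.predict_proba(X[risky_combo])[:, 1].mean():.2f}")
```

In simulations like this, the forest tracks the pocket’s true risk far more closely than the additive logit, which spreads the risk smoothly across the whole region: exactly the kind of flexibility driving the unequal effects described above.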
The message for policy is therefore mixed: On one hand, since triangulation is not the driving force, equal opportunities law retains some of its value. On the other hand, since the flexibility of machine learning models can hurt disadvantaged groups, there is likely a case for new policies that address this issue.
Computer scientists have developed ways to implement “fairer” algorithms. However, it is still largely unknown how these could be imposed on the financial sector without prohibitively intrusive regulation. This question will likely remain at the frontier of research in the coming years.
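As a flavour of what such algorithms check for, the sketch below hand-rolls one of the simplest group-fairness diagnostics, the gap in approval rates across groups (sometimes called demographic parity); dedicated toolkits implement this and far more sophisticated criteria. The file, column names and cutoff are placeholders.

```python
import pandas as pd

# Placeholder file of scored applications; needs columns: group, predicted_pd.
df = pd.read_csv("scored_applications.csv")
df["approved"] = df["predicted_pd"] < 0.10   # illustrative approval cutoff

# Approval rate within each group, and the largest gap between groups.
rates = df.groupby("group")["approved"].mean()
print(rates)
print(f"Demographic parity gap: {rates.max() - rates.min():.3f}")
```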
Conclusion
Earlier this week, the Turing published a new landscaping report, Artificial intelligence in finance, by Bonnie Buchanan, and this now concludes our short guest blog series on AI in finance.
For more information about our work in this area, or to learn how to get involved, visit our finance and economics research programme page.