Abstract
Call for Evidence
Machine learning algorithms often work by identifying patterns in data and making recommendations accordingly. This can support good decision-making, reduce human error and help combat existing systemic biases. However, issues arise if algorithms instead begin to reinforce problematic biases, whether because of errors in design or because of biases in the underlying data sets. When such algorithms are used to support important decisions about people’s lives, for example determining their credit rating or their employment status, they have the potential to cause serious harm.
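To illustrate the mechanism, the following is a minimal, hypothetical sketch, using synthetic data and scikit-learn, of how a model trained on historically biased decisions can reproduce that bias for otherwise identical applicants. Nothing here is drawn from the review itself; the "group" feature is a purely illustrative stand-in for any protected attribute.

```python
# Minimal sketch (illustrative assumption, not from the review): a classifier
# trained on biased historical credit decisions reproduces that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic applicants: one legitimate creditworthiness signal plus a group label.
income = rng.normal(0, 1, n)      # genuine predictor of repayment
group = rng.integers(0, 2, n)     # 0/1 protected attribute (illustrative)

# Historical decisions were biased: group-1 applicants were penalised
# regardless of income, so the training labels encode that bias.
past_approval = (income - 0.8 * group + rng.normal(0, 0.5, n)) > 0

# Train on the biased labels, with the group attribute included as a feature.
X = np.column_stack([income, group])
model = LogisticRegression().fit(X, past_approval)

# Two applicants with identical incomes but different groups: the learned
# model carries the historical penalty forward into new decisions.
probe = np.array([[0.0, 0], [0.0, 1]])
p0, p1 = model.predict_proba(probe)[:, 1]
print(f"approval probability, group 0: {p0:.2f}")  # roughly 0.5
print(f"approval probability, group 1: {p1:.2f}")  # noticeably lower
```

Note that simply dropping the group column is not a complete fix: if other features correlate with group membership, the model can still learn the penalty indirectly, which is one reason the design errors and data biases mentioned above are hard to separate in practice.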
On 7 May 2019, the CDEI issued a call for evidence to investigate whether this is an issue in certain key sectors and, if so, to what extent, and to produce recommendations to Government, industry and civil society about how any potential harms can be minimised.
Summary of the Turing’s submission
The Alan Turing Institute welcomes the focus of the review on particular sectors. This reflects an understanding that the impact of bias on individuals affected by algorithmic decisions differs by context. It should nonetheless be remembered that bias in algorithmic decision-making may arise from a combination of biases in data sets, human decision-making in context, and machine learning algorithms, and that these biases can be shared, reinforced and amplified through cross-sector and government collaborations. Furthermore, technology companies operate across sectors. The debate must therefore also examine cross-sectoral approaches to the question of bias.
This response provides personal reflections from a number of researchers in The Alan Turing Institute’s community on some of the questions in the review, touching on several of its sectors of focus, including consumer-targeted credit risk and credit scoring.