Accenture challenge: Fairness in algorithmic decision-making

Introduction

As the development and implementation of artificial intelligence continues to gather pace, conversations on ethics and how to make the technology fair and beneficial for all are increasingly taking place across industry circles. As a major global company that aims to help businesses transform in the digital world, Accenture challenged Turing DSG researchers to develop a tool that can promote fairness in algorithmic decision-making across the financial services sector.

Interview

Dr Rumman Chowdhury, Accenture’s Lead on Responsible AI, explains: “Fairness is currently a hot topic at conferences, and our challenge is inspired by Dr Arvind Narayanan’s tutorial at the FATML conference, where he discussed 21 different types of fairness. Clients are increasingly aware of the importance of responsible deployment of AI, and our challenge for Turing researchers was to develop a tool that can be applied to real-world contexts and that is compliant with GDPR.”

Before work on the challenge began, Rumman - a data scientist with a background in political science and quantitative social science - convened a space where the team of 11 researchers, with backgrounds in statistics, ethics, applied statistics, and machine learning, had a chance to review the literature on fairness and discuss its conceptual meaning.

Explaining her rationale for preparing the researchers in this way, she says: “At the moment a global conversation on fairness is taking place, but developing a real-world tool that is explainable, transparent, and accountable appears to be an insurmountable challenge. The idea of fairness is perceived in different ways across disciplines such as statistics and social science and I wanted to give the group the opportunity to consider a range of academic definitions outside of their own disciplines to help inform their approach to the challenge.”

Using a publicly available dataset on credit risk, the researchers were tasked with developing a quantifiable, metric-based tool to map out and identify where bias and unfairness can creep in, and to fix it. Rumman says: “We envisage this tool being used by our clients in a broad range of industries. They would use the workflow assistant or software to assess the algorithms that they are currently using.”
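To make the idea of a quantifiable, metric-based fairness check concrete, the sketch below computes two of the fairness definitions covered in Narayanan’s tutorial - demographic parity and equal opportunity - on a toy credit-style dataset. It is an illustration only, not the team’s actual code: the column names (“group”, “approved”, “defaulted”) and the synthetic data are hypothetical.

```python
# Illustrative sketch of metric-based fairness checks on a credit-style dataset.
# Column names and data are hypothetical; this does not represent the DSG team's code.
import pandas as pd


def demographic_parity_gap(df: pd.DataFrame, group_col: str, decision_col: str) -> float:
    """Gap between the highest and lowest approval rates across groups."""
    rates = df.groupby(group_col)[decision_col].mean()
    return float(rates.max() - rates.min())


def equal_opportunity_gap(df: pd.DataFrame, group_col: str,
                          decision_col: str, outcome_col: str) -> float:
    """Gap in approval rates among applicants who did not default (true-positive rates)."""
    good_risk = df[df[outcome_col] == 0]  # applicants who repaid
    rates = good_risk.groupby(group_col)[decision_col].mean()
    return float(rates.max() - rates.min())


if __name__ == "__main__":
    # Tiny synthetic example standing in for a public credit-risk dataset.
    data = pd.DataFrame({
        "group":     ["A", "A", "A", "B", "B", "B", "B", "A"],
        "approved":  [1,   1,   0,   1,   0,   0,   1,   1],
        "defaulted": [0,   1,   0,   0,   0,   1,   0,   0],
    })
    print("Demographic parity gap:", demographic_parity_gap(data, "group", "approved"))
    print("Equal opportunity gap:", equal_opportunity_gap(data, "group", "approved", "defaulted"))
```

A workflow assistant of the kind Rumman describes would run checks like these against a client’s own model outputs and flag groups whose approval or error rates diverge beyond a chosen threshold.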

"An excellent forum to solve hybrid questions that are experimental and underdefined"

Dr Rumman Chowdhury, Lead on Responsible AI at Accenture

After five long days of dedicated hard work, Rumman is highly satisfied with the outcome and believes it was definitely time well spent. She says: “This experience has been absolutely worthwhile. The team were fabulous and have come up with code that Accenture will build into a prototype. We plan to demo this tool at Accenture’s pavilion on responsible AI at the CogX exhibition in June [2018].”

Reflecting on the team’s dynamics and the DSG experience overall, she adds: “The researchers were able to work collaboratively and come up with a coherent solution, but also demonstrate that there are multiple ways to address the problem. I think people were able to shine as individuals, learn something new, while also contributing at a macro level.

“It’s been an excellent forum to solve hybrid questions that are experimental and underdefined. I think it’s difficult to find the skill sets that they brought to this challenge, and I think that as data scientists this is a great space to mentally breathe, try out new things, and bring form and structure to what was a nebulous but fun challenge.”

Finally, thinking back to the key highlight, Rumman concludes: “The main highlight was the code of conduct, which was understandable, overwhelmingly clear and went a long way in generating a safe, collaborative, zero-tolerance environment. I think this really demonstrated the Turing’s willingness to address and rectify any issues. I think it was done right and given the due respect and time.”