Towards accountable AI in Europe?

Tuesday 18 Jul 2017

Introduction

AI and its promise

We are living in the age of Big Data, but data is useless if we do not have algorithms that help us to interpret it. Algorithms are increasingly used in both the public and the private sectors – for financial trading, recruiting decisions (hiring, firing, and promotions), and for setting insurance premiums. Algorithms help decide whether individuals are desirable candidates for insurance, eligible for a loan or a mortgage, or should be admitted to university. The criminal justice system uses algorithms for sentencing, to decide whether someone should be granted parole, and to calculate the probability that someone will commit a crime. Algorithms can – if well-designed and fed unbiased data – make more accurate, efficient, and fairer decisions than humans.

AI and its challenges

AI-based systems are often opaque ‘black boxes’ and are difficult to scrutinise. As more and more of our economic, social and civic interactions – from credit markets and health insurance applications to recruitment and criminal justice systems – are carried out by algorithms, concerns have been raised about the lack of transparency behind the technology, which leaves individuals with little understanding of how decisions are made about them. We need proper safeguards in place to make sure that the decisions being made about us are actually fair and accurate.

AI and the EU’s General Data Protection Regulation

In 2016 the EU General Data Protection Regulation (GDPR), Europe’s new data protection framework, was approved. The new regulation will come into force across Europe – and the UK – in 2018. It has been widely and repeatedly claimed that a ‘right to explanation’ of all decisions made by automated or artificially intelligent algorithmic systems will be legally mandated by the new regulation. This ‘right to explanation’ is viewed as an ideal mechanism to enhance the accountability and transparency of automated algorithmic decision-making.

Such a right would enable people to ask how a specific decision (e.g. being declined insurance or being denied a promotion) was reached.

An explanation can be offered in various ways. There are at least two possible algorithmic explanations: an explanation of “system functionality” and an explanation of the “rationale” of an individual decision. Explaining the algorithmic methods used to assess creditworthiness or to set interest rates (system functionality) is not the same as explaining “how” a certain rate was set or “why” a credit card application was declined.
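To make the distinction concrete, here is a minimal, purely hypothetical sketch in Python of a toy credit-scoring rule. The feature names, weights, and threshold are invented for illustration only; the first output corresponds to an explanation of “system functionality” (how the system works in general), the second to the “rationale” of one individual decision (why this particular applicant was declined).

```python
# Toy credit-scoring rule, invented purely for illustration.
WEIGHTS = {"income": 0.4, "years_employed": 0.35, "missed_payments": -0.25}
THRESHOLD = 50

def score(applicant):
    """Weighted sum of the applicant's features (a toy model, not a real one)."""
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def system_functionality():
    """An explanation of how the system works in general."""
    return (f"Applications are scored as a weighted sum of {list(WEIGHTS)} "
            f"and approved if the score exceeds {THRESHOLD}.")

def decision_rationale(applicant):
    """An explanation of why this particular decision was reached."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = sum(contributions.values())
    outcome = "approved" if total > THRESHOLD else "declined"
    return (f"Score {total:.1f} vs threshold {THRESHOLD}: {outcome}. "
            f"Per-feature contributions: {contributions}")

applicant = {"income": 45, "years_employed": 3, "missed_payments": 4}
print(system_functionality())         # general logic: "system functionality"
print(decision_rationale(applicant))  # the "rationale" of this specific decision
```

The point of the sketch is simply that the two explanations answer different questions: the first describes the method in the abstract, while the second accounts for a specific outcome.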

Together with Turing researchers Dr. Brent Mittelstadt and Prof. Luciano Floridi, I examined this claim. Unfortunately, contrary to what was hoped, our research revealed that the GDPR is likely to grant individuals only information about the existence of automated decision-making and about “system functionality”, but no explanation of the rationale of a decision. In fact, the “right to explanation” is mentioned only once in the whole GDPR, in Recital 71, which lacks the legal power to establish stand-alone rights. The purpose of a Recital is to provide guidance on how to interpret the operational part of a regulatory framework where there is ambiguity. But our research found no ambiguity regarding the minimum requirements that would require further clarification.

The placement of the “right to explanation” in a Recital, together with the fact that the European Parliament’s recommendation to make this right legally binding was not adopted, suggests that European legislators did not want to grant it the same legal status as the other safeguards in the legally binding text of Art 22 GDPR. Of course, that does not mean that data controllers could not voluntarily decide to offer explanations, or that future jurisprudence or law built on this Recital could not create such a right.


The regulation also creates ‘notification duties’ (Art 13-14) for data controllers (e.g. companies that hold personal data about us) at the time when data is collected. This means that the data controller needs to inform their customers about “the existence of automated decision-making, including profiling […] and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject”. This information needs to be provided when the data is collected, before automated decision-making starts. Hence data controllers only need to provide information about system functionality, since at that point no automated decision has yet been made that could be explained. However, users can request the same information at any time – meaning even after decisions have been made – using their “right of access” in Art 15. Nonetheless, Art 13-15 are identical in wording, and that wording strongly indicates information about the system itself rather than the rationale of an individual decision. Hence it is highly unlikely that an explanation of an individual decision will be legally mandated by the GDPR.

Indeed, a similar right has existed for 20 years in the EU Data Protection Directive, which allows individuals to use the “right of access” to obtain “knowledge of the logic involved in any automatic processing”. The majority of Member State laws, legal scholars and jurisprudence have seen this provision as a mechanism to learn how a system works in general, not to learn the rationale behind an individual decision. Further details about the algorithmic system – such as the code, the algorithms themselves, the weightings, the criteria, and the reference groups – have been widely seen as protected by trade secrets. Hence trade secrets have formed a significant barrier to transparency. However, future jurisprudence might interpret the GDPR differently.

Our research led me to conclude that the GDPR is likely to grant only a ‘right to be informed’ about the existence of automated decision-making (the fact that these methods are being used), about system functionality (the logic involved), and about the significance (scope) and envisaged consequences (intended purpose, e.g. setting credit rates, which can impact payment options), rather than offering reasons for how a decision was reached. In practice, this could mean that a data controller such as an insurance company has to inform you, as a customer, that algorithmic methods are used to assess your creditworthiness, and that this assessment could have an influence on whether you can pay with a credit card or cash, but not how this assessment was made. Additionally, the regulation states that legitimate interests of data controllers, such as “trade secrets or intellectual property and in particular the copyright protecting the software”, need to be protected, and it is not clear how to balance transparency with these business interests.

AI and the future

Governing bodies will determine the scope and the effectiveness of the GDPR’s safeguards. New legislation will decide how much “meaningful information” needs to be provided and how to balance business interests with transparency. New regulations will also decide whether meaningful information has to go beyond “system functionality” and hence establish, for example, that the rationale of a decision has to be known in order to contest decisions or to express views. Furthermore, the legal standing of Recital 71 could be strengthened if judges decide to grant its provisions more weight and rule that a “right to explanation” should be legally binding.

This leaves Europe with a massive accountability gap, and great uncertainty: uncertainty for individuals, who do not yet have clearly defined rights, and uncertainty for data controllers, who have no clear understanding of their duties. This is especially worrisome since non-compliance with the GDPR can be punished with fines of up to 4% of annual global turnover.

Hence it is important to start a public debate now, before the regulation comes into force. Making the “right to explanation” legally binding should be the first step in this direction. Having a trusted and independent third party to enforce the right could help to strike a balance between competing interests. On the one hand, people have a vested interest in understanding how decisions are made about them. On the other hand, data controllers have a legitimate interest in not disclosing their trade secrets, which could happen if they were required to give detailed explanations of their algorithmic decision-making processes and methods. A third party could help to find the middle ground.

Transparency will increase public acceptance of and trust in these technologies, and will consequently support economic growth, innovation, and research. We need to start this discussion today to shape the algorithmic society of tomorrow.

About the Author

Dr. Sandra Wachter is a Turing Research Fellow at The Alan Turing Institute and a Researcher in Data Ethics and Algorithms at the Oxford Internet Institute. Her research focuses on the legal and ethical implications of Big Data, AI, and robotics as well as governmental surveillance, predictive policing, and human rights online.

Dr. Sandra Wachter

The Alan Turing Institute

Oxford Internet Institute, the University of Oxford

Twitter handle @SandraWachter5