News

Alan Turing Institute researchers warn that upcoming laws requiring companies to reveal how their algorithms work could fail to deliver

As more and more of our economic, social and civic interactions, from credit markets and health insurance applications to recruitment and criminal justice systems, are carried out by computer algorithms, concerns have been raised about the lack of transparency in these systems, which leaves individuals with little understanding of how decisions are made about them.

The EU’s General Data Protection Regulation (GDPR), approved in 2016 and due to apply in the UK and across Europe from 2018, is widely believed to include rules requiring data controllers to provide individuals with an explanation of how an algorithm reached a specific decision about them.

However, a new paper by researchers from The Alan Turing Institute argues that this is not the case. Their analysis shows that the ‘right to explanation’ many had hoped for is not, in fact, legally required by the GDPR. The researchers also raise questions about what such explanations would contain, and urge lawmakers to consider the technical feasibility of the solutions they propose. The paper calls for unified action now, before the framework comes into force, to shape Europe’s algorithmic future.

Key findings of the paper include:

  • The ‘right to explanation’ is in reality more a ‘right to be informed’: data controllers need only inform individuals about the use and basic design of algorithmic decision-making methods, not about the details of how a specific decision about them was reached.
  • The provisions relating to automated decision-making are missing vital safeguards: for example, an individual can contest an algorithm’s decision only when the decision-making process is fully automated (so the safeguards do not apply if humans are put in the loop at any point, even if they never intend to intervene), and only in certain circumstances, namely when the decision has ‘legal or other significant effects’ on the individual.
  • The information required to be given to individuals concerning automated decision-making will likely be heavily limited by trade secrets and other interests of data controllers. There is therefore a risk that individuals will receive very little meaningful information about automated decisions.

The paper goes on to make a number of recommendations to clarify the wording of the Regulation, in order to significantly strengthen its accountability and transparency requirements.

In addition, the paper recommends establishing a trusted third party or regulator made up of experts in computer science, ethics and law, with responsibility for auditing automated decision-making in a way that both manages the risk to data controllers of exposing trade secrets, and ensures the rights of individuals are protected.

Sandra Wachter, researcher at The Alan Turing Institute and lead author of the paper, commented:

“Our article shows there is a lot of ambiguity and vagueness in the General Data Protection Regulation, which could result in fragmented standards across Europe.

An open dialogue with industry, government, academia and civil society is needed on ways forward, and we aim to provide a clear path forward to achieve meaningful accountability and transparency in rapidly emerging algorithmic and artificially intelligent applications.

It is important that we discuss these issues now, as algorithmic decisions affect our lives in increasingly significant ways.”

Luciano Floridi, co-author on the paper and Chair of The Alan Turing Institute’s Data Ethics Group, commented:

“This research shows that ethical understanding is essential in order to formulate, interpret, and then improve the law. The critical work of the Data Ethics Group at The Alan Turing Institute is already contributing to developing the right approach to the values and norms that should regulate our digital realities.”

Notes to Editors

For more information:

Sophie McIvor

Head of Communications, The Alan Turing Institute

smcivor@turing.ac.uk / 0203 862 3334

Read the whole article here.

Authors of the paper:

Sandra Wachter
The Alan Turing Institute

Brent Mittelstadt
The Alan Turing Institute and the Oxford Internet Institute, University of Oxford

Luciano Floridi
The Alan Turing Institute and the Oxford Internet Institute, University of Oxford