How can we design fair, transparent, and accountable AI and robotics?

Friday 02 Jun 2017


AI and robotics are proliferating in both the private and public sectors. Given the wide-ranging applications of these algorithmic technologies, we must ensure that autonomous systems are fair, transparent and accountable.

In a new paper published in Science Robotics, Turing and Oxford researchers Sandra Wachter, Brent Mittelstadt and Luciano Floridi argue that we need new policies and a holistic mindset to design, explain, and audit AI and robotics. The use of opaque, inscrutable algorithms in technologies such as self-driving cars and security and companion robots poses challenges for regulators, and risks for users.

Designing regulation that is ethically satisfactory and technically feasible is complicated by the range and complexity of AI systems, whose creators are often unable to provide simple explanations of how they work. The EU General Data Protection Regulation, for example, offers safeguards against solely automated decision-making where the outcome has legal or other significant effects.

However, robotic systems sometimes make decisions that are not solely automated and yet can still significantly affect us: a semi-automated car's decision to brake or to accelerate, for example, could in the worst case lead to a crash. Hence, we need to design these systems so that they can be scrutinised, both to prevent harm and to improve them.

Driverless cars are an example of robotic systems that need to be scrutinised to ensure control over their functionality and to prevent potential harm.

The authors recommend further research in three areas to identify the best ways to hold AI, robots and decision-making algorithms accountable:

  • It is taken for granted that making a system more interpretable will also make it less efficient. This assumption needs to be questioned if explainable AI, robotics, and decision-making systems are to see wide-scale deployment.
  • Explanations may not always be technically feasible or practical. Alternative accountability mechanisms need to be explored for these cases, including certification schemes and auditing functions.
  • Similar systems should be similarly regulated. Methods to identify parallels between algorithmic systems need to be created to ensure appropriate system- and context-specific accountability requirements can be set.

Sandra Wachter, Fellow at the Alan Turing Institute and Postdoctoral Researcher in Data Ethics at the Oxford Internet Institute, University of Oxford, commented:

“The most important thing is to recognise the similarities between algorithms, AI and robotics. Transparency, privacy, fairness and accountability are essential for all algorithmic technologies. We need to address these challenges together to design safe systems.”

Brent Mittelstadt, Postdoctoral Researcher in Data Ethics at the Alan Turing Institute, commented:

“The right to explanation is something we still need to fight for, and even if it is granted in the future, we still need to make sure that it will apply to a much broader range of algorithmic systems than is currently envisioned.”

Luciano Floridi, Faculty Fellow at the Alan Turing Institute, Chair of the Data Ethics Group, Professor of Philosophy and Ethics of Information, and Director of the Digital Ethics Lab at the Oxford Internet Institute, University of Oxford, commented:

“AI can be an immense force for good, but we need to ensure that its risks are prevented or minimised. To do this, it is not enough to react to problems. A permanent crisis approach will not be successful. We need to develop some robust ethical foresight analysis, not only to see ‘which grain will grow and which will not’ but above all to decide which grains we should sow in the first place.”

Read the full paper.

Read the interview with Sandra Wachter in Science.

 

-ENDS-

For more information please contact:

Sophie McIvor, Head of Communications

The Alan Turing Institute

[email protected] / 0203 862 3334
