
Can ethics and law help build trust in artificial intelligence?


Algorithms are being used to make decisions across many areas of public life, including healthcare, the criminal justice system, social services, politics and financial services. Against this backdrop, legal and ethical frameworks which ensure that artificial intelligence systems are explainable, safe and fair need to be developed urgently. This was the consensus reached by a panel of experts at an event hosted at The Alan Turing Institute last month.

The event on ‘AI, ethics and the law’, organised by Turing Fellows and Oxford Internet Institute researchers, Corinne Cath and Dr Sandra Wachter, brought together representatives from across industry, academia, civil society and the European Commission to debate the challenges and opportunities presented by AI.

Setting the context in her introductory remarks, Corinne Cath, who chaired the session, said:

“AI is everywhere, shaping crucial aspects of our society. As these systems are being developed, it is also important to take into account the potential impacts they have on various social groups – in particular disadvantaged and minority communities. We need to understand the societal impacts across the board, but at the moment there is a lack of shared understanding, and this seems to be generating apprehension about the use of these technologies.”

The discussion raised a number of issues that allowed panellists to outline their main concerns as well as to highlight the potential benefits AI could bring to society.

However, the question of whether AI should be governed by ethics or by law generated a passionate response, shaped by each panellist’s understanding of what AI is and how it differs from other technologies. Panellists were also asked to reflect on the responsibility of their respective sectors in developing ethical and legal frameworks to regulate AI.

There was broad agreement that the technical sophistication of AI systems is what sets them apart from other technologies, and that this poses huge challenges for accountability within existing legal frameworks. The idea of the black box was raised as a further distinguishing feature, alongside the perception that the technology is being developed behind closed doors.

Dr Sandra Wachter said: “The difference is that AI systems act autonomously. They act on their own. The challenge for the legal system is how to make such systems accountable. Also, the unpredictability of AI is a distinct feature. AI systems can be risky. I also agree that inscrutability and opacity are a problem. We don’t understand the systems or how they reach decisions.”

Principal Advisor in the European Commission’s Directorate-General for Justice and Consumers, Paul Nemitz, flagged other legal issues relating to data protection, liability and responsibility. He also highlighted the challenges such technologies can present for democracy and the integrity of the electoral process in contexts where politicians use algorithms to manipulate voter choice. Similarly, he highlighted the legal and ethical challenges posed by the concentration of power, “since just five or six companies increasingly control the automated public sphere.”

The question of how AI should be regulated generated a mixed response and demonstrated a range of priorities and concerns panellists had encountered in their respective sectors.

Article 19’s Team Digital Policy Advisor, Vidushi Marda, articulated concerns relating to the effects of AI on free expression and human rights. She emphasised the mediating role and responsibility of civil society organisations to shed light on what AI means and act as a conduit between technical, policy and government actors to facilitate an exchange of views. She said: “Our job is to ensure that AI systems are aligned with existing human rights frameworks and ensure that policymakers on all sides speak to each other.”

Director of NAVER LABS Europe (previously Xerox Research Centre Europe), Dr Monica Beltrametti, added: “Companies should have a set of values that they are governed by and there needs to be accountability. Law is important, and I do think there should be state regulation. I also think that there should be industry-led self-regulatory bodies in place, since they are the experts and have a better understanding of the priorities and what is most effective.”

Professor of Robot Ethics at the University of the West of England, Alan Winfield, expressed a similar position, stressing the innate ethical responsibility that developers themselves carry, in addition to the introduction of soft ethical frameworks that inform responsible research and innovation. However, on the issue of safety and any risk of serious harm, he was unequivocal in his call for state regulation. He cited examples relating to driverless cars, medical diagnosis and some aspects of financial services.

Finally, departing from this consensus and in the context of democracy and constitutional rights, Paul Nemitz said: “When it comes to fundamental individual rights or human rights, it should be governed by law.

“If AI social media bots and chat bots are now influencing people unwittingly during elections, then the law needs to be enforced.”

For a complete overview of the panel event, read the full report.

About the author: Chizom Ekeh is Communications Officer at The Alan Turing Institute.