Earlier this year, the European Union’s AI Act became the world’s first legally binding, comprehensive legislation on AI. Its aims include promoting “the uptake of human-centric and trustworthy AI” while protecting fundamental rights “against the harmful effects of AI systems”, and it will create legal obligations for developers of any “high-risk” AI systems used in the EU, including those developed in the UK.
As a lawyer, I’ve seen just how crucial safeguarding fundamental rights (also called human rights) is to maintaining a healthy democratic society. So how can we make sure that these rights are protected as technological innovation advances at breakneck pace? Will the mechanisms upon which the AI Act (and future legislation) relies be up to the task of protecting people’s rights?
What are fundamental rights?
To tackle these questions, we need to grasp what fundamental rights are and why they matter. Fundamental rights are inherent to all human beings, whatever our nationality, place of residence, sex, ethnicity, colour, sexuality, religion, language, or any other status. Every one of us is entitled to respect for our fundamental rights without discrimination, simply because we are human.
This distinguishes fundamental rights from ordinary legal rights, endowing them with special ‘constitutional’ status that prevents law-making bodies, or other governmental authorities, from ignoring, overriding or attempting to extinguish them. Some fundamental rights, such as the right to freedom from torture, freedom from slavery and freedom of thought, are absolute and violation is never acceptable. Others, such as the right to privacy or freedom of expression, allow for qualification, but only in very limited circumstances, and only if the proposed rights interference is deemed to be necessary and proportionate in a democratic society.
The importance of technical standards
It is now widely recognised that AI systems may undermine fundamental rights in a number of ways, whether through AI-generated misinformation and deepfakes or when AI is used to inform decisions about an individual's access to housing or finance, their recruitment, or their release from state custody. However, identifying concrete mechanisms capable of effectively guarding against these kinds of rights violations raises tough and novel challenges.
Computational processes lie at the core of AI, so these technologies are reliant on data, software and hardware. This suggests that protecting fundamental rights from the adverse impacts of AI systems will require technical protection mechanisms of various kinds. Although fundamental rights mark out a protective political and legal sphere around each individual, those who develop and implement AI systems may not know how to do so in a rights-protective manner without more specific guidance.
This is where technical standards have an important role to play. Although we may be tempted to dismiss them as mundane and boring, it is through technical standards that 'by design' protection mechanisms shape our everyday encounters with technology and, in turn, the impact of those encounters on our fundamental rights and freedoms.
Yet these technical standards are typically drafted by engineering experts who do not have legal expertise, and whose social, political and cultural backgrounds may not be representative of the population at large. The technical safety standards for motor vehicles are a classic example: because they have historically been based around adult male bodies, the resulting safety mechanisms protect men best, and women are more likely to be seriously injured in car accidents. Scientific and technological innovations often fail to protect women's rights to equal treatment, as vividly highlighted in Caroline Criado Perez's 2019 book Invisible Women.
Will AI standards be up to standard?
The regulatory architecture of the EU's AI Act delegates the role of technical standard-setting to two European standards organisations: the European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC). This delegation has come under fire because these standard-setting bodies are private rather than directly accountable to the European public, and are thus not required to consult with those who will be affected by AI technologies in their daily lives. Moreover, the standards that these bodies produce are not freely available: they are copyright-protected and available only on payment of a fee.
The European Lighthouse on Secure and Safe AI (ELSA), a research project that I am involved in with the Turing, is investigating the legitimacy and effectiveness of standard-setting and assurance regimes, which often rely on technical standards produced by private bodies such as CEN and CENELEC. Meanwhile, I also work with Equinet (the European Network of Equality Bodies) to advocate for the protection of fundamental rights in these standards.
All this work is especially important for the AI Act because the standards now being drafted by CEN and CENELEC are intended to have concrete legal consequences. (Although the standards themselves will be voluntary, those who choose to comply with them will enjoy a “presumption of conformity” with the Act’s essential requirements, once the standards become ‘harmonised’ via publication of their titles in the EU’s Official Journal.) Furthermore, the standards will apply in all countries that are members of CEN and CENELEC, including the UK, which is represented by the British Standards Institution.
We need legal minds as well as technical ones
The legal significance of these AI standards means that we need legal expertise alongside the technical expertise of those who typically set standards. The AI Act is especially ambitious because it has the novel aim of protecting an entire suite of fundamental rights, rather than just, for example, the right to privacy or the right to equal treatment.
Although engineers may be accustomed to drafting standards aimed at facilitating interoperability or protecting health and safety, the violation of a fundamental or legal right is especially nuanced because it does not necessarily cause tangible, quantifiable harm.
For example, if someone creates a deepfake porn video of me and I remain blissfully unaware of the video, I may experience no mental distress and suffer no financial loss. But I have clearly suffered a wrong, and my right to a private life has been seriously violated.
The nature of rights violations as wrongs creates challenges when hardwiring fundamental rights into technical standards: rights mark out intimate, personal spheres of activity that are entitled to special protection, yet violations of those spheres may not be accompanied by any tangible harm. This makes the prevention of rights violations all the more important, as violations cannot be adequately compensated for after the event by a payment of money. The shame, stigma, pain or fear that may result from a rights violation is impossible to quantify in financial terms.
We need ‘human rights by design’ technical standards for AI. This is no easy task, but the EU’s AI Act demands that we learn to swim even as we dip our toes into uncharted waters. One thing is certain: unless legal and fundamental rights knowledge and expertise are brought into the European standards organisations now engaging in the task of setting technical standards for AI, we may be sunk before we push off from the shore.
Top image: Emily Rand / London Office of Technology and Innovation / Better Images of AI