Professor and author Kate Crawford recently joined the Turing’s Judy Wajcman to discuss her latest book, Atlas of AI, at a virtual Turing ‘fireside chat’. The event was organised by the Women in Data Science and AI project, which upholds the Turing’s commitment to redressing gender inequality in these fields and serves as a hub of resources and community news for female data scientists.
We are used to hearing optimistic and idealistic definitions of AI: as a panacea to halt climate change, or as a means of freeing up more leisure time for workers.
The main problem with this popular conception of AI, argues Crawford, is that it takes AI to be only “abstract computation or immaterial algorithms” as opposed to a “full, tangible system” with a “profound material impact on the world.”
Put simply, AI is not just ideas and innovation; it is also the cost of that innovation: environmental, economic, and social. And in turn, AI not only shapes the world but is shaped by it. In this way it reflects and reinforces dominant political interests.
Applying Crawford’s lens to the examples above: why would automation free up leisure time for workers in an economic system where growth is monopolised by the world’s wealthiest 1%? And what does it mean that this 1% is funding research into technological ‘hacks’ to fight climate change, even as it emits double the carbon of the world’s poorest 3.1 billion people combined?
Crawford proposes adopting a more holistic view of AI, one that reveals tensions and contradictions which remain invisible when we focus only on AI at the level of computation.
Aerial view of a lithium mining operation in Nevada, U.S. Lithium is essential for the batteries in the computers that run AI systems, part of what Crawford calls AI’s “profound material impact on the world” (via Wikimedia Commons)
“Neither artificial nor intelligent”
Central to Crawford’s Atlas of AI is the argument that artificial intelligence is “neither artificial nor intelligent.” Not artificial because it consumes the earth’s natural resources, and not intelligent because its parameters are set by human programmers. AI is therefore subject to all the same biases and blind spots that afflict human researchers, as well as all the politics that shape our societies.
Crawford cites the former computer scientist turned right-wing political financier Robert Mercer, who claimed that ‘more data is always better data’, as emblematic of the problem. She argues that by harvesting data uncritically and using it to train AI models, we not only reproduce the mistakes of the past, we entrench them.
One area where this can have extremely damaging consequences is policing. Police forces across Britain and the US have adopted AI models that predict where crimes will occur next, or who is likely to commit them, based on data from past police records. The problem is that models trained on data generated by past racist policing practices will make racist predictions about who is likely to offend and where, leading to more police targeting of minority neighbourhoods and their inhabitants. This in turn yields more biased data to feed the model, in an intensifying spiral where racist policing methods generate their own justification, a dynamic illustrated in the toy simulation below.
Police officers in Los Angeles, U.S., where several highly controversial ‘predictive’ policing AI models have been implemented (via Unsplash)
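This feedback loop can be made concrete with a small simulation. The sketch below is purely illustrative and assumes a simplified world (the area names, rates, and allocation rule are invented for the example, not drawn from any real policing system): two areas have identical true offence rates, but crimes are only recorded where patrols are sent, and patrols are reallocated each year in proportion to the accumulated records.

```python
import random

# Toy model of the "intensifying spiral" (illustrative only; all names
# and numbers are assumptions, not any real predictive-policing system).
TRUE_RATE = 0.1                  # same underlying offence rate in both areas
patrols = {"A": 60, "B": 40}     # historical skew: area A starts over-policed
recorded = {"A": 0, "B": 0}      # cumulative recorded crime ("the data")

random.seed(0)
for year in range(10):
    # A crime is recorded only when a patrol is present to observe it.
    for area, n in patrols.items():
        recorded[area] += sum(random.random() < TRUE_RATE for _ in range(n))
    # "Data-driven" reallocation: next year's patrols follow past records,
    # so the initial skew feeds on itself.
    total = sum(recorded.values())
    patrols = {area: round(100 * count / total) for area, count in recorded.items()}
    print(f"year {year}: patrols={patrols}, recorded={recorded}")
```

Even though both areas are equally crime-prone by construction, the records, and therefore the patrols, remain skewed towards area A in this toy run: the over-policed area keeps ‘proving’ that it needs more policing. That is the spiral Crawford describes, in miniature.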
Supercharging prejudice
Even worse is AI software which not only feeds on junk data, but is premised on junk science: from AI models which categorise human faces into one of five “races,” to “AI polygraphs” which claim to identify the emotional states of workers and students.
Private companies made $3.5 billion in 2019 marketing AI “solutions” to police departments, a figure expected to rise to $10 billion by 2025. Critics claim these solutions are at best useless, and at worst serve to aggravate bigotry and exploitation. But by branding their products in the language of science and objectivity, the companies can claim simply to be ‘following the data’, and their customers can find ‘evidence’ to double down on whatever policy they are pursuing, whether spying on employees or over-policing minority neighbourhoods.
Such technologies are not just flawed, argues Crawford; they are “fundamentally broken at the very tap root.” They represent “a will to phrenology”: the idea that humanity can be neatly categorised according to visible physical characteristics, and our behaviour predicted on that basis. Phrenology was discredited as a science in the 19th century, but its logic is now returning to our streets and workplaces via these tools. As the American Civil Liberties Union puts it, we are “supercharging” old prejudices “with 21st century surveillance technology.”
Crawford urges us to look behind the façade of some AI tools, only to find that they are not AI tools at all. For example, she notes that ‘AI’ digital assistants are often just badly paid workers answering questions in real time; she quotes the activist Astra Taylor, who calls this phenomenon “fauxtomation”. The sensation of smooth efficiency for the consumer is powered not by technology or innovation, as the branding would have you believe, but by workers doing what Crawford calls “digital piece work” for “sub-poverty level wages” in “extremely tedious and extremely stressful” conditions.
Conclusion
Atlas of AI is a timely reminder that AI is not simply a product of smart ideas and innovations. It is just as much a product of the lithium mined from the earth to make computer parts, the carbon emitted moving goods across global supply chains, and the labour value extracted from workers along the way.
It is the product of a deeply unequal and exploitative global economic system, and so will reproduce those inequalities and pathologies—from racist policing to workplace exploitation—unless we intervene to stop it.