Introduction
While the term AI appears frequently in public discourse, it can be difficult to define and is often poorly understood, particularly as it encompasses a wide range of technologies used in different contexts and for distinct purposes. There is no single definition of AI, and the public encounter the term applied across a wide variety of settings. Understanding people's perspectives on data and AI is therefore a key component in ensuring these technologies are developed to align with societal values and are able to address societal needs.
In late 2022, The Alan Turing Institute, in partnership with the Ada Lovelace Institute, conducted a nationally representative survey of over 4,000 members of the British public to understand their awareness of, experience of and attitudes towards different uses of artificial intelligence, including their views and expectations on how these technologies should be regulated and governed. The results of this survey were published in 2023.
The Turing and Ada have since conducted a further survey of 3,513 UK residents. This second survey focused on minoritised voices in order to achieve a deeper understanding of the groups most negatively impacted by AI developments. Since the first survey, ChatGPT and other large language models (LLMs) have become readily available, so these technologies were a new focus of this work, alongside the public's experiences of AI-related harm and their expectations around governance, regulation and the role of AI in decision-making.
Project aims
This research aims to make an important contribution to what we know about public attitudes to AI, providing a detailed picture of the ways in which the British public perceive the issues surrounding its many diverse applications. We hope it will help researchers, developers and policymakers understand and respond to public expectations about the benefits and risks these technologies may pose, as well as public demand for how they should be governed.
Ultimately, we hope that this research can help to maximise the potential benefits of AI.
Research outputs
A report was produced based on initial findings of the first survey. The entirety of the anonymised survey data is also openly available on GitHub, giving researchers from any institution the opportunity to explore the results further.
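For researchers who want to work with the open data, a minimal sketch using Python and pandas is shown below. The file name and column name are hypothetical placeholders for illustration only; the actual names are documented in the GitHub repository.

```python
# Minimal sketch of exploring the anonymised survey data with pandas.
# NOTE: "survey_data.csv" and "ai_benefit_cancer_risk" are hypothetical
# placeholders; check the GitHub repository for the real file names and
# the variable codebook before running.
import pandas as pd

# Load the downloaded data file into a DataFrame
df = pd.read_csv("survey_data.csv")

# Inspect the structure of the dataset
print(df.shape)
print(df.columns.tolist())

# Example: tabulate the share of responses to one attitude question
print(df["ai_benefit_cancer_risk"].value_counts(normalize=True))
```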
The second report is available here.
Highlights from the first survey results analysis
The survey found that the public see clear benefits for many uses of AI, particularly technologies relating to health, science and security.
For example, when offered 17 examples of AI technologies to consider, respondents thought the benefits outweighed the concerns for 10 of them:
- 88% of the public said that AI is beneficial for assessing the risk of cancer
- 76% saw benefit in the use of virtual reality in education
- 74% thought climate research simulations could be advanced using the technology
The survey also showed that people often see speed, efficiency and improved accessibility as the main advantages of AI. For example, 82% think that earlier detection is a benefit of using AI with cancer scans, and 70% feel that speeding up border control is a benefit of facial recognition technology.
However, attitudes do vary across different technologies. Almost two thirds (64%) are concerned that workplaces will rely too heavily on AI for recruitment, rather than using professional judgement, and 61% are concerned that AI will be less able than employers and recruiters to take account of individual circumstances.
Public concerns extend beyond use of AI in the workplace. People are most concerned about advanced robotics:
- 72% express concern about driverless cars
- 71% are concerned about autonomous weapons
- 78% worry that the use of robotic care assistants in hospitals and nursing homes would mean patients missing out on human interaction
- 45% are worried about the reliability of robotic vacuum cleaners
However, even for the technologies that people were most concerned about, they could still see potential benefits.
And when asked what would make them more comfortable with the use of AI, almost two thirds (62%) chose ‘laws and regulations that prohibit certain uses of technologies and guide the use of all AI technologies’ and 59% chose ‘clear procedures for appealing to a human against an AI decision’.
The full report can be read here, and the full data is available for download on GitHub.
A recording of the panel launching the report, featuring researchers and expert speakers, is available to watch.
This report from the survey was co-authored by The Alan Turing Institute (Professor Helen Margetts, Dr Florence Enock, Miranda Cross) and the Ada Lovelace Institute (Aidan Peppin, Roshni Modhvadia, Anna Colom, Andrew Strait, Octavia Reeve) with substantial input from LSE’s Methodology Department (Professor Patrick Sturgis, Katya Kostadintcheva, Oriol Bosch-Jover).
This project was made possible by a grant from The Alan Turing Institute and the Arts and Humanities Research Council (AHRC).
Highlights from the second survey results analysis
Public awareness, benefits and concerns
The survey found that public awareness varies widely depending on what AI is being used for. For example, while 93% of respondents had heard of driverless cars, only 18% were aware of the use of AI for welfare benefits assessments.
The public do see benefits to specific uses of AI, and perceptions of overall benefits have not changed dramatically since the 2022/23 survey. The most commonly reported benefits are speed and efficiency improvements.
Levels of concern have increased across all six uses of AI since the previous survey, with common concerns including overreliance on technology, mistakes being made, and a lack of transparency in decision-making.
Exposure to harm and support for regulation
The survey also shows that exposure to AI-related harms is widespread. Two thirds of the public reported having encountered some form of AI-related harm at least a few times, with false information, financial fraud and deepfakes being the most common.
There is strong public demand for laws, regulation and action on AI policy: 72% indicate that laws and regulations would increase their comfort with AI, up from 62% in 2022/23. 88% of people believe it is important that the government or regulators have the power to stop the use of an AI product deemed to pose a risk of serious harm to the public, and over 75% said that government or independent regulators, rather than private companies alone, should oversee AI safety.
Furthermore, when asked to what extent they feel their views and values are represented in current decisions about AI and how it affects their lives, half of the public (50%) said that they do not feel represented.
Demographic differences
This survey recognised that much of the existing evidence on public attitudes to AI does not adequately represent marginalised groups. In order to ensure that these views were fully represented, three minority demographic groups were deliberately oversampled: people from low-income backgrounds; digitally excluded people; and people from minoritised ethnic groups, such as Black, Black British, Asian and Asian British people.
Attitudes were found to vary between demographic groups, with underrepresented populations reporting more concern and perceiving AI as less beneficial. For example, 57% of Black people and 52% of Asian people expressed concern about facial recognition in policing, compared to 39% of the general population.
People from low-income backgrounds rated all uses of AI as less beneficial than people on higher incomes did.
You can explore the full findings here.
Authorship
This report was co-authored by Roshni Modhvadia and Tvesha Sippy, with substantive input from Octavia Field Reid and Helen Margetts.
About the research
The research reported here was undertaken as part of Public Voices in AI, a satellite project funded by Responsible AI UK and EPSRC (Grant number: EP/Y009800/1). Public Voices in AI was a collaboration between: the ESRC Digital Good Network at the University of Sheffield (Grant number: ES/X502352/1), Elgon Social Research Limited, Ada Lovelace Institute, The Alan Turing Institute and University College London.