Artificial intelligence (AI) has dominated the headlines over the past six months, since the launch of ChatGPT and other Large Language Models.
Researchers from The Alan Turing Institute have taken part in dozens of media and speaking engagements to support understanding around these rapidly emerging technologies. This post shares just a few highlights from our contributions to reporting during this time.
Following the release of ChatGPT at the end of 2022, The Alan Turing Institute was featured in the Telegraph and the Guardian, on BBC Radio 5 Live, and on Tonight with Andrew Marr on LBC to set the scene for how this technology could work.
Professor Mike Wooldridge, Director of Foundational AI Research at The Alan Turing Institute, told the Telegraph at the time that the technology is seeing “an unimaginable amount of data”. But he warned that, “because [the training data] is behind closed doors, we don’t know exactly what it’s picked up”.
And Dr Mhairi Aitken, Ethics Research Fellow at the Turing, told Andrew Marr that there are concerns that the technology could create “convincing and well-structured essays that seem like they’ve been created by a human”. The potential uses of ChatGPT in education have been a major talking point, with Mhairi speaking about this on BBC Radio 5 Live and BBC Two’s Politics Live.
Mhairi also spoke to Glamour about whether AI could advance gender equality, and to Cosmopolitan about using ChatGPT for daily tasks such as responding to emails or booking a dentist appointment.
Stories moved on to discuss competing technology from rival companies
Mhairi was interviewed by the Daily Mail, New Scientist, and BBC Radio 4’s Today Programme and Woman’s Hour about the competition from Google’s rival chatbot, Bard, which launched in February 2023. And Professor David Leslie, Director of Ethics and Responsible Innovation at the Turing, gave an interview to Bloomberg on the topic.
Shortly after, we held a national press briefing in partnership with the Science Media Centre about ChatGPT and Large Language Models with the aim of providing journalists with a good understanding of the technology to support accurate reporting. Mike, Mhairi and Turing AI Fellow Professor Maria Liakata shared their expert views on the panel. This led to a piece in the Times highlighting the potential difficulties that historians may have years from now telling the difference between texts that were written by humans and by artificial intelligence unless a “digital watermark” is added to all computer-generated material.
This press briefing was preceded by a one-day symposium, hosted by the Turing and Mike Wooldridge, exploring the state of the art in foundation models: how they work, what they are and will be capable of, how they are being and will be used, and how to address the many challenges – both technical and ethical – that they raise.
Sovereign AI capability in the UK
Dame Wendy Hall, a web pioneer who also sits on the Government’s Artificial Intelligence Council, told MPs on the Science and Technology Committee: “We absolutely need to develop a sovereign large language model capability. The UK government needs to get behind this,” the Telegraph reported.
Mike told the Times: “We have absolutely world class researchers in UK academia. What they don’t have is the facility to build the systems. [A sovereign AI] will generate immensely good research, which is then going to feed back into the UK economy by way of spin off companies.”
Launch of the Future of Compute
The Times reported that Britain has fallen behind Russia, Italy and Finland in the world league table for computing power, according to the Independent Review of the Future of Compute which the Turing contributed to. Mike told the Times: “All of science is computer science today. The problem this report is tackling is that our national capability in this space has not grown at the same speed, which to be fair is not surprising given the speed at which science and technology have progressed.” He also told the Telegraph that growth in demand for large-scale computer facilities “has been incredibly rapid, and the UK got left behind.”
Launch of Government AI regulation white paper
Following the publication of the Government’s AI regulation white paper in April, Institute Director Sir Adrian Smith said: “This white paper is good news for how AI is used and regulated in the UK.” The Turing has responded to many other stories about ChatGPT and Large Language Models, including an appearance from Adrian Weller on BBC Radio 4’s PM programme and an interview on Bloomberg.
Other recent and notable broadcast appearances include the Turing’s Drew Hemment on BBC Radio 4’s Moral Maze, Mike on the 30-minute Radio 4 programme The Briefing Room, and Mhairi on LBC speaking about the impact of AI on jobs and on BBC Radio 4’s Today Programme in April discussing the importance of regulating against existing AI risks.
The Government’s announcement of £100m for a Foundation Model Taskforce is vital in helping the UK to build safe and ethical AI technology. Large Language Models and foundation models can be immensely powerful and have the potential for great benefit, but as with all AI technologies it is crucial to understand their limitations and the very real risks associated with them.
The Turing has also begun a programme of work around benchmarking foundation models – developing the core science to map out their capabilities in depth. It plans to significantly extend this programme of work with a range of public and private partner organisations in the year ahead.