It is becoming widely acknowledged that the World Wide Web can be a dangerous place. Indeed, on the 30th anniversary of the submission of his original proposal for the Web, its inventor Sir Tim Berners-Lee said the Web is on a “downward plunge to a dysfunctional future” and that global action is required to tackle this. Recent developments, such as the rise of online misinformation and the so-called fake news phenomenon, have prompted wide-ranging responses from a variety of stakeholders, including scholars, media observers and governments.
To examine such issues and discuss how they might be addressed, we held a workshop at The Alan Turing Institute, focussing on conspiracy theories, fake news and political trolling. This blog post provides a high-level overview of the workshop and its highlights. We brought together representatives of key stakeholders, including media professionals, AI researchers and policy makers, with the goal of understanding the nature of misinformation and fake news and their impacts, how AI could be used to tackle them and, alongside such technical responses, what kinds of policy measures would be feasible and effective. The Chatham House Rule was followed, so in the account below, only the names of those who identified themselves are mentioned.
The workshop had six speakers in all, and the talks were either a characterisation of and commentary on the current state of affairs, or a response (technical, policy, etc.) aimed at improving it.
Three of the speakers provided a characterisation of, or commentary on, the different kinds of issues plaguing the web. These can be categorised as follows:
Conspiracy theories

The first characterisation talk took a long look at conspiracy theories, comparing and contrasting online conspiracy theories with those of the pre-internet age. It argued that conspiracy theories have been a feature of societies for centuries, but that a combination of political upheaval and profound changes in our media ecosystem (enabled by social media) has brought conspiracy theories into the mainstream as never before. Because the problem stems from the pollution of the public sphere, the ‘marketplace of ideas’, by disinformation, the speaker argued that there can be no purely technical solution to it.
Clickbait merchants and state actors

The second talk, by Carl Miller of Demos, took a deep dive into the world of ‘clickbait merchants’ in a south-eastern European state. It revealed a world where politics is being influenced through the digital sphere, but where the motives may be financial (e.g. potential revenue from selling advertising space online) rather than political. Nevertheless, there are state actors who are politically motivated and who appear to have influenced democratic processes in other countries, taking advantage of the global nature of digital platforms. Their aim appears to be “to confuse rather than convince … to trash the information space so the audience gives up looking for any truth amid the chaos.”
Fringe web communities
The final characterisation talk focused on various fringe Web communities such as 4chan, Gab and certain subreddits. It demonstrated an interesting dynamic whereby memes created in these communities spread into, and influence, more mainstream settings, becoming popular and sometimes changing context. In many cases, these memes provided the language for popularising extremist (e.g. racist) or partisan content. This piece from The Conversation discusses the theme further.
Reference: Pomerantsev, P. (2014). Nothing Is True and Everything Is Possible
AI and automatic detection of fake information
The availability and wide reach of social media make it very easy to spread misinformation. Rumours take hold quickly, with false information being repeated without any check on its provenance or veracity. AI-based techniques have been developed to assess the veracity of information and to robustly classify rumours. Similar techniques are starting to be developed to detect so-called deepfake videos. This latter example illustrates that ‘bad actors’ can also adopt AI-based techniques to make disinformation and fake news appear more convincing, which challenges the idea that technology alone can solve the problem.
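As an illustration only, and not a description of any system discussed at the workshop, rumour classification can be sketched as a supervised text-classification problem. Everything below is invented for the example: the toy corpus, the labels and the word-based features are hypothetical, and real systems are trained on large annotated datasets using far richer signals (propagation patterns, user behaviour, source credibility).

```python
# A minimal naive Bayes text classifier, sketched in pure Python, to show
# the shape of supervised rumour classification. The corpus and labels
# below are invented for illustration only.
from collections import Counter, defaultdict
import math

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (text, label) pairs.
    Returns per-label word counts and per-label document counts."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(tokenize(text))
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Pick the label with the highest log-probability,
    using add-one (Laplace) smoothing for unseen words."""
    vocab = {w for counts in word_counts.values() for w in counts}
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total_docs)  # log prior
        total_words = sum(word_counts[label].values())
        for w in tokenize(text):
            score += math.log((word_counts[label][w] + 1)
                              / (total_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy training data (hypothetical, for illustration only)
corpus = [
    ("breaking shocking secret they hide truth", "rumour"),
    ("unverified claim spreads shocking secret", "rumour"),
    ("official report confirms figures published today", "verified"),
    ("agency publishes confirmed report with sources", "verified"),
]
wc, lc = train(corpus)
print(classify("shocking secret claim they hide", wc, lc))      # → rumour
print(classify("confirmed official figures published", wc, lc)) # → verified
```

Naive Bayes is used here purely because it fits in a few lines; the talks at the workshop concerned far more sophisticated models, and the point of the sketch is only that veracity classification reduces to learning a mapping from content features to labels.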
Fact checking and automated fake claim detection
The second talk was from the front lines of fact checking. Fact checking is an important service performed by trained professionals, who invest considerable time and draw on large knowledge bases to verify news stories. “War stories” of fact checking were shared, showing that even mainstream news outlets can be guilty of misreporting, and that politicians’ claims should not always be taken at face value, as there have been several prominent and misleading errors. A state-of-the-art fact checking tool designed to assist human fact checkers was also demonstrated.
Government and policy responses
We also heard from a representative of the UK government, who discussed the nation’s response from a policy perspective. The government is preparing a “whole society” approach. The speaker emphasised that it has become very clear that we need to think about the consequences that online actions have in the offline world, and that working with media industry stakeholders is key if effective solutions are to be found. This thinking has led to the current government position, which stipulates that clear responsibilities will be set for the tech companies.
The government understands that “we need to do more, and do it together” as a society, but also that government itself needs to do more to help. Various steps are being taken to ensure that data-driven technology is used responsibly. The government’s position on disinformation and fake news is set out in the recently published Online Harms White Paper.
There is clearly a recognition that misinformation and information manipulation online (the topic of a related Turing Lecture with BuzzFeed’s Craig Silverman [below] on the same evening as the workshop) pose a huge new problem for society. A strong interdisciplinary community of academic researchers, as well as non-academic stakeholders (such as mainstream media, fact checkers, think tanks and the government itself), has started to look into various aspects of this issue. This workshop is a first response from The Alan Turing Institute. Expect more, as a number of Turing Fellows and researchers are working on related problems. For example, Dong Nguyen and Rebekah Tromble’s research project ‘The (mis)informed citizen’ considers this issue.