In June 2019, the House of Lords Select Committee on Democracy and Digital Technologies was appointed. In July, it announced a Call for Evidence, guided by 14 questions, which is available online.
Summary of the Public Policy Programme’s submission
The Public Policy Programme’s response addresses two of the 14 Consultation Questions. We are open to engaging further with the Select Committee’s investigation of how technology affects democracy, political debate and political participation.
Social media have transformed contemporary politics, from making huge audiences instantaneously accessible to sidestepping the ‘gatekeepers’, such as the traditional broadcast media. Increasingly, social media are where people find, consume and share political information and news, and where they participate, communicate and organise politically. However, concerns have been raised about the potentially negative impact of digital technologies on politics, such as the effect of various ‘online harms’, including misinformation, pro-terrorist propaganda and hate speech.
The first part of our evidence submission focuses primarily on the spread and impact of abusive language. We argue that there is a lack of research into the prevalence of online abuse (although the available evidence suggests it is quite low) and that abuse manifests in uneven and complex ways, affected by time, geography, the type and use of platform, and whether the ‘targets’ are prominent figures. As such, policymakers must avoid using a broad brush to characterise experiences of online abuse. We also present new analysis of data from the 2019 Oxford Internet Survey, which shows that the likelihood of experiencing online abuse varies by demographic group, in relation to age, ethnicity and level of Internet use – but, surprisingly, not gender.
The second part of our evidence submission focuses on content moderation processes, and how these can be further improved. Content moderation is crucial for ensuring online spaces are safe and accessible for all: the question we face is not whether content should be moderated but how we want it to be moderated. We argue:
- There needs to be more scrutiny, transparency and collaboration in developing moderation processes and reporting on their efficacy.
- We need to develop more nuanced and sophisticated computational systems for abusive content detection, given the need to balance protecting freedom of speech with protecting vulnerable users, while ensuring that such systems are fair and unbiased.
- More consideration should be given to less invasive forms of content moderation, such as demonetising content and making it unsearchable, rather than relying only on bans.
- Platforms could also consider different interventions for different events, such as having ‘heightened’ processes during terrorist attacks, when the level of abuse is likely to be far higher.