Why content moderators should be key workers

Protecting social media as critical infrastructure during COVID-19

Wednesday 15 Apr 2020

Amidst the turmoil of COVID-19, we are relying on digital technologies to stay socially connected while in physical isolation. In this changed world, social media platforms are, for many, crucial to social life, entertainment, work, education and news.

However, the systems that these platforms use to weed out the worst and ugliest online communications—the opaque world of content moderation—are creaking under the pressures of the pandemic. Online harms have spread alongside the virus, such as pernicious health-related misinformation, potentially causing huge damage to society. We need to use this moment to rethink how we approach and maintain this critical infrastructure and to properly tackle the ‘online harms’ that are putting the safety, accessibility and worth of our online spaces at risk.

Content moderation and AI

Content moderation is the unseen but omnipresent underbelly of the internet. An army of humans and automated tools work around the clock to review and monitor the content posted online. This ongoing battle comes with costs: some workers have reported developing post-traumatic stress disorder as a result of the unthinkably horrific content they are exposed to.

Artificial intelligence (AI) is often portrayed as the solution to this problem. The hope is that sophisticated systems, trained on enormous datasets using vast computing resources, could learn to moderate content reliably at scale, sparing humans the burden. Significant advances have already been made: Facebook reports that the share of hateful content that it removes “before users report it” rose from just 24% in late 2017 to 80% by 2019. 

But as any project manager knows, you get 80% of the work done in 20% of the time—so whilst these figures sound impressive, tackling the ‘long tail’ of that remaining 20% is where the most difficult work lies. We are still a long way from creating purely automated moderation systems and will need to keep relying on a hybrid human-computer taskforce for the foreseeable future.
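To make that hybrid arrangement concrete, here is a minimal, purely illustrative sketch of confidence-based triage: the model acts on clear-cut cases and routes the uncertain ‘long tail’ to human moderators. The thresholds, function names and placeholder scoring model are assumptions for illustration, not a description of any platform’s actual system.

```python
# Illustrative sketch of hybrid human-AI moderation triage (hypothetical names and values).
from dataclasses import dataclass
from typing import Callable


@dataclass
class Decision:
    action: str   # "remove", "keep" or "human_review"
    score: float  # model's estimated probability that the post violates policy


def triage(post_text: str,
           score_fn: Callable[[str], float],
           remove_threshold: float = 0.95,
           keep_threshold: float = 0.05) -> Decision:
    """Automate the confident cases; send the ambiguous 'long tail' to human review."""
    score = score_fn(post_text)
    if score >= remove_threshold:
        return Decision("remove", score)        # high-confidence violation: act automatically
    if score <= keep_threshold:
        return Decision("keep", score)          # high-confidence benign: leave up
    return Decision("human_review", score)      # uncertain: a human moderator decides


if __name__ == "__main__":
    placeholder_score = lambda text: 0.5        # stand-in for a trained classifier
    print(triage("example post", placeholder_score))
```

In a sketch like this, where the thresholds sit determines how much work falls to human reviewers—which is precisely the capacity that the pandemic has squeezed.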

The widening content moderation “capacity gap”  

The current crisis surrounding COVID-19 has scaled up the challenge of content moderation, severely reducing supply and massively increasing demand. On the “supply side”, content moderators have, like other workers around the world, been told not to come into work. YouTube has already warned that, as a result, it will conduct fewer human reviews and openly admits it may make poor content takedown decisions. 

On the “demand side”, the growth of the pandemic has seen an upsurge in the amount of time spent online. BT recently noted an increase in UK daytime traffic of 35-60%, and social networks report similar increases, particularly in their use for education, entertainment and even exercise. Sadly, harmful activity has increased too: Europol reports “increased online activity by those seeking child abuse material” and the World Health Organisation has warned of an emerging “infodemic” of pernicious health-related disinformation. Recently, concerns have been raised about false claims circulating online linking 5G networks to the spread of the virus.

At a time when social media is desperately needed for social interaction, a widening gap is emerging between how much content moderation we need and how much can be delivered. As a result, AI is being asked to do tasks for which it is not ready, with profound consequences for the health of online spaces. How should platforms, governments, and civil society respond to this challenge? Following Rahm Emanuel’s exhortation to “never let a crisis go to waste,” we argue that, now that the pandemic has exposed the challenges in content moderation, it is time for a reset.

Content moderation as critical infrastructure

Back in 2017, Dharma Dailey and Kate Starbird argued that, following a local disaster in the United States, social media platforms “perform[ed the role of] critical infrastructure during [the] crisis response”. The immediacy, reach and low cost of social media make it well suited to crises. Indeed, so far, tech firms have played a key role in the Government’s communication strategy, with telecoms companies texting official guidance to millions of people across the UK.

Social media’s utility goes beyond broadcasting content. Platforms are fundamental to how many of us stay in contact with friends and family, work, entertain ourselves, debate issues and find information and advice. This is true of the pandemic, but also of calmer times, during which digital intermediaries have long played an essential role in how our ‘information’ society operates. And as the move to living online accelerates during the crisis, effective content moderation that protects us from bad actors (from sexual predators to ‘Zoom bombers’) is indispensable.

Of course, many of the structural features and challenges of the pre-COVID-19 world remain unchanged: social media platforms are still commercial organisations motivated by profit. And persistent digital divides mean that some people have substantially less access to the internet. Even in the UK, broadband speed and coverage in rural areas still lag far behind those in urban areas, and internet use is skewed in favour of richer, younger and more educated citizens. Social media is far from free of problems. But these issues do not change the fact that, as the current crisis has powerfully exposed, it is a crucial part of many of our lives.

The challenge for us now is ensuring that platforms are not only pervasive but also safe places to be—otherwise we risk the equivalent of creating a state-of-the-art water supply and then filling it with water that is unsafe to drink.

Recommendations

The moderation of social media content needs to be viewed as critical infrastructure: the harms that moderation addresses are societal hazards, and we all have a stake in tackling them. We recommend five actions:

  1. Platforms should engage with social scientists, activists and community leaders who are experts in tackling social oppression and injustice, and use their insights to guide the development, use and regulation of content moderation systems.
  2. The voices and experiences of everyday social media users, particularly those most vulnerable to harmful content, should also be incorporated in content moderation design.
  3. Platforms and governments should invest more in the development of AI to detect and counter online harms. The huge advances that have been made in other domains should be leveraged for this work.
  4. Platforms should open-source their content moderation systems, sharing data and methods across industry, whilst protecting users’ privacy and rights.
  5. Content moderators should be recognised as ‘key workers,’ given financial compensation and mental health support reflecting the difficulties and importance of their roles, and enabled to work flexibly with privacy-enhancing technologies.

Designating content moderation on social media as critical infrastructure would have seemed impossible just a few months ago. In the ‘new normal’ of digital life, we need to refocus our attention on the benefits of social media, protecting users with innovative, equitable AI and properly supported content moderators. Much as we have taken drastic steps to protect those who are most vulnerable to COVID-19 itself, a similar approach should be adopted to protect those who are vulnerable to harmful and misleading online content, both now and in the future.