Why online harms research urgently needs new collaboration, direction and a shared sense of purpose

Friday 19 June 2020

Online platforms are fundamental to how we live, work, socialise and entertain ourselves, especially during the COVID-19 lockdown. Yet many are at risk of becoming unwelcoming and dangerous, putting users at risk of serious harm.

The 2019 Online Harms White Paper demonstrated how serious the UK Government is about addressing the challenges plaguing the internet; this can only be welcomed. The agenda that it puts forth is wide-ranging, covering behaviours from dangerous health-related misinformation through to sharing images of child sexual abuse and hate speech.

Traditionally, each ‘online harm’ has been studied separately with different academics working in very different disciplines using different theories, methods and sources of data. It is hard to think of any conferences which would be attended by a linguist working on online hate, a criminologist studying gangs, a computer scientist detecting misinformation and an international affairs scholar of terrorism. Although lots of good work is being undertaken in each of these separate areas (and many others), the online harms research field as a whole lacks coherence, direction and a shared sense of purpose. 

This is a problem. The power of the proposed online harms agenda is recognising that we do not just face a smorgasbord of distinct issues to be tackled in isolation, but must consider the overarching challenge. If we want policymakers’ responses to be coherent and coordinated, then we urgently need theories, frameworks and concepts which consider the landscape’s complexity.

The Online Harms Journey

At The Alan Turing Institute, we believe that online harms need to be addressed from a cross-disciplinary and cross-domain perspective, bringing together insights and ideas from a range of disciplines and research areas. To frame this, we have established four steps to help us understand how individuals engage in harmful behaviour, which we call the Online Harms Journey:

  1. Thinking: The harmer (implicitly or explicitly) develops harm-related objectives, motivated by ideology, attitudes and beliefs, economic gain and other factors. 
  2. Enabling: Harms are discussed, planned and organised. This involves use of electronic device(s) and digital platform(s).
  3. Engaging: Harm is transmitted to the victim through a ‘harm pathway’.
  4. Impact on victims: Harm is experienced by the victim, often with severe consequences.

The Online Harms Journey is invariant to the type of behaviour, the actors involved and the setting, and helps us understand commonalities across harmers. It can be used to identify potential intervention points, informing the creation of tailored policy responses. Whereas phase one (thinking) may require education and information, phase two (enabling) may be better tackled by active outreach and phase three (engaging) by bans and takedowns. Phase four (impact) reflects a point of failure in efforts to stop online harm, and calls for mitigating activities such as providing support to victims.
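As a loose illustration, the four phases and the intervention types suggested above could be modelled as a simple lookup. This is a hypothetical sketch for thinking through the framework, not part of any published taxonomy; all names are illustrative.

```python
# Hypothetical sketch: the four phases of the Online Harms Journey
# mapped to the candidate intervention types discussed above.
from enum import Enum


class Phase(Enum):
    THINKING = "thinking"  # harmer develops harm-related objectives
    ENABLING = "enabling"  # harms are discussed, planned and organised
    ENGAGING = "engaging"  # harm is transmitted via a 'harm pathway'
    IMPACT = "impact"      # harm is experienced by the victim

# Candidate intervention points per phase, as suggested in the text.
INTERVENTIONS = {
    Phase.THINKING: ["education", "information"],
    Phase.ENABLING: ["active outreach"],
    Phase.ENGAGING: ["bans", "takedowns"],
    Phase.IMPACT: ["victim support"],
}


def interventions_for(phase: Phase) -> list[str]:
    """Return the candidate interventions for a given journey phase."""
    return INTERVENTIONS[phase]
```

Framing the phases this way makes the policy point concrete: each phase has a different natural lever, so a single blanket response is unlikely to fit all four.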

The Online Harms Journey captures how individuals engage in harmful behaviours at a very high level and needs to be complemented by more detailed conceptual work in each phase. Understanding how harms are actually transmitted during the ‘engaging’ phase is arguably the most critical piece of this puzzle. We have identified four main ‘Harm Pathways’:

  1. One to One: Harm sent by one originator to one victim. This might be a one-off activity or it could be through ongoing contact, such as interactions between groomers and young children online.
  2. One to Many: Harm sent by one originator to multiple victims, often through a large ‘broadcaster’ social media platform.
  3. Many to One: Harm sent by many originators to one victim, such as with ‘pile-ins’ where malicious actors act in concert to harass a public figure.
  4. Many to Many: Harm that spreads from many originators to many victims, often without the full awareness of those involved. Misinformation is an archetypal example.
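The four pathways are distinguished purely by the number of originators and victims, so the classification can be sketched in a few lines. This is an illustrative toy (the function name and signature are invented, not from the framework itself):

```python
# Hypothetical sketch: classify a harm incident into one of the four
# Harm Pathways from the counts of originators and victims involved.
def classify_pathway(n_originators: int, n_victims: int) -> str:
    """Return the Harm Pathway label for given originator/victim counts."""
    if n_originators < 1 or n_victims < 1:
        raise ValueError("an incident needs at least one originator and one victim")
    origin = "One" if n_originators == 1 else "Many"
    target = "One" if n_victims == 1 else "Many"
    return f"{origin} to {target}"


# e.g. a coordinated 'pile-in' against a single public figure:
# classify_pathway(250, 1) returns "Many to One"
```

The point of the sketch is that the pathway depends only on cardinality, not on the kind of harm, which is why the same four labels cover grooming, broadcast abuse, pile-ins and misinformation alike.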

Tackling online harms is urgent and vital work which needs more research, advocacy and reflection—and collaboration between everyone with a stake in keeping the internet safe, accessible and inclusive.

The Online Harms Journey and the Harm Pathways are by nature imperfect frameworks—but they serve the important purpose of enabling us to think clearly and fully about online harms. In particular, they let us contextualise other important aspects, such as the role of the platforms, the activities of amplifiers and the distinct challenges posed by content that is low-volume but high-impact. This is high-stakes work for which we need to be ambitious and adventurous, willing to try new things and fail (and learn) fast. 

Editor’s Note: On Thursday 25 June the Turing will host the related event: AI UK | Online harms and disinformation post COVID-19.