In April 2019, the Department for Digital, Culture, Media & Sport and the Home Office released the Online Harms White Paper. It set out the government’s plans for tackling online harms and was followed by a three-month consultation period, guided by 18 questions.
Summary of the public policy programme’s submission
The public policy programme’s response addresses 8 of the 18 consultation questions as well as 7 additional issues. We are open to engaging further in the development of a regulatory framework for online harms and welcome any questions regarding our response.
Overall, the White Paper marks an important step forward in achieving better regulation of the Internet and shows the UK’s commitment to being at the forefront of responsible Internet governance. The broad message is commendable: “We cannot allow these harmful behaviours and content to undermine the significant benefits that the digital revolution can offer […] If we surrender our online spaces to those who spread hate, abuse, fear and vitriolic content, then we will all lose.” (p.3) However, several issues are left unresolved, of which two are particularly important:
- The White Paper advocates creating a new independent regulator. However, existing regulators have already accumulated much of the expertise in dealing with data-intensive digital platforms that the regulation of online harms will require. We recommend that a new unit with a specific remit for online harms be established within one of the existing regulators, such as Ofcom or the ICO.
- The discussion of ‘harms’ in the White Paper requires additional nuance and clarity. It should include a high-level explanation of what constitutes a harm, how differing harms will be prioritised, and how their impact will be assessed. This will help the regulatory unit to act in a targeted and proportionate manner and provide more certainty to stakeholders.
Our response also discusses key social issues raised in the White Paper, such as provisions for protecting freedom of expression and worker welfare, determining what is ‘true’ online, and the need for a joined-up approach that considers how harmful content moves between platforms.