About the event
Governments around the world are increasingly concerned by the prevalence, spread and impact of harmful online content, such as harassment, bullying and hate speech. Online abuse poses myriad concerns: it can inflict harm on targeted victims, pollute civic discourse, make online environments unsafe, create and exacerbate social divisions, and erode trust in the host platforms.
Many hope that increasingly sophisticated and powerful algorithms will ‘solve’ the problem of online abuse by making this content easier to detect and take down. However, abusive content detection has proven to be a wicked challenge. Not only is it a very difficult engineering task; it is also imbued with complex legal, social and political challenges. Researchers are increasingly drawing attention to the biases in some widely used tools and datasets, raising concerns that they might perpetuate the injustices they are designed to overcome.
Currently, Facebook’s ‘Supreme Court’ of content moderation is gearing up to pass judgements; the UK Government is reviewing its wide-ranging Online Harms White Paper; social media platforms across the world are tightening up their community guidelines and investing in more tech to counter online abuse. In this pertinent moment, our experts discuss a fundamental question for society: Can technology solve online abuse?
This is the third event in the 'Driving Data Futures' lecture series from the Public Policy Programme, where we invite audiences to learn about and critically engage with new research at the intersection of new technologies, public policy, and ethics. At this event, speakers from academia, industry, and government will deliver presentations, and the Turing's Hate Speech project team will present their latest research in the field. This will be followed by a detailed Q&A, chaired by Dr Bertie Vidgen.
Additional speakers will be announced soon.
For more details on the Hate Speech project, including our most recent paper, "Challenges and frontiers in abusive content detection", please see: https://www.turing.ac.uk/research/research-projects/hate-speech-measures-and-counter-measures