Publications & policy submissions

Explore our latest research publications, software, and policy submissions in response to government and parliamentary calls for evidence.

Research publications

Understanding RT’s Audiences: Exposure Not Endorsement for Twitter Followers of Russian State-Sponsored Media

The Russian state-funded international broadcaster RT (formerly Russia Today) has attracted much attention as...

Rhys Crilley, Marie Gillespie, Bertie Vidgen, and Alistair Willis. 2022. Understanding RT’s Audiences: Exposure Not Endorsement for Twitter Followers of Russian State-Sponsored Media. The International Journal of Press/Politics, 27(1), pages 220–242. DOI: https://doi.org/10.1177/1940161220980692

Deciphering Implicit Hate: Evaluating Automated Detection Algorithms for Multimodal Hate

Accurate detection and classification of online hate is a difficult task. Implicit hate is particularly...

Austin Botelho, Scott Hale, and Bertie Vidgen. 2021. Deciphering Implicit Hate: Evaluating Automated Detection Algorithms for Multimodal Hate. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1896–1907, Online. Association for Computational Linguistics.

An Expert Annotated Dataset for the Detection of Online Misogyny

Online misogyny is a pernicious social problem that risks making online platforms toxic and...

Ella Guest, Bertie Vidgen, Alexandros Mittos, Nishanth Sastry, Gareth Tyson, and Helen Margetts. 2021. An Expert Annotated Dataset for the Detection of Online Misogyny. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1336–1350, Online. Association for Computational Linguistics.

Hatemoji: A Test Suite and Adversarially-Generated Dataset for Benchmarking and Detecting Emoji-Based Hate

Detecting online hate is a complex task, and low-performing models have harmful consequences when...

Hannah Kirk, Bertie Vidgen, Paul Rottger, Tristan Thrush, and Scott Hale. 2022. Hatemoji: A Test Suite and Adversarially-Generated Dataset for Benchmarking and Detecting Emoji-Based Hate. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1352–1368, Seattle, United States. Association for Computational Linguistics.

Findings of the WOAH 5 Shared Task on Fine Grained Hateful Memes Detection

We present the results and main findings of the shared task at WOAH 5...

Lambert Mathias, Shaoliang Nie, Aida Mostafazadeh Davani, Douwe Kiela, Vinodkumar Prabhakaran, Bertie Vidgen, and Zeerak Waseem. 2021. Findings of the WOAH 5 Shared Task on Fine Grained Hateful Memes Detection. In Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021), pages 201–206, Online. Association for Computational Linguistics.

Introducing CAD: the Contextual Abuse Dataset

Online abuse can inflict harm on users and communities, making online spaces unsafe and...

Bertie Vidgen, Dong Nguyen, Helen Margetts, Patricia Rossini, and Rebekah Tromble. 2021. Introducing CAD: the Contextual Abuse Dataset. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2289–2303, Online. Association for Computational Linguistics.

Detecting East Asian Prejudice on Social Media

During COVID-19, concerns have heightened about the spread of aggressive and hateful language online...

Bertie Vidgen, Scott Hale, Ella Guest, Helen Margetts, David Broniatowski, Zeerak Waseem, Austin Botelho, Matthew Hall, and Rebekah Tromble. 2020. Detecting East Asian Prejudice on Social Media. In Proceedings of the Fourth Workshop on Online Abuse and Harms, pages 162–172, Online. Association for Computational Linguistics.

Recalibrating classifiers for interpretable abusive content detection

We investigate the use of machine learning classifiers for detecting online abuse in empirical...

Bertie Vidgen, Scott Hale, Sam Staton, Tom Melham, Helen Margetts, Ohad Kammar, and Marcin Szymczak. 2020. Recalibrating classifiers for interpretable abusive content detection. In Proceedings of the Fourth Workshop on Natural Language Processing and Computational Social Science, pages 132–138, Online. Association for Computational Linguistics.