Abstract

Online abusive content detection is an inherently difficult task. It has received considerable attention from academia, particularly within the computational linguistics community, and performance appears to have improved as the field has matured. However, considerable challenges and unaddressed frontiers remain, spanning technical, social, and ethical dimensions. These issues constrain the performance, efficiency, and generalizability of abusive content detection systems. In this article we delineate and clarify the main challenges and frontiers in the field, critically evaluate their implications, and discuss potential solutions. We also highlight ways in which social scientific insights can advance research.

Citation information

B. Vidgen, A. Harris, D. Nguyen, R. Tromble, S. Hale and H. Margetts. Challenges and frontiers in abusive content detection. To appear at the 3rd Workshop on Abusive Language Online at ACL 2019.