Fatma

Position

Enrichment Student

Cohort year

2022

Bio

Fatma (she/her) is a third-year PhD student at the University of the West of Scotland, where she works on bias and fairness in hate speech detection models. Fatma is interested in AI for social good, in particular hate speech detection and the investigation of bias and fairness in Natural Language Processing (NLP) models. She understands from her personal and research experience how bias in NLP and machine learning (ML) applications directly affects the lives of under-represented groups, and how hate speech towards minorities has severe consequences for society, such as hate crimes. She believes that these topics are more important to study now than ever, in order to build models and communication platforms that are safe and accessible to all people regardless of their gender, ethnicity, or sexual orientation.

Research interests

For the Turing research project, her goal is to understand the influence of social bias on the task of hate speech detection. Training an ML model for text classification, e.g. hate speech detection, involves encoding the textual content of a dataset into numerical representations (vectors). This step is important because a good representation captures the semantic relationships between words and sentences. These numerical representations are called word embeddings. To be effective, word embeddings are trained on large corpora of text such as Wikipedia or news articles. Recently, many research papers have shown that these word embeddings are socially biased.
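As a rough illustration of how such bias can surface, the short Python sketch below probes pretrained embeddings with a few cosine-similarity comparisons. It assumes gensim is installed and that the pretrained glove-wiki-gigaword-100 vectors can be downloaded; the word pairs are illustrative examples rather than part of her study.

```python
# A minimal sketch: probing pretrained word embeddings for stereotypical
# associations. Assumes gensim is installed and can download the
# "glove-wiki-gigaword-100" vectors; the word pairs are illustrative only.
import gensim.downloader as api

# GloVe embeddings pretrained on a large corpus (Wikipedia + Gigaword).
vectors = api.load("glove-wiki-gigaword-100")

# Cosine similarity between word pairs can surface associations absorbed
# from the training text, e.g. gendered associations with occupations.
pairs = [("woman", "nurse"), ("man", "nurse"),
         ("woman", "engineer"), ("man", "engineer")]
for a, b in pairs:
    print(f"similarity({a}, {b}) = {vectors.similarity(a, b):.3f}")
```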

Her research goal can be achieved by answering the following question: what is the effect of social biases in word embeddings on the task of hate speech and abuse detection? To answer this question, she proposes to use causal inference methods to learn how the social bias in word embeddings causes hate speech detection models to treat different groups of people differently. Understanding how social bias influences hate speech detection models could help us develop effective methods to remove that bias.
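One simple way to see how embedding bias could feed through to a downstream classifier, short of a full causal analysis, is a counterfactual identity-swap check. The sketch below is only an illustration of that idea, not her proposed method: it assumes gensim and scikit-learn are available, and the toy training sentences, labels, and identity terms are invented placeholders.

```python
# A minimal sketch of a counterfactual (identity-swap) check on a toy
# embedding-based hate speech classifier. The training data and identity
# terms are illustrative placeholders, not real data.
import numpy as np
import gensim.downloader as api
from sklearn.linear_model import LogisticRegression

vectors = api.load("glove-wiki-gigaword-100")

def embed(sentence):
    # Represent a sentence as the mean of its word embeddings.
    words = [w for w in sentence.lower().split() if w in vectors]
    return np.mean([vectors[w] for w in words], axis=0)

# Tiny toy training set: 1 = hateful, 0 = not hateful (illustrative only).
texts = [
    "i hate those people and they should all leave",
    "those people are awful and do not belong here",
    "go back to where you came from",
    "had a lovely chat with my neighbours today",
    "the new community centre opens next week",
    "looking forward to the weekend in glasgow",
]
labels = [1, 1, 1, 0, 0, 0]

clf = LogisticRegression(max_iter=1000)
clf.fit(np.stack([embed(t) for t in texts]), labels)

# The same sentence with only the group identity term swapped: if the
# predicted probability of "hateful" changes, the model's decision depends
# on the identity term itself, one symptom of bias inherited from the embeddings.
for text in ("those muslim people moved in next door",
             "those christian people moved in next door"):
    p_hate = clf.predict_proba([embed(text)])[0][1]
    print(f"P(hateful | '{text}') = {p_hate:.3f}")
```

A descriptive check like this only observes that predictions differ across identity terms; the causal inference methods she proposes go further, aiming to isolate how the biased components of the embeddings produce that difference.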