Elena Kochkina is a Postdoctoral Researcher at Queen Mary University of London and The Alan Turing Institute, working on tackling misinformation using Natural Language Processing. Her current research is funded by UKRI through the PANACEA project.
Elena completed a PhD in Computer Science, supervised by Dr Maria Liakata and Professor Rob Procter, with the Warwick Institute for the Science of Cities (WISC) CDT, funded by the Leverhulme Trust via the Bridges Programme. She was an Enrichment and visiting student at The Alan Turing Institute in London. Her background is in Applied Mathematics (BSc, MSc, Lobachevsky State University of Nizhny Novgorod) and Complexity Science (MSc, University of Warwick, Chalmers University).
The main focus of Elena's research is tackling misinformation using Natural Language Processing.
In her PhD she focused on rumour stance and veracity classification in social media conversations. Veracity classification is the task of identifying whether a given conversation discusses a True, False or Unverified rumour. Stance classification is the task of determining the attitude of responses discussing a rumour towards its veracity, as either Supporting, Denying, Questioning or Commenting. In her work she studies the relations between these tasks, as patterns of support and denial can be indicative of the final veracity label. Since the input data takes the form of conversations discussing rumours, she exploits the conversation structure to enhance predictive models. She works with deep learning models, as this approach allows flexible architectures and offers the benefits of representation learning: recurrent and recursive neural networks make it possible to model time sequences and conversation tree structures.
Currently she is working on the “PANACEA: PANdemic Ai Claim vEracity Assessment” project, which aims to create an AI-enabled, evidence-driven framework for claim veracity assessment during pandemics. Within the project, her focus is on (1) collecting COVID-19-related data from social media platforms and authoritative resources and (2) developing novel unsupervised and supervised approaches for veracity assessment that incorporate evidence from external sources.
She is also interested in the general area of online harms, and in tasks such as propaganda detection and multimodal hate speech detection.