Bio
Feargus is a cybersecurity PhD student with the Systems Security Research Lab at King's College London and the Information Security Group at Royal Holloway, University of London, where his research explores the limitations of machine learning when applied to security settings. He is supervised by Prof. Lorenzo Cavallaro and Prof. Johannes Kinder.
Prior to joining the Turing, Feargus was a PhD intern at Facebook, where he worked with the Abusive Accounts Detection team to develop novel techniques for detecting adversarial behaviour on its social media platforms.
He is also the author and maintainer of TESSERACT, a framework and Python library for performing sound ML-based evaluations without experimental bias.
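To give a flavour of the problem TESSERACT addresses, the sketch below illustrates one core idea behind it: splitting a dataset by timestamp, so that every training sample strictly predates every test sample, rather than splitting at random. This is a minimal conceptual illustration, not TESSERACT's actual API; the function and variable names are hypothetical.

```python
# Illustrative sketch of a time-aware train/test split, one of the
# sources of experimental bias TESSERACT is designed to eliminate.
# NOT the library's actual API; names here are hypothetical.
from datetime import datetime

def time_aware_split(samples, labels, timestamps, split_date):
    """Partition parallel lists of samples by timestamp, so that the
    classifier never trains on data from 'the future' relative to
    the samples it is tested on."""
    X_train, y_train, X_test, y_test = [], [], [], []
    for x, y, t in zip(samples, labels, timestamps):
        if t < split_date:
            X_train.append(x)
            y_train.append(y)
        else:
            X_test.append(x)
            y_test.append(y)
    return X_train, y_train, X_test, y_test

# Hypothetical usage: train only on samples observed before 2014,
# then evaluate on samples that appeared afterwards.
# X_tr, y_tr, X_te, y_te = time_aware_split(
#     apps, labels, first_seen_dates, datetime(2014, 1, 1))
```

A random split lets training data leak information from the future (e.g., malware families that had not yet emerged), which is one way lab evaluations can overestimate real-world performance.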
Research interests
Feargus' research focuses on the challenges of deploying machine learning (ML) systems in hostile environments. While ML has proven effective in lab settings, a host of complications arise when it is used in the wild: adaptive attackers that seek to evade or poison classifiers; an evolving environment that results in concept drift; and poor explainability that limits the use of ML as part of a larger analysis pipeline.
To date, Feargus has demonstrated that previous work vastly overestimated the performance of ML classifiers in security, and has shown that large-scale attacks against malware detectors are a reality by generating adversarial examples with automated software transplantation.
His work at the Turing continues to address problems of dataset shift and adversarial ML, while also developing new techniques for the robustness, recovery, and interpretability of detection models.