Bio
Carolyn is a Senior Research Associate in Safe and Ethical AI at the Alan Turing Institute. Her work is motivated by the question: How do we ensure AI and other digital technologies are researched, developed and used responsibly? Her research into algorithmic fairness seeks to understand the fairness implications of data-driven systems through theoretical, practical and domain-specific lenses. In addition to mitigating the impacts of deployed systems, her work in responsible research seeks to understand the role of the machine learning (ML) research community in navigating the broader impacts of ML research.
Carolyn also works to convene technical, policy, and domain experts, for example to ensure that regulators have access to necessary technical expertise. She has facilitated discussions between these communities through workshops co-organised with the ICO, NICE and CDEI. She also sits on a range of advisory boards and working groups, including with the FBI, CSIS, ICO and the Turing Research Ethics process. Previously, she worked as a Senior Research Scholar at the University of Oxford, and as a data and research scientist in various roles within government and finance. She holds a PhD in mathematics from the University of Bath.
Research interests
- Algorithmic fairness
- Responsible machine learning
- AI governance
- The role of the machine learning research community in navigating the broader impacts of AI