Bio
Adarsa is a second-year PhD student in the Department of Computing Science at the University of Aberdeen, supervised by Prof. Ehud Reiter, Prof. Nava Tintarev and Prof. Nir Oren. Her research focuses on the understandability of Bayesian reasoning, specifically in medical risk communication. Part of her PhD falls under the European Union's Horizon 2020 research and innovation programme through the NL4XAI project, which focuses on the use of AI models and techniques by non-expert users. Before her PhD, she worked on building open-source systems at scale, primarily in applied Natural Language Processing. She holds a Master's in Technology from DAIICT, with a thesis on attacks on Automatic Speaker Recognition Systems.
Research interests
The comprehension and trust that end-users place in AI models, particularly in critical and life-altering applications such as healthcare, have significant implications for their usefulness and adoption. While AI is becoming increasingly public-facing, prominent frameworks in Explainable Artificial Intelligence (XAI) have been developed for use by experts. Integrating XAI methodologies into the user interfaces of AI products has therefore become a pressing need. Adarsa's work addresses this in the context of public-facing healthcare prediction tools. Her research focuses on interpretable models and on generating natural language explanations. She is grounding her studies in in-vitro fertilisation treatment in the UK, evaluating understandability and trust in uncertainty communication. (Sivaprasad, A., & Reiter, E. Linguistically Communicating Uncertainty in Patient-Facing Risk Prediction Models. Presented at the UncertaiNLP Workshop, EACL 2024. https://doi.org/10.48550/arXiv.2401.17511)