Ewa Luger is a Chancellor's Fellow in Digital Arts and Humanities, University of Edinburgh, a consulting researcher at Microsoft Research UK (AI and Ethics), and Research Excellence Framework (REF2021) co-ordinator for Design at Edinburgh College of Art. Her work explores applied ethical issues within the sphere of machine intelligence and data-driven systems. This encompasses practical considerations such as data governance, consent, privacy, explainable AI, and how intelligent networked systems might be made intelligible to the user, with a particular interest in distribution of power and spheres of digital exclusion.
Ewa holds an EPSRC-funded PhD in Computer Science, a BA (Hons) in Politics & International Relations, and an MA in International Relations (ESRC Research Track) from the University of Nottingham. She previously held a Fellowship at Corpus Christi College (University of Cambridge) and Microsoft Research (UK), and her work builds on 15 years as a practitioner in the third sector, conducting research and evaluation studies on digital and financial inclusion amongst marginalised groups.
Ethical AI by Design: Formalising a Human-Computer Interaction (HCI) Agenda

Intelligent devices and services have become an embedded feature of our lives. Such systems act to distribute cognition and control between humans and computational agents, and are increasingly used to support decision-making processes in sensitive and high-risk contexts. This has led to a broad consensus that algorithms should be (a) predictable to those who govern them, (b) robust against manipulation, and (c) transparent to inspection. To be transparent is to make visible, or expose, all aspects of an entity. However, the algorithms that will underpin emerging intelligent systems rely on increasingly complex models that endeavour to reflect the processes of the human brain.
The complexities of such models make it very difficult to predict how they will perform on a given input, even for subject-matter experts. Whilst the dominant work emerging from this field is technical, an equally pressing problem is how one might account for social, conceptual, and experiential understandings of algorithmic systems. How can we design systems that support human trust in, and understanding of, AI? In light of this, the proposed research seeks to investigate how we might design intelligible, inspectable, and accountable systems from the perspective of human-computer interaction.