Introduction
The rapid roll-out, commercialisation and adoption of foundation models (general-purpose AI models trained on broad data) across domains have rightly triggered a range of AI governance initiatives. These interventions seek to tackle the harmful impacts exacerbated – and even generated – by these technologies. However, these governance initiatives, largely emerging from the global north, have so far focused on concerns of technical safety and speculative risks. This narrow lens fails to account for the growing body of evidence highlighting the range of socio-technical risks and harms posed by the proliferation of foundation models.
Drawing on a review of the multi-disciplinary literature on foundation models, we developed a typology that accounts for the socio-technical impacts of these technologies. We identify 14 categories of risks and harms across three core areas: individual, social and biospheric. This typology provides a more expansive framework through which the impacts of foundation models can be identified, and through which technical and normative interventions can be undertaken.
Alongside this mapping of impacts, we are conducting an in-depth qualitative analysis of emerging governance initiatives related to foundation models and generative AI. This ongoing work aims to evaluate how different policy and technical interventions around the world are addressing the various risks that foundation models pose across their entire value chain.
Project aims
The aim of this work is to offer a critical and comprehensive account of the rapidly evolving landscape of foundation models; to understand the risks they pose to individuals, society and the planet; and to examine whether emerging governance initiatives can adequately address the range of issues these technologies raise.
Applications
The scale and breadth at which foundation models are being developed and deployed necessitate timely, responsive and robust governance to address their harmful impacts. To be effective, governance interventions must be rigorous and expansive; however, current interventions have so far focused on narrow and speculative framings of risk and harm, failing to capture more comprehensively the observed impacts these technologies are already having on social, political and material realities across the globe.
Our critical framework can inform a more comprehensive assessment of the socio-technical risks and harms that arise with the development and deployment of foundation models. In conjunction with our upcoming paper evaluating emerging governance interventions, this framework supports the development of effective interventions at the level of both policy and technical design that help advance fair, transparent and responsible AI.