About the event

Professor Emily M. Bender will present her recent co-authored paper, "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜"

In this paper, Bender and her co-authors take stock of the recent trend towards ever larger language models (especially for English), which the field of natural language processing has been using to extend the state of the art on a wide array of tasks, as measured by leaderboards on specific benchmarks. They take a step back and ask: How big is too big? What are the possible risks associated with this technology, and what paths are available for mitigating those risks?

The presentation will be followed by a panel discussion.

Time: 16:00-17:15 BST / 8:00-9:15 PDT

Recommended reading

On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 - Emily M. Bender, Timnit Gebru, Angelina McMillan-Major and Shmargaret Shmitchell

Ideal Words: A Vector-Based Formalisation of Semantic Competence - Aurélie Herbelot and Ann Copestake 

Alignment of Language Agents - Zachary Kenton, Tom Everitt, Laura Weidinger, Iason Gabriel, Vladimir Mikulik and Geoffrey Irving

Improving Language Model Behavior by Training on a Curated Dataset - Irene Solaiman and Christy Dennison


Professor Ann Copestake

Professor of Computational Linguistics and Head of the Department of Computer Science and Technology at the University of Cambridge


Dr Adrian Weller

Programme Director for Artificial Intelligence at The Alan Turing Institute, Turing Fellow and Turing AI Acceleration Fellow