Narrative Understanding with Large Language Models

Explore how Large Language Models (LLMs) are reshaping our comprehension of long narratives.

Project status

Ongoing

Introduction

Narrative stories are characterised by their extended length and depth. They consist of elements such as the viewpoint or perspective from which the story is told, the characters involved, and the events that take place. All of these come together to create a cohesive plot, whether in a book, a film, or another form of storytelling. Narrative understanding involves comprehending these elements: how they connect to and influence each other, how the relationships between characters change over time, how events cause other events to happen, and how events are interwoven to shape the narrative’s progression.

Explaining the science

LLMs have demonstrated impressive capabilities in generating human-like language. However, using LLMs for narrative understanding faces several challenges. For example, LLMs have difficulty dealing with long narrative text: events or characters introduced early in a narrative may have a significant impact on later events. Also, relationships among characters are not fixed and may evolve over time, and LLMs may struggle to monitor and capture these evolving relationships. Furthermore, narratives often involve causal relationships among events, characters, and actions. Extracting and correctly inferring these causal links can be challenging for LLMs, as it requires understanding not only what happened but also why and how events are connected. Narrative understanding also requires the ability to infer characters’ emotional states, motivations, and intentions, which in turn requires LLMs to possess Theory of Mind (ToM) capabilities. Whether LLMs can be trained to develop such ToM capabilities remains an open question.
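To make the long-context challenge concrete, the sketch below shows one simple (and deliberately naive) way an LLM could be asked to track evolving character relationships: the narrative is processed in chunks and a running relationship summary is carried forward between chunks. The `complete` callable is a stand-in for any LLM API, and the chunking and rolling-summary strategy is an illustrative assumption, not this project's method.

```python
# Minimal sketch: tracking evolving character relationships over a long narrative
# by processing it chunk by chunk and carrying forward a running summary.
# `complete` is a placeholder for an actual LLM API call (an assumption here).

from typing import Callable, List


def chunk_text(text: str, max_chars: int = 4000) -> List[str]:
    """Split a long narrative into roughly fixed-size chunks."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]


def track_relationships(narrative: str, complete: Callable[[str], str]) -> str:
    """Maintain a rolling summary of character relationships across chunks."""
    state = "No characters or relationships identified yet."
    for chunk in chunk_text(narrative):
        prompt = (
            "Current character relationships:\n"
            f"{state}\n\n"
            "Next part of the narrative:\n"
            f"{chunk}\n\n"
            "Update the list of characters and their relationships, noting any "
            "changes (e.g. allies becoming rivals). Return the updated list only."
        )
        state = complete(prompt)  # the LLM revises the relationship state
    return state


if __name__ == "__main__":
    # Stand-in LLM that returns a fixed answer, so the sketch runs end to end.
    dummy_llm = lambda prompt: "Alice and Bob: friends (so far)."
    print(track_relationships("Alice met Bob. " * 500, dummy_llm))
```

The sketch also makes the failure mode visible: any detail dropped from the rolling summary at one step is lost to every later step, which is exactly why early events and slowly evolving relationships are hard to retain over long narratives.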

Project aims

  1. Character-centric analysis. We aim to develop automated approaches for identifying key characters, their roles, characteristics, and relationships within a narrative. This involves tackling challenges such as character coreference resolution and entity linking. Detecting relationships among characters may also require handling conflicting and incomplete information in narratives and tracking character relationships as they evolve.
  2. Reasoning in narratives. We aim to enhance LLMs’ capabilities to reason about the temporal order of events and their relationships in narratives. We will also explore approaches to enable LLMs to understand cause-and-effect relationships and perform theory-of-mind reasoning within narratives.  
  3. Interactive narrative understanding. We will investigate a framework for interactive narrative understanding built around a few key components: (1) Agents – extracting comprehensive character-centric memory from narratives, including characters’ personal traits and preferences, beliefs and desires, their relationships with other characters, their past behaviours, and their anticipated actions (a rough sketch of such a memory structure follows this list); (2) Settings – identifying character locations and settings, which are often vaguely defined unless crucial to the plot; (3) Responses – ensuring consistency and engagement in user interactions, despite varying user inputs.
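As a rough illustration of the Agents component described above, the sketch below gives one possible shape for a character-centric memory record. The field names (traits, beliefs and desires, relationships, past behaviours, anticipated actions) mirror the list in point (1), but the schema itself is a hypothetical example, not the project's design.

```python
# Illustrative sketch of a character-centric memory record for the "Agents"
# component; the field names are hypothetical, not the project's actual schema.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class CharacterMemory:
    name: str
    traits: List[str] = field(default_factory=list)               # personal traits and preferences
    beliefs_desires: List[str] = field(default_factory=list)      # beliefs and desires (ToM-related)
    relationships: Dict[str, str] = field(default_factory=dict)   # other character -> relation
    past_behaviours: List[str] = field(default_factory=list)      # what the character has done so far
    anticipated_actions: List[str] = field(default_factory=list)  # what they might do next

    def update_relationship(self, other: str, relation: str) -> None:
        """Record or revise this character's relationship with another character."""
        self.relationships[other] = relation


if __name__ == "__main__":
    alice = CharacterMemory(name="Alice", traits=["curious", "cautious"])
    alice.update_relationship("Bob", "childhood friend")
    alice.past_behaviours.append("left the village at dawn")
    print(alice)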

Applications

Narrative understanding has a wide variety of potential applications. Some examples are listed below:

  • Personalised interactive narrative storytelling: Creating immersive, interactive narrative environments tailored to individual preferences, with dynamic storylines that adapt to each individual.
  • ESG report analysis: Deriving insights from lengthy ESG reports, which convey comprehensive and data-driven narratives about companies’ performance and initiatives related to environmental sustainability, social responsibility, and corporate governance.
  • News storyline generation: In platforms like news aggregators, narrative understanding could improve content relevance and engagement by generating more coherent and informative news storylines.
  • Consistent and long-range conversations with LLM chatbots: Long conversations with LLM-based chatbots can be considered a form of narrative storytelling, albeit one that is interactive and dynamically generated. Such long-range conversations spanning diverse topics challenge conventional methods, which struggle to retain contextual coherence over long stretches of discourse.

Organisers

Professor Yulan He

Turing AI Acceleration Fellow and Professor in Natural Language Processing, King's College London

Researchers and collaborators