This group unites international researchers and heritage professionals interested in using digitised image collections (maps, photographs, newspapers, books) for computer vision tasks. Applying computer vision to historical datasets raises issues of provenance and bias, as well as processing challenges distinct from those posed by the recent or born-digital images used in most computer vision work. We offer researchers at the Turing and beyond an opportunity to establish connections and build this new interdisciplinary field together. We focus on shared data science practices around computer vision, providing a much-needed centre of gravity for a growing but otherwise disparate community.

Explaining the science

Computer vision refers to a range of tasks and methods aimed at allowing computers to work effectively with images. Common tasks include classifying images into categories and segmenting images into smaller parts based on their content, for example, detecting which parts of an image contain a person. Computer vision methods have been applied across a broad range of scientific and engineering domains, including self-driving vehicles, medical imaging, and optical character recognition (OCR) of digitised documents. There is growing interest in applying and extending these techniques to digitised heritage materials, including maps, books, newspapers and paintings. This raises questions about how well computer vision methods developed for other types of images perform on digitised heritage content, as well as questions about how we can work effectively and responsibly with collections at scale through automated methods.
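As a toy illustration of the segmentation idea mentioned above, the sketch below flags which pixels of a tiny greyscale "scan" contain ink by thresholding intensity. The image values and threshold are invented for illustration; real segmentation models learn far richer criteria than a single intensity cut-off.

```python
# Toy segmentation-style task: mark which pixels of a small greyscale
# image are "ink" by thresholding intensity (0 = black ink, 255 = paper).
# All pixel values and the threshold are made up for illustration.

def ink_mask(image, threshold=128):
    """Return a boolean grid: True where a pixel is darker than threshold."""
    return [[pixel < threshold for pixel in row] for row in image]

# A 3x4 fragment of an imagined digitised page.
page = [
    [250, 240,  30, 245],
    [ 20,  25,  35, 250],
    [255, 245, 240, 238],
]

mask = ink_mask(page)
ink_pixels = sum(cell for row in mask for cell in row)
print(ink_pixels)  # 4 pixels fall below the threshold
```

Historical materials complicate even this simple picture: foxing, bleed-through, and uneven scanning mean a fixed threshold that works on clean modern scans can fail badly on heritage content.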

Talking points

How might heritage institutions and researchers work together to improve access to large collections of images that have been, and continue to be, scanned?

Challenges: Many libraries, archives, museums, and galleries have restrictions on sharing images of their collections that have been scanned in the last few decades.

Example output: A white paper co-authored by researchers, curators, funders, and third-party digitisation partners about the future of open data access.

How can computer vision methods be used in humanities and heritage contexts?

Challenges: Computer vision literature currently focuses on problems related to certain kinds of relatively contemporary images (web content, photographs, remote sensing data, text with standard fonts).

Example output: Case studies demonstrating open challenges for working with older materials (printed and manuscript maps, woodcuts, newspapers with historical fonts and layouts).

How to get involved

Click here to join us and request sign-up

Recent updates

Register for the Computer vision for digital heritage SIG talk on Friday 30 July at 10:00 here.

We will welcome Dr Tarin Clanuwat and Professor Asanobu Kitamoto (both from the ROIS-DS Center for Open Data in the Humanities, Japan).

Professor Asanobu Kitamoto will present a talk on “Visual and Spatial Digital Humanities Research for Japanese Culture”, and Dr Clanuwat will present her work on “miwo: Kuzushiji recognition smartphone application with AI”.

Kitamoto Abstract: The talk introduces various research projects at the ROIS-DS Center for Open Data in the Humanities (CODH) with emphasis on visual and spatial data. In contrast to textual data, which is typical in digital humanities research, visual and spatial data requires different approaches and expertise, such as computer vision, machine learning, geographic information science, and linked data. The talk will discuss information technology to solve research questions in the humanities and data platforms such as IIIF (International Image Interoperability Framework) to create big structured data of the past.
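As a small illustration of how IIIF exposes image data, the sketch below assembles a IIIF Image API request URL from the standard path components ({identifier}/{region}/{size}/{rotation}/{quality}.{format}). The base URL and identifier are placeholders, not a real endpoint.

```python
# Sketch of constructing a IIIF Image API request URL from its path
# components. The defaults follow the Image API conventions
# (full region, maximum size, no rotation, default quality, JPEG).
# "https://example.org/iiif" and "map-001" are hypothetical values.

def iiif_image_url(base, identifier, region="full", size="max",
                   rotation=0, quality="default", fmt="jpg"):
    return f"{base}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

# Request a 1024x1024 region of a digitised map, scaled to 512px wide.
url = iiif_image_url("https://example.org/iiif", "map-001",
                     region="0,0,1024,1024", size="512,")
print(url)
# https://example.org/iiif/map-001/0,0,1024,1024/512,/0/default.jpg
```

Because any IIIF-compliant server interprets these same components, tools built against one institution's collection can be pointed at another's, which is what makes IIIF useful for building "big structured data of the past" across repositories.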

Project links (some in Japanese only):
IIIF Curation Platform
Collection of Facial Expressions (KaoKore)
Bukan Complete Collection
Edo Maps

Asanobu Kitamoto earned his Ph.D. in electronic engineering from the University of Tokyo in 1997. He is now Director of the Center for Open Data in the Humanities (CODH), Joint Support-Center for Data Science Research (DS), Research Organization of Information and Systems (ROIS), and Professor at the National Institute of Informatics and SOKENDAI (The Graduate University for Advanced Studies). His main technical interest is developing data-driven science across a wide range of disciplines, including the humanities, earth science and environment, and disaster reduction. His honours include a Japan Media Arts Festival Jury Recommended Work and the IPSJ Yamashita Award. He is also interested in trans-disciplinary collaboration for the promotion of open science.

Clanuwat Abstract: Reading kuzushiji (cursive script) is an essential skill for the study of premodern Japan, but gaining proficiency in kuzushiji can be a challenge. This talk will offer a brief introduction to approaches to learning how to decipher kuzushiji. Dr Clanuwat will demonstrate how an AI-based kuzushiji recognition system works to transcribe premodern Japanese documents. Finally, she will introduce the kuzushiji recognition smartphone app “miwo”. The name “miwo” comes from the fourteenth chapter of The Tale of Genji, “Miwotsukushi”, referring to waterway markers. Just as miwotsukushi guide boats at sea, the miwo app aims to act as a guide for reading kuzushiji materials.

Dr Clanuwat is a project assistant professor at the ROIS-DS Center for Open Data in the Humanities. She received her PhD from the Graduate School of Letters, Arts and Sciences at Waseda University, where she specialised in Kamakura-era Tale of Genji commentaries. In 2018, she developed KuroNet, an AI-based kuzushiji recognition model. In 2019, she hosted a Kaggle machine learning competition for kuzushiji recognition that attracted over 300 machine learning researchers and engineers from around the world. Her AI and kuzushiji research won the Information Processing Society of Japan Yamashita Memorial Research Award, and her kuzushiji recognition smartphone application received the ACT-X AI Powered Innovation and Creation research grant from the Japan Science and Technology Agency.


Visit our GitHub page here.


Contact info

Katherine McDonough
[email protected]