How AI can see the natural world in ways humans can’t

Computer vision tools developed at the Turing and elsewhere can transform environmental research and monitoring

Wednesday 31 Jan 2024

From single seeds to the contents of entire oceans, images and video provide the means to study the natural world in exquisite detail and on huge scales. However, getting useful information from vast quantities of visual data can be a laborious process. A field of AI known as computer vision offers automated solutions and, here at The Alan Turing Institute, we’re developing tools to help environmental researchers make better use of them.

Computer vision systems mimic human visual perception by using AI to interpret and understand visual information, typically from images or videos. We can use them to detect and classify objects, for example, or recognise faces. In the context of environmental research and biodiversity monitoring, computer vision can make it quicker and easier to study species, landscapes and other aspects of the natural world over larger and more remote areas, and longer timescales. 

Already, computer vision models are proving more accurate and efficient than manual methods for tasks such as detecting elephants in drone images, identifying individual dolphins from the sound wave patterns their whistles make and classifying plants based on photos taken on our phones. However, it is not always immediately obvious which model will work best with a particular environmental dataset. One model might promise exceptional results for images taken of certain species or habitats, for example, but translate poorly to another researcher’s data from a different species or environment. 

At the Turing, we have been developing a computer vision catalogue, Scivision, to address this problem. Scivision makes it simpler for experts across scientific domains, including ecology, cell biology and marine science, to access and use state-of-the-art computer vision models to gather information from images. Researchers can use Scivision to quickly identify and test pre-trained models for analysing data across a wide range of scales and spectrums, from microscope to satellite images, and from colour to X-ray and infrared.

Previously, researchers had to search for existing models, or create and train their own from scratch, before testing them on their data. Scivision reduces the time and cost of this process. Whilst other platforms, such as Hugging Face, host large numbers of models across many disciplines, Scivision is unique in focusing on computer vision problems in the environmental and biodiversity domains, and in aiming to build a community of collaboration. It allows scientists to share their work and to recognise when the same tools can be applied to tackle different real-world problems.
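The core workflow this enables can be sketched in a few lines of Python. The code below is illustrative only (the models, sample data and helper names are invented, not Scivision's API): the point is that a researcher benchmarks several candidate pre-trained models on a small labelled sample of their own images before committing to one, since performance rarely transfers perfectly between habitats or species.

```python
# Illustrative sketch: rank candidate pre-trained models by how well they
# classify a small labelled sample of the researcher's own data.

def evaluate(model, labelled_samples):
    """Return the fraction of samples the model classifies correctly."""
    correct = sum(model(x) == y for x, y in labelled_samples)
    return correct / len(labelled_samples)

def pick_best(models, labelled_samples):
    """Score each candidate model and return the best name plus all scores."""
    scores = {name: evaluate(m, labelled_samples) for name, m in models.items()}
    return max(scores, key=scores.get), scores

# Toy "images" (feature dicts) and two toy classifiers standing in for
# pre-trained models pulled from a catalogue such as Scivision.
samples = [({"green": 0.9}, "plant"), ({"green": 0.2}, "soil"),
           ({"green": 0.7}, "plant"), ({"green": 0.1}, "soil")]
model_a = lambda x: "plant" if x["green"] > 0.5 else "soil"
model_b = lambda x: "plant"  # naively over-predicts plants

best, scores = pick_best({"model_a": model_a, "model_b": model_b}, samples)
print(best, scores)  # model_a wins with accuracy 1.0 vs 0.5
```

In practice the candidate models would be downloaded pre-trained networks and the samples real annotated images, but the selection logic is the same.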

Here are just three examples of how Scivision is being used to support environment and sustainability research:

Focus on food

Researchers around the world are working to understand how crop plants can be sustainably grown in order to ensure future food security with minimal impact on the natural environment. Automated measurements of crop plants, taken throughout plants’ life cycles, can support these efforts and are now being collected by Turing researchers and Rothamsted Research using a combination of Scivision and MapReader software, which was originally developed for digitising maps. (Detecting connected plant parts like branches and flowers is similar to detecting connected elements on a map, such as railway networks.) The shape, size and spatial arrangement of crop plant seeds are also being automatically extracted with a model from Scivision that was originally applied to cell microscopy images. Together, these tools for measuring plant growth will allow researchers to better predict how plants grow under changing climate conditions.
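The patch-based idea described above can be shown with a minimal sketch. This is not MapReader's actual code (its patch classifier is a trained neural network, and the pixel labels here are invented): it simply illustrates breaking an image into small patches, classifying each patch, and aggregating the per-patch labels into a measurement such as which parts of a plant are flowering.

```python
# Minimal sketch of patch-based image analysis: split, classify, aggregate.

def split_into_patches(image, patch_size):
    """Split a 2D image (list of rows) into non-overlapping square patches,
    ordered left to right, top to bottom."""
    h, w = len(image), len(image[0])
    patches = []
    for top in range(0, h, patch_size):
        for left in range(0, w, patch_size):
            patch = [row[left:left + patch_size]
                     for row in image[top:top + patch_size]]
            patches.append(patch)
    return patches

def classify_patch(patch):
    """Toy classifier: label a patch by its dominant pixel type.
    A real system would run a trained model on each patch instead."""
    flat = [v for row in patch for v in row]
    flower, green = flat.count("F"), flat.count("G")
    if flower > green:
        return "flower"
    return "green" if green > 0 else "background"

# 4x4 toy image: F = flower pixel, G = green (leaf/stem), . = background
image = [list("FFGG"),
         list("FFGG"),
         list("GG.."),
         list("GG..")]
labels = [classify_patch(p) for p in split_into_patches(image, 2)]
print(labels)  # ['flower', 'green', 'green', 'background']
```

Counting the resulting labels over a whole growing season gives a simple automated measure of how a plant's flowering and green area change over time.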

Diagram composed of 3×3 boxes showing AI software analysing plant images
MapReader software enables growth tracking by analysing images of plants (top row). It breaks up the images (middle row) to identify the patches (bottom row) that belong to flowers and green parts (image: Evangeline Corcoran / Williams et al., 2023)

Satellite spy cams

Satellite imagery is increasingly used to remotely observe changes in our environment. Much like in the BBC series Spy in the Wild, where scientists use spy cams to observe the animal world, we use satellite data to spy on the changes happening across our planet. Scivision incorporates AI models that can be used with large-scale satellite data. For example, the coastal vegetation detector ‘VEdge_Detector’, developed by researchers at the University of Cambridge and Birkbeck, University of London, uses a neural network approach to help track changes in coastal regions over time. It is currently being used to measure the erosion of sandy beaches in Australia. Researchers can download the pre-trained model and explore the sample dataset of satellite imagery from two different coastal locations to determine its suitability for their work. As the code is open source, they can make small changes to it to make it more suitable for their own datasets.
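To make the underlying task concrete, here is a hand-rolled sketch, not VEdge_Detector itself (which uses a trained neural network on raw imagery). It finds, along each row of a toy satellite scene, where bare sand gives way to vegetation by thresholding NDVI, a standard vegetation index computed from the red and near-infrared bands; all reflectance values and the threshold are invented for the example.

```python
# Sketch: locate the seaward vegetation edge per image row via NDVI thresholding.

def ndvi(red, nir):
    """Normalised difference vegetation index for one pixel:
    (NIR - red) / (NIR + red), high for healthy vegetation."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def vegetation_edge(red_band, nir_band, threshold=0.3):
    """For each row (a shore-normal transect), return the column index of
    the first vegetated pixel, or None if the row is entirely bare."""
    edges = []
    for red_row, nir_row in zip(red_band, nir_band):
        edge = next((i for i, (r, n) in enumerate(zip(red_row, nir_row))
                     if ndvi(r, n) > threshold), None)
        edges.append(edge)
    return edges

# Toy 2-row, 4-pixel-wide scene: per-pixel reflectance in red and NIR bands.
red = [[0.30, 0.28, 0.08, 0.07],
       [0.31, 0.09, 0.08, 0.07]]
nir = [[0.32, 0.30, 0.40, 0.45],
       [0.33, 0.38, 0.42, 0.44]]
print(vegetation_edge(red, nir))  # [2, 1]
```

Tracking how these edge positions shift between satellite passes is, in essence, how coastal erosion is measured from space; the learned model exists because real imagery is far noisier than a fixed threshold can handle.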

Diagram showing the detection of coastal vegetation in satellite imagery
The VEdge_Detector tool can automatically detect the position of coastal vegetation based on satellite images (image: Rogers et al., 2021)

Marine learning

In the field of deep-sea biodiversity monitoring, Scivision has helped researchers to build tools for solving problems like real-time plankton classification and remote monitoring of species that live on the ocean floor. In one case, data scientists working with the Centre for Environment, Fisheries and Aquaculture Science (Cefas) used it to help them develop a model for automatically classifying plankton species based on their shape, size and features. The model is now used with a high-speed camera aboard a research vessel in the North Sea and can be adapted to classify images of other marine objects and species. In another Cefas collaboration, researchers used Scivision to develop a prototype model capable of detecting and classifying marine creatures called ‘sea pens’ from video footage of the ocean floor. The resulting model was 95% accurate compared to manual identification.
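The idea of classifying plankton from morphological features such as size and shape can be illustrated with a simple sketch. This is not the Cefas model (which is a trained classifier); the taxa, thresholds and measurements below are all invented to show how raw measurements become features and features become a class label.

```python
# Toy morphological plankton classifier: measurements -> features -> label.

def features(area_um2, length_um, width_um):
    """Derive simple shape descriptors from raw measurements (microns)."""
    return {"area": area_um2, "elongation": length_um / width_um}

def classify(f):
    """Hand-set rules standing in for a trained model."""
    if f["elongation"] > 3.0:
        return "chain-forming diatom"   # long, thin chains
    if f["area"] > 5000:
        return "large dinoflagellate"
    return "small flagellate"

specimens = [features(1200, 40, 30),    # small, roundish
             features(8000, 120, 100),  # large, roundish
             features(3000, 200, 20)]   # long and thin
labels = [classify(f) for f in specimens]
print(labels)
# ['small flagellate', 'large dinoflagellate', 'chain-forming diatom']
```

A production system replaces the hand-set rules with a model learned from thousands of labelled images, which is what lets it keep up with a high-speed camera at sea.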

Photographs of marine creatures with superimposed labels identifying their species
Marine species identified by a computer vision model (YOLOv5) that was trained by Turing researchers on images from FathomNet, an open-source database for marine images (image: Meghna Asthana)


Head to the Scivision website to browse the catalogue, read the user guide and sign up for the newsletter and community call.


Top image: auris