Understanding multimodal data streams – complex sequences of data from different sources – is a key challenge of data science. By developing mathematical descriptions of these streams, using ‘rough path’ theory, it’s possible to gain insights into the data that can be used to generate meaningful decisions and actions. Applications of these models range from recognising Chinese handwriting on a smartphone, to classifying physical human actions and assisting mental health diagnosis.
Explaining the science: Data streams and rough path theory
When data arrives as a stream, it rarely comes all at once or from a single scalar source; it arrives in multiple modes. Multimodal data streams are found in a huge range of situations and on all scales, and successfully summarising these streams is key to understanding them, and many facets of the world around us. The order of the different signals in these streams carries key information: for example, the order in which glucose and insulin levels rise and fall in someone’s blood.
The key question to ask about a data stream is how to summarise it over short intervals, to create actionable information that can predict the stream’s effects on, and interactions with, other systems. For example, summarising a web click history well enough to be able to discuss and evaluate a range of strategies for effective advert placement, in systematic and automatic ways.
Many analysis techniques aren’t well equipped to deal with multimodal data: they often treat each mode independently, and they deal poorly with randomness in the data. This is where rough path theory, a highly abstract and universal description of complex multimodal data streams, is incredibly useful. It allows us to directly capture the order in which events happen and to better model the effects of these data streams, without needing to do high-dimensional recovery of the individual data points.
This is done by generating what’s called the ‘signature’ of the data stream: a set of step-by-step descriptions (or iterated integrals). Over short intervals of time, the elements of the signature form an ideal ‘feature set’ of inputs that can then be used to enhance conventional machine learning algorithms. The signature can dramatically reduce the size of certain learning problems, and therefore the amount of data needed to train the related algorithms.
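To make the idea concrete, here is a minimal sketch of the first two levels of a signature, assuming the stream is interpolated piecewise linearly between samples (the function name and the toy streams are illustrative, not from the projects described here). The level-1 terms are just the net increments; the level-2 iterated integrals, accumulated segment by segment via Chen’s identity, are what record the order in which the coordinates move:

```python
import numpy as np

def signature_level2(path):
    """Levels 1 and 2 of the signature of a piecewise-linear path.

    path: array of shape (N, d) -- N samples of a d-dimensional stream.
    Returns (S1, S2): the level-1 increments, shape (d,), and the
    level-2 iterated integrals, shape (d, d), accumulated one linear
    segment at a time using Chen's identity for concatenation.
    """
    path = np.asarray(path, dtype=float)
    d = path.shape[1]
    S1, S2 = np.zeros(d), np.zeros((d, d))
    for delta in np.diff(path, axis=0):
        # Chen's relation: signature of (path so far) * (linear segment)
        S2 += np.outer(S1, delta) + 0.5 * np.outer(delta, delta)
        S1 += delta
    return S1, S2

# Two 2-d streams visiting the same values in a different order:
# in A, coordinate 0 rises before coordinate 1; in B, the reverse.
A = [[0, 0], [1, 0], [1, 1]]
B = [[0, 0], [0, 1], [1, 1]]
S1_A, S2_A = signature_level2(A)
S1_B, S2_B = signature_level2(B)
# Level 1 (net increments) is identical for A and B, but level 2
# differs: S2[0,1] - S2[1,0] is twice the signed ("Levy") area,
# which flips sign when the order of the two moves is swapped.
```

The two streams end at the same point, so any summary built only from net changes cannot tell them apart; the level-2 signature terms can, which is exactly the order information discussed above.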
A successful application of combining the signature with deep learning is gPen, a smartphone app developed by SCUT (South China University of Technology), and initiated by Ben Graham, formerly of the University of Warwick and now at Facebook, which converts finger-drawn Chinese handwriting into on-screen printed Chinese text. The data stream here is three-dimensional: the x-y position of the pen, plus the pen-up/pen-down action. The app has recognised billions of characters, and its trained recognition model delivers a 96% accurate result on an iPhone in milliseconds, across potentially thousands of candidate characters, using less than 30MB for data and program, all without needing an internet connection.
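One reason signatures suit this setting is that they turn a trajectory of any length into a fixed-length feature vector, which can then feed a conventional classifier. The sketch below (hypothetical code, not gPen’s actual pipeline) shows a pen stream of (x, y, pen-state) samples reduced to the 12 level-1 and level-2 signature terms, again assuming piecewise-linear interpolation:

```python
import numpy as np

def pen_stream_features(stream):
    """Fixed-length signature features for a pen trajectory.

    stream: (N, 3) array of (x, y, pen_state) samples, where
    pen_state is 1 when the finger touches the screen and 0 when
    lifted. Returns the level-1 and level-2 signature terms
    flattened into a vector of length 3 + 3*3 = 12 -- the same
    length whether the character took 4 samples or 4,000.
    """
    stream = np.asarray(stream, dtype=float)
    d = stream.shape[1]
    S1, S2 = np.zeros(d), np.zeros((d, d))
    for delta in np.diff(stream, axis=0):
        # Accumulate iterated integrals segment by segment (Chen's identity)
        S2 += np.outer(S1, delta) + 0.5 * np.outer(delta, delta)
        S1 += delta
    return np.concatenate([S1, S2.ravel()])

# A toy two-stroke trace: draw, lift the pen, draw again.
stroke = [[0, 0, 1], [1, 0, 1], [1, 1, 0], [2, 1, 1]]
features = pen_stream_features(stroke)  # shape (12,)
```

Because the feature vector has a fixed, small size, the downstream model stays compact, which is consistent with running recognition on-device within a small memory footprint.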
The projects led by Turing Fellows Terry Lyons and Hao Ni are looking at further developing fundamental, signature-based mathematical tools and introducing them to contexts where it is possible to achieve significant outcomes.
One part of the work is the development of useful open source software tools that can be used in various machine learning environments. Another, larger part is the interaction with complex, real-world data, to be able to tackle questions where there is a variety of different data to consume. Themes include mental health, action detection, human/computer interfaces, and astronomy.
These projects aim to build bridges between high quality, fundamental mathematics and data science applications, and bring these new mathematical technologies into wider use. The Turing will act as ‘an intellectual nursery’, as well as a valuable conduit to transfer ideas between mathematics and critical business activity.
Mental health: Using speech and self-reported mood data from a prior clinical trial, a tool has been developed which looks in an automated way at self-reported data from individuals. When trained on the data provided across the participants in the trial, the tool captures the order in which individuals become more or less angry, anxious, elated, irritable, and sad.
The tool has already shown that, over a relatively short window of measurement, it’s possible to meaningfully position an individual on the bipolar spectrum. The sensitivity of this methodology offers the potential for its use in providing feedback for clinicians and clients.
Action Detection: Using signatures and computer vision to classify physical human actions from real-time data is an important challenge, with a range of potentially interested parties. In particular, through the Turing, the research group have linked up with the Health and Safety Executive (HSE). The annual cost of work-related injuries in the UK is estimated at £4.6bn, and these injuries are often due to poor practice and training in manual handling. As the HSE currently does a lot of manual, frame-by-frame video analysis at substantial cost, real-time analysis would be of great value.
Human Computer Interfaces: This is related to the Chinese handwriting example detailed above. Further advances in this area include detecting who wrote a given piece of handwriting, as well as improved accelerometer analysis. As much of the computation of signatures must be done on simple, lightweight devices, this is of interest to the research group’s industrial partner ARM. Through this partnership, and with support from the Turing, the group aims to engage with other potentially interested parties in this area, such as Apple, Fitbit, and Google.
Astronomy: Radio telescopes are used to receive and study huge, complex streams of astronomical radio wave data. Non-linearity is a crucial aspect of the way these telescopes collect and analyse their data. Rough path models have the potential to help the development of measurement instruments and processing techniques used in new telescopes such as the SKA in Cambridge. Our aim, in collaboration with SKA, is to improve detection sensitivity and make new observations of fast transients.