It is becoming easier and easier to collect large amounts of data across a broad range of research areas, and there is a growing need to understand how this can best be exploited to make new discoveries. The use of modern computational methods has already revolutionised research in physics and biology, and the stage is set for this approach to become a standard methodology in many different fields. However, it is not sufficient simply to collect data and hope that generic algorithms will be able to help: the crucial step is incorporating the deep knowledge that already exists about a system into the computational methods used.
For example, in particle physics, the search for the Higgs boson was guided by the well-developed theory of the Standard Model. In contrast, the search for particles of 'dark matter' has very little theory to guide it. In linguistics, there has been much progress in developing statistical models of language use, but it is not clear how to combine these with what is understood theoretically about how humans read, write and speak. In other cases, a computational model may be needed to make sense of data at all: for example, characterising a material's properties from high-resolution images requires an atomic-level model of that material.
The main aim of the programme is to work with researchers from all disciplines across the Turing's university partner network, and with national research facilities, to make effective use of state-of-the-art methods in artificial intelligence and data science.
The social sciences and arts present particularly interesting challenges within the programme. Our understanding in these fields is often qualitative, and reconciling it with what data sets are telling us can be difficult. There is also a considerable need to provide relevant training in these new methodologies, in a way that is accessible and meaningful to researchers from non-numerate disciplines.
It is envisaged that, within a very few years, research across universities and national facilities will increasingly be based on the computational data science and AI methods being developed at The Alan Turing Institute and its partners. A key challenge for the programme is therefore to ensure that its research remains at the vanguard of this movement nationally.
In April 2017, the Royal Society published the results of a major policy study on machine learning. The report considered the potential of machine learning over the next 5–10 years, and the actions required to build an environment of careful stewardship that can help realise that potential.
Its publication set the direction for a wider programme of Royal Society policy and public engagement on artificial intelligence (AI), which seeks to create the conditions in which the benefits of these technologies can be brought into being safely and rapidly.
As part of this programme, in February 2019 the Society convened a workshop on the application of AI in science. By processing the large amounts of data now being generated in fields such as the life sciences, particle physics, astronomy and the social sciences, machine learning could be a key enabler for a range of scientific disciplines, pushing forward the boundaries of science.
Read the joint discussion paper by the Royal Society and The Alan Turing Institute, which summarises discussions at the workshop. It is not intended as a verbatim record and its contents do not necessarily represent the views of all participants at the event, or Fellows of the Royal Society or The Alan Turing Institute.
For more information about the programme, please contact [email protected]