Annual Report 2020–21

Section 1 An exceptional year
Section 2 Trustees’ and strategic report 65
Section 3 Financial statements 89

Section 1 An exceptional year
1.1 Chair’s report 4
1.2 Institute Director’s report 6
1.3 Equality, diversity and inclusion 9
1.4 Partnerships and collaborations 12
1.5 Research highlights of the year 19
1.6 The Turing’s response to COVID-19 36
1.7 The year in numbers 43
1.8 Engagement, outreach and skills 48

Section 1.1 Chair’s report

The work of our data science and AI community, alongside our partners in industry, the third sector and government, has once again been evident across many domains. This sense of collaboration was demonstrated at our highly successful first national showcase, AI UK. You will see in this year’s annual report examples of how the Institute is collaborating by predicting sea ice loss, mapping the UK’s solar panels, and even developing underground farms. The Alan Turing Institute has also been proud to play its part in the response to the devastating COVID-19 pandemic.

The Institute continues to drive the kind of research and innovation that will be vital to the future of our environment and our economy. This includes areas such as digital twins, which are providing crucial insights by bridging the gap between data and actionable intelligence. I was delighted to see the launch of the Institute’s first start-up, Quaisr, with its own approach to digital twin technology.

This has been an especially testing time for the higher education sector, and I particularly wish to thank our network of universities for their support. This year we have seen two universities running Data Study Groups in collaboration with the Institute. The launch of a new research showcase series that engages partners from across our network, and the rapid growth of our interest groups, are both powerful examples of the Institute’s ability to convene and connect some of the brightest and best minds.

The Institute is uniquely placed to leverage its strategic position and convening power, and to help ensure the UK delivers on its commitment to scale up AI research, development and innovation. As the national institute, we relish every opportunity to respond dynamically to the rapidly changing world around us.

I would like to thank Adrian Smith and the Board of Trustees for their invaluable direction. I am also grateful to all of our partners and our Turing colleagues for their support and hard work this year.

Howard Covington
Chair of the Board of Trustees

“The Institute continues to drive the kind of research and innovation that will be vital to the future of our environment and our economy.”

Section 1.2 Institute Director’s report

In this exceptional year we have demonstrated the scientific value of convening and collaborating. Our annual report for 2020/2021 highlights how we have been able to harness regional, national and international partnerships to help tackle issues from COVID-19 to climate change. The terrible effects of the pandemic on public health and human life have sadly continued, but in response the pace of new scientific insights has been incredible. Our researchers have made important improvements to the accuracy of the NHS app, developed algorithms to ensure social distancing in London’s streets, and combined NHS datasets to help answer clinical questions about the effects of COVID-19.
Working alongside government and regulators on issues from misinformation to explaining AI decisions, our researchers continue to make a vital contribution to how data science and AI can improve policy-making. It has never been more apparent that our Institute has a critical role in pushing the boundaries of science for public good. Climate change is one of the biggest challenges of our time and I am proud that our researchers are using the power of data science and artificial intelligence to better understand and respond to the threat it poses to the planet and our way of life.

We are of course part of a complex and thriving AI ecosystem. This was evident at our first-ever national showcase, AI UK, where the AI community explored issues such as ethics, diversity, shocks and resilience, and AI skills. Earlier this year, the government announced its intention to publish a national AI strategy. This followed the release of the AI Roadmap, which, alongside the AI Council’s survey and further stakeholder engagement, will help inform the Office for Artificial Intelligence’s national AI strategy later this year. This will give all those operating in the nation’s AI ecosystem a strategic pathway to future success despite the uncertainty of our times.

This year we saw a positive mid-term review by the Engineering and Physical Sciences Research Council (EPSRC) of our AI for science and government (ASG) research programme, led by Alan Wilson, Director of Special Projects. ASG was established in 2018 with a grant from the UKRI Strategic Priorities Fund, which Alan was instrumental in securing. Alan stood down from ASG in March 2021 and I would like to thank him for his key role in the evolution of the programme and his continued contribution to the Institute since its inception. Read more about the impact of our ASG programme in our research highlights (pages 19-42).

Despite the many challenges, I am pleased to lead the Institute with the peerless support of the Board and my colleagues. I would like to thank our community for its endeavours and resolve during this exceptional year.

Adrian Smith
Institute Director and Chief Executive

“The terrible effects of the pandemic on public health and human life have sadly continued, but in response the pace of new scientific insights has been incredible.”

Strategic partners

The Institute’s unique position at the interface of academia, business, the third sector and policy distinguishes it from universities and other research organisations, and our dynamic relationships with our strategic partners create a wealth of collaborative opportunities that drive our innovation and impact. As well as delivering ambitious programmes of research, these partnerships seek to build meaningful connections between academic excellence and real-world challenges in business and government.

Section 1.3 Equality, diversity and inclusion

Promoting and embedding equality, diversity and inclusion (EDI) in our function as an employer, research institute and national body is integral to achieving our mission to advance data science and AI research for everyone’s benefit. The Alan Turing Institute is committed to making measurable progress, and this year has seen us taking some important steps in delivering on this commitment.
We recognise that our work in this area is just beginning, and we look forward to working with our wider community to effect real change. Our EDI strategy In mid-2020, the Turing commissioned an independent EDI audit in order to provide evidence-based analysis and recommendations for embedding EDI into every aspect of the Institute. The audit confirmed that a strategic approach was needed, and, guided by this audit and a robust internal consultation process, an EDI strategy and action plan will formally launch later in 2021, directing our strategic work and setting ambitious delivery targets. The Turing Management Team, along with the EDI Advisory Group, will be responsible for delivery of the action plan. The EDI working groups (soon to be renamed ‘network groups’) will act as ‘critical friends’ to the Turing alongside a formal governance structure. The Turing recently recruited an EDI Strategic Lead to spearhead this work, supported by an EDI Officer. We also expanded our Board of Trustees, recruiting four new Independent Trustees with a diverse range of backgrounds that span technology, government, academia and finance. The Turing’s EDI working groups Much of the progress in our EDI agenda is achieved by our thriving EDI working groups, made up of passionate members of the Turing community. Here are some highlights from the past year: The attracting diversity, developing talent and public engagement working group led our activity for National Inclusion Week 2020, developing connections between staff and the wider research community. The health and wellbeing working group has been providing additional community support during the pandemic, including access to a meditation app and supporting the ongoing development of a reasonable adjustments policy to help all the community fully participate in the workplace. The gender and LGBTQ+ equality working group has connected with Bletchley Park to explore our shared connection with Alan Turing, and celebrated and amplified our research outputs from initiatives including the ‘Women in data science and AI’ project (see page 50) and the humanities and data science interest group. The group also worked with colleagues in the People team to extend the Turing’s maternity support leave to four weeks. The race and social economic equality working group led challenging and vitally important conversations following the murder of George Floyd in the US, including three sessions that empowered the Turing community to discuss racial equality in the workplace and wider society. Section 1.4 Partnerships and collaborations A new strategic partnership This year, Accenture and The Alan Turing Institute established an exciting new, five-year strategic partnership that aims to bring together industry and academia to deliver real value from cutting-edge AI and data science research to businesses and society in the UK and beyond. Since 2017, Accenture and the Turing have conducted joint research activity, including the exploration of new techniques for network analytics for use in fraud detection in sectors such as telecommunications. Accenture has also run Data Study Groups. Now, the partnership is building on this existing three-year alliance through joint research being undertaken across many areas, including: – Creating tools for the generation and management of synthetic data as well as tools in other privacy-enhancing technologies (PETs) for safe and secure data sharing. 
– Developing interoperable software tools focusing on advanced modelling techniques to create complex digital twins. – Applications of AI to help quantify uncertainty and design systems that are more resilient to shocks, more sustainable, and deliver social good. In addition to cutting-edge research, the partnership will also deliver engagement activities to help transform executives’ understanding of how AI and data science can be used as a tool for business, and will provide internship opportunities for students from across the Turing network with Accenture and its clients. “We have some of the leading minds in the field of AI here in the UK and across Europe,” said George Marcotte, Applied Intelligence Lead for Accenture in Europe. “But without scaling the technology in businesses we’ll never realise its full technical, societal and economic potential. We’re excited to continue our collaboration with The Alan Turing Institute as a strategic partner, building skills and fuelling growth in an area that is critical for the future of the economy.” Crick data partnership The Turing has agreed a partnership with biomedical research institute (and London Knowledge Quarter neighbour) The Francis Crick Institute to collaborate on projects that will use data science to unlock new understandings about health and disease. To mark its launch, the partnership is funding four 12-month projects in which early career researchers on secondment to the Turing work with biomedical data generated by Crick scientists. These include a project to analyse data from studies involving mice with lung cancer to explore how interactions between cells impact cancer immunotherapy treatments, and a project to develop an open-access software toolbox to make it easier to analyse data about populations of neurons. Other projects will be launched as the partnership continues over its initial three-year term, with the hope that it will continue beyond this. Norwich BioScience Institutes With biological research becoming increasingly data-rich, a timely new collaboration between the Turing and the Norwich BioScience Institutes (NBI) will identify new ways to exploit this wealth of information using machine learning and AI. The Turing will work with the NBI – which includes the Earlham Institute, John Innes Centre, Quadram Institute and Sainsbury Laboratory – in a £600,000 project to advance life science research. Half of the funding will come from the Turing’s AI for science and government (ASG) programme, funded by the Strategic Priorities Fund award, and half from a strategic award from the Biotechnology and Biological Sciences Research Council (UKRI-BBSRC). The funding will support up to six researchers in year-long posts who will work together in a cross-institute cohort to expand the application of machine learning and AI to several key areas, which may include: – Identifying cell types using a deep neural network. – Characterising the circadian rhythms of plants. – Understanding how genetic changes affect plant structure, influencing crop yield. – Finding the ‘weapons’ used by plant pathogens to invade plants. The Turing-RSS Lab In September 2020, the Turing began a partnership with the Royal Statistical Society (RSS) to provide expertise to the UK government’s Joint Biosecurity Centre (JBC). 
The JBC is part of NHS Test and Trace within the Department of Health and Social Care, and the Turing-RSS Lab feeds into JBC’s work by developing statistical and machine learning techniques to help decision makers respond to the spread of COVID-19. For more information, see page 40. Met Office alliance An ‘operational alliance’ was this year signed by the Turing and the Met Office – the UK’s national weather service – formalising ambitions to co-design a dynamic programme of research at the interface of data science, AI, climate science and meteorology. This collaboration demonstrates the Turing’s continued commitment to tackling environmental issues. Researchers at the Turing and Met Office will draw on their combined expertise to deliver advances in areas including climate change and biodiversity loss; high-impact weather events; predictive data analytics for air quality; and physics-driven machine learning in weather forecasts. Kick-starting the collaboration are two projects, one blending satellite and surface data for environmental monitoring and the other investigating the impact of climate change on agriculture. This work is funded via the Turing’s ASG programme, which aims to deploy AI and data science in priority areas to support the UK economy and society. Researchers at the Turing and Met Office aim to deliver advances in areas including climate change and biodiversity loss, air quality, and weather forecasts. AMRC agreement An agreement between the Turing and the University of Sheffield Advanced Manufacturing Research Centre (AMRC) will help to boost research at the intersection of AI and manufacturing. This ‘memorandum of understanding’ (MoU) will allow researchers from the AMRC and the Turing’s data-centric engineering programme to collectively address key challenges facing the manufacturing sector, such as optimising automation, integrating intelligent software and increasing production capacity. The AMRC is a network of world-leading research and innovation centres working with manufacturing companies of all sizes from around the globe. With machine learning and AI playing an increasingly important role in manufacturing, this MoU is an important step for the Turing as we seek to consolidate our research strategy in this area. Cervest collaboration The Turing has established a new collaboration between its data-centric engineering programme and UK earth science AI company Cervest to accelerate research and development aimed at helping communities and organisations to quickly understand and measure the effects of climatic events. This collaboration will build on the Turing’s multiresolution machine learning algorithms and Cervest’s data engineering platform and machine learning algorithms, initially applied to crop yield forecasting and remote sensing data. This process – called multiresolution multitask learning models – is designed to create high-resolution images of land areas by integrating information and evidence from multiple resolutions across space and over time. By integrating data from sensor networks, organisations and communities globally will be empowered to understand and predict what is happening to the world around them, helping them to make decisions relating to, for instance, land management, asset planning and optimising investments for adaptation. NHS Scotland The Turing’s partnership with NHS Scotland continues to gather steam as we work towards the deployment of a new computer model for predicting patients’ risk of emergency hospital admission. 
Version four of the SPARRA (Scottish Patients At Risk of Readmission and Admission) tool will improve on the accuracy of its predecessor, helping doctors to better identify patients at risk of health breakdown, and allowing them to intervene early by increasing appointments, adjusting medication or making targeted referrals, for instance. This will improve patient care while also easing pressure on the Scottish healthcare system. The team, part of the Turing’s health and medical sciences programme, hopes to deploy SPARRA v4 in summer 2021. New cross-theme projects The Turing has launched three new large-scale, two-year research projects under its ASG programme. The projects all aim to make societal systems more resilient in the face of future challenges. ‘Shocks and resilience’ is using insights from the COVID-19 crisis to develop data, methodologies and tools to enable policy makers to make better informed decisions about complex socio-economic systems, boosting the resilience of local and national governments against future shocks. ‘Ecosystems of digital twins’ is developing systems of interconnected digital twins – computer models that simulate real-world objects, from buildings to vehicles to entire cities. These ecosystems will help in the modelling and optimisation of myriad aspects of society, from health and commerce to economics and urban infrastructure. ‘Environment and sustainability’ is using data science and AI to tackle environmental and climate challenges, including developing a national crop modelling framework that integrates data from a variety of sources in order to boost understanding of the vulnerabilities of agriculture to climate change. International engagement The Turing has working relationships with organisations and institutes across the globe, including Australia, Switzerland, India, Canada and Paraguay. Two key highlights this year were the signing of agreements with organisations in Singapore and the US. Our MoU with DSO National Laboratories, Singapore’s national defence research agency, seeks to drive advances in three critical areas: misinformation, terrorism and humanitarian aid. The agreement aims to bring together researchers from the DSO and the Turing’s defence and security programme to work on techniques for analysing complex datasets, which could help with detecting false or extremist content in social media posts, for example, or finding locations in satellite imagery where disaster relief is urgently needed. Second, our MoU with the world-leading Oden Institute for Computational Engineering and Sciences at the University of Texas at Austin – an agreement led by the Turing’s data-centric engineering programme – will combine the engineering expertise of these two institutes through joint research projects, seminars, workshops and symposia, with the aim of addressing pressing challenges including urban air pollution and the climate crisis. We also continue to strengthen our ties with researchers in Japan. The PATH-AI project, a collaboration between the Turing, the University of Edinburgh and the RIKEN institute in Japan, has now completed its first year exploring how privacy, agency and trust (and their relationship to AI) differ across cultures, with a focus on the UK and Japan. And in November 2020, the Turing and the Toyota Mobility Foundation were winners of the award for best UK-Japan partnership at the British Business Awards, organised by the British Chamber of Commerce in Japan. 
This was for the ‘Optimising flow within mobility systems with AI’ project, which developed an AI-based visualisation toolkit to help traffic managers reduce city congestion. As the UK recovers from the COVID-19 pandemic, the Turing is looking to ramp up its international activities, and our International Strategy Working Group is developing an institute-wide strategy for maximising the scientific and societal impacts of our global collaborations.

Section 1.5 Research highlights of the year
Understanding the risks of climate change to human and national security

Climate change has been identified as one of the most serious threats to national security around the world, as it puts pressure on populations, economies, livelihoods and natural resources. The Turing’s defence and security programme is undertaking a project exploring the implications and risks of climate change for human and national security. The report, ‘Climate aware and resilient national security: challenges for the 21st century’, details initial research in this area. Written by a cross-disciplinary panel of experts, it analyses the interactions between societal, economic and environmental factors that affect instability and conflict resulting from climate change. The report sets out the risks that climate change poses to the UK’s security, drawing on evidence from recent events as well as future expectations. The research illustrates opportunities for the UK to develop more sophisticated and useful data-driven forecasting and strategic insight tools and methods. These could help identify and measure climate risks, provide early warnings of climate security-related tipping points and help policy makers carry out climate security risk assessments.

Report author Eirini Malliaraki said: “Climate change is one of the most serious threats to security globally. Being able to pinpoint and measure climate security risks is essential to help identify where resilience needs to be built in or safeguarded.”

“This research demonstrates the modelling and data requirements needed to show future potential climate-security tipping points.”
Eirini Malliaraki
‘Climate aware and resilient national security: challenges for the 21st century’ report author

Section 1.5 Research highlights of the year
Forecasting Arctic sea ice coverage

“Understanding why our IceNet system outperforms physics-based climate models will provide new insights for the climate research community, helping us to better simulate and predict our planet’s future.”
Scott Hosking
Senior Research Fellow at The Alan Turing Institute and co-leader of the British Antarctic Survey AI Lab

The Arctic is warming twice as fast as the rest of the world, resulting in a devastating decline in summer sea ice coverage to around half of what it was four decades ago. This puts the future of local ecosystems and indigenous communities in doubt, and has knock-on effects for the entire climate system. When ice melts, for instance, the reflectivity (‘albedo’) of the region decreases, meaning that less sunlight is reflected back to space, and the planet warms up even faster. In order to identify where Arctic sea ice will disappear next, and where conservation efforts are most needed, researchers make forecasts of sea ice coverage. However, the interplay between ice, atmosphere and ocean in this region is so complex that physics-based models struggle to make accurate forecasts beyond a few weeks in advance.
Now, a collaboration between the Turing and the Artificial Intelligence Lab at the British Antarctic Survey has developed an alternative, data-driven deep learning system that takes sea ice forecasting to the next level. Their IceNet system, trained on over 30 years of observational climate data plus more than 2,000 years of simulated data from computational climate models, predicts monthly average Arctic sea ice coverage with accuracies of up to 97%. It can forecast sea ice coverage up to six months ahead, and outperforms the leading physics-based model at forecasts of two months and longer. The researchers are now planning to make IceNet available as an open source web tool, so that conservationists have access to an early warning system for sea ice loss. Section 1.5 Research highlights of the year History in the making Now entering its third and final phase, Living with Machines has made staggering progress so far in changing the face of digital humanities, using data-driven approaches to shed new light on the Industrial Revolution. The project, funded by the UK Research and Innovation (UKRI) Strategic Priorities Fund, generates new historical perspectives on the effects of the mechanisation of labour and changes to the lives of ordinary people during the 19th century. It also develops: – Tools and code components embedded into an infrastructure that can be adapted for and inspire future interdisciplinary research projects. – New computational techniques to marshal the UK’s rich historical collections. This year, a series of tutorials on computer vision have been developed, and a Python package will shortly be released. Crowdsourcing projects were launched integrating linguistic research questions with tasks that encouraged volunteers to engage with social and technological history in the pages of 19th century newspapers. The collaboratively developed open-source package Defoe, that queries large digital collections, continues to be embraced by other researchers and institutions. Using image analysis and computer vision, the project team is comparing 9,000 different Ordnance Survey maps over three time periods, to understand locations of industrial buildings and measure how close people lived to services across different regions. Looking ahead The project will deliver high-quality research publications that challenge norms of how research is undertaken and published. Further ahead, the team also intends to develop new methods and make strategic recommendations to the heritage and higher education sectors, funding councils and policy makers, about the infrastructure required for UK researchers to work with the UK’s cultural heritage collections. “Our work engages with multiple communities. We are speaking simultaneously to academics in the fields of history, digital humanities, archival science, the spatial humanities and geographic information systems (GIS), computational linguistics, computer science and data science. This can be observed from the variety of venues in which the team has published conference papers and delivered talks.” Ruth Ahnert Principal Investigator of Living with Machines and Turing Fellow Living with Machines is a partnership between The Alan Turing Institute, the British Library, and the universities of Cambridge, East Anglia, Exeter, and QMUL. 
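To give a flavour of the kind of data-driven analysis such a project involves, here is a small sketch that counts how often a handful of machine-related terms appear per decade in a folder of digitised newspaper articles. It is purely illustrative: the file layout, term list and code are assumptions for this example, not part of the Living with Machines infrastructure or its Defoe package.

```python
# Illustrative sketch only: a simple corpus query of the kind data-driven history projects
# run over digitised newspapers, tracking how often selected machine-related terms appear
# per decade. File layout, terms and data are hypothetical; this is not the Defoe package.
import re
from collections import Counter
from pathlib import Path

TERMS = {"engine", "factory", "machinery", "accident"}

def term_counts_by_decade(corpus_dir: str) -> dict:
    """Count occurrences of TERMS in plain-text articles named like '1843_article123.txt'."""
    counts = {}
    for path in Path(corpus_dir).glob("*.txt"):
        year = int(path.name[:4])                  # assumes filenames start with the year
        decade = f"{year // 10 * 10}s"
        tokens = re.findall(r"[a-z]+", path.read_text(encoding="utf-8").lower())
        counts.setdefault(decade, Counter()).update(t for t in tokens if t in TERMS)
    return counts

if __name__ == "__main__":
    for decade, counter in sorted(term_counts_by_decade("newspapers/").items()):
        print(decade, dict(counter))
```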
Section 1.5 Research highlights of the year
Quaisr: harnessing the power of digital twins

“At Quaisr, our cloud integration service powers the creation of digital twins to accelerate our customers on their journey to digitalisation.”
Omar Matar
CEO, Quaisr

As the digitisation of industrial sectors accelerates, the creation of digital twins of physical assets, processes or services is rapidly increasing. The potential of digital twins has seen the Turing’s data-centric engineering programme and Imperial College London create their first-ever joint spin-out, Quaisr. This year, the digital-twin market has doubled in value, rising to $3 billion, and it is expected to reach $48 billion by 2026.

The power of digital twins lies in their ability to combine data with computational models of physical systems, enabling monitoring, control, automation and performance improvement. They achieve this by ‘learning’ to adapt via feedback from their environment, drawing on both real and simulated digital information. The Quaisr team, which includes Turing Data-Centric Engineering Strategic and Group Leaders and works directly with the programme, enables the creation of digital twins. Quaisr builds simulations and visualisations and seeks to answer ‘what if?’ questions: what if an asset is ‘pushed’ into a new operating space? Would it remain safe, secure and resilient, and if not, what interventions would be necessary? By providing timely insights for design and prototyping, Quaisr is enhancing decision-making by bridging the gap between data and actionable intelligence. The team is looking ahead to tackling diverse challenges for the built environment such as environmental contamination detection, production-line decision automation, and optimisation of offshore wind farm locations.

Section 1.5 Research highlights of the year
Supporting innovation in the fintech sector

The Turing was a key academic partner in the Digital Sandbox Pilot – an initiative from the Financial Conduct Authority (FCA) and the City of London Corporation that ran in winter 2020 and provided 28 financial technology (fintech) firms with tools to test, develop and showcase their technologies. The aim was to develop products and services to tackle financial challenges brought about by the COVID-19 pandemic, to support people who have become financially vulnerable and to prevent fraud and scams.

A crucial aspect of this pilot was synthetic data – artificially generated data that have realistic statistical properties but no identifying information, making them suited for training and validating computer models in areas where privacy is key. The Turing was the lead evaluator of the synthetic data used in the pilot, with researchers in the finance and economics programme and Research Engineering team analysing the data to ensure that they were good quality. The synthetic data made available to participants included details of seven million fictional individuals, 400 million banking transactions made through five fictional banks, and five million devices used for electronic payments.

The pilot ran from November 2020 to February 2021, and the FCA hopes to make the synthetic data more widely available, so that other fintech firms can benefit. The FCA is now planning a second pilot, this time focusing on financial challenges related to sustainability, while also exploring operating and governance models for a permanent version of the digital sandbox.
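To illustrate what synthetic data with “realistic statistical properties but no identifying information” can mean in practice, here is a minimal sketch, assuming a toy ‘real’ table: it fits simple distributions to each column, samples an artificial dataset of the same shape, and compares summary statistics as a basic fidelity check. The column names and figures are invented, and this is not the generation or evaluation approach used in the pilot.

```python
# Illustrative sketch only: a toy way to generate synthetic tabular data by sampling from
# distributions fitted to the real data. The FCA pilot's actual generators and evaluation
# methods were more sophisticated; all names and figures here are made up.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Stand-in for a sensitive "real" dataset (never included in the synthetic release).
real = pd.DataFrame({
    "age": rng.integers(18, 90, size=10_000),
    "monthly_spend": rng.lognormal(mean=6.5, sigma=0.8, size=10_000),
    "num_accounts": rng.poisson(lam=2.1, size=10_000),
})

def synthesise(df: pd.DataFrame, n: int, rng: np.random.Generator) -> pd.DataFrame:
    """Sample synthetic rows column by column from simple fitted distributions.

    Independent marginals lose correlations; real generators model joint structure too.
    """
    out = {}
    out["age"] = rng.integers(df["age"].min(), df["age"].max() + 1, size=n)
    log_spend = np.log(df["monthly_spend"])
    out["monthly_spend"] = rng.lognormal(log_spend.mean(), log_spend.std(), size=n)
    out["num_accounts"] = rng.poisson(df["num_accounts"].mean(), size=n)
    return pd.DataFrame(out)

synthetic = synthesise(real, n=10_000, rng=rng)

# A basic fidelity check an evaluator might start from: compare summary statistics.
print(real.describe().loc[["mean", "std"]])
print(synthetic.describe().loc[["mean", "std"]])
```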
“Our project wouldn’t have been possible without the synthetic data that was evaluated by the Turing’s experts. They helped us to make sure that the data matched up as closely as possible to real-world data.”
Matt Lowe
Senior Technical Specialist at the FCA and technical lead on the Digital Sandbox Pilot

“There is a lot of hype around the use of AI in healthcare. These guidelines will help to cut through that by providing clear standards for AI clinical trials, with the ultimate aim of speeding up the delivery of safe and effective AI innovations to patients.”
Xiao Liu
Project co-leader and junior doctor in ophthalmology, University of Birmingham

Section 1.5 Research highlights of the year
Ensuring quality in AI healthcare technologies

Clinical trials are routinely used in health research to test the safety and efficacy of new treatments and products. A growing number of these medical interventions have an AI component, such as AI-assisted diagnostic tests, smart wearable devices, and personalised apps. Until now, however, there had been no universally agreed set of standards for assessing the quality of trials involving AI, making it difficult for the interventions to be compared with each other.

In September 2020, the first international guidelines for the design and reporting of clinical trials involving AI were published in Nature Medicine, The BMJ and The Lancet Digital Health. The work was funded by the Turing, along with the Wellcome Trust, Research England and Health Data Research UK. The two sets of guidelines, called SPIRIT-AI and CONSORT-AI, provide checklists for researchers to follow to improve the quality and transparency of their trials involving AI. Recommendations include providing clear details of the skills required to operate the AI system, the process for acquiring and selecting the input data, and how the output data will be used, such as whether they will feed into clinical decision-making. The guidelines will help researchers, peer reviewers, funders, journal editors and regulators to ensure that AI technologies in healthcare are supported by the best possible evidence.

Section 1.5 Research highlights of the year
Optimising the world’s first underground farm

Thirty-three metres beneath the busy streets of Clapham in London lies a farm that is producing subterranean salad greens. In a repurposed WW2 air raid shelter, the Growing Underground project uses soilless hydroponic technology and LED lighting to grow crops year-round, producing 12 times as many crops per unit area as conventional UK greenhouses. The farm offers a vision for how food production might be increased for a growing global population without using up valuable land resources.

However, farming crops without sunlight is an energy-intensive process, so researchers at the Turing and the University of Cambridge have developed a digital twin of the farm, to find ways of maximising crop growth while minimising energy use. The model is fed with variables including water use, relative humidity, temperature, and CO2 and light levels, from both manual observations and automatic sensors. The researchers can then use the model to identify the combination of variables that most improves crop growth. The digital twin can also make forecasts, helping growers to make decisions about the day ahead. If the model predicts that the farm is likely to be too cold, for instance, the grower might add a temporary heater or tweak the lighting. In turn, the model provides data on how effective the measures were.
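As a rough illustration of the idea behind such a model, the sketch below fits a simple regressor to invented sensor readings and uses it to compare candidate settings for the day ahead. It is a hypothetical example under stated assumptions, not the Turing and Cambridge digital twin itself.

```python
# Minimal illustration of the idea behind the farm's digital twin: learn a mapping from
# environmental variables to crop growth, then ask "what if?" questions about tomorrow's
# settings. This is NOT the actual Turing/Cambridge model; the data here are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 500

# Hypothetical sensor history: temperature (C), relative humidity (%), CO2 (ppm), light (hours).
X = np.column_stack([
    rng.normal(21, 2, n),      # temperature
    rng.normal(70, 8, n),      # relative humidity
    rng.normal(900, 120, n),   # CO2
    rng.uniform(12, 18, n),    # light hours
])
# Invented growth response (g/day) with noise, just to give the model something to fit.
y = 0.4 * X[:, 0] + 0.05 * X[:, 1] + 0.01 * X[:, 2] + 0.8 * X[:, 3] + rng.normal(0, 1, n)

model = GradientBoostingRegressor().fit(X, y)

# Compare candidate plans for tomorrow: add a temporary heater vs. extend the lighting.
plans = {
    "baseline":      [19.0, 70.0, 900.0, 14.0],
    "extra heating": [22.0, 70.0, 900.0, 14.0],
    "longer lights": [19.0, 70.0, 900.0, 16.0],
}
for name, settings in plans.items():
    print(name, "-> predicted growth:", round(float(model.predict([settings])[0]), 2))
```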
Thanks to this work, the farm has reduced the time it takes to grow crops by as much as 50%, and increased yields by almost 25%. It is a brilliant example of how data-driven models can help green, innovative projects such as Growing Underground to blossom. “The underground farm provided the perfect opportunity to test our digital twin technology in a unique environment. Bespoke, data-rich computer models such as this will be crucial for optimising the farms of the future, to maximise their output in a changing climate.” Ruchi Choudhary Project leader and Data-Centric Engineering Group Leader The Alan Turing Institute “The underground farm provided the perfect opportunity to test our digital twin technology in a unique environment.” “The more time I spend working on dynamic graphs, the more excited I am by their potential. We’re hoping that Raphtory will become the go-to tool for their analysis.” Felix Cuadrado Turing Fellow and computer scientist Queen Mary University of London Section 1.5 Research highlights of the year A purpose-built tool for tracking how networks evolve The world has never been more connected, and every day we generate a tsunami of data about our physical and virtual interactions. Tools are increasingly needed to make sense of this data, and a new, Turing-funded analysis tool called Raphtory is the first to be built specifically for tracking how networks and connections change over time. The team behind Raphtory, led by Richard Clegg and Felix Cuadrado, is already looking into using the software to understand urban transport patterns, spot cryptocurrency fraud and track changes in the meaning of words. In fact, it can be used in any situation where the data can be represented as an evolving network of points connected by lines – what’s known as a ‘dynamic graph’. Raphtory works by splitting the dynamic graph over multiple computers, increasing the amount of memory for data storage and processing, and the software automatically updates the graph as new data come in. The researchers have also started to apply Raphtory to social network data. By analysing changes in interactions between users, the software could ultimately help to identify where communities within a social network are becoming more insular, which might be a sign of their views becoming more extreme. Combining this with analysis of the content being posted could provide an automatic way of flagging up any potentially toxic communities. Read our blog for further info. Section 1.5 Research highlights of the year Anti-bias test implemented by Amazon A test for detecting bias in AI and machine learning systems, developed by researchers at the Turing, has been adopted by Amazon for its cloud computing platform, Amazon Web Services (AWS). The test, called ‘Conditional Demographic Disparity’, was first proposed in a 2020 paper by Sandra Wachter, Brent Mittelstadt (both Turing Fellows at the time) and Chris Russell (former Group Leader in Safe and Ethical AI at the Turing). It is a metric that gives a measure of inequality within a dataset, and so can flag up discrimination in, for example, job recruitment processes, automated loan approval, healthcare access and university admissions. The strength of the test is that it incorporates the standards of fairness used in European courts of law, and it also accounts for underlying factors which might be driving the bias, making it useful for detecting ‘intersectional’ discrimination where multiple factors are at play. 
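For a concrete sense of the metric, the sketch below implements the general form of conditional demographic disparity on invented loan-decision data: within each stratum of an attribute a decision-maker may legitimately condition on, it compares the group’s share of rejections with its share of acceptances, then averages across strata weighted by stratum size. It is an illustrative reading of the published idea, not the authors’ code or the AWS implementation.

```python
# Illustrative implementation of the general idea behind conditional demographic disparity
# (as described in Wachter, Mittelstadt & Russell, 2020): within each stratum of a
# conditioning attribute, take the difference between a group's share of rejections and its
# share of acceptances, then average across strata weighted by stratum size. The data are
# invented and this is a sketch, not the SageMaker Clarify implementation.
import pandas as pd

def conditional_demographic_disparity(df: pd.DataFrame,
                                      group_col: str, group_value,
                                      outcome_col: str,
                                      condition_col: str) -> float:
    """Weighted average over strata of: share of rejections from the group minus share of
    acceptances from the group. Positive values suggest the group is over-represented
    among rejections."""
    total = len(df)
    cdd = 0.0
    for _, stratum in df.groupby(condition_col):
        rejected = stratum[stratum[outcome_col] == 0]
        accepted = stratum[stratum[outcome_col] == 1]
        if len(rejected) == 0 or len(accepted) == 0:
            continue  # disparity undefined in this stratum; skipped in this sketch
        dd = ((rejected[group_col] == group_value).mean()
              - (accepted[group_col] == group_value).mean())
        cdd += (len(stratum) / total) * dd
    return cdd

# Invented loan-decision data: outcome 1 = approved, 0 = rejected, conditioned on an
# attribute a decision-maker may legitimately use (here, an income band).
data = pd.DataFrame({
    "gender":      ["f", "f", "m", "m", "f", "m", "f", "m", "f", "m"],
    "income_band": ["low", "low", "low", "low", "high", "high", "high", "high", "low", "high"],
    "approved":    [0, 1, 1, 1, 1, 1, 0, 1, 0, 0],
})
print(conditional_demographic_disparity(data, "gender", "f", "approved", "income_band"))
```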
Amazon has included the test as part of its SageMaker Clarify service, which provides machine learning developers using AWS with tools to detect and measure biases in their datasets and models, helping them to understand their models’ predictions and pinpoint issues of inequality. It’s a major success for the researchers behind this test, as their work is now in the hands of those who are developing the AI systems of the future. “The paper that proposes this test is a delight to read, and it clearly lays out the legal and ethical foundations of the work. Machine learning researchers using Amazon Web Services are now using this test to help them identify bias in their datasets and model predictions.” Sanjiv Das Amazon Scholar at AWS and Terry Professor of Finance and Data Science at Santa Clara University “Data safe havens are all about increasing trust, so that data providers can feel confident in supplying sensitive data to researchers. Now, we want to build a community around our system that extends far beyond the Turing.” James Robinson Senior Research Software Engineer The Alan Turing Institute Section 1.5 Research highlights of the year Data safe havens spread their wings Researchers frequently require access to sensitive data, such as health and financial records, or information on protected characteristics such as disability and sexual orientation. To prevent unauthorised access to this data, they have to be hosted in secure computing environments. Since 2018, the Turing’s Research Engineering team has been pioneering the development of cloud-based ‘data safe havens’ – remotely-hosted environments that allow researchers to safely analyse their data, while taking advantage of the scale and flexibility of cloud computing. Alongside the technical aspects of creating the digital infrastructure for these safe havens, the team has developed a policy framework so that data providers and users can easily specify the security requirements of their data. When a safe haven is created for a new project, the researchers select one of five pre-defined security levels, and the software automatically creates a safe haven which is tailor-made for that level. In this way, the system is scalable for a broad range of projects. The safe havens are routinely used in the Turing’s Data Study Groups, and also support several Turing projects involving sensitive data. The team is now talking to around 10 organisations and universities who are evaluating the system for their own potential use. In May 2020, The Health Foundation – a charity committed to bringing about better health and healthcare for people in the UK – made modifications to the system to develop its thinking around the security of cloud-based platforms for processing health data, and how such platforms could be securely accessed. The Turing team has since incorporated many of these improvements into its core system, and is ultimately aiming to make its code completely open so that organisations can create their own data safe havens from scratch. Section 1.5 Research highlights of the year Related programmes and teams Mapping the UK’s solar panels Where are all the solar photovoltaic panels in the UK? The answer is that no one exactly knows – there are no comprehensive records. This is a problem because, in order to predict how much solar energy will be generated and fed into the national grid, electricity providers need to know precisely where the solar panels are, as well as the cloud cover at that location. 
Without a good short-term solar energy forecast, fossil-fuelled generators have to be kept running in the background, wasting resources. While there is detailed UK government data on the location of solar farms, there is less information on smaller, domestic installations, which account for about one-third of the country’s solar capacity.

Now, researchers led by Turing Fellow Dan Stowell, in collaboration with National Grid ESO, are using the power of the crowd to fill in the gaps. The team asked members of OpenStreetMap – a Wikipedia-style, editable map of the world – to look for solar panels in their local area, or in aerial imagery, and add them to the map. So far, around 350 volunteers have pinpointed over 300,000 solar installations across the UK, and the researchers have combined this information with other data to create an open, geographic dataset that covers an estimated 85% of the country’s capacity. Now, the team is developing an app that will help it to reach more volunteers, and it also hopes to run a machine learning contest to develop algorithms that can automatically detect solar panels in imagery. With around one million solar installations in the UK, there are still plenty to find!

“To reduce our reliance on fossil fuels, we need high-resolution forecasts of solar energy generation. This crowdsourced map of solar panels will help to achieve that.”
Dan Stowell
Project leader and Turing Fellow, The Alan Turing Institute

Section 1.5 Research highlights of the year
Streamlining jet engine design and manufacture

To analyse the performance of jet engines, aerospace engineers use computer models to simulate the engine’s components and the intricate, super-heated airflow through them. But the sheer complexity of these models means that they can take days or even weeks to run, slowing down the speed with which engineers can test new designs. Researchers led by Andrew Duncan and Pranay Seshadri in the Turing’s data-centric engineering programme have been working with Rolls-Royce to use statistical methods from data science to streamline these models.

A key achievement has been the development of algorithms that rapidly home in on the model variables that are most important to the problem. For instance, if the engineers want to make the engine’s fan blades more efficient, this new technique will tell them which of the blades’ 300+ design variables to focus on. The overall result is that engineers can reduce the number of variables in their models, so that the models run more quickly, speeding up the development of more efficient engines. These use less fuel, resulting in a lower carbon footprint, and Rolls-Royce is now using this technique in the design of its future jet engines. The principles of this work can also be applied to the engine manufacturing process, providing a potential way to cut waste and costs. And looking ahead, the researchers say that their tools could speed up component design in more radical flight concepts such as zero-emission planes.

“This work has the potential to change the way we design and manage our manufacturing processes.”
Shahrokh Shahpar
Fellow in Aerothermal Design Systems, Rolls-Royce

The Turing redesigns the Finite Element Method

Another big story in the data-centric engineering programme this year was the radical redesign of a well-known mathematical method by researchers led by Mark Girolami, the Turing’s Programme Director for Data-Centric Engineering.
The Finite Element Method (FEM), which provides numerical solutions to mathematical equations of complex systems, has been routinely used in engineering and the physical sciences for more than 70 years. The new version, described in a paper in PNAS, reconsiders the FEM from a statistical perspective, and allows data to be integrated with the FEM. This will be important in advancing data-driven models that simulate real-world objects. “It lays the mathematical foundations of the digital twin revolution,” says Girolami. “As well as providing key expertise, the Turing helped us to ensure that we consulted with a wide range of voices within the AI community, so that the final guidance was as accessible and useful as possible.” Abigail Hackston Co-author of the guidance and Senior Policy Officer ICO Section 1.5 Research highlights of the year Helping organisations to explain decisions made with AI Organisations are increasingly using AI to make or assist decisions that directly affect people, from diagnosing disease and approving bank loans to assessing job applications and recommending products. From a legal and ethical perspective, it is important that those affected by these decisions understand how and why the decisions are made. Moreover, when organisations are transparent about how they use AI, it helps to build trust within the workplace and the wider public, especially where the decisions raise possibilities of discrimination against protected characteristics such as age, disability or race. Striving to make AI systems explainable can also help to flag up potential biases within the systems. Since 2018, the Turing’s public policy programme and the government’s Information Commissioner’s Office (ICO) have been working to produce a co-badged guidance document for organisations, providing advice on how to clearly explain AI decisions to those affected by them. Published in May 2020, it is the most comprehensive practical guidance on AI explanation produced anywhere to date. It gives four key principles for organisations to follow when explaining AI: be transparent, be accountable, consider the context you are operating in, and reflect on your AI system’s impacts on the individual and wider society. David Leslie, Ethics Theme Lead at the Turing and co-author of the guidance, has given several lectures and workshops about the work, including a presentation to the US National Institute of Standards and Technology (NIST). He has recently begun research to gauge how organisations are using the guidance to improve their practices. Section 1.6 The Turing’s response to COVID-19 Since the beginning of the pandemic, researchers across the Turing have shifted their focus and embarked on new collaborations to tackle the spread and impacts of COVID-19. Here are some of our key projects – further details on these and more projects can be found on our dedicated webpage. Combining data from NHS trusts “It is so inspiring to see how combining high-resolution data from two NHS trusts can only be achieved by diverse teams working together. 
Our efforts will help the health data science community to improve patient care for years to come.”
Kirstie Whitaker
DECOVID Analytics Workstream Co-Lead and Programme Lead for Tools, Practices and Systems, The Alan Turing Institute

Initiated during the early stages of the pandemic, the DECOVID project has created a detailed database of anonymised patient health data, which can be analysed to answer pressing clinical questions about the virus, providing insights into the treatment and effects of COVID-19. The project was founded by researchers at the Turing (led by Chris Holmes, Programme Director for Health and Medical Sciences) and four other partner institutes, and the initial funding was diverted from an existing Turing grant from the Engineering and Physical Sciences Research Council (EPSRC).

A major breakthrough has been the transfer and combination of data from the electronic patient record systems of two NHS trusts (University College London Hospitals and University Hospitals Birmingham), covering 185,000 patients. Now, the work continues with in-kind contributions from researchers around the UK, who are analysing the data to shed light on four key questions: how COVID-19 patients are affected by blood clots, when to put critical COVID-19 patients onto a ventilator, how patients with long-term health conditions are affected by COVID-19, and whether the current patient scoring system successfully identifies the COVID-19 patients at most risk of further deterioration. Ethicists in the Turing’s public policy programme have also been embedded within the research teams from the start of the project to ensure that the algorithms used and developed during DECOVID are as transparent and bias-free as possible. Results from the first analyses are expected later in 2021.

Section 1.6 The Turing’s response to COVID-19
Understanding vulnerability to health-related misinformation

Misinformation has been one of the dominant themes of the COVID-19 crisis, with inaccurate claims and guidance about cures, vaccines and lockdowns proliferating online. In response to this infodemic, the Turing’s public policy programme launched a project, funded by The Health Foundation, to understand who is most vulnerable to health-related misinformation. By pinpointing the factors that make people vulnerable, it is hoped that policy makers will be able to develop more targeted, effective interventions that tackle the root causes of the problem, rather than deploying draconian or overly restrictive policies such as banning content from entire websites.

The team recruited 1,700 participants, using online experiments and surveys to explore people’s responses to claims about COVID-19 – some true (e.g. “COVID-19 can spread through the air”) and some false (e.g. “COVID-19 can be treated by drinking lemonade”). The results show that people with higher numerical, health and digital literacy tend to be better at assessing health-related statements, and that many traditionally important socio-demographic traits (such as education, gender, and political affiliation) make little or no difference. These are important results as they mean that developing people’s cognitive skills and literacies has the potential to make a big difference to their ability to identify misinformation. The researchers published a report on their findings in March 2021, and they are now aiming to feed into government policy-making around measures to counter the damaging effects of online misinformation, both in relation to COVID-19 and more broadly.
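As an illustration of the kind of analysis such a survey study might involve, the sketch below fits a logistic regression relating invented literacy scores and socio-demographic traits to whether a respondent correctly assessed a claim. The variables and data are hypothetical, and this is not the project’s actual analysis code.

```python
# Sketch of the kind of analysis such a survey study might run: a logistic regression
# relating literacy scores and socio-demographic traits to whether a participant correctly
# assessed a claim. All data and variable names here are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1_700  # same order of magnitude as the study's recruitment

survey = pd.DataFrame({
    "digital_literacy": rng.normal(0, 1, n),
    "health_literacy":  rng.normal(0, 1, n),
    "numeracy":         rng.normal(0, 1, n),
    "university_degree": rng.integers(0, 2, n),
})
# Invented outcome: probability of correctly assessing a claim rises with the literacy scores.
logit = (0.6 * survey["digital_literacy"]
         + 0.5 * survey["health_literacy"]
         + 0.4 * survey["numeracy"])
survey["correct"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = smf.logit(
    "correct ~ digital_literacy + health_literacy + numeracy + university_degree",
    data=survey,
).fit()
print(model.summary())
```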
“Our work is helping to understand what makes people susceptible to misinformation. This is critical if we’re going to find ways to tackle the problem before the next public health crisis.” Bertie Vidgen Project leader and Turing Research Fellow in Online Harms The Alan Turing Institute “The Alan Turing Institute has played an instrumental role in the development of the NHS COVID-19 app. The app has now been downloaded over 23 million times, and the work of the Turing has made it far more effective at measuring distance and risk, better protecting its users and their communities. “It is also thanks to the work the Turing has done to measure the efficacy of the app that we know it averted an estimated 600,000 COVID-19 cases between October and December 2020 alone, and continues to play a huge role in protecting the public as we move out of lockdown.” Baroness Dido Harding Former Executive Chair NHS Test and Trace Section 1.6 The Turing’s response to COVID-19 Improving the accuracy of the NHS COVID-19 app Turing researchers have been advising the Department of Health and Social Care on the development of the NHS COVID-19 app. Rolled out across England and Wales in September 2020, the app uses the Google-Apple Exposure Notification system, in which the phone sends Bluetooth Low Energy signals to nearby app users in order to detect when two users have come into close contact. In October 2020, an update was released which included improvements to the app’s algorithm for measuring the distance between two phones – work that was led by Mark Briers, the Turing’s Programme Director for Defence and Security, and the app’s lead scientist. The updated algorithm, which makes use of a statistical process called an ‘unscented Kalman filter’, more accurately calculates the risk that the phone’s user has been in contact with a COVID-positive person, meaning that the app can better identify those who need to self-isolate. A statistical analysis by Briers and colleagues published in Nature in May 2021 estimated that, for every 1% increase in app users, the number of coronavirus cases in the population can be driven down by around 2%, due to people self-isolating following contact with an infected person. Briers continues to lead on the app’s development, and is currently exploring the idea of modifying the app to provide different levels of isolation advice, depending on which variant of the virus the user has contracted. Section 1.6 The Turing’s response to COVID-19 Estimating positive COVID-19 test counts In September 2020, the Turing began a partnership with the Royal Statistical Society (RSS) to provide expertise to the UK government’s Joint Biosecurity Centre (JBC). The JBC is part of NHS Test and Trace within the Department of Health and Social Care, and it provides scientific analysis to help decision makers respond to the spread of COVID-19, especially local outbreaks. The Turing-RSS Lab feeds into JBC’s work by developing statistical and machine learning techniques to solve key, policy-relevant problems. One of these problems relates to counting the number of positive COVID-19 tests in local authorities – crucial information for monitoring the virus’s spread. It can take up to five days for PCR test results to be processed and reported, so there is a time lag in this data. 
A statistical model developed by a lab team led by Chris Holmes, the Turing’s Programme Director for Health and Medical Sciences, overcomes this lag by using the incomplete test data to estimate (or ‘nowcast’) the total positive COVID-19 test count, so that authorities can respond without having to wait for all the data to come in. The model has now been shared with JBC to support policy- and decision-making. So far, the lab has recruited over 20 people in research, leadership and operational roles, from groups at Imperial College London, King’s College London, MRC Harwell, University of Oxford, the RSS and the Turing. Other ongoing research at the lab includes combining data from multiple testing sources to estimate COVID-19 prevalence at a local level; and assessing the effectiveness of non-pharmaceutical interventions such as lockdowns and mask-wearing. “The research and expertise provided by the Turing-RSS Lab has supported the creation of rapid evidence bases, enabling JBC to provide insights and recommendations in the fight against COVID-19. The lab has designed models and helped build a community which have significantly improved our ability to respond to the pandemic.” Johanna Hutchinson Director of Data and Data Science Joint Biosecurity Centre Section 1.6 The Turing’s response to COVID-19 Helping London to navigate lockdown safely “There was an urgency and a passion among our team to contribute to the fight against the virus. We wanted to help in whatever way we could.” Theo Damoulas Leader of Project Odysseus and Turing AI Fellow The Alan Turing Institute “This collaboration has not only succeeded in knowledge transfer, but has also created a lasting legacy that we intend to build on.” Paul Hodgson Senior Manager for City Data GLA’s City Intelligence Unit As the pandemic took hold in spring 2020, a team in the Turing’s data-centric engineering programme began to monitor activity on the streets of London. Their goal: to understand how lockdown was affecting city life, and what interventions were needed to allow the city’s nine million people to keep socially distanced. Named Project Odysseus, this was a rapid repurposing of an existing project that had been combining data from various sources in order to estimate and forecast the city’s air pollution. Working with the Greater London Authority (GLA) and Transport for London (TfL), the team modified its air pollution algorithms, feeding them with data from London’s traffic cameras and sensors to estimate pedestrian and vehicle densities and distances. The Turing’s Research Engineering team also played a crucial role by honing the infrastructure that had been developed for the air pollution project, so that the data could be processed quicker and more securely. The result of the work was a piece of software (an ‘Application Programming Interface’) that the authorities could use to analyse the anonymised, near real-time data, allowing them to monitor pedestrian density and make social distancing interventions where required. TfL says that it implemented over 700 such interventions at the height of the pandemic’s first wave, such as moving bus stops, widening pavements and closing parking bays, and that the Turing’s tool provided key data for those decisions. Looking forward, the researchers are hoping to work with the GLA to monitor high street activity as London recovers from the pandemic, to help understand how social and commercial activity has been affected. 
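To make the idea concrete, the toy sketch below takes pedestrian positions already projected onto ground coordinates (which the real system derives from calibrated traffic cameras and sensors) and flags a location when too many pairs are closer than two metres. It is a hypothetical illustration, not Project Odysseus code or its API.

```python
# Toy illustration of one calculation behind this kind of monitoring: given pedestrian
# positions projected onto ground coordinates, count how many pairs are closer than a
# distancing threshold. This is a hypothetical sketch, not Project Odysseus code.
import numpy as np
from scipy.spatial.distance import pdist

def close_pair_fraction(positions_m: np.ndarray, threshold_m: float = 2.0) -> float:
    """Fraction of pedestrian pairs closer than `threshold_m` metres."""
    if len(positions_m) < 2:
        return 0.0
    distances = pdist(positions_m)          # all pairwise Euclidean distances
    return float(np.mean(distances < threshold_m))

# Invented snapshot of detections (x, y in metres) at one location and time.
rng = np.random.default_rng(7)
detections = rng.uniform(0, 20, size=(30, 2))

fraction = close_pair_fraction(detections)
if fraction > 0.05:
    print(f"{fraction:.1%} of pairs within 2 m - flag this location for possible intervention")
else:
    print(f"{fraction:.1%} of pairs within 2 m - no action needed")
```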
Section 1.6 The Turing’s response to COVID-19 Modelling the spread of COVID-19 in urban areas In spring 2020, the Royal Society announced its Rapid Assistance in Modelling the Pandemic (RAMP) initiative, which called on the UK’s diverse community of modelling experts to help understand the evolution of the pandemic and the effects of different lockdown strategies. The Turing responded by leading RAMP’s urban analytics workstream, which sought to augment epidemiological models with data relating to people’s movements and behaviour patterns around towns and cities. The workstream, led by Mark Birkin, the Turing’s Programme Director for Urban Analytics, rapidly convened a team of researchers from four Turing university partners – the Universities of Leeds, Cambridge and Exeter, and University College London – to repurpose an existing model developed at the Turing called SPENSER. This simulates the day-to-day movements of a ‘synthetic population’, which matches the real UK population in key characteristics such as age, ethnicity, sex and household composition. By linking this model to highly realistic data on people’s activities – such as the time they spend at home, shopping, at school and at work – the team could simulate COVID-19 transmission within a population at an individual level. The main outcome so far is a demonstration model of Devon’s entire 800,000-person population, which allows researchers to compare the impact of different intervention scenarios at a local authority level, such as closing schools or restricting the opening hours of shops or hospitality venues. The team is now scaling up its model to a national level, and is in dialogue with policy makers and the government about using this technology to inform decision-making in this pandemic and future health emergencies. “Our work is demonstrating the value of linking models of disease transmission to detailed data about people’s daily activities.” Mark Birkin Programme Director for Urban Analytics, The Alan Turing Institute
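As a rough illustration of how an activity-based transmission model of this kind works – and emphatically not SPENSER itself, whose synthetic population, activity data and epidemiology are far richer – the toy Python sketch below spreads an infection through a handful of made-up individuals who share made-up venues. Every name and probability here is invented.

# Highly simplified illustration of an activity-based transmission model; not SPENSER.
import random

random.seed(0)

# A tiny 'synthetic population': each person has a daily schedule of venues they visit.
population = {
    "p1": ["home_1", "school_A", "shop_X"],
    "p2": ["home_1", "office_B", "shop_X"],
    "p3": ["home_2", "school_A"],
    "p4": ["home_2", "office_B", "shop_X"],
}
infected = {"p1"}   # seed case
P_TRANSMIT = 0.3    # made-up per-day, per-venue transmission probability

def simulate_day(population, infected, p_transmit):
    """One day: anyone sharing a venue with an infected person may become infected."""
    newly_infected = set()
    venues = {v for schedule in population.values() for v in schedule}
    for venue in venues:
        visitors = [p for p, schedule in population.items() if venue in schedule]
        if any(p in infected for p in visitors):
            for p in visitors:
                if p not in infected and random.random() < p_transmit:
                    newly_infected.add(p)
    return infected | newly_infected

for day in range(3):
    infected = simulate_day(population, infected, P_TRANSMIT)
    print(f"day {day + 1}: {len(infected)} infected")

Removing a venue such as the school from everyone’s schedule and re-running the simulation gives a crude version of the scenario comparisons described above, which the Devon demonstrator performs for an 800,000-person synthetic population linked to realistic activity data.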
Section 1.7 The year in numbers Growing our community Highlights
– 300+ Turing Fellows renewed contracts for a further 12 months
– 15 Turing AI Acceleration Fellows recruited – in addition to the 5 Turing AI Fellows announced in 2019
– 18 new interest groups
– 4 Data Study Group challenges, 39 participants, 26 different institutions
– Data Science for Social Good: 2 NGOs (non-governmental organisations), 9 participants
Section 1.7 The year in numbers Growing our audience
Top content
– Most impressions (195k): Happy birthday to #AlanTuring
– Most reactions (578): Are you looking for an opportunity to enrich your doctoral research? Apply to the Turing Enrichment scheme
– Highest engagement rate (8.8%): Call for COVID-19 rapid response data science taskforce
– Most website clicks, excluding homepage (9.4k): Social network analysis – Introduction to structural thinking
– Top media coverage: Covid-19: NHS app has told 1.7 million to self-isolate (BBC News)
Most views by content type
– Event: The Turing Presents: AI UK
– Blog: Updates to the algorithm underlying the NHS COVID-19 app
– News: Responding to the COVID-19 pandemic
– Impact story: Supercharging sustainable development
– Project: Artificial intelligence for data analytics (AIDA)
Popular pages (most views)
1. Jobs (5k)
2. The Turing Presents: AI UK (4.6k)
3. Enrichment scheme (3.5k)
4. COVID-19 rapid response data science taskforce (3.3k)
5. Studentships (3.1k)
Section 1.7 The year in numbers Highlight: AI UK March 2021 saw our first-ever AI UK event: a two-day virtual showcase of the best of UK research in AI.
– 1,257 unique attendees
– 98% of registrants attended
– 38 countries represented by attendees
– 13,500+ session attendees
– 8.7/10 average event rating
– 1.65m Twitter impressions
– 5.72m reach on social media
– 651 mentions of #AIUK
Section 1.8 Engagement, outreach and skills Working with government and regulators The Turing’s public policy programme works alongside government and regulators to explore not only how data science and AI can improve policy-making, but also how these technologies should be governed and regulated. With over 75 researchers and more than 40 research projects, the programme has gained national and international recognition. This year, for example, the Turing’s Ethics Theme Lead David Leslie was elected to the nine-member Bureau of the Council of Europe’s Ad hoc Committee on Artificial Intelligence (CAHAI) – an influential body that is working to develop a legal framework for AI design, development and application. Over 85 public sector organisations have reached out to the programme since it was launched in May 2018, including government departments, regulators, non-ministerial departments, agencies and public bodies, local authorities, police forces and international organisations. Tackling online hate One of the public policy programme’s major research themes this year has been online hate – the problem of abusive messages targeting protected aspects of identity such as ethnicity, sexual orientation and disability. In May 2020, the Turing held a workshop for policy makers, regulators, academics and tech companies, with the aim of developing a research agenda for tackling online hate. Around 40 experts attended the event, and the outcome was a policy briefing that proposed a six-point agenda, calling for more research into the long-term effects of online hate, and more flexible, responsive research that’s driven by the needs of society. Alongside this work, the Turing was commissioned by Ofcom to produce a report on how to regulate online hate on video-sharing platforms. We also launched an ‘online hate research hub’, which collates resources for researchers and policy makers working in this area. Women in data science and AI The Turing’s ongoing ‘Women in data science and AI’ project, which aims to redress the gender imbalance in these areas, this year published two major reports. In August 2020, we published ‘The digital revolution: Implications for gender equality and women’s rights 25 years after Beijing’. This report addresses gender inequality in technology, from the gender biases encoded in algorithms to the masculine stereotypes within STEM fields and the underrepresentation of women in technical roles. Commissioned by UN Women, the report makes nine policy recommendations for addressing these systemic problems. To mark International Women’s Day on 8 March 2021, we published ‘Where are the women? Mapping the gender job gap in AI’ – a policy briefing that presents new evidence for the gender gap in the data science and AI workforce, in skills, status, pay, educational background and more. The report makes a number of recommendations for reducing this gap, which will be essential if we are to develop technologies that are free of social bias.
Guidance for AI explanation In May 2020, we published the world’s most comprehensive practical guidance on AI explanation. Co-badged with the Information Commissioner’s Office (ICO), the guidance will help organisations to clearly explain decisions made or assisted by AI, to those affected by them. See our case study on page 35 for more information. Ethical AI for children The Turing has been working with UNICEF as a pilot partner in the ‘AI for children’ project. UNICEF recognises that “most national AI strategies and major ethical guidelines make only cursory mention of children and their specific needs.” Children interact with AI systems in myriad ways, from smart toys and virtual assistants to the algorithms that provide video, music and friend recommendations, and UNICEF is exploring how to protect child rights across the board. As part of this project, UNICEF has drafted policy guidance that contains recommendations for governments and the business sector in developing AI systems that uphold child rights. As a partner, the Turing will use this guidance to engage with and inform the UK government, the third sector and the public on the topic. We hope to interview children and create a child-centred version of our ‘Understanding artificial intelligence ethics and safety’ guidance, to encourage policy makers who are developing AI projects to consider child ethics and safety as a first priority. Camden’s data charter Camden in London is set to be the first local authority in the UK to develop a data charter, which will provide a policy framework and set of principles for how residents’ data will be collected, processed and shared by Camden Council. As a project partner, the Turing has been advising the council on the ethical aspects of the charter. We are also working with the project’s other partner, the public participation charity Involve, to develop a ‘Residents’ Panel’, scheduled for September 2021. This will be a series of discussions and activities to explore issues related to data ethics, data for social good and legal aspects of data usage, and will support the residents as they help to write the charter. The Turing is proud to be part of this initiative, which is helping to put decisions about data in the hands of those whom they most affect. Modelling the UK’s labour market The Turing has been collaborating with the Department for Business, Energy and Industrial Strategy (BEIS) to develop a model of labour mobility in the UK (the movement of workers between jobs), which provides a tool to explore the dynamics of the labour market and the drivers of mobility. The project is led by ESRC-Turing Fellow Omar Guerrero, and is based on his research agenda on ‘labour flow networks’. The team will use the model, which simulates changes in employment down to the individual level, to help understand how the labour market is responding to the shocks of Brexit and COVID-19, and to simulate the impacts of policy interventions aimed at helping the economy to recover. The model has received high visibility within government, being presented to more than 100 government analysts to date. The Turing and BEIS will be publishing a joint paper, alongside the model’s code, later in 2021.
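For readers unfamiliar with labour flow networks, the hypothetical Python sketch below illustrates the basic mechanism: workers move between labour market states (sectors, or unemployment) according to a network of transition probabilities, and shocks or policies can be explored by perturbing those flows. The sectors, probabilities and population sizes are invented, and this is not the Turing/BEIS model.

# Toy labour flow network; illustrative only, not the Turing/BEIS model.
import random

random.seed(1)

# Monthly transition probabilities between labour market states (each row sums to 1).
# "U" stands for unemployment; the sector names are hypothetical.
flows = {
    "retail":      {"retail": 0.85, "logistics": 0.05, "U": 0.10},
    "logistics":   {"logistics": 0.90, "retail": 0.03, "U": 0.07},
    "hospitality": {"hospitality": 0.80, "retail": 0.05, "U": 0.15},
    "U":           {"retail": 0.10, "logistics": 0.10, "hospitality": 0.05, "U": 0.75},
}

def step(state):
    """Move one worker to their next state according to the flow network."""
    options, weights = zip(*flows[state].items())
    return random.choices(options, weights=weights)[0]

# Simulate 10,000 workers for twelve monthly steps and track unemployment.
workers = ["retail"] * 4000 + ["logistics"] * 3000 + ["hospitality"] * 3000
for month in range(12):
    workers = [step(s) for s in workers]
print("unemployment rate after a year:", round(workers.count("U") / len(workers), 3))

A shock such as a lockdown could be represented by temporarily increasing the flows into unemployment for the affected sectors, and a recovery policy by increasing the flows out of it; comparing simulated runs then gives a feel for how the labour market might respond. The actual model works at the level of individual workers, with much richer network structure and data.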
Section 1.8 Engagement, outreach and skills Public engagement COVID-19 conference The pandemic meant that all of our events this year were held online. Fittingly, our biggest public event of the year was ‘AI and data science in the age of COVID-19’: a free one-day conference in November 2020 exploring the response of the UK’s data science and AI community to COVID-19. The event attracted over 1,700 registrants from 35 countries, from academia, industry, the public sector and the general public, who watched talks and discussions with scientists who had become familiar faces during the pandemic, including David Spiegelhalter, Devi Sridhar, Christina Pagel and Neil Ferguson. We were also treated to not one but two talks from Robert Winston. After the conference, the Turing convened a series of workshops involving around 100 experts in data science, AI, healthcare and more, who drew up suggestions for how the data science and AI community might respond better to future emergencies. The resulting report can be read here. “I really enjoyed being part of this line-up. There was a lot of energy at the event, and it gave us a much-needed chance to reflect on lessons learnt during the pandemic.” Wendy Hall Conference session chair and Regius Professor of Computer Science University of Southampton ‘A is for algorithm’ In February and March 2021, in the run-up to AI UK, we ran a Twitter campaign to demystify some of the jargon in data science and AI. Each day for 26 days, we tweeted a definition of a word, phrase or name, from ‘A is for algorithm’ to ‘Z is for Zuse’ (Konrad Zuse created the Z3, the world’s first fully automatic digital computer). The campaign was warmly received and saw our Twitter followers increase by over 4,000 in five weeks. Alongside the A-Z, we worked with students from the Graphic Communication Design course at Central Saint Martins (a fellow London Knowledge Quarter partner) to turn some of our definitions into digital art. These included an interactive world map of internet usage for ‘D is for data science’ and a robot dress-up game for ‘U is for uncanny valley’. The artworks were hosted on our Instagram and also featured in an online gallery during AI UK. “Working with the Turing and seeing our projects come to life was such a rewarding experience – it exceeded all expectations.” Haylie Tsang Graphic Communication Design student Central Saint Martins Data science and feminism One of our most popular events of the year was ‘Challenging power in data science’ – an online seminar in June 2020 exploring the intersections between data science and feminism. Over 400 people watched our two speakers – Catherine D’Ignazio (Director of the Data + Feminism Lab at MIT) and Lauren Klein (Director of the Digital Humanities Lab at Emory University) – discuss how feminist thinking can help to develop more ethical and equitable data practices. Their ideas are echoed in the Turing’s ‘Women in data science and AI’ project (see page 50). The Turing Lectures Our flagship event series, The Turing Lectures, continued to draw in big names and big audiences. We made the most of the change to a virtual format by bringing in a more international line-up. Stuart Russell joined us from the University of California, Berkeley, to discuss how to build AI systems that benefit humans (our most popular lecture of the year, with over 700 attendees), while Marc Raibert, Chairman of Boston Dynamics, introduced his company’s work to build the next generation of super-mobile robots, such as the four-legged Spot (which we showed being remotely operated from Edinburgh).
The virtual format allowed us to increase the lectures’ reach, too, with viewers joining us from countries including Brazil, Australia, India, Nigeria and Canada. Other lectures this year included Rose Luckin on the role of AI in the future of education; Tabitha Goldstaub on her inspirational journey to becoming Chair of the AI Council; and Desmond Upton Patton on using AI to track abuse on social media. In the media Over the past year, Turing researchers have featured in media outlets including The Financial Times, MIT Technology Review, Sky News, Wired and The Economist. One of our most high-profile appearances was in summer 2020, when the BBC asked the Turing to comment on an investigation into reports of algorithmic bias in the UK’s passport photo checker. Two Turing researchers contributed to the story: David Leslie commented on the findings and wrote an explainer on causes of algorithmic bias for the BBC, while Kirstie Whitaker (the Turing’s Programme Lead for Tools, Practices and Systems) reviewed the code and independently reproduced the BBC’s data journalism, and provided input on the validity of the reported outcomes. “The labels we use to classify racial, ethnic and gender groups reflect cultural norms, and could lead to racism and prejudice being built into automated systems.” David Leslie Ethics Theme Lead The Alan Turing Institute The Turing Podcast The Turing now has its own podcast, with each episode featuring a special guest (or guests) from the world of data science and AI. Turing researchers have been the main focus so far, with episodes covering projects investigating the spread of antimicrobial resistance, mapping the UK’s solar panels (see page 32), building a computer model of the world’s first 3D printed steel bridge, and even creating the perfect fantasy football team. We’ve also scoped stories from further afield, hosting interviews with medical doctor and TV presenter Robert Winston, computer scientist Sue Black and science writer Tom Chivers. Since the podcast’s launch in March 2020, we have published 24 episodes, amassing over 14,000 downloads. Season two launched in December 2020. Section 1.8 Engagement, outreach and skills Convening academia, industry and policy makers AI UK We were delighted in March 2021 to hold our first-ever AI UK event: a two-day showcase of the best of UK research in AI. Following the postponement of the physical event in 2020, we switched to an online platform, welcoming over 1,250 attendees from academia, industry, government and the third sector. The two stages hosted presentations, panel discussions and live demonstrations from over 100 leading voices in AI. There were also plenty of opportunities to interact thanks to the event’s networking facility, plus virtual exhibition booths from organisations including Accenture, Intel and GCHQ. The event tapped into the public mood: climate change, diversity, COVID-19 and mental health were all common themes. The most popular sessions included ‘Modelling and predicting climate change’, ‘Safe and ethical priorities for AI’ and ‘Doing better in data science – from algorithmic fairness to diversity’. A special shout-out goes to stand-up comedian Matt Parker, who closed the first day by proving that all you need to capture people’s imagination is an interactive spreadsheet and some programmable fairy lights. 
“Attending this conference, where ethics, inclusivity, trust, equality, de-marginalisation and tackling climate change were all discussed, gives me hope.” AI UK attendee Data Study Groups go virtual This year, the Turing’s Data Study Groups (DSGs) moved online. These collaborative ‘hackathons’ allow organisations to pose real-world data science challenges to a team of carefully selected researchers. To make the events more suitable for remote collaboration, we lengthened the original five-day format, adding up to two weeks of part-time preparatory workshops, followed by two weeks of full-time work. Our first virtual DSG, in September 2020, saw 39 participants take on four challenges, including using AI and machine learning to investigate the genetic interactions driving breast cancer development (a challenge from the Cancer Research UK Cambridge Institute) and developing data science techniques to predict high-street bakery sales (a challenge from catsAi). This was followed in February and March 2021 by our first Reinforcement Learning Study Group – a DSG spin-off. Over three weeks, 19 participants collaborated on a challenge from the government’s Defence Science and Technology Laboratory (Dstl), which involved exploring ways to train reinforcement learning systems to play competitive virtual games. The results, due to be published online later this year, could help Dstl to optimise its campaign planning. “We were really impressed with what the team achieved in just a few weeks. They embraced the anticorruption data challenge with enormous enthusiasm, creativity and skill.” Alexandra Habershon Senior Governance Specialist, World Bank Data Science for Social Good From June to September 2020, the Turing delivered a Data Science for Social Good summer programme (DSSGx UK) in collaboration with the University of Warwick, the Office for National Statistics (ONS) and Imperial College London. This was an affiliate programme of the DSSG Summer Fellowships, which were founded in the US in 2013 by Rayid Ghani, former Chief Scientist of Obama’s 2012 re-election campaign. After a successful in-person pilot in 2019, DSSGx UK ran in 2020 as a 12-week online placement. Nine students developed data science techniques for two projects with a social angle: assessing the risk of substandard childcare from early years providers (an Ofsted project) and analysing financial corruption risks in public administration (a World Bank project). We look forward to supporting the University of Warwick with the delivery of another online programme in summer 2021. CogX 2020 In June 2020, the Turing hosted the research stage at CogX for the second year in a row. This annual festival of AI and emerging technology went virtual this year, with the theme of ‘How do we get the next 10 years right?’. Our stage featured 15 sessions and 68 speakers exploring the present and future of AI, from collaborative robotics and data-centric engineering to AI in the arts and humanities, allowing us to bring the latest academic thinking to a diverse audience of industry movers and shakers. The Turing Way It’s been another busy year for The Turing Way – the Turing’s evolving online ‘handbook’ for data scientists. In April 2020, the project was expanded into five separate guides covering reproducible research, project design, communication, collaboration and ethical research. The community now boasts over 250 contributors, who have collectively written 39 chapters. In November 2020, the project held its first all-remote Book Dash.
These events allow participants to work simultaneously on the book, and over five days 20 volunteers from around the globe, including Argentina, the Netherlands and India, updated the book and wrote new chapters on topics including activism and data anonymisation. Collaboration is at the heart of The Turing Way. The project is working with Open Life Science to train and mentor researchers in the principles of open research, and there are further Book Dashes planned in 2021 as The Turing Way continues its mission to make open research “too easy not to do”. Research Programme Showcases A new initiative in early 2021 was our first series of Research Programme Showcases. Each of the Turing’s research programmes held a 90-minute session, with the aim of giving researchers from the Turing’s university partners a chance to learn more about the programme and opportunities for engagement, while helping us to connect our programmes to the wider academic community. Each session consisted of a presentation from the programme team, followed by an audience Q&A and a networking session. We were thrilled to have over 500 attendees across the nine events. Interest groups The Turing’s interest groups bring together experts from an extensive range of disciplines and sectors, from the social sciences to theoretical mathematics, to tackle problems collaboratively. The groups act as forums for sharing ideas and knowledge, with the aim of sparking new collaborations and research. The relaunch of the scheme in 2020 saw the number of groups grow from 19 to 37 on topics such as media in the digital age, environment and sustainability, and trustworthy digital identity. The outputs of interest groups are wide-ranging, and over the last year groups have explored innovative ways of convening. For example, 160 people attended the visualisation interest group online symposium, where Ruth Rosenholtz from MIT provided the keynote ‘Human vision at a glance’. Additionally, the facilitating responsible participation in data science group participated in the successful public engagement event ‘Cabaret of Dangerous Ideas’, streamed live from The Stand Comedy Club. The white paper ‘Challenges and prospects of the intersection of humanities and data science’, produced by the humanities and data science group, received widespread positive feedback and engagement. Groups such as the entrepreneurship interest group, which has grown under the leadership of Turing doctoral students, are also providing opportunities for future leaders. Section 1.8 Engagement, outreach and skills Skills The national skills agenda The pandemic has demonstrated the crucial role of data skills in the future of the UK’s society and economy. The Turing is playing a key part in the national skills agenda and this year commissioned research for the Department for Digital, Culture, Media and Sport (DCMS) in support of the National Data Strategy: ‘Quantifying the UK data skills gap’. The report’s insights are helping to improve knowledge of the supply and demand of data skills. In addition, we supported the Office for AI/Ipsos MORI report ‘Understanding the UK AI labour market: 2020’ to help provide policy recommendations aimed at bridging the skills gap. The Turing also joined the Alliance for Data Science Professionals, a group of UK professional bodies and a government laboratory to establish industry-wide professional standards for data science to ensure an ethical and well-governed approach so that the public can have confidence in how data are being used. 
The alliance is working to define the standards expected of people who work with data and explore the issues of accreditation and certification to ensure data science and AI training meets the needs of the current and future workforce. The academic programmes and skills teams have continued to speak at a number of conferences on behalf of the Turing, focusing on both the development of standards and curricula in data science (with the RSS and BCS, The Chartered Institute for IT) and the wider data skills pipeline within the UK. Part of this work has also been focused on consulting with the government on access to skills training in the pandemic and with UKRI on the future of doctoral education and emerging needs for these skills in doctoral students from many backgrounds. Data Skills Taskforce Organisations in academia, industry, learned societies and professional bodies contribute to addressing the skills challenges in data science and AI. These challenges are complex and require collaboration and knowledge. The Turing’s convening of the Data Skills Taskforce represents a distinct forum bringing together organisations with a shared approach to tackling the data skills gap. The Turing, through its role co-chairing and convening the Taskforce, supported the development of an online skills portal to help organisations, particularly SMEs, to identify their organisational and technical readiness, and signpost them to training to develop their data capability. The portal has seen 21,500 users to date and is funded by DCMS, the Turing and Nationwide Building Society, with support from other Data Skills Taskforce members. This proof of concept has been identified as an area of importance as part of a forthcoming commitment of the government’s Digital Strategy and National Skills Fund. Engage@Turing With movement around the country and access to the British Library limited due to COVID-19, the Turing’s 2020 Enrichment scheme moved to a dynamic, new remote engagement model: Engage@Turing. The cohort have been committed to working with the Turing to ensure that they can meet one another and other members of the Turing community, with a highlight being a doctoral research showcase organised in February 2021. To support this engagement, the academic programmes team worked with the training community to move our flagship courses online and create a broad and accessible programme of activities. Engage@Turing is enabling students to find new ways to interact with other students and researchers and apply their skills to new research opportunities. Expanding our training reach We have pushed ahead with plans to run the Enrichment scheme in the coming year and are working towards our first-ever Enrichment centres based at our university partners in Bristol and Leeds. With almost 170 new applications, and a majority of the Engage@Turing students reapplying to the scheme, another strong cohort is anticipated in 2021. The Turing has also funded the development of a number of summer schools and training activities through its first online training call, offering researchers access to cutting-edge training activities and resources developed by academics from across the UK. Diversity in training The Turing continues to work towards increasing diversity within its programmes, focusing recruitment on underrepresented groups and working to increase access to opportunities across the country. 
This includes partnering with the Daphne Jackson Trust – whose fellowships are designed to promote a return to research for those who have had career breaks, helping them to retain their skills and expertise in the field – to recruit our first Daphne Jackson Fellow in 2021. Turing Internship Network Launched in July 2020, the Turing Internship Network (TIN) is a national engagement scheme between our business partners and doctoral students across the UK who are studying any topic with a data science and/or AI focus. The Turing’s role is to facilitate and convene, pairing internship projects put forward by industry with talented doctoral students. The business partners host, supervise and provide a salary for the successful interns. TIN is already a highly successful effort to share data science internship opportunities for doctoral students throughout the UK. With great support from industry, candidates have taken up opportunities at organisations including TRL and GCHQ, with further opportunities from Accenture available in the second round, running in 2021. Section 2 Trustees’ and strategic report The Trustees present their annual and strategic report together with the consolidated financial statements for the Institute and its subsidiary for the year ended 31 March 2021. The financial statements comply with the Charities Act 2011, the Companies Act 2006, and the Statement of Recommended Practice (SORP) applicable to charities preparing their accounts in accordance with the Financial Reporting Standard applicable in the UK (FRS102) effective 1 January 2019 (Charity SORP 2nd Edition). The Charity is a registered charity and a company limited by guarantee, governed by its Articles of Association dated March 2015 and a Joint Venture Agreement (JVA) with the Founder Members dated March 2015. Company Number: 09512457 Charity Number: 1162533 Directors/Trustees The Directors of the charitable company (the “Charity”), as registered at Companies House, are its Trustees for the purposes of charitable law and throughout this report are collectively referred to as the Trustees.
The Trustees serving during the year and since the year end were as follows:
Howard Covington – Chair
Nicola Blackwood-Bate – Appointed 1 August 2020
Frank Kelly
Richard Kenway
Kerry Kirwan – Appointed 28 April 2020
Vanessa Lawrence CB – Appointed 1 August 2020
Thomas Melham
Carina Namih – Appointed 1 August 2020
Hitesh Thakrar – Appointed 1 August 2020
Neil Viner
Patrick Wolfe
Key management as at 31 March 2021
Executive Team
Adrian Smith – Institute Director and Chief Executive
Jonathan Atkins – Chief Operating Officer
Christine Foster – Chief Collaboration Officer
Senior Management Team
Donna Brown – Director of Academic Engagement
Ian Carter – Director of IT and Information Security
Allaine Cerwonka – Director of International and Associate Director of ASG
Vanessa Forster – General Counsel and Company Secretary
Nicolas Guernion – Director of Partnerships
Catherine Lawrence – Director of Programme Management
Sophie McIvor – Director of Communications and Engagement
Martin O’Reilly – Director of Research Engineering
Clare Randall – Director of People
Programme Directors as at 31 March 2021
Mark Birkin – Urban Analytics
Mark Briers – Defence and Security
Mark Girolami – Data-Centric Engineering
Chris Holmes – Health and Medical Sciences
Anthony Lee – Data Science at Scale
Helen Margetts – Public Policy
Jonathan Rowe – Data Science for Science and Humanities
Lukasz Szpruch – Finance and Economics
Adrian Weller – Artificial Intelligence
Alan Wilson – Special Projects
Registered Office: The British Library, 96 Euston Road, London, NW1 2DB
Auditors: Moore Kingston Smith LLP, Chartered Accountants, Devonshire House, 60 Goswell Road, London, EC1M 7AD
Bankers: Barclays Bank UK PLC, Leicester, Leicestershire, LE87 2BB
Solicitors: Bates Wells Braithwaite, 10 Queen Street Place, London, EC4R 1BE; Mills & Reeve, 100 Hills Road, Cambridge, CB2 1PH
Structure, governance and management Our legal structure The Alan Turing Institute was founded in March 2015 as a registered Charity (1162533) and a company limited by guarantee (09512457). The Institute is governed by its Articles of Association that were adopted on incorporation on 26 March 2015 and a Joint Venture Agreement with the Founder Members signed on 31 March 2015 (together the “Constitutional documents”). The Constitutional documents set out the governance of the Institute as the responsibility of the Board of Trustees with some reserved matters to the Founder Members. Purpose of the Institute and main activities As the UK’s national institute for data science and artificial intelligence, the charitable objects of the Institute, as set out in its Articles of Association, are the furtherance of education for the public benefit, particularly through research, knowledge exchange and public engagement in the fields of data sciences. In 2017, as a result of a government recommendation, the Institute added artificial intelligence to its remit. The Institute has the power to do anything that furthers its charitable objects. In particular, the Institute’s ambitions are to: – Produce world-class research in the foundations of data science and artificial intelligence. – Have a transformative impact on the way that data and algorithms are used in the economy, in government and in society. – Educate and train data scientists. The Trustees confirm that they have paid due regard to the Public Benefit Guidance published by the Charity Commission, including the guidance “Public benefit: running a charity (PB2)”, in undertaking their activities.
Related parties The Institute’s Founder Members are the Engineering and Physical Sciences Research Council (EPSRC) and the Universities of Cambridge, Edinburgh, Oxford, Warwick and University College London (UCL). The Founder Members have entered into a Joint Venture Agreement which establishes, along with the Articles, the basis on which the Institute operates. On 1 April 2018, the Institute entered into 5-year partnership arrangements with eight additional universities: Birmingham, Bristol, Exeter, Leeds, Manchester, Newcastle, Queen Mary University of London, and Southampton. The Institute has a wholly owned subsidiary, Turing Innovations Limited (company registration number 10015591). Turing Innovations Limited has a minority shareholding in Quaisr Limited, a private limited company (company registration number 12704209). With effect from 1 April 2020, the Francis Crick Institute is deemed to be a related party. This is due to the shared associations of Hitesh Thakrar (Trustee) and Stephane Maikovsky (independent member of the Audit and Risk Committee). Board composition and responsibilities The Institute is governed by its Board of 11 Trustees, six of whom have been appointed by the six Founder Members and five of whom are independently appointed Trustees. The Board of Trustees has been established in accordance with the terms of the Joint Venture Agreement. The Board composition is determined as follows: – Each Founder Member may appoint one Trustee. – Founder Members may, by a unanimous decision, select and appoint an Independent Trustee who acts as Chair of the Board. Founder Members may also, from time to time, remove and replace such Independent Trustee by a unanimous decision. – The appointed Trustees may appoint further Independent Trustees such that, so far as possible, the total number of Trustees on the Board at any particular time will be an odd number. – The Trustees appointed by the Founder Members must always form a majority of the Board and may from time to time remove and replace Independent Trustees. Organisational management and responsibilities of the Board The Institute’s Board of Trustees is responsible for setting the aims and strategic direction of the Institute, approving key policies, monitoring risks, approving the annual budget and expenditure targets, and monitoring actual and forecast financial results. Trustees meet formally as a Board with the Executive Team, Senior Management Team, and relevant Programme Directors up to five times a year. In addition, Trustees normally attend at least two away days and undertake further meetings as and when needed. The Executive Team, Senior Management Team and relevant Programme Directors provide Trustees with regular reports on the Institute’s financial position, current activity, organisational news, and significant issues affecting the Institute. The Executive Team, led by the Institute Director and supported by the Senior Management Team and the Programme Directors, is responsible for the day-to-day management of the Institute’s operations and activities. The Institute Director is responsible for appointing members of the Executive Team. The Executive Team, Senior Management Team and the Programme Directors are also responsible for implementing strategy and corporate policies and reporting on performance to the Board. Committees of the Board The Board is supported by three formal Committees.
Each Committee has processes in place for managing any conflicts of interest that may arise. Audit and Risk Committee This Committee is responsible for audit, finance and risk management as well as reviewing the effectiveness of the Institute’s internal control framework and risk-management process and compliance with reporting requirements and reports to the main Board on the same. It monitors the work of the external auditors and receives and reviews audit reports. It monitors the full external audit process and resulting financial statements, including overseeing the terms of appointment of the external auditors. Membership Neil Viner Chair Hitesh Thakrar Appointed 11 November 2020 Patrick Wolfe Stephane Maikovsky Nomination Committee This Committee is responsible for all aspects of the appointment of new non-Founder Member Trustees to the Board of Trustees. It also has responsibility for monitoring boardroom diversity and recommending appointments within the Audit and Risk Committee and the Remuneration, EDI and People Committee in consultation with the chairs of those committees. Membership Howard Covington Chair Nicola Blackwood-Bate Appointed 26 January 2021 Richard Kenway Vanessa Lawrence CB Appointed 26 January 2021 Thomas Melham Independent member/non-Trustee Remuneration, EDI and People Committee (Remuneration Committee pre-26 January 2021) This Committee advises the Board of Trustees and oversees the preparation of policies and procedures with respect to salaries, emoluments, and conditions of service. In line with these approved policies and procedures, the Committee approves the total remuneration package for the Chair of the Institute, the Institute Director and those senior staff reporting directly to the Institute Director. The criterion for setting pay is the market rate, considering industry standards. With effect from the 26 January 2021 Trustee Board meeting, this Committee extended its remit to include oversight of equality, diversity and inclusion which has included the review and challenge of the EDI Strategy and Action Plan. Membership Howard Covington Chair Frank Kelly Richard Kenway Carina Namih Appointed 26 January 2021 Advisory Groups Other advisory groups are set out below. The Joint Venture Agreement includes the following two advisory groups: Research and Innovation Advisory Committee (formerly the Programme Committee): The JVA sets out that this group is set up to support the Institute Director in preparing a scientific and innovation strategy. It also supports with research and training programmes and reports appropriately to the Institute’s stakeholders. Scientific Advisory Board: This is an independent group designed to be made up of international experts in academia, industry, and government. It provides strategic advice to the Institute’s Board of Trustees and the Executive Team on the development and implementation of the scientific research strategy. During the year, the Board of Trustees agreed that this Board would not be required to meet given the uncertainties surrounding the funding model and the national strategy for data science and AI. It is anticipated that this group may be reconvened during 2021-22. Other groups: Strategic Partners Board: This group is intended to advise the Board of Trustees on the content and translation of research generated at the Institute and is intended to collaborate across the Institute and its partners to identify new opportunities. 
University Partners Board: This group is intended to advise the Institute Director on the research direction of the Institute, the Institute’s relationship with its university partners and the higher education landscape as it relates to data science and AI. Recruitment and appointment of Trustees The Nomination Committee aims to undertake an open recruitment process, recommends new candidates for appointment when necessary and ensures appropriate recruitment and succession plans are in place for independently appointed Trustees (i.e. not Founder Member-appointed Trustees). During the year, four new Independent Trustees were successfully appointed following an open recruitment process led by the Nomination Committee. On appointment, each Trustee completes a declaration of interests form which is held within a register of interests and which is monitored and updated on a regular basis and reviewed annually. Trustee-related party transactions are disclosed in greater detail within the financial statements later in this report. All conflicts are actively managed through early identification of potential areas of conflict and appropriate action taken where necessary. Each new Trustee underwent a tailored induction programme for new Trustees that included a programme of meetings with the members of the Executive Team and relevant members of the Senior Management Team and other Trustees. New Trustees are provided with a Trustee Information Pack which includes initial information about the Institute and its work, a copy of the previous year’s annual report and accounts, a copy of the Institute’s Articles of Association, a copy of the Joint Venture Agreement, information about their powers as Trustees of the Institute, key corporate policies, and a copy of the Charity Commission’s guidance entitled “The essential trustee: what you need to know”. Trustees appointed during the year Baroness Nicola Blackwood-Bate Nicola’s career reflects an abiding belief that science, innovation and access are crucial to solving our greatest global challenges. Nicola is Chair of Genomics England, a member of the Lords Science & Technology Select Committee and Honorary Professor of Science and Public Policy at UCL. She has served as Minister for Innovation in the Department for Health and Social Care (UK) under two prime ministers. Her ministerial portfolio covered life sciences, mental health, data and the digital transformation of the NHS, cybersecurity, Brexit, trade, global health security and public health. Nicola oversaw NHS Digital, Genomics England and the NIHR, and was closely involved in the creation and development of NHSX. In policy terms, this included: implementing the government’s Life Sciences Strategy and the Tech Vision; setting up the AI Hub and the Data Innovation Hubs; and developing the Code of Conduct for Data-Driven Health and Care Technology, the Public-Private Data Principles, and the Genomic Healthcare Strategy. Nicola worked hard to improve access to medical treatment, setting up the Accelerated Access Collaborative, launching the NICE (National Institute for Health and Care Excellence) methods review and the Commercial Framework, and announcing a new £500m Innovative Medicines Fund. Prior to her elevation to the House of Lords, Nicola was the first female Member of Parliament for Oxford West and Abingdon (2010-2017) and she was elected by MPs of all parties to Chair the Commons Science and Technology Select Committee. 
The Science and Technology Committee exists to scrutinise Government policy and ensure decisions are based on good scientific advice and evidence. Nicola remains the youngest-ever select committee chair in British history and the only woman to have held that position. She was also Chair of the Human Tissue Authority (UK health regulator) and Board Member and Investment Committee Member of Oxford University Innovation, where she helped spin out companies from the University of Oxford. She has held further public and private advisory and board positions. Nicola is also a trained classical musician with degrees in music from the Universities of Oxford and Cambridge. Professor Kerry Kirwan Kerry Kirwan is Deputy Pro-Vice Chancellor (Research) at the University of Warwick and Chair of the Sustainable Manufacturing and Materials group at Warwick’s WMG department. He has considerable experience in novel sustainable materials and manufacturing and, to date, he has been awarded in excess of £35m of public and industrial research funding in the Circular Economy arena. He is the Director of the EPSRC Centre for Doctoral Training in Sustainable Materials and Manufacturing and was previously Director of the EPSRC Industrial Doctorate Centre in High Value, Low Environmental Impact Manufacturing. He currently leads the University of Warwick’s Global Challenges Research Fund programme, the Global Research Priority in Innovative Manufacturing and Future Materials and is a member of the ‘Connected Everything’ Network+ Executive Committee. He is also Editor-in-Chief of the Journal of Polymers and the Environment, was a member of the EPSRC’s Manufacturing the Future Strategic Advisory Team and served on the Midlands Engine Science and Innovation Audit Committee. He also regularly advises, reviews and chairs funding panels and meetings for UK, EU and international research agencies. Dr Vanessa Lawrence CB Vanessa Lawrence CB works internationally as a senior advisor to governments, inter­governmental organisations including the World Bank and large private sector organisations. She is a Director of Location International Ltd, which provides strategic advice and full operational capacity globally to the public and private sectors with respect to improving their own use of location information to enhance their decision-making and to meet the ever-changing needs of their customers and stakeholders. In addition, Vanessa is a Non-Executive Director of the Satellite Applications Catapult, the innovation centre for satellite use in the UK and is on the advisory boards of Seraphim Space LLP, the space venture capital fund backed by the British Business Bank, the Spatial Finance Initiative and the Urban Big Data Centre. She is also the Honorary Colonel of 135 Geographic Squadron Royal Engineers. Vanessa is a Trustee of the Royal Geographical Society, an Adjunct Professor at the University of Southampton and a Patron of MapAction, a UK-based international charity that specialises in supplying geographical information for humanitarian relief operations. From 2000 to 2014, she was the Director General and CEO of Ordnance Survey, Britain’s national mapping authority, and an advisor to the government for issues involving mapping, surveying and geographic information. She is the longest-serving Director General and CEO of Ordnance Survey since 1875. From 2011 to 2015, Vanessa was a founding co-chair of the United Nations Committee of Experts on Global Geospatial Information Management (UN-GGIM). 
Carina Namih Carina Namih is a General Partner at Episode1 Ventures, a venture capital firm that invests in the top early-stage technology businesses in the UK. The Episode1 team has backed some of the UK’s biggest success stories such as Shazam, Natural Motion and CloudNC. Carina joined Episode1 with a focus on investing in applied AI start-ups, where as a board director she helps the founding teams build category-defining businesses. Before joining Episode1, Carina was the co-founder and CEO of HelixNano, one of the first computational biology companies to successfully apply machine learning techniques to develop RNA vaccines (with a team from Harvard University and backed by the likes of Y Combinator, that company is now adapting its synthetic biology capabilities to tackle COVID-19). Carina also supports the wider UK business ecosystem on the advisory board of the British Venture Capital Association. Hitesh Thakrar Hitesh Thakrar is an experienced investor in the technology sector, having spent over 25 years investing in public equities in the life sciences, information technology and innovation sectors. Since 2015, he has moved into early-stage venture investing. Hitesh is currently a Partner at Syncona Limited (a Wellcome Trust backed early-stage venture fund), a Governance Board Member of KQ Labs at the Francis Crick Institute (an accelerator supporting next-generation businesses in data science and life sciences) and the Chair of the Investment Committee for Newable Ventures (a pre-Series A deep-tech fund). Hitesh is also an advisor to UKRI’s Science and Technology Funding Council (SFTC), which helps early-stage companies spin out from UK universities. He recently joined as a Trustee of the Royal National Orthopaedic Hospital Charity. Previous non-executive director roles include work with Desktop Genetics and Tropic Biosciences – both companies are using CRISPR gene-editing technology. Prior to 2015, Hitesh worked at various public market institutions in global equity research and fund management, including at ADIA (Abu Dhabi’s sovereign wealth fund), JP Morgan, Aviva Group, Dresdner Bank and New Star Asset Management. Hitesh has a degree in chemistry from King’s College London, an MBA from Cranfield University, and a CFA from the American Association of Investment Analysts. He has previously held a position as Innovation Fellow at the University of Cambridge. Trustee training During the year, the Trustees received externally facilitated training from the law firm Bates Wells Braithwaite on charity governance which covered directors’ duties, charitable purpose (“objects”), public benefit, charity structures, compliance with constitutional documents, the Charity Governance Code, and developments in related areas of law, regulation and practice. Financial review The Institute is funded through grants from Research Councils, Founder Members, University Partners and from strategic and other partnerships. Income of £37.3m (2019/20: £36m) has been received during the year of which £19m was received from Research Councils (2019/20: £13m), £6.5m from Founding Members and University Partners (2019/20: £13m), £7.7m from strategic and other research partnerships (2019/20: £7.8m), £3.2m from other trading activities and investment income (2019/20: £2.2m) and £0.8m from donations. Of the £19m received from Research Councils, £7.3m was provided specifically to address the impact of COVID-19. The Institute applied for this funding to cover two significant financial impacts during the year. 
First, the loss of Founding Member and University Partner funding due to pressures that the university sector has faced resulting from the COVID-19 pandemic. Second, the Institute had made a significant investment in a new collaboration agreement with two of our University Partners and their associated NHS Foundation Trusts to create a near real-time database of critical care patient records that could be made available to researchers to help answer COVID-19 related clinical questions (known as the DECOVID project). Expenditure of £31.7m (2019/20: £34.5m) has been incurred in the year. Grants payable to Founding Members and other University Partners represent 40% of total expenditure. Staff costs represent 42% (2019/20: 31%) of total expenditure, increasing from £10.7m in 2019/20 to £13.3m in 2020/21, as the Institute expands its research programme delivery. The remaining 18% (2019/20: 19%) of expenditure covers support costs and other direct costs. The Group made a surplus of £5.6m (2020: £1.5m). This has been transferred to reserves and will be used to fund research and Group costs during 2021/22 and beyond. Group net assets at 31 March 2021 are £30m (2020: £24.4m). Fixed asset values reduced by £799k. During the year £186k was spent on additions. This was offset by £985k of depreciation. Current assets: Debtors are £7.9m (2020: £6.8m) and represents an increase in project accrued income from the previous year. Cash, including current asset investments, has increased by £5.5m to £48.5m (2020: £43m). This is largely due to the upfront nature of cash receipts on many of the Institute’s grant awards. A current asset investment of £5m was made in 2020/21 and represents a cash deposit held in an interest bearing 95-day notice account with Barclays. Creditors: amounts falling due within one year increased by £2.9m to £26m. Grant creditors were £5.2m lower than last year as we have paid down the amounts due. However, accruals and deferred income were £3.8m higher than last year due to a year-on-year increase of £2.7m in accrued project expenditure and £1.1m of project funding received in the current year but deferred. Additionally, other creditors were £4.3m higher than in 2019/20 as they include an amount of £4.2m due to UKRI for the repayment of grant funding received in previous years for the AI for science and government programme (ASG). Creditors: amounts falling due in more than one year reduced by £2.7m to £1.2m as a number of grants became due within one year and there were fewer new grant agreements with liabilities due in more than one year than last year. Going concern The Trustees have assessed whether the use of going concern basis is appropriate and have considered possible events or conditions that might cast significant doubt on the ability of the charitable company to continue as a going concern. The Trustees have made this assessment for a period of at least one year from the date of the approval of these financial statements. In particular, the Trustees have considered the charitable company’s forecasts and projections and have taken account of pressures on income. After making enquiries, the Trustees have concluded that there is a reasonable expectation that the charitable company has adequate resources to continue in operational existence as set out below. The charitable company therefore continues to adopt the going concern basis in preparing its financial statements. 
The Trustees have assessed the consequences of the ongoing COVID-19 pandemic and recognise that, whilst this will impact the research funding landscape in the UK and internationally, the Institute’s reserves are such that it expects to maintain positive cash flows and reserves for at least one year from the date of approval of these financial statements and, as such, the Trustees are confident that the Institute will continue to operate as a going concern. Fundraising The Institute does not engage in fundraising activities with the general public. Costs of raising funds in the financial statements relate to sourcing of new institutional funders. The Institute does not use third parties to assist with fundraising and the Institute received no complaints in this year regarding its fundraising practices. Treasury Management Policy Treasury management activity monitors the timing and amounts of cash inflows and outflows, in particular monitoring and tracking those activities that result in significant cash movement. The Treasury Management Policy is confined to the management of short-to-medium term liquid funds (maximum investment term is 18 months). Assets are protected by investing with approved counterparties. Investments are risk-averse and non-speculative, and the Institute places no income reliance on interest earned. Grant-making policy The Institute’s grants will be subject to outputs being appropriately recorded and assessed. Data held will be in line with the grant guideline requirements issued by UK Research and Innovation. Fundamental principles have been established and adopted by the Institute. These are as follows: – The Institute will award grants that are in line with the charitable objects of the organisation. – The Institute intends to assess grants biannually to ensure compliance with the terms of the grant. – The Institute expects to assess the progress of each grant within three months of the end of the grant period. Reserves policy The Institute reviews its unrestricted reserves policy each year, taking account of its planned activities and the financial requirements for the forthcoming period. The Trustees believe that the Institute should have access to reserves appropriate to the scale, complexity, and risk profile of the Institute. To cover any shortfall in grants and to maintain the financial viability of the Institute, reserves are currently set at the equivalent of a minimum of 6 months of operating costs. In 2015, EPSRC awarded a grant of £42m to the Institute to carry out its charitable objectives. This grant was split between operating resource of £22m and capital of £20m. As at 31 March 2020, the full value of the operating resource grant had been received. The remaining capital grant expires on 31 March 2022. A further resource award of £10m was made by EPSRC in November 2019 to support core operating costs until 31 March 2022. £6m of this was received in 2020/21. The Institute’s unrestricted Fund as at 31 March 2021 was £21.1m (2020: £16.9m). This includes £0.7m (2020: £1.7m) of funding held to cover future years’ financial commitments and £1.7m (2020: £1.9m) of funds designated by the Board for the Institute’s Safe and Ethical AI programme, leaving £18.7m of free reserves (2020: £12.5m), being in line with the above reserves policy. As at 31 March 2021, the Institute holds £8.9m (2020: £7.5m) of restricted reserves.
Remuneration policy The Institute is committed to ensuring a proper balance between paying staff (and others who work for the Institute) fairly, to attract and retain the best people for the job, and the careful financial management of our charity funds. The Remuneration, EDI and People Committee oversees the overall remuneration of staff and specifically that of the Institute Director and those senior managers reporting directly to the Institute Director. The Committee assumes responsibility for remuneration within the Institute and oversees the preparation of policies and procedures in respect of salaries, emoluments, and conditions of service. Formal consideration of remuneration matters takes place annually, usually at the Committee’s April meeting. However, remuneration matters may also be considered at other meetings if ad-hoc issues arise during the year. The Committee does not have full delegated authority to approve all matters relating to remuneration and any recommendation or decision must be ratified by the Board of Trustees. The Institute discloses all payments to Trustees and the number of staff with a total remuneration of £60,000 and above in accordance with the Charity Commission’s Statement of Recommended Practice 2019 (SORP). Risk management Significant risks to which the Institute and Turing Innovations Limited are exposed are reported formally to the Audit and Risk Committee, the Board of Trustees, and the Board of Directors of Turing Innovations Limited via the Institute’s corporate risk register. The Institute has a formal risk management framework embedded within the business that supports the identification and management of risk across the Institute. The Senior Management Team and the Programme Directors are responsible for managing and reporting risks in accordance with the Institute’s Risk Management Policy, while the Trustees retain overall responsibility for risk management. The risk management framework incorporates categories of risk which cover generic areas such as funding and growth, compliance and governance, security and controls, and brand and reputation. The Board of Trustees and the Board of Directors of Turing Innovations Limited seek to ensure that the risks are mitigated, so far as is reasonably possible, by actions taken by the Institute’s Executive Team, Senior Management Team, and the Programme Directors. The impacts of the COVID-19 pandemic have been felt across most, if not all, of the risks captured in the corporate risk register. Activities supporting the wellbeing of the Institute’s people and its members, customers and suppliers have been at the forefront of the Board and the Executive Team’s considerations during the year. With the exception of the pandemic, the main risks faced by the Institute, including its subsidiary Turing Innovations Limited, are captured on the corporate risk register which is regularly reviewed by the Board, the Audit and Risk Committee, and the Board of Turing Innovations Limited, as applicable. A summary of the key risks is included below.
Risk: Sources of funding for the Institute have been under review during the year, creating uncertainty around the long-term financial viability of the Institute.
Mitigation: Prudent financial management of the Institute such that it can react to changes in external funding in an agile, controlled manner. Reviewing options for additional sources of funding.
Risk: Inability to translate the benefits of research programmes into positive outcomes for the public benefit.
Mitigation: Building a network of delivery partners to increase the Institute’s capacity for delivering translational impact.
Risk: Failure to comply with legal and Charity Commission requirements such as data protection, serious incident reporting and export regulations.
Mitigation: Significant focus on improving the control environment during the year with policies and procedures updated and introduced.
Risk: The pipeline of commercial opportunities reduces due to the post-pandemic economic environment.
Mitigation: Deepening existing relationships and focusing effort on strategically important areas.
Risk: Loss of, or inappropriate handling of, the Institute’s data.
Mitigation: Implementing robust security processes, both physical and virtual.
Risk: AI applications developed by or in partnership with the Institute being used for malicious or unintended purposes.
Mitigation: Strengthening ethics processes, ensuring they remain fit for purpose and adequately resourced.
Section 172 Statement The Board of Trustees are aware of their duty under s.172 of the Companies Act 2006 to act in the way that they consider, in good faith, would be most likely to promote the success of the Institute for the benefit of its members as a whole. In this section you will find examples of how the Institute has considered its stakeholders when making decisions during the year. The Board has a duty to promote the success of the Institute for the benefit of the members, whilst also having due regard for the interests of its colleagues, and for the success of our relationships with partners and customers and for the impact of our activities on the wider community. The considerations of the Institute’s stakeholder groups are integral to the Institute’s decision-making. However, where decisions taken may adversely impact a particular stakeholder group, the Institute will always endeavour to treat them fairly. Board considerations All Board decisions are made with the success of the Institute for the long-term benefit of its members and stakeholders at the forefront of the minds of the Trustees. This year, for instance, we have agreed to reduce the contributions required from our university partners because of the difficult trading conditions they have faced as a result of the impacts of the COVID-19 pandemic on their funding models. The Board is considering future funding models, which are likely to have a transformational effect on the Institute’s capacity to fully deliver as the national institute for AI and data science, whilst maintaining the strong relationship built up with our university partners over the last five years. 1. Members Annual General Meeting The Annual General Meeting, held in October 2020, was well attended by our members with all proposed resolutions passed. This was the first AGM the Institute had held virtually. Annual report and accounts Whilst the Institute has a statutory obligation to provide certain information in the annual report, the information is presented in an engaging and understandable way. The Institute also looks to enhance its sharing of information throughout the year through the content made available on the Institute’s website.
Founder Member approvals Throughout the year, the member representatives were asked to approve certain matters that were reserved for them as covered in the Constitutional Documents. This included the decision to extend the term of office of the Board Chair for a further two years until September 2023, providing the Board with continuity whilst it secures future funding and addresses the impacts of the resulting change in model in the context of the pandemic. 2. Colleagues The Board receives regular qualitative and quantitative updates on employee matters from the Director of People at Board meetings, including analysis received through employee engagement surveys, regular EDI updates and an annual update on the Performance Review and Performance Related Pay process. This provides the Board with oversight of the effect our people engagement has on our performance, and the continued strength of our culture. In addition, the committees received various reports during the year relating to the wellbeing of our colleagues, such as the annual report on health and safety, reviewed by the Audit and Risk Committee. Town Hall Enhancing employee engagement is an integral part of the culture of the Institute. Senior management are actively involved in the engagement of colleagues through weekly electronic communications, monthly staff meetings and Town Hall meetings, which involve employees and full-time members of the wider Turing community and provide updates on business developments. 3. Collaborative activity (customers and suppliers) The Board recognises the existence of several key external stakeholders (general public, founding members, university partners, strategic and project partners, government departments and agencies, charitable foundations, customers, and suppliers). The Board remains committed to effective engagement of all stakeholders and is mindful that the Institute’s success depends on its ability to engage effectively, work together constructively and take stakeholder views into account when taking decisions. Shaping the way that research is done The Institute achieves this by providing visible national leadership on setting sectoral best practices. For example, The Turing Way is an online, open source, community-driven guide promoting gold-standard reproducible, ethical and collaborative research in data science. Used by over 5,000 unique visitors in an average month, it covers skills in software engineering, data management, participatory design, and inclusive collaboration across geographic and disciplinary boundaries. Innovation in training and practice The Institute has introduced new approaches to training and research in data science and AI. For example, the Institute hosts Data Study Groups (DSGs) to solve real-world challenges in small teams. With over 650 participants having engaged from across the globe, DSGs enable researchers to put knowledge into practice and go beyond individual fields of research. There is also the Turing Enrichment scheme for PhD candidates, which has supported 97 students from 21 universities to spend between 9 and 12 months at the Institute. These initiatives promote collaborative working on mission-led challenges and strengthen networks and the use of methodologies across disciplinary boundaries for early career researchers. In turn, this strengthens the pipeline of skilled UK data science and AI talent and nurtures a strong, connected and diverse community.
Supporting the national response to the COVID-19 pandemic The Institute’s interdisciplinary nature and culture of solving public problems enabled it to react quickly to the COVID-19 pandemic, acting as a locus for the data science and AI community and addressing the health and healthcare challenges faced. Advising the public sector The public policy programme (PPP) has overseen over 40 research projects since its inception, dedicated to using data science and AI to inform policy-making and improve public services as well as building ethical foundations for the use of these technologies in the public sector. It has provided expert advice on panels such as the Home Office Scientific Advisory Panel, represented the UK at the Council of Europe’s Ad-hoc Committee on Artificial Intelligence, and presented high-profile reports to partners based on original Institute research, for example, on women in data science. The PPP has been active in upskilling senior leaders through workshops with Permanent Secretaries, Chief Scientific Advisors and Chief Economists, as well as having over 85 public sector organisations reach out for advice. LUPC (London Universities Procurement Consortium) In 2020, the Institute became a member of this consortium, whose aim is to achieve value for money for its members in their procurement of goods and services. Stakeholder engagement During the year, the Institute worked closely with its customer stakeholder groups across academia, industry and government, including:
– University Partner Board meetings;
– Strategic Partner Board meetings;
– Regular meetings with UKRI/EPSRC;
– Regular engagement with our university partners through our Turing University Leads (TULs) and University Liaison Managers (ULMs);
– Research and Innovation Advisory Committee meetings and its Working Groups;
– Hosting the inaugural AI UK showcase;
– Holding the Annual General Meeting of member representatives.
4. Community and environment The Board appreciates the impact the Institute has on the community in which it operates and that this is a critical factor in its ongoing success as the national institute for data science and AI. The Institute operates in an environment where it brings together researchers convened through the research programmes and interest groups to address the big questions facing the UK, such as health, through collaboration in research and harnessing the power of data and AI to deliver positive change. Leading the public conversation The Institute’s Events and Engagement programme is one way the Institute leads the public conversation on data science and AI, hosting over 100 events for academia, industry and the wider public during the year and attracting over 4,700 attendees. Examples include The Turing Lectures series and the ‘AI and data science in the age of COVID-19’ one-day conference. In March 2021, the Institute hosted its first national AI UK event. This attracted over 1,200 attendees from across academia, industry, third sector and government, with a combination of strategic and technical discussions and presentations from over 100 expert speakers. Community collaboration One of the Institute’s key contributions is bringing together experts with a range of skills and from an extensive range of disciplines – from the social sciences to theoretical mathematics – to tackle problems together.
To facilitate this cross-pollination, the Institute has 37 interdisciplinary interest groups which act as forums for sharing ideas and knowledge, allowing new ideas for collaboration to be created and increasing the diversity of thought in the data science and AI sector. Links to industry, third sector and government The Institute has 30 active collaborations with organisations in industry, the third sector and government. Five of these are Strategic Partners – Accenture, the Bill and Melinda Gates Foundation, Dstl/GCHQ/Ministry of Defence, HSBC, and Lloyd’s Register Foundation – which are aligned to the Institute’s research programmes. The invaluable access to data provided by all partners allows researchers across the Institute’s network to test theories and methods with real-world data, helps the development of open-access software tools that the Institute’s partners can build on, and provides access to domain knowledge essential to delivering impact across sectors. 5. Principal decisions Principal decisions are those which are material to the Institute and significant to any of our key stakeholders. In making the following principal decisions, the Board considered the outcome from a stakeholder engagement perspective, as well as the need to act fairly on behalf of the members of the Institute. Principal decision 1: Revised interim funding model for the university partners. The Board acknowledged the financial impacts of the pandemic and the current uncertainty around the long-term funding models for our university partners. During the year, the original grant awards from the founding universities came to an end. Rather than seeking renewals on the same terms as the expiring agreements, the Board agreed to implement a 12-month interim model reducing the contributions from all university partners. Principal decision 2: Pausing the Strategic Advisory Board (SAB) until clarity is received on funding and model, and in turn strategy, aligned to the National AI Strategy and the outcome of the Comprehensive Spending Review expected in 2021. The Board took the decision not to hold a meeting of SAB during the year. SAB is the group which advises the Board on implementing the Institute’s scientific strategy. However, the Board felt that SAB could not undertake its duties whilst the National AI Strategy was being developed and the Institute’s funding model remained to be confirmed. The Board expects to reintroduce SAB during 2021-22. Principal decision 3: Reappointment of the Chair of the Board after completing six years’ service. Due to the impacts of the pandemic and the current review of the strategic direction of the Institute aligned with the national picture on data and AI, the Board agreed to recommend to the Founder Members the extension of the Chair of the Board’s tenure beyond six years. The Founder Members unanimously agreed to extend the tenure for an additional two years, subject to a suitable replacement being found earlier. Principal decision 4: Extension of existing office lease. In advance of the expiry of a five-year lease signed with the British Library in 2016, the Board agreed to seek an extension from the landlord for a further two years to use space on the first and fourth floors of the British Library building in King’s Cross.
Principal decision 5: Improvements to the control environment. The Board instigated improvements to the control environment this year through reviewing, updating and approving key corporate policies including whistleblowing, serious incident reporting, delegation of authority, financial regulations, and risk management. The Board also approved a revised data protection policy, data subject requests policy and a data breach and security incident management policy, which were recommended as part of an externally led audit of data protection undertaken in January 2020. In addition, the Board requested that an external review of governance be undertaken in 2022, in which the impact of these changes to the control environment will be considered. Charity Governance Code (the “Code”) The Code has been developed as an ‘aspirational’ model to support continuous improvement in governance. The Trustees confirmed in 2019-20 their support for the principles-based approach of the Code, and an internal review of governance practice at the Institute was undertaken during the year. In January 2021, the Board of Trustees endorsed the need for an external review of governance to be scheduled for 2022, which will consider the progress of the Institute in delivering on continuous improvement in its governance practices. The internal review of governance, which considered current practice, concluded that there had been significant improvements made during the year in areas such as policy development, equality, diversity and inclusion, board diversity and board and committee composition, with further improvements identified that would form the basis of a governance continuous improvement plan to move the Institute towards best practice in governance as set out in the Code. Relevant areas of particular focus this year have included: Constitutional documents The Institute is a charity governed by its Articles of Association, adopted in 2015, and a Joint Venture Agreement (JVA) with the six founding members of the Institute. The JVA sets out the requirements for the appointment of the Trustees, who are the directors of the charity. This enables each of the Founding Members to appoint one Trustee to the Board with the requirement that the Founder Member-appointed Trustees are in the majority at all times on the Board. Given the current number of Founder Members, this allows for a maximum of five Independent Trustees, who are selected based on the skills, experience, and diversity of the Board. Board diversity To ensure that equality, diversity and inclusion is driven from the very top of the organisation, there has been a focus on working towards expanding the diversity of our Board. In August 2020, four new Independent Trustees were recruited from a variety of backgrounds to bring a diverse range of skills, expertise and insights from fields including technology, government, academia and finance to support the Turing’s wider mission to lead the UK in data science and AI. This new expertise has also augmented the committees of the Board to create a more diverse and refreshed membership. Internal controls: policy framework The Trustees have overseen the implementation and updating of key corporate policies which have strengthened the control environment of the Institute during the year.
These have included: Delegations of Authority, Financial Regulations, Serious Incident Reporting, Data Protection, Risk Management, Whistleblowing, Contractual Policies, Gifts and Hospitality, Research Misconduct, and Counter-Fraud and Anti-Bribery & Corruption policies, amongst others. Internal controls: risk management and data protection Significant improvements have been made during the year to both the risk management and data protection processes used by the Institute. This has included implementing a process for regular senior management review of risks, with 10 risk moderation meetings a year, and the recruitment of a data protection specialist to manage data protection matters, deliver training and facilitate the management of subject rights requests and data breaches. Committee terms of reference The Board has undertaken a full review of the terms of reference of the committees during the year. This has resulted in the extension of the scope of the previously titled Remuneration Committee to include oversight of equality, diversity and inclusion and people matters. The review of the Nomination Committee has also resulted in recommendations being approved for governance matters to be incorporated into its scope for 2021-22, to assist the Board in delivering on the continuous improvement activities, and for the Audit and Risk Committee to have oversight of data protection, cyber security and risks. Equality, diversity and inclusion (EDI) The Institute recognises that promoting and embedding EDI in our function as employer, research institute and national body is integral to achieving our mission. Over the years, the Turing community has shown a strong commitment to embedding EDI throughout the Institute, evidenced by the successes of the four volunteer-driven EDI working groups, which have launched many initiatives to raise awareness and supported the People Team in driving EDI forward. During the year, the Institute commissioned an internal audit which identified the need for a strategic approach to be developed to ensure EDI was fully embedded across the Institute. An EDI Strategy and SMART Action Plan have been developed as a result, with consultation completed and the strategy due to launch during 2021. Ownership for delivery of the Strategy and Action Plan resides with the Senior Management Team, with the newly expanded Remuneration, EDI and People Committee providing oversight and holding the Institute to account for performance against the plan. Areas of focus for 2021-22 Having undertaken its first annual review of governance practices against the Code, the Board acknowledges the need to focus on delivering improvements and embedding the good practice which has been put in place during this year. This will include the Board approving an action plan, with the newly retitled Nomination and Governance Committee overseeing its implementation and providing regular updates to the Board. The action plan will focus on:
– Reviewing the governance structure to ensure it remains fit for purpose to deliver the vision, strategy and objectives of the Institute.
– Progressing the review of corporate policies that has been substantially enhanced during 2020-21.
– security and ethics.
Trustees’ responsibilities statement The Trustees are responsible for preparing the Trustees’ annual report and the financial statements in accordance with applicable law and regulations. Company law requires the Trustees to prepare financial statements for each financial year.
Under that law, the Trustees have elected to prepare the financial statements in accordance with United Kingdom Accounting Standards (United Kingdom Generally Accepted Accounting Practice, GAAP) including FRS 102 – The Financial Reporting Standard Applicable in the UK and Ireland. Under company law, the Trustees must not approve the financial statements unless they are satisfied that they give a true and fair view of the state of affairs of the Institute and the result for that year. In preparing these financial statements, the Trustees are required to: – Select suitable accounting policies and then apply them consistently. – Comply with applicable accounting standards, including FRS 102, subject to any material departures disclosed and explained in the financial statements. – State whether a Statement of Recommended Practice (SORP) applies and has been followed, subject to any material departures which are explained in the financial statements. – Make judgements and estimates that are reasonable and prudent. – Prepare the financial statements on a going concern basis unless it is inappropriate to presume that the charitable company will continue in business. The Trustees are responsible for keeping adequate accounting records that are sufficient to show and explain the Institute’s transactions, disclose with reasonable accuracy at any time the financial position of the Institute and enable them to ensure that the financial statements comply with the Companies Act 2006. They are also responsible for safeguarding the assets of the Institute and hence for taking reasonable steps for the prevention and detection of fraud and other irregularities. Trustees are responsible for the maintenance and integrity of the corporate and financial information included on the Institute’s website. Legislation in the UK governing the preparation and dissemination of financial statements may differ from legislation in other jurisdictions. Disclosure of information to the auditor The Trustees who held office at the date of approval of this Trustees’ annual report confirm that, so far as they are each aware, there is no relevant audit information of which the Institute’s auditor is unaware. Each Trustee has taken all the steps that they ought to have taken as a Trustee to make themselves aware of any relevant information and to establish that the Institute’s auditor is aware of that information. Moore Kingston Smith were reappointed as auditors by the Board of Trustees in June 2020 for a two-year term. Signatory The Trustees’ annual report is approved by the Trustees of the Institute. The strategic report, which forms part of the annual report, is approved by the Trustees in their capacity as directors in company law of the Institute. Howard Covington Chair 22 June 2021 Financial statements Independent auditor’s report to the members of The Alan Turing Institute Opinion We have audited the financial statements of The Alan Turing Institute for the year ended 31 March 2021 which comprise the Group Statement of Financial Activities, the Group Summary Income and Expenditure Account, the Group and Parent Charitable Company Balance Sheets, the Group Cash Flow Statement and notes to the financial statements, including a summary of significant accounting policies. 
The financial reporting framework that has been applied in their preparation is applicable law and United Kingdom Accounting Standards, including Financial Reporting Standard 102 – The Financial Reporting Standard applicable in the UK and Republic of Ireland (United Kingdom Generally Accepted Accounting Practice). In our opinion the financial statements: — give a true and fair view of the state of the group’s and the parent charitable company’s affairs as at 31 March 2021 and of the group’s incoming resources and application of resources, including its income and expenditure, for the year then ended; — have been properly prepared in accordance with United Kingdom Generally Accepted Accounting Practice; and — have been properly prepared in accordance with the requirements of the Companies Act 2006. Basis for opinion We conducted our audit in accordance with International Standards on Auditing (UK) (ISAs (UK)) and applicable law. Our responsibilities under those standards are further described in the Auditor’s responsibilities for the audit of the financial statements section of our report. We are independent of the charitable company in accordance with the ethical requirements that are relevant to our audit of the financial statements in the UK, including the FRC’s Ethical Standard, and we have fulfilled our other ethical responsibilities in accordance with these requirements. We believe that the audit evidence we have obtained is sufficient and appropriate to provide a basis for our opinion. Conclusions relating to going concern In auditing the financial statements, we have concluded that the Trustees’ use of the going concern basis of accounting in the preparation of the financial statements is appropriate. Based on the work we have performed, we have not identified any material uncertainties relating to events or conditions that, individually or collectively, may cast significant doubt on the charitable company’s ability to continue as a going concern for a period of at least twelve months from when the financial statements are authorised for issue. Our responsibilities and the responsibilities of the Trustees with respect to going concern are described in the relevant sections of this report. Other information The other information comprises the information included in the annual report, other than the financial statements and our auditor’s report thereon. The Trustees are responsible for the other information. Our opinion on the financial statements does not cover the other information and, except to the extent otherwise explicitly stated in our report, we do not express any form of assurance conclusion thereon. In connection with our audit of the financial statements, our responsibility is to read the other information and, in doing so, consider whether the other information is materially inconsistent with the financial statements or our knowledge obtained in the audit or otherwise appears to be materially misstated. If we identify such material inconsistencies or apparent material misstatements, we are required to determine whether there is a material misstatement in the financial statements or a material misstatement of the other information. If, based on the work we have performed, we conclude that there is a material misstatement in this other information, we are required to report that fact. We have nothing to report in this regard.
Opinions on other matters prescribed by the Companies Act 2006 In our opinion, based on the work undertaken in the course of the audit: — the information given in the strategic report and the Trustees’ annual report for the financial year for which the financial statements are prepared is consistent with the financial statements; and — the strategic report and the Trustees’ annual report have been prepared in accordance with applicable legal requirements. Matters on which we are required to report by exception In the light of the knowledge and understanding of the group and parent charitable company and its environment obtained in the course of the audit, we have not identified material misstatements in the Trustees’ annual report. We have nothing to report in respect of the following matters where the Companies Act 2006 requires us to report to you if, in our opinion: — the parent charitable company has not kept adequate and sufficient accounting records, or returns adequate for our audit have not been received from branches not visited by us; or — the parent charitable company’s financial statements are not in agreement with the accounting records and returns; or — certain disclosures of Trustees’ remuneration specified by law are not made; or — we have not received all the information and explanations we require for our audit. Responsibilities of Trustees As explained more fully in the Trustees’ responsibilities statement set out on page 87, the Trustees (who are also the directors of the charitable company for the purposes of company law) are responsible for the preparation of the financial statements and for being satisfied that they give a true and fair view, and for such internal control as the Trustees determine is necessary to enable the preparation of financial statements that are free from material misstatement, whether due to fraud or error. In preparing the financial statements, the Trustees are responsible for assessing the group and parent charitable company’s ability to continue as a going concern, disclosing, as applicable, matters related to going concern and using the going concern basis of accounting unless the Trustees either intend to liquidate the group or parent charitable company or to cease operations, or have no realistic alternative but to do so. Auditor’s responsibilities for the audit of the financial statements Our objectives are to obtain reasonable assurance about whether the financial statements as a whole are free from material misstatement, whether due to fraud or error, and to issue an auditor’s report that includes our opinion. Reasonable assurance is a high level of assurance, but is not a guarantee that an audit conducted in accordance with ISAs (UK) will always detect a material misstatement when it exists. Misstatements can arise from fraud or error and are considered material if, individually or in aggregate, they could reasonably be expected to influence the economic decisions of users taken on the basis of these financial statements. Irregularities, including fraud, are instances of non-compliance with laws and regulations. We design procedures in line with our responsibilities, outlined above, to detect material misstatements in respect of irregularities, including fraud. The extent to which our procedures are capable of detecting irregularities, including fraud, is detailed below.
Explanation as to what extent the audit was considered capable of detecting irregularities, including fraud The objectives of our audit in respect of fraud are: to identify and assess the risks of material misstatement of the financial statements due to fraud; to obtain sufficient appropriate audit evidence regarding the assessed risks of material misstatement due to fraud, through designing and implementing appropriate responses to those assessed risks; and to respond appropriately to instances of fraud or suspected fraud identified during the audit. However, the primary responsibility for the prevention and detection of fraud rests with both management and those charged with governance of the charitable company. — We obtained an understanding of the legal and regulatory requirements applicable to the charitable company and considered that the most significant are the Companies Act 2006, the Charities Act 2011, the Charity SORP, and UK financial reporting standards as issued by the Financial Reporting Council. — We obtained an understanding of how the charitable company complies with these requirements by discussions with management and those charged with governance. — We assessed the risk of material misstatement of the financial statements, including the risk of material misstatement due to fraud and how it might occur, by holding discussions with management and those charged with governance. — We inquired of management and those charged with governance as to any known instances of non-compliance or suspected non-compliance with laws and regulations. — Based on this understanding, we designed specific appropriate audit procedures to identify instances of non-compliance with laws and regulations. This included making enquiries of management and those charged with governance and obtaining additional corroborative evidence as required. As part of an audit in accordance with ISAs (UK) we exercise professional judgement and maintain professional scepticism throughout the audit. We also: — Identify and assess the risks of material misstatement of the financial statements, whether due to fraud or error, design and perform audit procedures responsive to those risks, and obtain audit evidence that is sufficient and appropriate to provide a basis for our opinion. The risk of not detecting a material misstatement resulting from fraud is higher than for one resulting from error, as fraud may involve collusion, forgery, intentional omissions, misrepresentations, or the override of internal control. — Obtain an understanding of internal control relevant to the audit in order to design audit procedures that are appropriate in the circumstances, but not for the purposes of expressing an opinion on the effectiveness of the group and parent charitable company’s internal control. — Evaluate the appropriateness of accounting policies used and the reasonableness of accounting estimates and related disclosures made by the Trustees. — Conclude on the appropriateness of the Trustees’ use of the going concern basis of accounting and, based on the audit evidence obtained, whether a material uncertainty exists related to events or conditions that may cast significant doubt on the group and parent charitable company’s ability to continue as a going concern.
If we conclude that a material uncertainty exists, we are required to draw attention in our auditor’s report to the related disclosures in the financial statements or, if such disclosures are inadequate, to modify our opinion. Our conclusions are based on the audit evidence obtained up to the date of our auditor’s report. However, future events or conditions may cause the group or parent charitable company to cease to continue as a going concern. — Evaluate the overall presentation, structure and content of the financial statements, including the disclosures, and whether the financial statements represent the underlying transactions and events in a manner that achieves fair presentation. — Obtain sufficient appropriate audit evidence regarding the financial information of the entities or business activities within the group to express an opinion on the consolidated financial statements. We are responsible for the direction, supervision and performance of the group audit. We remain solely responsible for our audit report. We communicate with those charged with governance regarding, among other matters, the planned scope and timing of the audit and significant audit findings, including any significant deficiencies in internal control that we identify during our audit. Use of our report This report is made solely to the charitable company’s members, as a body, in accordance with Chapter 3 of Part 16 of the Companies Act 2006. Our audit work has been undertaken so that we might state to the charitable company’s members those matters which we are required to state to them in an auditor’s report and for no other purpose. To the fullest extent permitted by law, we do not accept or assume responsibility to any party other than the charitable company and charitable company’s members as a body, for our audit work, for this report, or for the opinions we have formed.