The Alan Turing Institute
Annual Report 2019–20

Contents
Section 1 A truly national institute
Section 2 Trustees’ and strategic report 78
Section 3 Financial statements 94

Section 1 A truly national institute
1.1 Chair’s report
1.2 Institute Director’s report
1.3 Highlights of the year
1.4 Partnerships and collaborations
1.5 Research impact case studies
1.6 The year in numbers
1.7 Engagement, outreach and skills
1.8 The year ahead

Section 1.1 Chair’s report

In this very difficult time of pandemic, it has never been more important to demonstrate the Institute’s commitment to the exchange of research and scientific ideas nationally and globally. I am proud of our response, which has seen the Turing’s community rise swiftly to meet the urgent need for scientific innovation to tackle the spread and effects of COVID-19. We have quickly worked in tandem with local and national government on new research projects to support the national effort. Our business team did a remarkable job in ensuring a seamless transition to remote working right across the Institute.

This year the Institute has, once again, been at the forefront of exciting developments in data science and AI in the UK and beyond. Despite concerns about Brexit and uncertainty about research funding, the Turing remains committed to enabling the UK to exercise leadership in AI and data science research. Our annual report highlights how the Turing is working with a range of partners to achieve far-reaching impact. The Institute continues to nurture and grow new projects, partnerships and collaborative research with universities, industry, government and third sector organisations. This year has seen the initiation of significant new projects with the Financial Conduct Authority, the Bill & Melinda Gates Foundation, the NHS, Ofsted and Alzheimer’s Research UK, among many others.

The Institute has been building on its record of success to explore new research opportunities for innovation and impact across the UK. We have been actively supporting work to enhance the UK’s pipeline of data science talent and to ensure that the UK is at the forefront of emerging technologies. The new AI Fellowships are an example of how we are working with government to support a diverse AI research community and enable the UK to create a sustainable AI research and innovation ecosystem. The Institute is uniquely positioned to share best practice and help achieve strategic alignment around the data skills gap, and we want to build further on this strength in future.

The Turing works hard to be a truly national institute. This year has seen us increasing our engagement with institutions beyond our thirteen partner universities. We have formed an exceptionally strong and thriving network of academic institutions that makes a vital contribution to data science and AI through research projects, data study groups, open calls, workshops, seminars, and events. The number and range of students, from across the UK, applying to our enrichment scheme has also continued to grow.

The Turing has continued to build its nationwide engagement programme. This ranges from events on specialist topics, through dynamic ‘Data Debates’ to engage the public with our work, to our prestigious series of Turing Lectures. This year, COVID-19 prevented us from delivering our first national showcase of AI in the UK, but we are looking forward to this taking place in the spring of 2021.
Finally, I would like both to congratulate our chief executive, Sir Adrian Smith, on his appointment as President Elect of The Royal Society, and to thank him for his continuing commitment to the Institute, notwithstanding the pressures of this new role. I also want to thank Adrian, the Board of Trustees and all our Turing colleagues for their work during this exceptionally challenging year.

Howard Covington
Chair of the Board of Trustees
The Alan Turing Institute

Section 1.2 Institute Director’s report

Data science and AI are under the microscope, thrust into the public eye by the global pandemic. Current national concerns have served to confirm the Institute’s key role in ensuring data science and AI are used to have a positive impact on our society and economy. There has been a significant response from the Turing community to the call from national government and local organisations to help tackle the spread of the virus. The Institute is at the forefront of this response through projects such as DECOVID and Odysseus (alongside the Greater London Authority and others).

During this crisis, the challenges associated with data science, data modelling, ethics and privacy have been apparent. There is now a wider recognition of the importance of collecting, organising and manipulating complex data at scale and of the associated social, behavioural and ethical issues raised. The Institute’s data safe havens are crucial to unlocking the potential of data science. By creating safe and secure computing environments, the Institute contributes to transformative impacts in health, finance, engineering and science.

The Institute also has a vital role in pushing the boundaries of science for public good. I am delighted that we are leading research to improve understanding and explore how AI can help to combat modern slavery. Additionally, our emerging digital identity project will explore inclusive and responsible foundational identification systems which could be transformative for the most vulnerable members of society.

Inequality is a blight on society. We have continued to make strides to address these issues within our worlds of data science and AI and have established a set of values and behaviours that we expect our community to demonstrate: trust, inclusivity, respect, leadership, transparency, and integrity. These values permeate our work as researchers throughout our UK network collaborate across disciplines and beyond boundaries to generate impact both in theoretical development and in application to real-world problems.

Through Theory and Methods Challenges, the Institute has stimulated foundational research across the Turing’s university partner network. This represents our first steps to formalise theoretical foundations research at the Institute and also demonstrates an opportunity to create further UK and international impact. In terms of translation of research, our Research Engineering Group continues to make a significant contribution to pulling through research to impactful applications. This model of working ensures that the tools they develop are applicable to a wide range of research areas.

With the combined challenges of Brexit and coronavirus, the UK economy faces uncertain times. It is essential that we develop and nurture appropriate data science and AI skills.
The Engineering and Physical Sciences Research Council (EPSRC) is working with us to ensure the future sustainability of the Turing as the national centre for AI and data science research, delivering positive benefits for the UK. The Turing is a truly national institute that I am proud to lead with the support of the Board, my colleagues and our community. It has been a great personal honour to be elected as President of the Royal Society from this coming November, but my commitment to the Institute remains and I look forward to maintaining and building on our successes over the coming years.

Adrian Smith
Institute Director and Chief Executive
The Alan Turing Institute

Founding partners
The Institute’s founding partners are the universities of Cambridge, Edinburgh, Oxford, University College London and Warwick and the Engineering and Physical Sciences Research Council (EPSRC). Answering a national need for investment in data science research, they formed the Institute as a joint venture in 2015, following an open competition run by the EPSRC. Each founding university has appointed a Turing University Lead who acts as an interface between the Institute and the founding university.

University partners
Our university network, expanded from the Turing’s five founding universities, enables the Institute to conduct even more ambitious collaborative research.

Strategic partners
The Institute’s unique position at the interface of academia, business, third sector and policy distinguishes it from other research institutions. In addition, it creates a wealth of potential collaborative opportunities through our dynamic relationships with our strategic partners. Our relationships with our strategic partners contribute to what sets the Institute apart from universities and other research organisations and drive our innovation and impact. As well as delivering ambitious programmes of research, the partnerships seek to build meaningful connections between academic excellence and real-world challenges in business and government.

Section 1.3 Highlights of the year

Responding to the COVID-19 pandemic
On 31 December 2019, Wuhan Municipal Health Commission, China, reported a cluster of cases of pneumonia in Wuhan, Hubei Province. A novel coronavirus was eventually identified. Since then, the impact of COVID-19 on our society, economy, health and wellbeing has been profound. Experts at the Institute and across the university network are addressing the urgent need for scientific innovation to tackle the spread and effects of COVID-19. The Turing is playing a key part in the nation’s efforts against the pandemic. Research considered a priority has been coordinated in response to requests from government, and the Institute is providing scientific and technical advice and crucial insights to support government decision makers through a range of projects.

DECOVID
DECOVID aims to use near real-time health data from hospitals as the COVID-19 pandemic unfolds, allowing researchers and clinicians to generate rapid and robust insights that can lead to more effective clinical treatment strategies, in a range of useful and actionable ways for key clinical, operational and regulatory decision makers. Alongside the Turing, the founding partners of the collaboration are University Hospitals Birmingham NHS Foundation Trust, University College London Hospital Foundation Trust, University College London and the University of Birmingham.
DECOVID will be placed within the infrastructure of PIONEER, the HDR-UK Health Data Research Hub for Acute Care. Insights from DECOVID will help hospitals treat COVID-19 patients more effectively, reducing the strain on frontline staff (both emotionally and physically) and on the system as a whole. The project will also research the impact of the pandemic on non-COVID patients who may have had their treatment delayed or who did not seek care when they otherwise would have. Understanding the impact of the pandemic on ‘normal business’ will help the NHS plan for how to mitigate this impact in any future surges. Find out more at decovid.org.

Understanding London’s movements during lockdown
Researchers from the Institute, backed by and in partnership with Lloyd’s Register Foundation (LRF), have been mobilised to provide crucial insights to help London authorities during lockdown and support planning for the future after lockdown. Despite considerable disruption to everyday life and the economy, the public’s response to the ‘Stay at Home’ message (announced on 23 March 2020) has been crucial to slowing the transmission of coronavirus.

The Institute was already working on an ambitious collaboration with the GLA (Greater London Authority) and TfL (Transport for London) through the data-centric engineering programme (funded by LRF) with additional support from Microsoft. Now, working alongside a team of researchers from the universities of Warwick, Cambridge and UCL, the team has repurposed its existing models, infrastructure and machine learning algorithms from the air quality work. They’re using them to understand how and when ‘busyness’ is changing across the capital in the wider context of COVID-19, and how positively the public are responding to interventions. Microsoft is a key partner, providing Azure cloud and AI services and expertise to the project.

The project, codenamed ‘Odysseus’, aims to help manage the crisis, inform the return to normality and act as a springboard for London’s economy in the long term. The outputs from this research are already providing crucial insights to the GLA’s Strategic Coordination Group (SCG) and Public Health England.

Rapid Assistance in Modelling the Pandemic (RAMP)
Mark Birkin, the Turing’s Programme Director for Urban Analytics, is leading a key workstream of the Rapid Assistance in Modelling the Pandemic (RAMP) initiative, which is bringing modelling expertise from a range of disciplines to support the pandemic modelling community working on coronavirus. Mark’s expertise is helping to connect epidemic models to transport and urban analytics. As well as a Programme Director and Turing Fellow, Mark is Professor of Spatial Analysis and Policy in the School of Geography at the University of Leeds and Director of the Leeds Institute for Data Analytics. He said, “RAMP, and its distinct, cross-disciplinary approach, provides an important platform for predictive analytics supporting policy insight and enhanced decision-making. The Turing and I welcome the opportunity to continue to be actively involved in the UK’s fight against the pandemic.”

The Turing is also providing project management support to RAMP. RAMP is designed to provide support for existing research groups and create new models or insights that can be used to inform the work of the Government’s scientific advisors.
An important goal of RAMP is to enhance modelling capacity in time to create a clearer understanding of different exit strategies from the current lockdown. RAMP will operate alongside Data Evaluation and Learning for Viral Epidemics (DELVE), a multidisciplinary group convened by the Royal Society.

Health Foundation initiative on policy interventions
The Health Foundation is working with the Turing to find data-driven answers to how different policy interventions affect health, social, and economic outcomes. The initial focus of the project is on building databases and epidemiological models related to the COVID-19 pandemic. The tools will allow policy makers in the UK to learn from the experience of other countries and will enable the Government to reach informed decisions on, for example, relaxing social distancing measures, preventing a second peak of infections, or ensuring that the NHS can cope with further outbreaks.

The Turing and Health Foundation are also developing a project on health-related misinformation during the pandemic. Helen Margetts, Programme Director for Public Policy, said, “Many studies have shown that deeply concerning misinformation has circulated during COVID-19. But we don’t understand whether, and which, people are likely to actually believe it. We’re addressing this with the Health Foundation, helping to unpick the real threat posed by false health-related online content.”

Section 1.3 + Highlights of the year
A year in review

AI Fellowships
Following a wider skills and talent package announced by the UK Government in 2019, the Turing appointed five highly talented Turing AI Fellows in October 2019. The Office for Artificial Intelligence, the Turing and UK Research and Innovation worked together to attract some of the best research talent from around the world. To bring more outstanding AI talent to the Institute, new calls were also launched for the Turing AI Acceleration Fellowships and the Turing AI World-Leading Researcher Fellowships; together, both calls received £37.5 million of investment to support a number of fellows over five years. The Turing AI Fellows were appointed for five years and were drawn from a variety of disciplines and backgrounds. They are tackling research challenges ranging from sustainable aviation to AI for discovery in data-intensive astrophysics.

Machine learning in social care
A review carried out by the University of Oxford’s Rees Centre and The Alan Turing Institute for What Works for Children’s Social Care found substantial reasons to be concerned about the ethics of using machine learning techniques in social care, concerns that can only be mitigated through care and transparency in their use. What Works for Children’s Social Care published the review in January 2020. The review concluded that these techniques should not be used without proper ethical oversight; that there are serious risks of reinforcing biases, or risk aversion in the system; and that low data quality may mean either that risks are missed, or that families are subjected to assessment or interventions that they don’t need.

Combating modern slavery
A research team called Hidden Figures has been formed, drawing together members of three Turing programmes and fellows from several universities.
The team works with colleagues from the Defence Science and Technology Laboratory (Dstl) and the defence and security community to link various strands of research, including GUARD (see page 19) and Bayesian Extremism, to look at the displacement of vulnerable people and the drivers causing this displacement, such as climate change and conflict. It has been a year of rapid progress, with the team helping to establish a new Centre for Modern Slavery & Human Trafficking in October 2019. Anjali Mazumder became the Turing’s AI and Justice and Human Rights Theme Lead as part of the UKRI-funded Policy and Evidence Centre for Modern Slavery and Human Rights. Anjali was also interviewed, alongside Andrew Wallis (CEO and founder of Unseen, which runs the UK’s modern slavery helpline), on the UK’s Modern Slavery Act and the role of AI in combating modern slavery. In March 2020, a workshop was convened to develop an AI roadmap for combating modern slavery.

Understanding online abuse
The Institute’s public policy programme, and specifically its project ‘Hate speech: Measures and counter-measures’, published a new policy briefing in November 2019, ‘How much online abuse is there? A systematic review of evidence for the UK’, identifying considerable shortfalls in the UK’s existing monitoring practices for online abuse. Recommendations included an annual survey into people’s experience of online abuse, a real-time monitoring platform for online abuse and a bulletin of government statistics.

Ministerial visit by Baroness Blackwood
In October 2019, Chris Holmes, the Institute’s Programme Director for Health and Medical Sciences, hosted a visit from Baroness Blackwood, then Parliamentary Under Secretary of State at the Department of Health and Social Care (DHSC). The visit aimed to demonstrate the Institute’s vision of how to accelerate the implementation of data science and AI innovation within the health sector. The Institute intends to build on this conversation around the regulation of AI in health and on DHSC’s vaccine strategy.

The Research Engineering Group

A culture of openness
The Institute’s Research Engineering Group (REG) has grown significantly over the past year and now includes 28 full-time research data scientists and research software engineers, making it one of the largest such groups in the UK. The team has also been developing its secondment programme, embedding several PhD students and Civil Service Fast Streamers for six-month placements. Working with researchers across the Turing’s research programmes on over 40 projects, the REG has maintained and promoted a culture of openness and knowledge sharing, reflected in its commitment to open source software, its contributions towards shareable and reproducible research and its collaborations with partners. The group is working with the tools, practices and systems programme to help embed open, transparent and reproducible research practices across the Turing community, including contributing to The Turing Way, the programme’s flagship open guide to reproducible data science. See page 43 for more.

Wrattler
The Defence Science and Technology Laboratory (Dstl) is evaluating Wrattler, a notebook system built to foster interaction and reproducibility, and is organising trials, while a proposal to the Institute’s tools, practices and systems strand of the AI for science and government (ASG) programme is also being written. The proposal includes the use of Wrattler across the Turing community as a showcase for Turing research.
A new journal
Mark Girolami, Programme Director for Data-Centric Engineering, inspired a new Cambridge University Press ‘Data-Centric Engineering’ journal: a cutting-edge, cross-disciplinary journal focussing on research at the intersection of data science and a broad range of engineering subjects.

IET BCS Turing Talks
Mark Girolami also delivered three prestigious IET (Institution of Engineering and Technology) BCS (British Computer Society) Turing Talks in Belfast, Manchester and London, exploring the exciting advances being made in digital twin technology.

Applied research centre
The Turing launched a new applied research centre (ARC) in collaboration with its UK Government defence and national security partners. See page 29 for more details.

SPARRA
The Turing’s partnership with Public Health Scotland has hit its stride in the last 12 months, bringing the latest advances in data science and machine learning to bear on the complex challenge of predicting who will require emergency hospital admissions. See page 41 for more details.

Crick-Turing Biomedical Data Science Awards
The Turing and The Francis Crick Institute launched a new call for Biomedical Data Science Awards for collaborative research between biomedical investigators and data scientists. The awards provide funding for post-doctoral AI and data science researchers to pilot research projects that have been collaboratively developed with a biomedical investigator, applying data science approaches to biomedical challenges.

Theory and Methods Challenge Fortnights
Through the Institute’s ‘Theory and Methods Challenge Fortnights’ (TMCF) the Turing has been supporting, stimulating and promoting foundational research through its network. TMCFs are intensive two-week events where teams of experts from across the Turing university partner network and external institutions worldwide collaborate to initiate work on tackling a foundational challenge. A series of successful events have produced results of wide theoretical and methodological importance in data science and AI. The establishment of TMCF represents the Turing’s first steps in formalising theoretical foundations research work at the Turing and an opportunity to create further impact.

Predicting conflict
The ground-breaking ‘Global Urban Analytics for Resilient Defence (GUARD)’ project is working to fuse advanced statistical modelling techniques with historical geopolitical data to predict global conflict up to a year before it happens. In tests on recent historical data, the system that has been developed is now 82–94% accurate at predicting, 12 months in advance, which peaceful regions will erupt into conflict and which conflicted regions will find peace. Veronica Wardman, Technical Partner for the GUARD project at the Defence Science and Technology Laboratory (Dstl), said, “More welcoming doors are opening right now across a broad range of government sectors than I have ever seen for just one project.” The work is also being followed with interest by the United Nations, which joined forces with the Turing to host a two-day workshop in July 2019 entitled ‘AI, Peace, and Security’. Project lead Weisi Guo, who is a Turing Data-Centric Engineering Group Lead, has seen his impressive work on the project featured by the BBC and Nature.

Disaster response for Hurricane Dorian
In September 2019, The Bahamas was hit by Hurricane Dorian, the most powerful storm to strike the islands since records began.
The Planetary Response Network (PRN), a team led by Turing Fellow Steven Reece at the University of Oxford, was involved in supporting the response to the disaster. The PRN applied its suite of techniques and technology, a mix of crowd-sourcing, machine learning and neural networks, to before-and-after satellite imagery of The Bahamas, aiding the agencies dealing with the storm’s immediate aftermath. “We repeated the damage assessment work we’d done previously for Hurricanes Irma and Maria,” said Reece, “to see if the ports were free of large debris to find out if they could support aid delivery.” Reece’s team worked closely with 24 Commando Royal Engineers, the British Army’s Military Engineers, helping them to understand the technology and, in turn, the Royal Engineers helped the PRN to understand the information requirements of emergency responders on the ground.

Black box decisions
A new paper published by researchers Sandra Wachter, Chris Russell and Brent Mittelstadt, ‘Why fairness cannot be automated: Bridging the gap between EU non-discrimination law and AI’, offered an assessment of the compatibility of fairness metrics used by the EU Court of Justice and fairness metrics used in computer science. The paper explains which parts of AI fairness can and cannot (and should not) be automated, and finally suggests ideas for legally compliant algorithmic bias audits.

AI and inclusion
Turing researchers have explored the way AI has been used to automate web accessibility checkers to support those with disabilities. The project team has also worked closely with local assistive technology and service providers to conduct research into a decision support system related to workplace assessments for disabled people. Research has also included a fascinating investigation into issues for those supporting augmentative and alternative communication (AAC) users who may have severe communication difficulties.

‘AI in Finance’ report reaches US House of Representatives
The new financial year saw the Turing release a new report, ‘Artificial intelligence in finance’, exploring how AI is rapidly transforming the global financial services industry. Lukasz Szpruch, the Turing’s Programme Director for Finance and Economics, said, “This report lays out the basic principles and key trends in the world of AI in finance and is an important step in improving literacy.” The report has had the highest number of clicks from Google of any one page on the Turing website in the past year and its impact has been felt as far as the US House of Representatives’ Committee on Financial Services. The Committee contacted the Turing and the report’s author, Bonnie Buchanan of the University of Surrey, to say that the report was “very much appreciated”, particularly by two US Government task forces: one focused on financial technology and another on artificial intelligence, both recently formed in an effort to bring Congress up to speed on such issues.

A new podcast
The Turing Podcast, launched in spring 2020, is an exciting new podcast with intriguing discussions on all things data science, AI and machine learning-related, with a focus on real-time research taking place at the Institute. Episodes so far have covered a broad range of topics, from image analysis in neurodegenerative disease and astrophysics in the age of big data to how smartphones could be mobilised to track the COVID-19 pandemic.
Executive education
In partnership with UCL Consultants Ltd, the Turing has been delivering executive education, via ‘Introduction to AI’ briefings for senior staff within the Ministry of Defence, and a five-day AI masterclass for specialist practitioners within the Defence Science and Technology Laboratory. Together, dozens of these sessions have taken place, with over 100 participants.

Equality, diversity and inclusion
The Institute continues to promote equality, diversity and inclusion for all, including its students, researchers and staff. This year a range of dynamic activities have helped progress EDI across the Turing community and beyond. The activities, organised through the Turing’s EDI Working Groups, have included hosting informal meetings and practice interviews for local long-term unemployed and disabled candidates, disability training and unconscious bias training. Under the Camden Council Ability Scheme, which supports Camden residents with long-term health conditions or disabilities into employment, the Turing has been extensively involved with the Business Disability Forum (BDF), the world’s leading employers’ organisation focused on disability. The Institute has been developing policies and procedures with BDF’s expert guidance, and, in September 2019, became a corporate member of the forum.

Key areas of focus
Amongst Knowledge Quarter organisations, the Institute has led the conversation on intersectionality and neurodiversity in the workplace by hosting a roundtable on intersectionality and setting up a local neurodiversity network. This in turn inspired the British Library’s new Knowledge Quarter LGBTQ+ local network. The LGBTQ+ working group participated in Pride for the first time alongside the British Library and the Royal Academy of Engineering. Campaigns to raise awareness this year included National Inclusion Week, Mental Health Awareness Week, International Women’s Day and Women in STEM, and Neurodiversity Celebration Week.

The Turing sits on the Science Council and Royal Academy of Engineering diversity and inclusion progression framework steering group and is building promising links with Camden Council to further the importance of EDI in schools. The Turing is also represented on the UKRI Sector Immigration Forum. Through this forum and in association with the British Library, The Francis Crick Institute and The Wellcome Foundation, the Institute hosted a researcher day. This event gave an overview of the global and collaborative nature of science and research in action, as well as highlighting some of the interdependencies between migration policy and the people, skills and collaborations needed throughout the research and innovation sector. The event was attended by representatives from the Department for Business, Energy and Industrial Strategy (BEIS), the Migration Advisory Committee and the House of Commons Home Affairs Committee.

The Turing remains an active member of the Research Institute Advisory Group (RIAG) for Athena Swan. The Athena Swan Charter has undergone an independent review in the last year and has begun work on implementing a transformation plan. The Institute plans to apply for the bronze award under the new system and processes that the transformed Charter will introduce.

Inspiring research
The Institute’s three-year Women in Data Science project launched a hub to connect women with resources, news and research and gather feedback about the needs of the community.
The project uses research to inform policy measures aimed at increasing the number of women in data science and AI. To ensure that recommendations arising from our own research are embedded within the Turing community, researchers from the project sit on the EDI Advisory Group.

Understanding sensory processing differences in autism was one of the autistic community’s top ten research priorities, identified in the 2016 James Lind Alliance priority-setting partnership led by Autistica. To help investigate this issue, Turing researchers are working with Autistica to develop an online citizen science platform to gather information at scale on experiences of sensory processing and navigating different environments. This will increase understanding of sensory processing in a way which improves the daily lives of autistic people.

Our values
As the Institute continues to grow rapidly it becomes increasingly important to nurture an inclusive, supportive culture. The Turing has now defined a set of values and behaviours it expects the Turing community to demonstrate and lead. A new consultation group developed a set of core Turing values: trust, inclusivity, respect, leadership, transparency, and integrity. These were launched in July 2019 and, to complement this, the Institute provided ‘active bystander’ training across the organisation to teach techniques to identify and address unacceptable behaviour at any level. For more about the Turing’s EDI-related activities visit: turing.ac.uk/EDI

Section 1.4 Partnerships and collaborations

Collaborations

Openness and optimisation
‘Optimising flow within mobility systems with AI’ was an 18-month project between the Turing’s urban analytics programme and the Toyota Mobility Foundation (TMF) that ended in December 2019. The project fostered fruitful collaborations with a variety of key transport organisations, including Transport for Greater Manchester, the Connected Places Catapult and the Transport Research Laboratory. One of the outputs, created in collaboration between two Turing Fellows and the Turing’s Research Engineering Group, was a prototype Mobility Data Toolkit for assessing and visualising a wide range of traffic data, from traffic flow and congestion to levels of harmful pollutants, at virtually any scale and slice of time. Such a versatile system could help city planners prepare for the future and manage current conditions.

Another output was pioneering work in AI-augmented traffic lights. The system used neural network technology to slash waiting times at junctions, for now in simulation. However, the team are working with Vivacity Labs and the Transport Research Laboratory to investigate the practical benefits of these policies. In January 2020, the partnership with TMF was cemented with new funding. This 18-month project aims to bring these and other mobility technologies to fruition and place them with TMF’s existing mobility partners and projects. “We have valued the Turing’s openness and desire to work with our mobility partners to anchor this research in real-world contexts at the local level,” said William Chernicoff, Senior Manager, Global Research & Innovation at Toyota Mobility.

Public sector guidance
In a bid to maximise AI’s benefits for the many, the Turing’s public policy programme, with its strong ties to many UK Government agencies, provides crucial insight and ethical guidance to policy makers.
Evidence of this is the partnership with the Office for Artificial Intelligence (OAI) and the Government Digital Service (GDS): the Turing produced guidance for the public sector on the responsible design and implementation of AI systems. The comprehensive guide, called ‘Understanding Artificial Intelligence Ethics and Safety’, was published in June 2019. Read the case study on page 53.

Innovation mapping with Nesta
The Turing’s finance and economics programme partnered with the innovation foundation Nesta’s Innovation Mapping team to deliver HackSTIR, a Hack Week focused on applying novel data-driven methods to the problem of innovation mapping. The initiative helps support the development of the community of researchers and Research and Innovation (R&I) policy makers. The week was hosted by Nesta in October 2019 in partnership with the Turing, with support from the Intellectual Property Office and SAGE. For the participants, HackSTIR became an immersive environment for learning, knowledge exchange and rapid project prototyping, with teams exploring a range of challenges including mapping trends in digital social innovation using social media data, and predicting the success of research funding proposals by identifying policy themes from raw text and trying to understand what differentiates successful proposals from unsuccessful ones. The datasets included social media (Twitter), policy initiative databases, and funding proposal data.

Turing X Crick
For the first time the Turing, alongside The Francis Crick Institute and Entrepreneur First, and supported by UK SPINE KE, hosted an inspiring new summer school which combined expertise in data science and biomedical science. Over four days, biomedical scientists, data scientists and technologists from across the UK explored some of the key techniques and approaches essential to translating research into real world solutions.

Collaborating with NATS
It’s been another productive year for the Turing’s collaboration with NATS, the UK’s leading air traffic control provider, exploring the performance of AI-based air traffic control agents and providing critical insights for the development of new tools and decision aids. Read the case study on page 51.

New digital ID systems project funded by the Bill & Melinda Gates Foundation
A new four-year research and development initiative launched in spring 2020 aims to enhance the privacy and security of national-scale digital identity systems. The project, funded through a £4.3 million grant from the Bill & Melinda Gates Foundation, works to accelerate financial inclusion globally by broadening the reach of low-cost digital financial services. According to the World Bank, approximately 1 billion people globally lack an officially recognised form of identification and consequently face barriers to accessing critical services and exercising political and economic rights. Robust, inclusive, and responsible foundational identification systems can be transformative for the most vulnerable by enabling financial inclusion, empowerment of women and girls, access to basic healthcare, education, social safety nets and assertion of rights. Proving your identity is also the basis for accessing rapidly expanding forms of digital financial services, such as digital payment systems, that are essential to advancing financial inclusion and women’s economic empowerment.
The Turing will produce research that can enhance the safety and security of digital ID systems by drawing on the UK’s deep expertise in privacy-enhancing technologies and data-driven cyber security, as well as the evaluation and management of risk in such systems.

A new applied research centre
A new defence and security applied research centre (ARC), initially supported by £3.5 million, was launched in September 2019 and will focus on delivering real-world impact to enhance national security. It will directly address policy needs and enable the UK’s security forces to draw on the very best of academia to achieve high-impact solutions to the most pressing challenges in the field. Turing Programme Director for Defence and Security, Mark Briers, said, “The defence and security ARC will take the greatest innovations from around the world and apply them in the context of real-world security problems. This has the potential to revolutionise the defence and security community’s use of novel data science and artificial intelligence technologies and reduce the time to adoption, ultimately creating a safer environment for the national and international community. The data scientists and research software engineers in this new team will form part of our community of practice, delivering robust software and reliable analyses.”

Public Health Scotland
The Turing’s partnership with Public Health Scotland hit its stride in the last 12 months, bringing the latest advances in data science and machine learning to bear on the complex challenge of predicting who will require emergency hospital admissions. Read the case study on page 41.

Section 1.4 + Partnerships and collaborations
International engagement

The Institute’s international reputation is very strong, in part because of its unique structure as a national research institute that operates through collaborations across industry, the university sector, charities and government. The Institute also attracts a great deal of interest abroad as a consequence of the UK’s reputation for innovation and excellence in research ranging from theoretical to highly applied. The Institute’s approach to international engagement is focused on the pursuit of research exchanges with other leading centres, meaningful collaborations and high-profile networking events. This year, Turing researchers and programmes have continued to add value to international dialogues concerning standards, regulation and ethics. The Institute has deepened its working relationship with countries with which it had already established collaborations, such as Japan, Singapore, France and Canada, and opened dialogues with other countries with strong AI ecosystems, including Germany, China, the US and South Korea.

“International collaborations are a key way to strengthen the UK’s economy and benefit society through the innovation they enable across all sectors,” said Allaine Cerwonka, the Turing’s Director of International. “While the changes in research funding due to Brexit and the outbreak of an international pandemic have complicated the international landscape, the Turing remains committed to enabling the UK to exercise international leadership in AI and data science research innovation, ethics, regulation and training.” The following pages detail a few highlights of the year.
Japan
The UK and Japan came together in September 2019 for a formal collaboration involving a two-day workshop held at the Edinburgh Centre for Robotics to identify collaboration topics, followed by an industry workshop held at the Turing to align the high-TRL (technology readiness level) research focus. The Turing has initiated agreements with AIST (the National Institute of Advanced Industrial Science and Technology), NII (National Institute of Informatics) and the RIKEN Centre for Advanced Intelligence Project, as part of a wider UK Government announcement relating to new scientific collaborations between the UK and Japan in the fields of robotics, AI and the ethical use of data. The workshop culminated in a fantastic networking event at the Japanese Embassy in London, hosted by the Japanese Ambassador to the UK.

In January 2020, UKRI announced the funding of six projects designed to uncover the uncertain and wide-ranging impacts of AI on our society, culture and economy. One of these is a collaboration between the Turing and RIKEN called ‘PATH-AI: Mapping an Intercultural Path to Privacy, Agency, and Trust in Human-AI Ecosystems’. “RIKEN and the Turing share many parallel ambitions in bringing together diverse disciplines and engaging in world-leading research activities,” said Masashi Sugiyama, Director of the RIKEN Center.

France, Canada
June 2019 saw the Turing collaborate on and host a trilateral event that convened leading researchers from Canada, France and the UK to explore the technical underpinnings of ‘AI in Society’, including AI fairness, interpretability and privacy, as well as policy issues including online harms. The Turing worked closely with CIFAR (a Canadian-based global charitable organisation), CNRS (the French National Centre for Scientific Research), the DATAIA Institute, the University of Alberta, the UK Science and Innovation Network and UKRI to organise the workshop, which established closer links between the research communities in the three countries and forged new collaborative projects. As a direct result of the workshop, many different research proposals by various consortia were submitted in response to a UKRI funding call for UK-Canada interdisciplinary collaborative projects. In addition, the initiative received great feedback from the Office for AI, the Department for Business, Energy & Industrial Strategy and other stakeholders in the UK AI ethics ecosystem.

Australia
In October 2019, the Turing announced a collaboration with the new Australian Research Council Training Centre in Data Analytics for Resources and Environments (DARE). The centre aims to boost data science skills in the natural resources sector, in a bid to better understand the ongoing impact and long-term consequences of the use of natural resources. It will enable researchers to apply their data science models to pressing environmental challenges, such as water storage, biodiversity loss and the extraction of minerals. “Collaborating with DARE will allow research scientists to address global challenges affecting our natural environment,” said the Turing’s Programme Director for Data-Centric Engineering, Mark Girolami. “The diversity of scientific expertise the centre is bringing together will create an exciting opportunity for innovative and impactful research outcomes, which can translate into global solutions.”

Finland
It has also been a good year for international student exchanges.
The data-centric engineering programme signed a memorandum of understanding with the Finnish Centre for AI in March 2019. Then, in November, it was announced that in 2020 there would be bilateral doctoral student exchanges that cut across the Turing’s health and medical sciences, data-centric engineering and urban analytics programmes.

Ireland
The Institute signed a memorandum of understanding with the Insight Centre for Data Analytics in Ireland. Insight agreed to fund 9-month placements for two of its students on the Turing’s enrichment scheme. Insight is one of Europe’s largest data analytics research centres and the Turing is the UK’s flagship institution for data science and artificial intelligence, so this relationship-building endeavour makes perfect sense. The participating students, the first of whom started in October 2019, are selected from the Science Foundation Ireland’s PhD Studentship scheme.

Collaborating across continents
The Institute continues to expand its international reach with a diverse range of collaborators across all continents. The breadth of the Institute’s international network and the world-class expertise of its researchers provide the best possible means of tackling shared global challenges. In September, the Institute organised an event called ‘Digital Aid: Understanding the Digital Challenges Facing Humanitarian Assistance’ with the UK’s Global Challenges Research Fund. The workshop convened academics to explore the challenges arising from the intersection of global digital developments and the provision of effective humanitarian assistance in low and middle-income countries affected by conflict and displacement. A report capturing the findings from the workshop has been commissioned, providing recommendations and emergent priorities to the humanitarian sector.

Several Institute-funded projects have already generated meaningful global impact to solve real-world problems. A notable example is a project by Turing Fellow Steven Reece which has developed novel machine learning approaches to enable swift and effective response to natural disasters. The broad applicability of this technology has already resulted in the use of satellite imagery to help resolve Kenya’s poaching conflicts and to support the Brazilian Government in identifying where mine operators have built illegal tailings dams which can poison and destroy natural habitats.

Section 1.4 + Partnerships and collaborations
Our university network

Engaging with universities across the UK is key to the Turing’s role as a national institute. It stems from our recognition that, for UK data science and AI to reach their potential, the Institute must work alongside its university partners and actively pursue the biggest, most ambitious research collaborations possible. These collaborations bring tangible benefits to our university partners, the Institute and the wider UK society and economy. Our focus this year has been to continue to embed university partners across the Institute’s networks, structures, programmes and governance, and Turing University Liaison teams have further developed the Turing’s presence at each partner university. In all, 392 Turing Fellows from 121 different university departments across the 13 Turing university partners benefited from access to the wider Turing network.
In response to a question about the value added to Turing Fellow projects, 97% of Turing Fellows found their affiliation with the Institute added value to their research activity.* Turing Fellows led over 130 research projects across all of our research areas, breaking new ground in science, policy and business, including 94 projects that were awarded Turing funding as part of the Fellowship programme, and interacted with a total of 44 Institute-brokered project partners from across the business, university, public and third sectors.

*Based on a survey of 265 Turing Fellows in August 2019.

We continue to engage with UK universities outside of the Turing university partner network. Researchers from such institutions are regular contributors to Institute programmes, Turing Interest Groups and Data Study Groups. They attend events and workshops and receive funding via open calls, most notably on the AI for science and government (ASG) programme. As interest in partnering with the Institute continues, we are putting a new priority on developing a more flexible university partner model to drive a more strategic approach to our network and scientific themes and challenges. Through our university partners and this new model, the Turing envisages more regional and local engagement that will contribute to increasing capabilities in data science and AI nationwide, as well as increase the UK’s ability to compete at an international level.

Section 1.4 + Partnerships and collaborations
AI for science and government (ASG)

Enhancing the UK economy and society through research innovation
The ASG programme is moving at a rapid pace. The programme began in November 2018 and continues to complement the work of the Institute. It has continued to pursue its ambition of furthering AI across UK Government, enabling more efficiency and accountability, and of supercharging scientific discovery through AI tools and applications. Today, the programme is perfectly placed to maximise the potential of the UK’s renowned governance and administrative systems, while maintaining its world-class scientific leadership and continuing its commitment to social progress for the benefit of the population.

ASG’s rich portfolio of projects is organised across a series of ‘themes’, each operating over a five-year cycle. Now in its second year, ASG has a significant number of projects spanning several domain areas. These domain areas, all of which also involve the development of tools, practices, and systems (TPS), cover:
– Delivering personalised healthcare with early disease detection and machine learning based diagnosis.
– Applying data science and AI to scientific discovery and experiment outputs.
– Authoring AI ethics guidance and operations for the public sector, in particular the criminal justice system.
– Developing the theory, application, and ecosystems of digital twins through complex systems engineering and urban analytics.

Health
Through its research on health, ASG’s collaboration with HDR-UK and NHSX has a key role as an AI data analytics provider within the ecosystem of health data research. The work in health analytics has been enhanced by the programme’s TPS research, which in Year 2 concluded work on important tools for working with sensitive personal data. This is a key challenge, particularly in AI health analytics and for researchers working with demographic data. This research has included projects for creating anonymised, synthetic data out of real datasets and on developing a ‘data safe haven’.
This work may be important to future work on COVID-19 management and, given the Turing’s commitment to open access, such innovations are also valuable to researchers in the UK and internationally.

AI for science
AI is offering exciting new scientific techniques to help manage unwieldy quantities of data and generate new areas of scientific research. Several collaborations with research councils and labs have emerged through the programme’s AI for science theme. This has included using Bayesian statistical methods to integrate diverse data in order to build better explanations and open up new research questions concerning a wide range of scientific objects, be they icebergs or molecules. The research on this theme is helping scientific disciplines benefit more from the efficiencies and capacities of AI and foster greater interdisciplinarity.

Criminal justice system
In Year 2, the criminal justice system theme has continued to work collaboratively with various government departments on applicable ethical frameworks and AI innovations for more effective, fair criminal justice operations.

Digital twins
Through the programme’s focus on digital twins, ASG is aiming for measurable outcomes in complex system engineering, urban planning and even hospital operations. The urban analytics models developed as part of this work could also be utilised in cross-theme research on COVID-19, given their potential for planning the national management of pandemics.

Section 1.5 Research impact case studies

A new age of Arctic science discovery
The Arctic is at the front line of climate change: complex feedback mechanisms between the atmosphere, ocean, ice and land mean that the Arctic is warming twice as fast as the rest of the planet. In September 2019, Arctic sea ice reached its second lowest minimum since satellite records began. Even if we curbed our greenhouse gas emissions sufficiently to limit the average global temperature rise to 2°C higher than pre-industrial levels (the upper limit of the Paris climate agreement), the Arctic could still warm by 4–5°C, with potentially serious regional and global consequences.

A new era of polar exploration has begun, but this time through collaboration between climate scientists and data scientists. Funded by the Institute’s data science for science programme, researchers at the Turing and British Antarctic Survey teamed up in September 2019. They took AI algorithms originally developed for use in the commercial sector and were among the first to apply them to climate science, aiming to uncover hidden relationships within sea ice data that are likely to be missed by traditional data analysis or simulation methods. With over 10 million data points in their satellite-derived dataset, the team is training its AI algorithms to forecast future sea ice at a resolution of 25 kilometres, with the ability to learn physical relationships between climate variables over both space and time. Sea ice is fundamentally important for Arctic wildlife and the indigenous communities that depend on them for food. Improving predictions of sea ice could aid local conservation efforts, as well as our understanding of how these fluctuations will affect weather patterns.
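To make the shape of this forecasting task more concrete, here is a minimal, hypothetical sketch of predicting sea ice concentration per grid cell from lagged climate variables with a standard machine learning regressor. The synthetic data, feature choices and model are illustrative assumptions only, not the Turing–British Antarctic Survey dataset or method.

```python
# Hypothetical sketch only: gridded sea ice forecasting with a generic ML regressor.
# All data here is synthetic; not the Turing/British Antarctic Survey model.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_cells = 20_000  # grid cells x months, standing in for millions of satellite data points

# Lagged predictors per 25 km grid cell (all synthetic)
ice_prev = rng.uniform(0, 1, n_cells)    # sea ice concentration, previous month
air_temp = rng.normal(-10, 8, n_cells)   # 2 m air temperature (degC)
sst = rng.normal(-1, 2, n_cells)         # sea surface temperature (degC)
X = np.column_stack([ice_prev, air_temp, sst])

# Target: next month's ice concentration (synthetic relationship plus noise)
y = np.clip(ice_prev - 0.01 * air_temp + rng.normal(0, 0.05, n_cells), 0, 1)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0)
model.fit(X_train, y_train)

print("held-out R^2:", round(model.score(X_test, y_test), 3))
print("forecast for first five cells:", model.predict(X_test[:5]).round(2))
```

In practice the team’s models also learn spatial and temporal structure across the grid, which a cell-by-cell regressor like this deliberately ignores; the sketch is only meant to show the basic ingredients of the prediction task.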
Furthermore, using bespoke AI-explainability methods developed through collaboration with Turing researchers is allowing the collaborators to ‘open up the black box’ of the trained model and start to draw conclusions on what can be learned from the data, potentially providing novel scientific insights. These powerful new methods are being developed specifically for this project, but the researchers are already mapping out other avenues of climate research that could benefit from them, including understanding the drivers of urban heatwaves and predicting future regional water security. “Engaging with Turing Fellows has opened up our environmental datasets to new ways of exploration that we never thought possible,” said Scott Hosking, Head of the AI Lab at the British Antarctic Survey, and now Turing Senior Research Fellow. “The time has clearly come for the AI and environmental research communities to come together and tackle some of our greatest global challenges, including the climate emergency and the loss of biodiversity.”

Section 1.5 + Research impact case studies
Working with the NHS in Scotland

The Turing’s partnership with NHS Scotland hit its stride in the last 12 months, bringing the latest advances in data science and machine learning to bear on the complex challenge of predicting who will require emergency hospital admissions. If someone is known to be at a high risk of a health breakdown and emergency admission, GPs can often intervene to reduce this risk by, say, increasing appointments, adjusting medication or making targeted referrals. Such an anticipatory approach to the management of long-term conditions makes sense: an ounce of prevention is worth a pound of cure. But it is difficult for busy GPs to make accurate assessments of a given patient’s risk of emergency admission.

Scottish Patients At Risk of Readmission and Admission (SPARRA) is a tool created in 2006 that predicts an individual’s risk of emergency admission within 12 months. The current iteration, SPARRA version 3 (v3), has been in place since 2012, with risk scores now based on a wide range of hospital-based patient data and calculated for all Scottish residents who have had some hospital or prescribing activity in the previous three years. More than four million people have a SPARRA score at any given time, and this information is provided monthly to their GPs.

It was late 2017 when the Turing first collaborated with NHS Scotland, through a successful Data Study Group. In 2019, with a deeper partnership firmly established, the collaboration kicked into high gear and in early 2020, SPARRA v4 was completed and is now ready for deployment across Scotland. It transforms SPARRA v3 with the addition of a cutting-edge suite of machine learning modelling techniques, including random forest, gradient boosting and neural networks. These models all sit within a ‘super learner’ algorithm, which learns to make predictions based on the entire suite of technologies. Feeding into these models is a new engineered set of features generated by a latent Dirichlet allocation model, which automatically groups long-term condition diagnoses and pharmaceutical prescriptions into related ‘topics’ and scores each observation against them. All of the computation is performed securely within NHS data safe havens to preserve patient privacy. Tests on historic data illustrate that SPARRA v4 has the potential to pre-empt significantly more readmissions.
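To illustrate the shape of the modelling approach described above, the sketch below combines latent Dirichlet allocation ‘topic’ features with a stacked ensemble of random forest, gradient boosting and neural network base learners. Everything here is hypothetical: the data is synthetic and the feature set and settings are illustrative, not the SPARRA v4 implementation run within NHS data safe havens.

```python
# Hypothetical sketch only: LDA-derived 'topic' features feeding a stacked
# ("super learner") ensemble. Synthetic data; illustrative settings.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, StackingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_patients, n_codes = 2000, 300

# Counts of long-term condition diagnoses / prescription codes per patient (synthetic)
code_counts = rng.poisson(0.05, size=(n_patients, n_codes))

# Group codes into latent 'topics' and score each patient against them
lda = LatentDirichletAllocation(n_components=10, random_state=0)
topic_features = lda.fit_transform(code_counts)

# Combine topic scores with a simple demographic feature (illustrative)
age = rng.integers(18, 95, size=(n_patients, 1))
X = np.hstack([topic_features, age])
y = rng.integers(0, 2, size=n_patients)  # 1 = emergency admission within 12 months

# Stacked ensemble: base learners' probabilities are combined by a meta-model
super_learner = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
        ("nn", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)),
    ],
    final_estimator=LogisticRegression(),
    stack_method="predict_proba",
    cv=3,
)
super_learner.fit(X, y)
risk_scores = super_learner.predict_proba(X)[:, 1]  # per-patient admission risk
```

A stacked ensemble of this kind lets the meta-model learn how much weight to give each base learner, rather than committing to any single modelling technique up front.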
When SPARRA v4 was compared with v3 on historic data, an additional 1,000 emergency admissions (among the 10,000 patients deemed most at risk) could have been pre-empted. What is more, the calibration of the SPARRA tool has now improved remarkably. In SPARRA v3, of the 10,000 patients deemed most at risk of emergency admission, the difference between the model’s overall predicted number of admissions and the reality was about 1,000, a significant gap. “In SPARRA v4, even for the 30,000 patients deemed most at risk of emergency admission, the difference between the overall prediction and reality was less than 100. A remarkable improvement in accuracy which will give healthcare providers increased confidence in the tool,” said the project’s Principal Investigator, Turing Health and Medical Sciences Programme Fellow Louis Aslett of Durham University. The plan was to deploy v4 in early 2020, although this has been delayed while NHS Scotland deals with the more immediate concerns of COVID-19. As well as improving the lives of more Scottish citizens, the enhanced SPARRA system should deliver a substantial secondary benefit: reducing pressure on Scottish hospital admissions, with the cost savings that go with that. The SPARRA model will continue to be refined during 2020, and future iterations of this project will include clinically relevant context to support the risk scores, improving interpretability for GPs. Greater interpretability will enable GPs to better assess which patients at high risk of hospital admission will respond best to primary care intervention, keeping them out of hospital. Changing the culture of data science The Turing Way is an online handbook, and global community, dedicated to fostering gold-standard reproducible, ethical and collaborative research. The crisis of reproducibility in science is well known. ‘Publish or perish’ incentives and secrecy around data can result in fragile advances and wasted time and money. Such siloed science is slow science. As data science progresses and pervades academic research and all sectors of industry, the need for new infrastructure, reusable tools and codified best practices is growing. This new era needs a new cultural and practical approach, one which embraces openness and collaboration like never before. So a group of researchers in the Institute’s tools, practices and systems programme, which is funded primarily through the Turing’s AI for science and government integrated research programme, created The Turing Way. The project launched in December 2018, led by Kirstie Whitaker, Programme Lead for Tools, Practices and Systems, and has developed rapidly this year. The Turing Way is an evolving online ‘handbook’ on how to conduct world-leading, reproducible research in data science and artificial intelligence. Crucially, it is also a flourishing global community of research engineers, data librarians, industry professionals and research experts in various domains, at all levels of seniority, dedicated to capturing and sharing research best practices, tools and data. The Turing Way community currently consists of about 125 researchers, drawn from institutions worldwide, and is coordinated by a community manager, Malvika Sharan. The handbook already has 18 chapters covering a diverse range of topics in reproducibility and open research. The book is growing into a comprehensive ‘How To Guide for Data Science’, covering areas including research design, collaboration, visualising results, the ethics of data science and more.
To foster The Turing Way community, Whitaker’s team organised a number of events this year, including training workshops and ‘book dashes’. Dashes begin with an introduction to the project, followed by a dinner and lightning presentations for the invited participants in the evening, before an intense day of collaborative work starting the next morning. The team also contributes to the Binder project, a key research platform within the Jupyter ecosystem that enables highly shareable research. By the end of 2020, the team is expecting 20 new chapters of the book to be available, from over 200 contributors, and to have more than 1,000 data wranglers of all stripes on its newsletter mailing list. The Institute is committed to changing the world for the better through data science and artificial intelligence. With The Turing Way, we are committing to changing data science itself for the better. “The reuse of other people’s data provides useful insights for new research questions and products, and drives new scientific discoveries.” Susanna-Assunta Sansone Associate Professor in Data Readiness University of Oxford Section 1.5 + Research impact case studies The next generation of data safe havens The ability to collect, organise and manipulate complex data at scale is proving transformative across finance, healthcare, engineering and science. But this new age brings a pressing, dual-aspect challenge. One aspect is technical: researchers need ‘data safe havens’, secure computing environments for the analysis of sensitive data, such as NHS patient records. The other is policy: how should data owners decide, in collaboration with data scientists, on the sensitivity of the data they are dealing with, and what procedures should be followed to ensure its protection? To unlock the true potential of data science, its practitioners need a policy-and-process framework around which to unify: a workable common ground. So, the Turing’s Research Engineering Group is spearheading an ambitious project to bring the technical and policy strands together in just such a framework, to foster productive and secure research at scale in the cloud. The suggested policy framework has been published as a preprint on arXiv, an open research platform, as part of an ongoing consultation with the wider data science community. The paper proposes a detailed range of controls in areas such as data classification; how data enters and leaves the research environment; how software can be applied to the data; user access; and the computing environment in which analyses happen. The scope of these controls depends on a set of well-defined data security tiers (see next page). To be transformative, the policy side must marry smoothly with the technical aspect of Turing data safe havens. The technical infrastructure created by Turing’s Research Engineering Group is software-defined. That means the IT infrastructure for a given project (its servers, storage, access policies and management processes) is defined entirely in code. In this way, isolated research environments can be created for each project. “The researcher specifies the security controls they want to apply to a given project and the software builds the environment automatically,” said James Hetherington, who led the project as the Turing’s Director of Research Engineering before becoming the Director of e-Infrastructure at UK Research and Innovation. “That’s the power of software-defined infrastructure.
It completely changes the economics of data security.” An overview of the technical implementation of the system was shared at the 2020 UKRI Cloud Workshop and the code and documentation to allow others to deploy their own data safe haven instances will be published later this year. This has been developed on the Microsoft Azure cloud computing platform, supported by a gift from Microsoft to the Turing of $5 million in Azure credit. The ambitious project reflects the Turing’s commitment to work that supports the entire data science sector. “It is precisely because there is no consensus on this in the wider community that the Turing, as a national institute, has to stick its neck out and attempt to shape this landscape for the benefit of everyone involved,” said Hetherington. AI transforming the financial sector Advances in data science and AI are transforming business practices across the financial sector, including banking, insurance, and asset management. This means significant benefits to firms, consumers, and wider society, but also raises a wide range of ethical concerns, including issues of regulatory importance. To address these issues, the Turing organised a landmark, two-day symposium in July 2019 in London entitled: AI ethics in the financial sector. The symposium attracted over 200 attendees, including senior leadership, data and technology officers, and technical specialists from firms across all industries in the financial sector. It demonstrated the power of the Institute to convene industry leaders, as well as senior government regulators and leading scientists. The event opened with keynote speeches by Christopher Woolard, at the time the Director of Strategy & Competition at the Financial Conduct Authority (FCA), and Simon McDougall, Executive Director of Technology Policy & Innovation at the Information Commissioner’s Office (ICO), followed by an extensive Q&A session dedicated to their organisations’ perspectives on ethical issues around the use of AI in the financial sector. The event featured high-level contributions from leading academics, including Turing Fellow and Data Ethics Group Chair, Luciano Floridi, Professor of Philosophy and Ethics of Information at the University of Oxford. The symposium also featured a series of powerful presentations showcasing cutting-edge technical work, grouped into sessions dedicated to four themes: digital rights, privacy and data management; bias and fairness; transparency and explainability; market efficiency and financial stability. Crucially, at the event, the Turing and the FCA also announced a joint year-long project around the use of AI in the financial services sector, a collaboration marking the first formal partnership between the Turing and the FCA. Led by the Turing’s public policy programme, the project is examining current and future uses of AI across the financial services sector, analysing the ethical and regulatory questions that arise in this context and advising on potential strategies for addressing them. “We’ve partnered with The Alan Turing Institute to explore the transparency and explainability of AI in the financial sector,” said Woolard, announcing the collaboration. 
“Through this project we want to move the debate on, from the high-level discussion of principles, which most now agree on, towards a better understanding of the practical challenges on the ground that machine learning presents.” Building on the success of the conference, the Turing’s finance and economics programme also organised an event dedicated to ‘Explainability in finance’, which took place in October 2019 and convened a group of academics and industry practitioners at the British Library Knowledge Centre in London. Data science and AI in the financial sector are complex and rapidly evolving technologies. The Turing’s unique relationships with industry, regulators and scientists mean it is well placed to influence their direction of travel. Section 1.5 + Research impact case studies Revolutionary thinking Living with Machines is already establishing itself as a project that is changing the face of digital humanities. As astronomer Carl Sagan once said, “You have to know the past to understand the present”, and this intriguing project aims to rethink the impact of technology on ordinary lives during the Industrial Revolution. Taking a radical approach to collaboration, the project is breaking down barriers between academic traditions, bringing together data scientists and software engineers from the Turing and curators from the British Library, as well as computational linguists, digital humanities scholars and historians from universities including Exeter, East Anglia, Cambridge and Queen Mary. Led by the Arts and Humanities Research Council (AHRC) and funded through UKRI’s Strategic Priorities Fund, the project delivered experimental proof-of-concept work and method development through five ‘Labs’ in a productive first year. Ruth Ahnert, Principal Investigator, explained: “The Labs have been successful not only in creating a genuinely collaborative interdisciplinary environment, but also in generating a huge array of analytical approaches and methods that are now being developed into articles, conference papers and teaching materials. We developed innovative methods for facilitating collaboration, sharing values, knowledge and skills, and are disseminating these through self-reflective work, including blog posts and a project handbook.” Some of the first work to come out of the project is an innovative language model that has been developed to understand the contexts in which machines were being attributed human-like characteristics or behaviour. The model is used to identify sentences in which a machine occurs in contexts where one would normally expect to find a person. This linguistic affectation might be thought of as a 19th century precursor to the idea of artificial intelligence. Key areas of work – Working with the British Newspaper Archive to understand how the outcomes from the team’s analysis are shaped or biased by the data available. – Examining where animacy and agency are assigned by contemporary writers to a machine by developing a method of masking machine-related words in a sentence and asking the language model to predict the word. – Adapting methods from computer vision for extracting visual features from 19th century maps to understand how the built environment has changed over time and linking this to demographic changes registered by the census. – Creating a geographical knowledge base that combines different 19th century sources, enabling spaces to be linked across current and future data sources, such as newspapers, maps and census data.
This allows for swift comparisons between the mapped representation of an area, its textual description and its demographic particulars. Outreach The Living with Machines website launched in 2019. Forty-eight blog posts have been published and a dedicated Twitter account has attracted over 1,400 followers. The first two crowdsourcing tasks on Zooniverse (an online space which gives people of all ages and backgrounds the chance to participate in real research) have been completed and two more are due to launch in 2020. Progress has been made with corpus building: the digitisation of newspaper press directories, and of a tranche of early 19th century newspapers, is now complete. The project is also making an important contribution to the Turing’s data safe haven project (see page 45). Section 1.5 + Research impact case studies Machine learning in aviation COVID-19 brought UK aviation to a virtual standstill. But before the global pandemic struck, flight movements above the UK were at a historic high, with demand forecast to increase significantly in the coming decade. Whatever the short-term future holds for commercial aviation, the technological advancement of air traffic control (ATC) remains a critical long-term goal. The Turing’s deep collaboration with NATS, the UK’s leading air traffic control provider, began in 2018 through the Institute’s data-centric engineering (DCE) programme. One of the main aims was to explore the performance of AI-based air traffic control agents, and in doing so, provide critical insights for the development of new ATC tools and decision aids. It’s been a productive year. The collaboration has developed an open-source ATC simulation and experimentation platform affectionately named Simurgh. The platform allows AI-based ATC agents to interface with an existing open-source air traffic simulator, BlueSky, and with a proprietary simulator developed by NATS. The open-source approach aims to attract the wider AI community to the complex task of air traffic control and to enable researchers to experiment easily with algorithms. To that end, the platform was designed to integrate with OpenAI Gym, an interface widely used in reinforcement learning, making it more accessible to machine learning researchers and lowering the entry barrier for those who are not experts in ATC. Another issue that has hindered previous research in this area is the absence of standardised benchmarks allowing direct comparison of the performance of different AI-based ATC agents. The Turing team worked with NATS to develop a set of algorithms to generate ATC scenarios which are both realistic and relevant to real-world operations, and created evaluation criteria for AI agents that mimic some of the performance metrics used for human air traffic controllers. The collaboration is a shining example of a project in which the Institute brought both data science and software engineering expertise to bear, helping industry bridge the gap to meaningful academic research. Richard Cannon, Commercial Research Lead at NATS said, “NATS is incredibly proud to be part of this successful and truly collaborative research project. We have spent the year continuing to work as an integrated, agile team and growing our shared understanding of NATS’ operations and our future challenges.
By doing so, the Turing Research Engineering Group has been able to identify and assess new opportunities for advancing air traffic control from across a wide range of research fields, doing so in both a rigorous and explainable way, creating great success for the group.” For more on the Turing’s work with NATS, see our full impact story at: turing.ac.uk/impact Section 1.5 + Research impact case studies Putting AI ethics at the heart of the public sector Government agencies and local authorities collect enormous amounts of data from citizens, and the application of AI and machine learning to such data is already improving aspects of public services such as healthcare, education, transportation and urban planning. With the speed of innovation in data science, it is challenging to plot a cohesive course ahead, and easy to miss potential pitfalls. The Turing’s public policy programme, with its strong ties to many UK Government agencies, provides crucial insight and ethical guidance to the policy makers navigating this rapidly changing landscape. One way we are doing this is with a world first: in partnership with the Office for Artificial Intelligence and the Government Digital Service (GDS), the Institute has produced guidance for the public sector on the responsible design and implementation of AI systems. The comprehensive guide, ‘Understanding Artificial Intelligence Ethics and Safety’, was published in June 2019. It quickly became official government policy. “It’s the first time that something like this has been adopted into the public sector, and an important step in leading the conversation around AI ethics and governance, in the UK and beyond,” said Turing Ethics Theme Lead and Ethics Fellow, David Leslie, who authored the guide with support from the public policy team. The guide covers the many aspects of applied ethics and best practices for organisations that are building AI or algorithmic systems, such as fairness, accountability, sustainability, safety and transparency. Also included are detailed processes for following the guidance, and for how these should be governed to protect those affected by a given AI system. Leslie and his colleagues are developing and delivering work packages and workshops for government agencies, to bring to life the principles and practices detailed in the guide for the people building the technology. “Our data science team are finding this guide really useful in their work,” said Jon Roberts, Chief Data Scientist at the Ministry of Justice. “We’re especially pleased to see that some of our internal thinking on AI ethics, which we shared with the Turing earlier in the process, helped shape the guide’s development; we look forward to continuing to improve our ethical practices within our data science work.” The report garnered considerable positive feedback from government officials, regulators, industry, and representatives of a number of foreign countries, according to the Office for AI. So, the benefits of this deep collaboration between the Turing’s public policy programme and the UK Government look set to ripple out around the world, with the UK leading the way.
Section 1.6 The year in numbers [Not included in this plain text version] Section 1.7 Engagement, outreach and skills Connecting to the public, the UK’s wider academic community, industry, government and beyond is a fundamental aim of The Alan Turing Institute. In these ways, we share our research, inspire others and shape the national conversation around data science and AI. This year, the Institute built on previous successes and found new ways to connect. Here are just a few highlights. Working with the Government and regulators The Institute’s public policy programme works alongside the Government and regulators to explore not only how data science and AI can improve policy-making, but also how these technologies should be governed and regulated. With over 65 senior AI researchers and more than 25 research projects, the programme has gained national and international recognition. In 2019, the European Commission highlighted the value of the programme in its flagship policy report on AI, while the OECD (Organisation for Economic Co-operation and Development) identified the programme as a highly successful model in its primer on AI for the public sector. An impressive 75 public sector organisations have reached out to the programme, including government departments, regulators, non-ministerial departments, agencies and public bodies, high-profile groups, local authorities, police forces and international organisations. National Data Strategy The Institute has conducted thousands of hours of work alongside government. In July 2019, the public policy programme organised a roundtable as part of the Government’s consultation process on the National Data Strategy with the Department for Digital, Culture, Media & Sport. The roundtable’s primary aim was to give Turing researchers a chance to share their views on the Government’s plans for the strategy. Project Explain In collaboration with the Information Commissioner’s Office (ICO), the Institute created new practical guidance to assist organisations in explaining AI decisions to the individuals affected. As part of this project, the Institute and the ICO conducted public and industry engagement research. This helped them to understand different points of view on this complex topic. Key findings included: – The relevance of context for the importance, purpose and expectations of explanations. – The need for improved education and awareness around the use of AI for decision-making. – Challenges to deploying explainable AI, such as cost and the pace of innovation. Women in data science (WIDS) The WIDS project created a dynamic new UK hub website for women in data science and AI, gathering an extensive set of resources on getting into tech, building a data science or AI career, and creating equitable AI. The hub also gathers news from related organisations and presents our latest research. This has given the Turing valuable early and widespread exposure to tech companies and data science divisions within industry. The Institute is exploring new partnerships with several such companies. In addition, the project: – Developed a new Diversity Dashboard tool for quantitatively monitoring the inclusion of women in tech workplaces, to identify the factors predicting women’s entry into, and success within, data science and AI careers. – Conducted a literature review and project overview report.
This report will include original research into the representation of women in data science and AI in UK academia, and into the demographics of users of online data science platforms. Understanding AI ethics The influential guide, ‘Understanding Artificial Intelligence Ethics and Safety’, was published, providing the most comprehensive guidance to date on AI ethics and safety in the public sector. The Centre for Data Ethics and Innovation’s report on their ‘Approach to the governance of data-driven technology’ directly linked to the guide as “the best and most up-to-date thinking from the UK and beyond” upon which their work had drawn. See the case study on page 53 for more. Working with the financial sector In December 2019, the Institute partnered with the Financial Conduct Authority (FCA) to work on AI ethics and explainability. See the case study on page 47 for more. Policy Priority Inference Researchers from the Institute are working with policy makers from around the world on data-driven policy innovation and problem-solving. One such project is led by ESRC-Turing Fellow and UCL Senior Research Fellow Omar A Guerrero, along with his research partner Gonzalo Castañeda, Professor at the Center for Research and Teaching in Economics, Mexico. Together, they have developed a suite of analytical tools that can successfully model the impact of a variety of policy decisions on development indicators. In collaboration with the United Nations Development Programme (UNDP), this technology, called ‘Policy Priority Inference’, is already being adopted by national and state governments in Latin America to support the effective prioritisation of their public policies to optimise sustainable development. For more on Policy Priority Inference, see our full impact story at: turing.ac.uk/impact Section 1.7 + Engagement, outreach and skills Public engagement The Turing Lectures Our flagship event series, The Turing Lectures, continued to inspire and engage increasingly diverse audiences with its high-profile speakers. This year, three of the four lectures were delivered by women, including the popular mathematician, author and broadcaster, Hannah Fry; the entrepreneur and co-founder of the social enterprise STEMettes, Anne-Marie Imafidon; and Lilian Edwards, Professor of Law, Innovation and Society at Newcastle University. Perhaps as a result, the 400-strong audiences had an even gender split, on average. Imafidon’s talk, ‘AI and the Future of Work’, resulted in a 30% turnout of attendees from BAME backgrounds, a Turing Lecture high. The Institute needs to continue to build on these significant improvements in order to reach audiences representative of society at large and to engage and inspire as many people as possible. Data debates Since joining forces with the British Library in 2017 for the Data Debates series, we have hosted debates on all the latest tech trends and hot topics in data science discourse. Last year saw a lively debate on the pros and cons of smart cities: are they innovative green utopias, or Orwellian surveillance hubs? The debate quickly grew beyond this simplistic binary framing, to encompass questions about creativity in an automated environment, and about who stands to profit from tighter tech controls on our cities. The second debate looked at how public opinion and democracy can be warped by fake news and what role AI can play both in spreading and in fighting disinformation.
Both of these debates saw an equal gender split on the panel and included speakers from a variety of industry and academic backgrounds, including our own Turing Fellows. Evening events The Turing also participated in Pint of Science: a worldwide science festival which brings researchers to your local pub to share their scientific discoveries. The festival takes place over three days, and 2019 was the first year that the Turing took part. This event was a great way for the Institute to engage with the public: it reached a new audience and gained new subscriptions to our monthly newsletter. ‘Driving data futures’ was another series of evening seminars. Run by the public policy programme, the series sought to bring cutting-edge research from the intersection of new technologies, ethics, and policy to the attention of the general public. The seminars also attracted researchers from the Turing and beyond, students, civil society organisations, civil servants and policy makers. Hackathons Changemakers was a hackathon to support young women in STEM, hosted at the University of Bristol in July 2019. The event, open to secondary school girls from Years 9 to 12, aimed to challenge stereotypes and highlight the opportunities available to young women to pursue careers in data science. The hackathon centred on a project in which participants developed a piece of technology in support of a social or environmental campaign, thereby teaching software programming skills, as well as communication, teamwork and ideation skills. The event also gave the girls a chance to be mentored by industry and academic leaders and be inspired by the ‘herstories’ of women who have excelled in the data science field. Stand-up comedy... “What do you get when you cross a pirate with a data scientist? Someone who specialises in Rrrr!” In September 2019, we offered stand-up comedy training to the Turing research community to help develop their public speaking and presentation skills. Of the 15 who attended the course, six went the whole hog and, in November 2019, performed in a live stand-up comedy finale open to the public: the Bright Club Comedy Showcase. On the night of the showcase, the venue was packed. Two professional comedians kept the evening on track, but the Turing performers were what really made it: not one person let stage fright get the better of them and everyone delivered like professionals, earning every laugh. The event showed the lighter side of data science. For many people, receiving this kind of information in a fun environment is an easier way to approach what can be an intimidatingly complex topic. Section 1.7 + Engagement, outreach and skills Convening industry, policy makers and regulators The Turing’s deep engagement with businesses, NGOs, regulators, charities and policy makers continued apace this year through a wide range of partnerships and events. AI UK We put together the Institute’s first ever national showcase: AI UK. The two-day event, scheduled for March 2020, had a line-up of fascinating speakers from industry, government and academia, and a highly engaged group of sponsors including Accenture, GCHQ and HSBC. It was nearly sold out when the coronavirus pandemic resulted in the event’s postponement. We look forward to staging it in March 2021.
Data Science for Social Good For 12 weeks over the summer, the Institute joined forces with the University of Warwick for the UK pilot of the Data Science for Social Good (DSSG) initiative, which supported five non-profit organisations and government bodies, including Homeless Link, Ofsted and the West Midlands Combined Authority. DSSG helped them achieve more with their data and improve their services, interventions and outreach, in perfect alignment with the Institute’s mission to make great leaps in data science and AI research, and to nurture talent in the field to change the world for the better. Teams of talented data scientists tackled a range of issues, including improving outcomes for rough sleepers; early identification of fostering agencies requiring more intensive support; and accelerating the translation of healthcare findings into clinical practice. The pilot was a great success, and our priority now is to ensure that DSSG continues in the UK, under the Institute’s stewardship. “The professionalism and expertise of the team working on Ofsted’s DSSG project was impressive. Their dedication, not only to deliver a high-quality solution but also to understand how Ofsted works, was vital to make the project a success.” James Bowsher-Murray Head of Early Years and Social Care Data and Insight Ofsted Data Study Groups It was another busy year for the Institute’s Data Study Groups (DSGs), five-day ‘hackathons’ that bring together top talent from data science to tackle real-world challenges. August 2019 saw a successful Network DSG at the University of Bristol, which offered the opportunity for collaborative work and networking. In September 2019, the Turing worked with GSMA, a trade body that represents mobile network operators worldwide, to develop and host a DSG for mobile network companies, called the GSMA Global AI Challenge. “The DSG allowed us to explore areas where AI can make a significant impact on operators’ business, and where we can also deliver global societal and economic benefits on a global basis,” said Laxmi Akkaraju, GSMA’s Chief Strategy Officer. The Turing and GSMA intend to collaborate further this year. The six projects in December 2019’s DSG included helping WWF develop smart monitoring for conservation areas, and working with the UK’s Defence Science and Technology Laboratory to boost the identification of hazardous contamination on surfaces using spectral signatures. CogX In 2019, the Turing once again supported CogX, the festival of AI and emerging technology, hosting the research stage and providing academic rigour to big-picture discussions, convening experts and curating unique content. It was a fantastic opportunity to showcase our cutting-edge research in data science and AI, representing areas that will have a game-changing impact on science, society and the economy. A standout session focussed on AI for mental health and saw over 200 attendees hearing from the Turing’s leading researchers on how we can revolutionise this area of healthcare through data science. We heard about the use of various data science fields for mental healthcare development, from machine learning and natural language processing all the way through to virtual reality. Symposium In July 2019, the Turing organised a two-day symposium in London, called ‘AI ethics in the financial sector’, which attracted over 200 attendees, including senior leadership, data and technology officers, and technical specialists from firms across the financial sector. For more on this event, see page 47.
Section 1.7 + Engagement, outreach and skills Skills Supporting the national skills agenda Data science and AI are rapidly evolving fields with a complex skills landscape. Against a backdrop of increasing demand for skills in data science and AI across all sectors, the Institute is playing an important role in the national skills agenda: training and inspiring future generations. The Turing contributed to The Royal Society report on data skills published in 2019 and to the OECD (Organisation for Economic Co-operation and Development) expert group on data skills. The OECD report is due out mid-2020. The Institute has unparalleled access to skills in data science and AI. This has supported the Turing’s well-established national position and helped to develop an increasing number of exciting discussions with key European partners (including a recent V4 visit), and discussions with The Helmholtz Association of German Research Centers, which is exploring how best to grow new skills and training collaborations. Institute Director Adrian Smith is also a key figure in the Government’s AI Council, a panel of experts helping to put in place the right skills, ethics and data so the UK can make the most of AI technologies. The Council, an independent expert committee, provides advice to government and high-level leadership of the AI ecosystem. Our thriving enrichment scheme The Turing has now expanded the successful enrichment scheme, which allows doctoral students in the UK to come to the Institute for 6–12 months. These relatively experienced researchers come with a specific focus on developing new skills and new projects, and on building networks for future opportunities. Ben Murton, Head of Professional and Academic Development, said, “The enrichment scheme not only impacts on the students’ research today but is teaching them the value of cross-cutting collaboration for the years ahead. When the students are talking, either at lunch or in a seminar, it is building those connections out of which new ideas can come. The group represents so many academic disciplines which are focused on using data science to explore new questions, and it is great to know that those are continuing to develop after the students head back to their home universities.” Diversity across our student community Important strides have been made as the Turing continues to demonstrate its ability to attract and recruit an increasingly diverse student community to its enrichment scheme. This year to date: – Applications from 60 UK universities. – 28% of applications were from non-partner universities, as were 20% of awards. – 10% of submitted applications indicated a disability; 18% of awards were made to individuals with a disability. – 31% of applicants were female; 36% of awards went to female applicants. – 36% of applicants were from a Black, Asian or Minority Ethnic background, as were 32% of awards. Doctoral programme The Turing’s doctoral programme ran from 2016 to 2019 and students will continue with their projects until 2023. There are currently 68 students undertaking Turing PhDs. The Turing continues to actively support this community of students and is looking forward to celebrating our first students completing in 2020. Inspiring young leaders The Institute has a critical role supporting and inspiring young leaders, with many of its ambitious enrichment students taking part in outreach and engagement opportunities within and beyond the Turing’s own research programmes.
“One of the greatest things about being at the Turing is the people you meet: experts from different universities and prestigious companies. They give free talks and seminars frequently, broadening your insight into data science and inspiring you to apply the most recent approaches to your own project.” Tiejun Wei Enrichment Student Case study Beatriz Costa Gomez is an Enrichment Student who started at the Turing in October 2019. Based at the University of Manchester, Bea is the creator of the ALFRED software (Advanced Labelling, Fitting, Recognition and Enhancement of Data), a tool to analyse pathological mutations in neurons. Bea is highly engaged and took part in an upbeat Christmas workshop for the public, recreating festive songs using AI, algorithms and machine learning. “My time at the Turing has been very positive and active. Besides attending the workshops and seminars, I’ve been a student representative, one of the aspiring comedians for Turing’s Bright Club and a guest and co-host for The Turing Podcast. Perhaps most importantly, I had the opportunity to organise and lead an AI workshop at the Canopy Christmas markets, where we explained difficult concepts while attempting to orchestrate our own Christmas song. We had over 100 people drop by and engage, over the course of almost three hours, with very positive feedback.” Collaboration with national skills programmes Health Data Research UK (HDR UK), in partnership with the Institute, received funding from the Wellcome Trust to support a PhD programme in Health Data Science. In total, Wellcome awarded £127m of funding to support 23 new PhD programmes. Chris Yau, Turing Fellow and co-director of the programme, said, “This is an unprecedented opportunity to develop our future digital health leaders. Embedding our PhD students within an extensive cross-sector, multi-disciplinary collaborative network will enable them to address the greatest challenges in implementing health data science at scale. I am excited to see the new discoveries and methodologies that these students will make in the coming years which will lead to profound improvements in patient care and life quality.” In the last year, the Turing has actively supported the UK’s work to enhance the pipeline of talent and ensure that the UK remains at the forefront of emerging technologies. This is demonstrated by the Turing’s involvement with the Centres for Doctoral Training (CDTs), with a number of the Turing’s university partners successful in their bids to deliver CDTs. The Turing has also commissioned the new Turing Internship Network, which will be rolled out in the coming months. This year has seen rapid progress as the Institute works with a wide range of stakeholders to understand the data science and AI skills landscape and to identify where and how the Turing can maximise impact. With this in mind, the Institute has now appointed its first National Skills Lead, Matthew Forshaw, to spearhead the technical skills agenda, with a focus on external training programmes, the emerging Turing Internship Network and the Turing’s involvement in national programmes. In his new role, Matthew works alongside Ben Murton, who leads on professional and academic development. Data skills and strategy This year, the Institute has made a vital contribution to the Data Skills Taskforce, a national body working to help evaluate and close the data skills gap within the UK.
The Turing, in collaboration with the Department for Digital, Culture, Media and Sport, has played a key role in the development of an online skills portal and self-assessment tool to help organisations, particularly SMEs, identify their organisational and technical readiness with respect to data, and to signpost them to training to develop their data capability. Matthew Forshaw said, “The national data science and AI skills landscape is rapidly evolving as the UK develops its National Data Strategy and we see revitalised discussions around professionalisation of the data science occupation. The Institute is uniquely positioned to convene key academic, industry and policy stakeholders to share best practice and achieve strategic alignment around the data skills gap. I am excited to continue work which nurtures a culture and environment of diversity, and democratises access to training.” Section 1.8 The year ahead Making great leaps in data science and artificial intelligence research in order to change the world for the better In 2020/21, the Institute’s research will continue its focus on ambitious challenges. The Institute, working with universities, businesses and public and third sector organisations, is applying this research to real-world problems with lasting, positive effects for science, the economy and the world. With emerging and unpredictable societal challenges and new opportunities on the horizon, the Institute is now looking ahead to building on existing and new collaborations across its thriving research programmes and projects. To find out more, visit turing.ac.uk. Section 2 Trustees’ and strategic report The Trustees present their annual and strategic report together with the consolidated financial statements for the Institute and its subsidiary for the year ended 31 March 2020. The financial statements comply with the Charities Act 2011, the Companies Act 2006 and the Statement of Recommended Practice (SORP) applicable to charities preparing their accounts in accordance with the Financial Reporting Standard applicable in the UK (FRS 102), which became effective in January 2015. Legal and administrative information The Institute is a registered charitable company (limited by guarantee) governed by its Articles of Association dated 26 March 2015. Company Number: 09512457 Charity Number: 1162533 Directors/Trustees The subscribers/directors of the Institute are its Trustees for the purposes of charitable law and throughout this report are collectively referred to as the Trustees.
The Trustees serving during the year and since the year end were as follows: Howard Covington (Chair) Stephen Jarvis (Resigned 28 April 2020) Frank Kelly Richard Kenway Kerry Kirwan (Appointed 28 April 2020) Julie Maxton (Resigned 27 November 2019) Thomas Melham Wendy Tan-White (Resigned 27 November 2019) Neil Viner Patrick Wolfe Key management as at 31 March 2020 Adrian Smith Institute Director and Chief Executive Jonathan Atkins Chief Operating Officer Christine Foster Chief Collaboration Officer Donna Brown Director of Academic Engagement Allaine Cerwonka Director of International & Associate Director of ASG Vanessa Forster General Counsel Nicolas Guernion Director of Partnerships Catherine Lawrence Director of Programme Management Sophie McIvor Director of Communications and Engagement Martin O’Reilly Director of Research Engineering Clare Randall Director of People Programme Directors as at 31 March 2020 Mark Birkin Urban Analytics Mark Briers Defence and Security Mark Girolami Data-Centric Engineering Chris Holmes Health and Medical Sciences Anthony Lee Data Science at Scale Helen Margetts Public Policy Jonathan Rowe Data Science for Science Lukasz Szpruch Finance and Economics Adrian Weller Artificial Intelligence Alan Wilson Special Projects Registered Office The British Library 96 Euston Road London, NW1 2DB Auditors Moore Kingston Smith LLP Chartered Accountants Devonshire House 60 Goswell Road London, EC1M 7AD Bankers Barclays Bank UK PLC Leicester Leicestershire, LE87 2BB Solicitors Bates Wells Braithwaite 10 Queen Street Place London, EC4R 1BE Veale Wasbrough Vizards 24 King William Street London, EC4R 9AT Mills & Reeve 24 King William Street Candlewick London, EC4R 9AT Structure, governance and management Our legal structure The Alan Turing Institute was founded in March 2015 as a registered Charity (1162533) and a Company Limited by Guarantee (09512457). The Institute is governed by its Articles of Association that were adopted on incorporation on 26 March 2015. The Articles of Association establish the governance of the Institute as the responsibility of the Board of Trustees who are Directors of the company and are its Trustees for the purposes of charitable law. Purpose of the Institute and main activities As the national institute for data science and artificial intelligence, the charitable object of the Institute, as set out in its Articles of Association, is the furtherance of education for the public benefit particularly through research, knowledge exchange, and public engagement, in the fields of data sciences. In 2017, as a result of a government recommendation, the Institute added artificial intelligence to its remit. The Institute has power to do anything which is calculated to further its object or is conducive and instrumental in doing so. In particular, the Institute’s ambitions are to: – Produce world-class research in the foundations of data science and artificial intelligence. – Have a transformative impact on the way that data and algorithms are used in the economy, in government and in society. – Educate and train data scientists. The Trustees confirm that they have paid due regard to the Public Benefit Guidance published by the Charity Commission, including the guidance Public benefit: running a charity (PB2), in shaping their aims and objectives for the year and in planning their future activities.
Related parties The Institute’s Founder Members are the Engineering and Physical Sciences Research Council (EPSRC) and the Universities of Cambridge, Edinburgh, Oxford, University College London (UCL) and Warwick. The Founder Members have entered into a joint venture agreement which establishes the basis on which funding will be made available to the Institute. On 1 April 2018, the Institute entered into 5-year partnership agreements with eight additional universities: Birmingham, Bristol, Exeter, Leeds, Manchester, Newcastle, Queen Mary University of London and Southampton. The Institute has a wholly owned subsidiary, Turing Innovations Limited (company registration number 10015591) which exists to manage trading activity. Any surplus funds generated by this subsidiary will be transferred to the Institute as Gift Aid. Board composition and responsibilities The Institute is governed by its Board of Trustees whose members are also its Directors. The Board of Trustees has been established in accordance with the terms of the Joint Venture Agreement between the six Founder Members (Founders), dated 31 March 2015. The Board composition is determined as follows: – Each Founder may appoint one Trustee. – Founders may, by a unanimous decision, select and appoint an Independent Trustee who acts as Chair of the Board and may from time to time remove and replace such Independent Trustee by a unanimous decision of the Founders. – The appointed Trustees may appoint further Independent Trustees such that, so far as possible, the total number of Trustees on the Board at any particular time will be an odd number – The Trustees appointed by the Founders must always form a majority of the Board and may from time to time remove and replace Independent Trustees Biographies of all Trustees are available at turing.ac.uk/people/governance. Organisational management and responsibilities of the Board The Institute has a clear organisation structure with documented lines of responsibility and authority. The Institute’s Board of Trustees is responsible for setting the aims and strategic direction of the Institute. Trustees set the Institute’s strategy, establish funding policies, monitor risks, approve the annual budget and expenditure targets and monitor actual and forecast financial results. The Trustees are also responsible for developing and agreeing the overall strategy and policies related to research, knowledge and public engagement, in the fields of data science and artificial intelligence. Trustees meet formally as a Board with the senior management team up to four times a year. In addition, Trustees normally attend at least two away days and undertake further meetings as and when needed. The senior management team also provides Trustees with regular reports on the Institute’s financial position, current activity, organisational news and significant issues affecting the Institute. The senior management team, led by the Institute Director, is responsible for the day-to-day management of the Institute’s operations and activities. The Institute Director is responsible for appointing senior managers. The senior management team is also responsible for implementing the strategy and policies agreed with Trustees and reporting on its performance to the Board. Committees The Institute is supported by a range of committees, whose members include Trustees, the Institute Director, representatives from the Founder Members, Programme Directors and other individuals with appropriate expertise. 
The following three formal committees report directly to the Board of Trustees: Audit Committee This committee is a delegated body of the Board of Trustees, responsible for audit, finance, risk management and compliance. This committee reviews the effectiveness of the Institute’s internal control framework and risk management process and compliance with reporting requirements. It monitors the terms of appointment and the work of the external auditors, receives and reviews audit reports, and oversees the full external audit process and the resulting financial statements. Nomination Committee This committee is responsible for all aspects of the appointment of new non-Founder Trustees to the Board of Trustees. It also has responsibility for monitoring boardroom diversity and makes recommendations on appointments to the Audit and Remuneration Committees, in consultation with the chairs of those committees. Remuneration Committee This committee advises the Board of Trustees and oversees the preparation of policies and procedures in respect of salaries, emoluments and conditions of service. In line with these approved policies and procedures, the committee approves the total remuneration package for the Chair of the Institute, the Institute Director and those senior staff reporting directly to the Institute Director. The criterion for setting pay is the market rate, taking into account industry standards. The Trustees will set up other committees, as necessary, to provide assistance to the Board. Other committees established internally by the senior management team include: Research and Innovation Advisory Committee Formerly the Programme Committee, as defined under the Joint Venture Agreement. This committee supports the Institute Director in the preparation of the Institute’s scientific and innovation strategy. It also supports the Institute with research and training programmes and reports appropriately to the Institute’s stakeholders. Scientific Advisory Board This is an independent group made up of international experts in academia, industry and government. This group was established to provide strategic advice to the Institute’s Board of Trustees and senior leadership team on the development and implementation of its scientific research strategy. Strategic Partners Board This group advises the Board of Trustees on the content and translation of research generated at the Institute and collaborates across the Institute and its partners to identify new opportunities. University Partners Board This group advises the Institute Director on the research direction of the Institute, the Institute’s relationship with its university partners and the higher education landscape as it relates to data science and AI. Recruitment and appointment of Trustees The Nomination Committee undertakes an open recruitment process, recommends new candidates for appointment when necessary and ensures appropriate recruitment and succession plans are in place for non-Founder appointed Trustees. On appointment, each Trustee completes a declaration of interests form, which is held within a register of interests that is monitored, updated on a regular basis and reviewed annually. Trustee related-party transactions are disclosed in greater detail within the financial statements later in this report. All conflicts are actively managed through early identification of potential areas of conflict, and appropriate action is taken where necessary.
Trustee induction and training There is a tailored induction programme for new Trustees that includes a series of meetings with members of the senior management team and other Trustees. New Trustees are provided with a Trustee Information Pack which includes initial information about the Institute and its work, a copy of the previous year’s Annual Report and Accounts, a copy of the Institute’s Articles of Association, a copy of the Joint Venture Agreement, information about their powers as Trustees of the Institute, key corporate policies (e.g. the Code of Conduct and the Gifts and Hospitality policy) and a copy of the Charity Commission’s guidance, “The essential trustee: what you need to know”. Equality, Diversity and Inclusion (EDI) It is the Institute’s policy to provide equal opportunities to job applicants and employees. The Institute recognises that everyone should be treated with respect and dignity and that a working environment must be provided which is free of any form of discrimination, harassment, bullying or victimisation. In addition, the Institute’s site at the British Library provides access arrangements in order to allow unrestricted employment of individuals who have special access needs. The Institute is committed to the effective implementation of this policy and will not condone any form of discrimination, whether engaged in by employees or by third parties who interact with the organisation. The Institute has an established EDI programme which is guided by the following principles: – We will seek to understand the challenges and barriers to equality in order to eliminate discrimination, creating an environment where differences are valued. – We will promote a culture of inclusion, recognising and celebrating difference and acknowledging the benefits achieved by diversity of thought and experience. – We will create a safe, non-judgemental space where we can discuss arising issues relating to equality, diversity and inclusion and support one another to understand and acknowledge a range of perspectives. – We will embed equality, diversity and inclusion across all levels of the Institute and in everything that we do. – We will educate our community and raise awareness in all areas relating to equality, diversity and inclusion, ensuring our commitment to EDI is understood by all. – We will support and enable our community to recognise and challenge behaviour at all levels which goes against these principles. The Institute has established an EDI Advisory Group to help inform the Board of Trustees and the senior management team to ensure that the Institute is demonstrating best practice in all matters regarding equality, diversity and inclusion. This group is made up of members from both the business team and the research community and represents all levels of the Turing. There are four diversity working groups which feed into this central group, comprising diversity champions from across the Turing community representing the different EDI themes. Each group manages its individual action plan. The four working groups are: – Gender and LGBTQ+ Equality. – Race and Socio-economic Equality. – Health and Wellbeing. – Attracting Diversity and Retaining Talent. Financial review The Institute is funded through grants from Research Councils, Founder Members and University Partners, and from strategic and other partnerships.
Income of £36m (2018/19: £39.4m) has been received during the year, of which £13m was received from Research Councils (2018/19: £14.7m), £13m from Founding Members and University Partners (2018/19: £13m), £7.8m from strategic and other research partnerships (2018/19: £9.5m) and £2.2m from other trading activities and investment income (2018/19: £2.2m).

Expenditure of £34.5m (2018/19: £30.1m) has been incurred in the year. Grants payable to Founding Members and other University Partners represent 50% of total expenditure, with spend in this area remaining broadly in line with last year. Staff costs represent 31% of total expenditure, increasing by 53% from £7m in 2018/19 to £10.7m in 2019/20 as the Institute expands its research programme delivery. The remaining 19% of expenditure covers support costs and other direct costs.

The Institute made a surplus of £1.5m (2019: £9.3m). This has been transferred to reserves and will be used to fund research and Institute costs during 2020/21 and beyond. Group net assets at 31 March 2020 are £24.4m (2019: £22.9m). Fixed asset values reduced by £567k. During the year £451k was spent on additions. This was offset by £1m of depreciation and asset disposals of £9k.

Current assets: Debtors are £6.8m and remain broadly in line with last year. Cash at bank and in hand has increased to £43m (2019: £41m). This is largely due to the upfront nature of cash receipts on many of the Institute’s grant awards, coupled with delays in cash outflows to grant recipients, which are now being actively resolved.

Creditors: amounts falling due within one year increased by £6.9m to £23.1m. Grant creditors were £5.4m higher than last year due to a combination of delays in paying grant awards and some of these grant awards moving from amounts due in more than one year to amounts due within one year. In addition, accruals were £1.5m higher than last year due to an increase in accrued project expenditure. Creditors: amounts falling due in more than one year reduced by £1.6m to £3.9m as a number of grants became due within one year and there were fewer new grant agreements with liabilities due in more than one year than last year.
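As an illustrative cross-check of the headline amounts quoted above (and not part of the audited statements), the short Python sketch below reproduces the income total, the surplus and the staff-cost percentages from the rounded figures reported in this financial review; the variable names are purely illustrative.

# Illustrative reconciliation of the 2019/20 headline figures quoted above (amounts in £m, rounded as reported).
income = {
    "Research Councils": 13.0,
    "Founding Members and University Partners": 13.0,
    "Strategic and other research partnerships": 7.8,
    "Other trading activities and investment income": 2.2,
}
total_income = sum(income.values())              # 36.0 -> the reported income of £36m
total_expenditure = 34.5                         # reported expenditure for the year
surplus = total_income - total_expenditure       # 1.5 -> the reported surplus of £1.5m

staff_costs_2019_20 = 10.7
staff_costs_2018_19 = 7.0
staff_share = staff_costs_2019_20 / total_expenditure                             # ~31% of expenditure
staff_growth = (staff_costs_2019_20 - staff_costs_2018_19) / staff_costs_2018_19  # ~53% year-on-year increase

print(f"Income £{total_income:.1f}m, surplus £{surplus:.1f}m")
print(f"Staff costs: {staff_share:.0%} of expenditure, up {staff_growth:.0%}")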
Going Concern
The Trustees have assessed whether the use of the going concern basis is appropriate and have considered possible events or conditions that might cast significant doubt on the ability of the charitable company to continue as a going concern. The Trustees have made this assessment for a period of at least one year from the date of the approval of these financial statements. In particular, the Trustees have considered the charitable company’s forecasts and projections and have taken account of pressures on income. After making enquiries, the Trustees have concluded that there is a reasonable expectation that the charitable company has adequate resources to continue in operational existence for the foreseeable future. The charitable company therefore continues to adopt the going concern basis in preparing its financial statements. The Trustees have assessed the consequences of the current COVID-19 pandemic and recognise that, whilst this will impact the research funding landscape in the UK and internationally, the Institute’s reserves are such that it expects to maintain positive cash flows and reserves for at least one year from the date of approval of these financial statements and, as such, the Trustees are confident that the Institute will continue to operate as a going concern.

Fundraising
The Institute does not engage in fundraising activities with the general public and no donations are sought from the public. Costs of raising funds in the financial statements relate to the sourcing of new institutional funders. The Institute does not use third parties to assist with fundraising and the Institute received no complaints this year regarding its fundraising practices.

Treasury Management Policy
Treasury management activity monitors the timing and amounts of cash inflows and outflows, in particular monitoring and tracking those activities that result in significant cash movement. The Treasury Management Policy is confined to the management of short to medium term liquid funds (the maximum investment term is 18 months). Assets are protected by investing with approved counterparties. Investments are risk-averse and non-speculative, and the Institute does not rely on interest earned as a source of income.

Grant making policy
The Institute’s grants will be subject to outputs being appropriately recorded and assessed. Data held will be in line with the grant guideline requirements issued by UK Research and Innovation. Fundamental principles have been established and adopted by the Institute. These are as follows:
– The Institute will award grants that are in line with the charitable object of the organisation.
– The Institute intends to assess grants biannually to ensure compliance with the terms of the grant.
– The Institute expects to assess the progress of each grant within three months of the end of the grant period.

Reserves policy
The Institute reviews its reserves policy each year, taking account of its planned activities and the financial requirements for the forthcoming period. The Trustees believe that the Institute should have access to reserves appropriate to the scale, complexity and risk profile of the Institute. To cover any shortfall in grants and to maintain the financial viability of the Institute, reserves are currently set at the equivalent of 3 to 6 months of operating costs.

In 2015, the Engineering and Physical Sciences Research Council (EPSRC) awarded a grant of £42m to the Institute to carry out its charitable objectives. This grant was split between operating resource of £22m and capital of £20m. As at 31 March 2020, the full value of the operating resource grant has been received. The remaining capital grant expires on 31 March 2022. A further resource award of £10m was made by EPSRC in November 2019 to support core operating costs until 31 March 2022.

The Institute’s unrestricted General Fund as at 31 March 2020 was £16.9m (2019: £17.3m). This includes £2.5m (2019: £11.3m) of funding held to cover future years’ financial commitments, £1.9m (2019: £2m) of funds designated by the Board for the Institute’s Safe and Ethical AI programme, and £4.2m to offset notionally the overdrawn University Partner Research Fund (2019: £4.6m) – see note 20(g) for details – with the balance, £8.3m (2019: £4m), being in line with the above reserves policy. As at 31 March 2020, the Institute holds £7.5m (2019: £5.6m) of restricted reserves. This is after allowance has been made for future years’ commitments under current researcher grant awards amounting to £4.1m (2019: £4.4m).
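The General Fund components quoted above can be cross-checked in the same illustrative way. The sketch below uses only the rounded amounts reported in this reserves policy; the component labels and variable names are ours, not audited ledger headings.

# Illustrative cross-check of the unrestricted General Fund breakdown quoted above
# (amounts in £m as at 31 March 2020, rounded as reported; not drawn from the audited ledger).
general_fund_components = {
    "Funding held to cover future years' financial commitments": 2.5,
    "Funds designated for the Safe and Ethical AI programme": 1.9,
    "Notional offset of the overdrawn University Partner Research Fund": 4.2,
    "Balance held in line with the reserves policy": 8.3,
}
general_fund_total = sum(general_fund_components.values())
# The components sum to the reported General Fund of £16.9m (to within rounding).
assert abs(general_fund_total - 16.9) < 0.05
print(f"Unrestricted General Fund: £{general_fund_total:.1f}m")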
Remuneration policy
The Institute is committed to ensuring a proper balance between paying staff (and others who work for the Institute) fairly, to attract and retain the best people for the job, and the careful financial management of our charity funds. The Remuneration Committee oversees the overall remuneration of staff and specifically that of the Institute Director and those senior managers reporting directly to the Institute Director. The Remuneration Committee assumes responsibility for remuneration within the Institute and oversees the preparation of policies and procedures in respect of salaries, emoluments and conditions of service. Formal consideration of remuneration matters takes place annually, usually at the committee’s April meeting. However, remuneration matters may also be considered at other meetings if ad-hoc issues arise during the year. The Committee does not have full delegated authority to approve all matters relating to remuneration and any recommendation or decision must be ratified by the Board of Trustees. The Institute discloses all payments to Trustees and the number of staff with a total remuneration of £60,000 and above in accordance with the Charity Commission’s Statement of Recommended Practice 2015 (SORP).

Risk management
Significant risks to which the Institute and Turing Innovations Limited are exposed are reported formally to the Audit Committee, the Board of Trustees and the Board of Directors of Turing Innovations Limited via the Institute’s corporate risk register. The Institute takes a formal approach to risk management, with a risk framework embedded within the business that supports the identification and effective management of risks across the Institute. The senior management team is responsible for managing and reporting risks in accordance with the Institute’s risk management policy and standards, while the Trustees retain overall responsibility for risk. Examples of significant risks that the Institute currently faces include:
– Continued economic instability impacting the funding of research and access to talent for the Institute and its university partners.
– A breach of data security, malicious or otherwise.
– Inadequate resources being available for the Institute to fulfil its national role.
The Board of Trustees and the Board of Directors of Turing Innovations Limited seek to ensure that these risks are mitigated, so far as is reasonably possible, by actions taken by the Institute’s senior management team. This mitigation includes:
– Prudent financial management of the Institute such that it can react to changes in external funding in an agile, controlled manner.
– Working with funding bodies and government to secure longer-term funding.
– Implementing robust security processes, both physical and virtual.
– Building a network of delivery partners to increase the Institute’s capacity for engaging with industry and delivering translational impact.

Section 172 Statement
The Board of Trustees are aware of their duty under s.172 of the Companies Act 2006 to act in the way which they consider, in good faith, would be most likely to promote the success of the Company for the benefit of its members as a whole and, in doing so, to have regard (amongst other matters) to:
– The likely consequences of any decision in the long term.
– The interests of the Company’s employees.
– The need to foster the Company’s business relationships with suppliers, customers and others.
– The impact of the Company’s operations on the community and the environment.
– The desirability of the Company maintaining a reputation for high standards of business conduct.
– The need to act fairly as between members of the Company (the “s.172(1) Matters”).
Engagement with employees
Enhancing employee engagement is an integral part of the culture of the Institute. Senior management are actively involved in the engagement of colleagues through weekly electronic communications, monthly staff meetings and quarterly Town Hall meetings, which involve employees and full-time members of the wider Turing community, to provide updates on business developments and to answer questions directly. The Trustees receive regular qualitative and quantitative updates on employee matters from the People Director, who attends Trustee Board meetings, including analysis received through the annual employee engagement survey, regular EDI updates and an annual update on the Performance Review and Performance Related Pay process. This provides the Board with oversight of the effect our people engagement has on our performance and of the continued strength of our culture. This year, Trustees provided feedback on, and actively promoted, our newly launched values, “The Rules of the Game”. More recently the Trustees have additionally been focused on mitigating employee risks arising from the COVID-19 pandemic, including how best to support employees’ physical and mental health and wellbeing whilst working remotely. The Institute’s risk register, which is reported on at each Board of Trustees meeting, has been updated to capture these new people risks, whilst existing risks have been adjusted to ensure that current mitigations factor in known implications of the pandemic. You can find out more about how the Institute engages with its stakeholders within Section 1 of this annual report.

Engagement with stakeholders
The Trustees recognise the existence of a number of key internal stakeholders (employees, Turing Fellows, Turing Research Fellows and students) and external stakeholders (general public, Founding Members, university partners, strategic and commercial partners, government agencies, public health bodies, charitable foundations, customers and suppliers) of The Alan Turing Institute. The Trustees remain committed to effective engagement of all stakeholders and are mindful that the Institute’s success depends on its ability to engage effectively, work together constructively and take stakeholder views into account. The Trustees consider and discuss information from across the organisation to understand the impact of the Institute’s operations and the interests and views of our key stakeholders. The Trustees also review financial and operational performance, as well as information covering areas such as key risks, legal and regulatory compliance. This information is provided to the Trustees through routine reports circulated in advance of each Board of Trustees meeting and via in-person presentations. The Trustees therefore possess an overview of our engagement with stakeholders, enabling them to comply collectively with their legal duty under section 172 of the Companies Act 2006. This year the Trustees also engaged positively with additional practical steps for Board of Trustees meetings to help ensure the voice and interests of the Institute’s stakeholders are considered during discussions, including:
– The introduction of a standing forward planning agenda item to discuss emerging areas of interest associated with key stakeholders for future discussion/consideration.
– Inviting researchers from the Turing community to present their research projects at Board of Trustees meetings.
Subject to the current uncertainty involving the COVID-19 pandemic, the Trustees plan to undertake an assessment, for inclusion within next year’s annual report, which maps out the current engagement activities associated with each stakeholder group above, how the Trustees engage with each group and how these groups have influenced the Board of Trustees’ decision-making.

Charity Governance Code
The Trustees support the principles-based approach of the Charity Governance Code in its aim to develop high standards of governance within the charity sector. The Institute will continue to refine its practices and procedures to support continuous improvement of its governance arrangements, as recommended under the Code. A Trustee recruitment campaign is currently underway to further enhance the skills composition, effectiveness and diversity of the Board of Trustees and to drive the promotion of good governance for the benefit of the Turing community, internal and external beneficiaries and the wider charity sector.

Trustees’ responsibilities statement
The Trustees are responsible for preparing the Trustees’ annual report and the financial statements in accordance with applicable law and regulations. Company law requires the Trustees to prepare financial statements for each financial year. Under that law, the Trustees have elected to prepare the financial statements in accordance with United Kingdom Accounting Standards (United Kingdom Generally Accepted Accounting Practice, or GAAP), including FRS 102 – The Financial Reporting Standard applicable in the UK and Republic of Ireland. Under company law, the Trustees must not approve the financial statements unless they are satisfied that they give a true and fair view of the state of affairs of the Institute and of the result for that year. In preparing these financial statements, the Trustees are required to:
– Select suitable accounting policies and then apply them consistently.
– Comply with applicable accounting standards, including FRS 102, subject to any material departures disclosed and explained in the financial statements.
– State whether a Statement of Recommended Practice (SORP) applies and has been followed, subject to any material departures which are explained in the financial statements.
– Make judgements and estimates that are reasonable and prudent.
– Prepare the financial statements on a going concern basis unless it is inappropriate to presume that the charitable company will continue in business.
The Trustees are responsible for keeping adequate accounting records that are sufficient to show and explain the Institute’s transactions, disclose with reasonable accuracy at any time the financial position of the Institute and enable them to ensure that the financial statements comply with the Companies Act 2006. They are also responsible for safeguarding the assets of the Institute and hence for taking reasonable steps for the prevention and detection of fraud and other irregularities. The Trustees are responsible for the maintenance and integrity of the corporate and financial information included on the Institute’s website. Legislation in the UK governing the preparation and dissemination of financial statements may differ from legislation in other jurisdictions.

Disclosure of information to the auditor
The Trustees who held office at the date of approval of this Trustees’ annual report confirm that, so far as they are each aware, there is no relevant audit information of which the Institute’s auditor is unaware.
Each Trustee has taken all the steps that they ought to have taken as a Trustee to make themselves aware of any relevant information and to establish that the Institute’s auditor is aware of that information. Moore Kingston Smith were appointed as auditors by the Board of Trustees in July 2017 for a three-year term.

Signatory
The Trustees’ annual report is approved by the Trustees of the Institute. The strategic report, which forms part of the annual report, is approved by the Trustees in their capacity as directors in company law of the Institute.
Howard Covington
Chair
18 June 2020

Section 3 Financial statements

Independent auditor’s report to the members of The Alan Turing Institute

Opinion
We have audited the financial statements of The Alan Turing Institute for the year ended 31 March 2020 which comprise the Group Statement of Financial Activities, the Group Summary Income and Expenditure Account, the Group and Parent Charitable Company Balance Sheets, the Group Cash Flow Statement and notes to the financial statements, including a summary of significant accounting policies. The financial reporting framework that has been applied in their preparation is applicable law and United Kingdom Accounting Standards, including Financial Reporting Standard 102 The Financial Reporting Standard applicable in the UK and Republic of Ireland (United Kingdom Generally Accepted Accounting Practice). In our opinion the financial statements:
— give a true and fair view of the state of the group’s and the parent charitable company’s affairs as at 31 March 2020 and of the group’s incoming resources and application of resources, including its income and expenditure, for the year then ended;
— have been properly prepared in accordance with United Kingdom Generally Accepted Accounting Practice; and
— have been properly prepared in accordance with the requirements of the Companies Act 2006.

Basis for opinion
We conducted our audit in accordance with International Standards on Auditing (UK) (ISAs (UK)) and applicable law. Our responsibilities under those standards are further described in the Auditor’s responsibilities for the audit of the financial statements section of our report. We are independent of the charitable company in accordance with the ethical requirements that are relevant to our audit of the financial statements in the UK, including the FRC’s Ethical Standard, and we have fulfilled our other ethical responsibilities in accordance with these requirements. We believe that the audit evidence we have obtained is sufficient and appropriate to provide a basis for our opinion.

Conclusions relating to going concern
We have nothing to report in respect of the following matters in relation to which the ISAs (UK) require us to report to you where:
— the trustees’ use of the going concern basis of accounting in the preparation of the financial statements is not appropriate; or
— the trustees have not disclosed in the financial statements any identified material uncertainties that may cast significant doubt about the group’s and parent charitable company’s ability to continue to adopt the going concern basis of accounting for a period of at least twelve months from the date when the financial statements are authorised for issue.

Other information
The other information comprises the information included in the annual report, other than the financial statements and our auditor’s report thereon. The trustees are responsible for the other information.
Our opinion on the financial statements does not cover the other information and, except to the extent otherwise explicitly stated in our report, we do not express any form of assurance conclusion thereon. In connection with our audit of the financial statements, our responsibility is to read the other information and, in doing so, consider whether the other information is materially inconsistent with the financial statements or our knowledge obtained in the audit or otherwise appears to be materially misstated. If we identify such material inconsistencies or apparent material misstatements, we are required to determine whether there is a material misstatement in the financial statements or a material misstatement of the other information. If, based on the work we have performed, we conclude that there is a material misstatement of this other information, we are required to report that fact. We have nothing to report in this regard.

Opinions on other matters prescribed by the Companies Act 2006
In our opinion, based on the work undertaken in the course of the audit:
— the information given in the strategic report and the trustees’ annual report for the financial year for which the financial statements are prepared is consistent with the financial statements; and
— the strategic report and the trustees’ annual report have been prepared in accordance with applicable legal requirements.

Matters on which we are required to report by exception
In the light of the knowledge and understanding of the group and parent charitable company and its environment obtained in the course of the audit, we have not identified material misstatements in the trustees’ annual report. We have nothing to report in respect of the following matters where the Companies Act 2006 requires us to report to you if, in our opinion:
— the parent charitable company has not kept adequate and sufficient accounting records, or returns adequate for our audit have not been received from branches not visited by us; or
— the parent charitable company’s financial statements are not in agreement with the accounting records and returns; or
— certain disclosures of trustees’ remuneration specified by law are not made; or
— we have not received all the information and explanations we require for our audit.

Responsibilities of trustees
As explained more fully in the trustees’ responsibilities statement set out on page 92, the trustees (who are also the directors of the charitable company for the purposes of company law) are responsible for the preparation of the financial statements and for being satisfied that they give a true and fair view, and for such internal control as the trustees determine is necessary to enable the preparation of financial statements that are free from material misstatement, whether due to fraud or error. In preparing the financial statements, the trustees are responsible for assessing the group and parent charitable company’s ability to continue as a going concern, disclosing, as applicable, matters related to going concern and using the going concern basis of accounting unless the trustees either intend to liquidate the group or parent charitable company or to cease operations, or have no realistic alternative but to do so.
Auditor’s responsibilities for the audit of the financial statements
Our objectives are to obtain reasonable assurance about whether the financial statements as a whole are free from material misstatement, whether due to fraud or error, and to issue an auditor’s report that includes our opinion. Reasonable assurance is a high level of assurance, but is not a guarantee that an audit conducted in accordance with ISAs (UK) will always detect a material misstatement when it exists. Misstatements can arise from fraud or error and are considered material if, individually or in aggregate, they could reasonably be expected to influence the economic decisions of users taken on the basis of these financial statements. As part of an audit in accordance with ISAs (UK) we exercise professional judgement and maintain professional scepticism throughout the audit. We also:
— Identify and assess the risks of material misstatement of the financial statements, whether due to fraud or error, design and perform audit procedures responsive to those risks, and obtain audit evidence that is sufficient and appropriate to provide a basis for our opinion. The risk of not detecting a material misstatement resulting from fraud is higher than for one resulting from error, as fraud may involve collusion, forgery, intentional omissions, misrepresentations, or the override of internal control.
— Obtain an understanding of internal control relevant to the audit in order to design audit procedures that are appropriate in the circumstances, but not for the purposes of expressing an opinion on the effectiveness of the group and parent charitable company’s internal control.
— Evaluate the appropriateness of accounting policies used and the reasonableness of accounting estimates and related disclosures made by the trustees.
— Conclude on the appropriateness of the trustees’ use of the going concern basis of accounting and, based on the audit evidence obtained, whether a material uncertainty exists related to events or conditions that may cast significant doubt on the group and parent charitable company’s ability to continue as a going concern. If we conclude that a material uncertainty exists, we are required to draw attention in our auditor’s report to the related disclosures in the financial statements or, if such disclosures are inadequate, to modify our opinion. Our conclusions are based on the audit evidence obtained up to the date of our auditor’s report. However, future events or conditions may cause the group or parent charitable company to cease to continue as a going concern.
— Evaluate the overall presentation, structure and content of the financial statements, including the disclosures, and whether the financial statements represent the underlying transactions and events in a manner that achieves fair presentation.
— Obtain sufficient appropriate audit evidence regarding the financial information of the entities or business activities within the group to express an opinion on the consolidated financial statements. We are responsible for the direction, supervision and performance of the group audit. We remain solely responsible for our audit report.
We communicate with those charged with governance regarding, among other matters, the planned scope and timing of the audit and significant audit findings, including any significant deficiencies in internal control that we identify during our audit.
Use of our report
This report is made solely to the charitable company’s members, as a body, in accordance with Chapter 3 of Part 16 of the Companies Act 2006. Our audit work has been undertaken so that we might state to the charitable company’s members those matters which we are required to state to them in an auditor’s report and for no other purpose. To the fullest extent permitted by law, we do not accept or assume responsibility to any party other than the charitable company and the charitable company’s members as a body, for our audit work, for this report, or for the opinions we have formed.

Shivani Kothari, Senior Statutory Auditor
24 June 2020
For and on behalf of: Moore Kingston Smith LLP, Statutory Auditor
Devonshire House, 60 Goswell Road, London, EC1M 7AD

The financial statements of The Alan Turing Institute were approved and authorised for issue by the Board of Trustees on 18 June 2020 and signed on its behalf by:
Howard Covington
Chair, The Alan Turing Institute
Company number – 09512457

[Consolidated Statement of Financial Activities, Balance Sheet, Consolidated Statement of Cash Flows and Notes to the financial statements not included in this plain text version]