Rutherford Visiting Fellows
Ten international fellows have joined The Alan Turing Institute as part of the government’s Rutherford Fellowship scheme, which brings highly skilled researchers to the UK.
The Rutherford Visiting Fellowships are supported by the Rutherford Fund which aims to help maintain the UK’s position as a world leader in science and research by attracting highly skilled researchers to the UK. Launched in July 2017, the Fund includes more than £100 million over the next four years to support the provision of fellowships for international research talent.
These fellowships have been awarded to early-career and senior researchers, with each fellowship running for six to 12 months and beginning by April 2018. The fellowships bring talented new researchers to the UK from the following leading research nations: Canada, China, Finland, France, Germany, Pakistan, Turkey and the USA.
The fund is named after Ernest Rutherford, one of the UK’s most distinguished scientists, the father of nuclear physics, a Nobel Laureate, holder of chairs at the Universities of Manchester and Cambridge, and, crucially, an immigrant, who came to the UK from New Zealand at the age of 24.
Rutherford Visiting Fellows
Burcin Becerik-Gerber
Dr Becerik-Gerber is an associate professor in the Astani Department of Civil and Environmental Engineering at the University of Southern California. Her research falls at the intersection of built environments, machine intelligence, and systems thinking. Specifically, her work focuses on the development of novel methods for the acquisition, modelling, and analysis of the data needed for cognitive (responsive and adaptive) built environments that can perceive, sense, reason and collaborate with their users, and support decision-making, problem solving, and management of resources. Using multi-dimensional data, she develops algorithms, frameworks and visualisation techniques to improve built-environment resiliency, efficiency, sustainability, and maintainability while increasing user satisfaction. She is the founding director of the Innovation in Integrated Informatics Lab (http://i-lab.usc.edu/). Her work has received support worth approximately US$5 million from a variety of sources. She has served as an associate editor of ASCE’s Journal of Computing in Civil Engineering since 2011. In 2012, she was appointed the inaugural holder of the Stephen Schrank Early Career Chair in Civil and Environmental Engineering. She is also the recipient of MIT Technology Review’s TR35 recognition (2012), an NSF CAREER Award (2014), the Viterbi Junior Research Award (2016), a Mellon Mentoring Award (2017), and FIATECH’s Celebration of Engineering & Technology Innovation (CETI) Award in the Outstanding Early Career Researcher category (2018).
While at the Turing, Burcin will explore data-driven disaster-prepared buildings. An increasing number of man-made and natural disasters strike our built infrastructure (e.g. building fires, acts of extreme violence, earthquakes), yet there is little insight into how people behave when exposed to these stressors: how behaviour is influenced by the building’s design, the building type, a person’s past experiences, the people around them, the people they are responsible for, and so on. Most theories and models assume behaviour is similar under different stressors. However, the impact on a building’s structure, the physical conditions inside the building, and the surrounding environment differ from one emergency scenario to another. Work to date has used traditional methods such as unannounced fire drills, post-event surveys and video recordings. However, these methods do not trigger natural responses (such as trauma or panic), lack realistic scenario features (fire, smoke, explosion), and do not provide opportunities for controlled experiments; on the other hand, people cannot be exposed to genuinely unsafe conditions for moral and legal reasons. She therefore proposes to study behaviour in different building conditions and under different extreme events using immersive virtual environments and agent-based modelling. The outcomes of this work will inform computational models that simulate emergency behaviour more accurately, predicting actions such as evacuation and sheltering, the decisions made while performing these actions, and the time they take. With more accurate data-driven models, engineers and architects can develop safer and more secure building designs and operational procedures grounded in empirical data.
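As a rough illustration of the agent-based side of such work, the toy model below (entirely hypothetical, not Burcin’s model) simulates agents filing along a corridor toward an exit; a per-cell capacity creates the congestion effects that make evacuation-time prediction non-trivial:

```python
def evacuate(n_agents, corridor_len, capacity):
    """Toy agent-based evacuation model: agents stand in a corridor of
    cells 1..corridor_len and step toward the exit at cell 0.  At most
    `capacity` agents may enter any cell per time step, so congestion
    lengthens the total evacuation time."""
    # Spread agents deterministically along the corridor.
    positions = [1 + (i % corridor_len) for i in range(n_agents)]
    t = 0
    while any(p > 0 for p in positions):
        entered = {}  # cell -> number of agents that moved into it this step
        for i, p in enumerate(positions):
            if p == 0:
                continue  # already evacuated
            target = p - 1
            if entered.get(target, 0) < capacity:
                positions[i] = target
                entered[target] = entered.get(target, 0) + 1
        t += 1
    return t
```

Widening the exit shortens evacuation time: `evacuate(3, 1, 3)` finishes in one step, while `evacuate(3, 1, 1)` takes three.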
Laurent Doyen
Laurent Doyen has been a research fellow of CNRS (Centre National de la Recherche Scientifique), affiliated with LSV (Laboratoire Spécification et Vérification) at ENS Cachan, France, since 2009. Previously he was a researcher at EPFL (École Polytechnique Fédérale de Lausanne, Switzerland). He holds a PhD in computer science from the Université Libre de Bruxelles, Belgium, on robustness analysis of cyber-physical automata models, and a Habilitation from ENS Cachan on quantitative verification in games and automata. His research focuses on game and automata theory for the formal design and verification of digital systems, with an emphasis on stochasticity and partial observation.
His project is to develop scalable algorithmic methods and tools for the synthesis of safe, smart, and adaptive controllers for digital and cyber-physical systems. The objective is to design efficient algorithms for several fundamental controller-synthesis problems with applications such as smart cruise control, home automation, and intelligent traffic control. The safety, efficiency and adaptability criteria naturally lead to games and automata models with hard Boolean requirements, quantitative measures of performance, stochasticity, partial observation, and combinations thereof.
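The simplest of these synthesis problems, safety control, has a textbook greatest-fixed-point solution: repeatedly discard states from which safety cannot be enforced. The sketch below illustrates that classic construction (not Laurent’s own algorithms; the game and its states are invented for the example):

```python
def safe_region(states, controller, edges, safe):
    """Winning region of a turn-based safety game, computed as a greatest
    fixed point.  `controller` is the set of states where the controller
    moves (it needs SOME successor in the candidate set); elsewhere the
    environment moves (ALL successors must stay in the candidate set,
    so a deadlocked environment state counts as safe)."""
    win = set(s for s in states if safe(s))
    changed = True
    while changed:
        changed = False
        for s in list(win):
            succ = [t for t in edges.get(s, []) if t in win]
            if s in controller:
                ok = bool(succ)  # controller picks one safe successor
            else:
                ok = all(t in win for t in edges.get(s, []))
            if not ok:
                win.discard(s)
                changed = True
    return win
```

A controller restricted to the returned region can keep the system within the safe states forever, which is exactly the hard Boolean requirement mentioned above.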
Muhammad Moazam Fraz
Muhammad Moazam Fraz received his PhD in computer science from Kingston University, London, in 2013. His research areas are applied deep learning, retinal image analysis, computer vision and pattern recognition. After completing his PhD, he worked as a research fellow at Kingston University, in collaboration with St George’s, University of London and UK Biobank, on an automated software tool that enables epidemiologists to quantify retinal vessel morphology on very large datasets (measuring vessel size, the width ratio of arteries and veins, and the vessel tortuosity index) so that systemic and cardiovascular disease can be linked to retinal vessel characteristics. He has successfully supervised several Master’s theses. Prior to his PhD, he worked as a software development engineer at Elixir Technologies Corporation, a California-based software company, serving in various roles and capacities from 2003 to 2010. He is now an assistant professor at the School of Electrical Engineering and Computer Science, National University of Sciences and Technology, Islamabad, Pakistan.
His project concerns deep learning-based histopathological analytics for oral cancer detection and estimation. The aim is to develop novel deep learning methods for the analysis of whole-slide images (WSIs) of oral cancer tissue slides, particularly for the segmentation of tumour-rich regions and the quantification of important histological patterns (such as vascular invasion and perineural invasion). Accurate identification and quantification of histological patterns and tumour areas in oral cancer WSIs are crucial for grading the cancer objectively and for better, systematic stratification of patients for personalised cancer treatment. This fellowship will contribute towards world-leading expertise in cancer image analytics at the Turing, with health and wellbeing as the main application area of the research activity.
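Because whole slides are far too large to feed to a network at once, a standard first step in such pipelines is to tile the slide into patches and keep only those containing enough tissue. A minimal sketch of that step (illustrative only, operating on a toy binary tissue mask rather than a real slide):

```python
def tissue_patches(mask, patch, min_fraction):
    """Tile a binary tissue mask (a list of rows of 0/1) into patch x patch
    tiles and return the (row, col) origins of tiles whose tissue fraction
    meets `min_fraction` -- a common first step before feeding patches of a
    whole-slide image to a segmentation network."""
    h, w = len(mask), len(mask[0])
    keep = []
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            tissue = sum(mask[r + i][c + j] for i in range(patch) for j in range(patch))
            if tissue / (patch * patch) >= min_fraction:
                keep.append((r, c))
    return keep
```

The surviving patches are what a segmentation model would then score for tumour content.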
Frederik Heber
Frederik Heber was a visiting researcher at the University of Edinburgh, collaborating with Ben Leimkuhler on adapting sampling methods to neural networks and exploring their loss manifolds.
He received his PhD in applied mathematics from the Rheinische Friedrich-Wilhelms-Universität Bonn, Germany, in 2014 under the supervision of Michael Griebel. There, he learned a great deal about materials science, ab-initio methods in quantum chemistry, C++ programming, third-party projects, and proper experimental practice in the computational sciences. He held a postdoctoral position in Thomas Schuster’s group on inverse problems in Saarbrücken from 2013 to 2016, where he worked on matrix factorisation and hyperspectral imaging.
Frederik’s research interests are wide-ranging, but their common theme is the optimisation of non-convex functions.
He has worked on extending the well-known conjugate gradient method to Banach spaces and applied the resulting method to low-rank matrix factorisation in the context of hyperspectral imaging; see the BASSO library at https://github.com/FrederikHeber/BASSO.
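For orientation, the classical Hilbert-space conjugate gradient method that BASSO generalises fits in a few lines. The sketch below is the textbook algorithm on a small symmetric positive-definite system, not BASSO’s Banach-space variant:

```python
def conjugate_gradient(A, b, iters=50, tol=1e-10):
    """Classical conjugate gradient for A x = b with A symmetric positive
    definite (the Hilbert-space l2 case; the Banach-space extension replaces
    the underlying norm).  A is a list of rows; b and x are lists."""
    n = len(b)
    matvec = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    x = [0.0] * n
    r = b[:]          # residual b - A x, since x = 0
    p = r[:]
    rs = dot(r, r)
    for _ in range(iters):
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

In exact arithmetic the method solves an n-dimensional system in at most n iterations, which is why it is a natural starting point for large-scale factorisation problems.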
He has a strong interest in materials science and biomolecules. He helped to create a software library of fast solvers for the Coulomb problem (http://www.scafacos.de/). Moreover, he maintains a large software suite (http://www.molecuilder.de/) for constructing and equilibrating molecular systems and bootstrapping molecular simulations. There, the holy grail is to explore the potential energy surface, knowledge of which would make it possible to assess material stability, devise chemical reactions, and ascertain reaction paths in biological processes. While local minima can be found by optimisation, global minima require enhanced sampling and other strategies.
At the moment, he is bringing these sampling methods to the realm of data science, where they will (hopefully) help to explore the loss manifolds of (deep) neural networks and unravel the mystery of why neural networks work so well.
Kimmo Kaski
Professor Kimmo Kaski completed his MSc (1973) and Licentiate in Technology (1977) degrees in electrical and electronics engineering at Helsinki University of Technology, followed by a PhD (1981) in theoretical physics at the University of Oxford. From 1981 to 1984 he held a postdoctoral position at Temple University, Philadelphia, while also spending time at the Université de Genève, Switzerland, and Forschungszentrum Jülich (formerly KFA), Germany. In 1984 he became a university lecturer at Tampere University of Technology, Finland, and an adjunct professor at the University of Jyväskylä, Finland. He was an associate professor at Temple University, Philadelphia, in 1986-1987, and a full professor of microelectronics at Tampere University of Technology from 1987 to 1996, during which time (1992-1993) he also served as Scientific Director of the Research Institute for Theoretical Physics at the University of Helsinki. In 1996 he became a full professor of computational science at Helsinki University of Technology, which in 2010 became part of Aalto University. Over the years he has held visiting professorships at the University of Oxford; UNAM, Mexico; Southern Illinois University, IL, USA; Northeastern University, MA, USA; and the University of Tokyo, Japan. His current research interests are in the complexity of physical, economic, social and information systems, and in computational and data science, with a focus on social networks and human sociality, and on digital data-driven health and wellbeing.
His project is a study of social network structure and dynamics through extensive data analytics of a large-scale mobile phone communication dataset. It aims to understand the role of different communication channels in maintaining social relationships, by focusing on the dynamics, mutuality and frequency of interactions between people; how the evolution and maintenance of relationships depend on past interactions and on the type of communication channel; and how they are influenced by the structure of the social network. The aim is to gain unprecedented insight into the correlations between input conditions (emotional closeness, communication preferences, geo-localisation, language, gender, financial involvement, and time constraints) and outputs (communication patterns, social engagement, and co-occurrence in offline and online spaces). Guided by these data-driven discoveries of social network structures and their micro-scale dynamics, the next step is to model the dynamics of, and on, social networks using agent-based models, studying multi-layered social network formation, including community or group formation, to explore plausible social mechanisms explaining it. This part of the study will also investigate various spreading mechanisms, including information diffusion and social contagion, as well as co-evolutionary opinion formation, where opinion spreading affects the structure of the network.
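One of the simplest agent-based mechanisms for opinion spreading of the kind this project will study is the voter model, sketched below (a textbook model used purely for illustration, not the project’s own; the network and opinions are invented):

```python
import random

def voter_model(neighbours, opinions, steps, seed=0):
    """Voter-model sketch of opinion spreading on a social network: at each
    step a randomly chosen agent adopts the opinion of a random neighbour.
    On a connected network this dynamic drifts toward consensus."""
    rng = random.Random(seed)  # seeded for reproducible runs
    opinions = dict(opinions)  # do not mutate the caller's dict
    nodes = list(neighbours)
    for _ in range(steps):
        i = rng.choice(nodes)
        j = rng.choice(neighbours[i])
        opinions[i] = opinions[j]
    return opinions
```

Co-evolutionary variants, where agents also rewire their links, couple this opinion dynamic to the network structure itself, which is the interplay the project aims to model.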
Zhenming Liu
Zhenming Liu is an assistant professor in the Computer Science Department at the College of William & Mary. He received his PhD in the theory of computation from Harvard University in 2012. He was a postdoctoral research associate at Princeton University in 2012 and 2014, and an alpha modeler at Two Sigma Investments from 2014 through 2016. Dr Liu received the Best Student Paper Award at ECML/PKDD 2010 and the Best Paper Award Runner-up at IEEE INFOCOM 2015. Currently, Dr Liu is using tools from applied probability, theoretical computer science, and optimisation to build scalable systems for analysing massive networked datasets.
His lab focuses on building algorithmic foundations for large-scale end-to-end machine learning solutions.
His research programme consists of two thrusts:
1. Computational learning theory for graphs.
Graphs, the generic objects capturing relationships and interactions between entities, are ubiquitous in data modelling and analysis. For instance, analysing the interactions in a social network can help researchers predict the circulation of (fake) news and its impact. His lab is designing a suite of computationally efficient, statistically sound learning algorithms for graphs, optimised for applications in social network analysis, topic modelling, and financial news analysis.
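A simple representative of this class of computationally efficient graph learning algorithms is semi-supervised label propagation, sketched below (a textbook method shown for illustration, not the lab’s algorithms; the graph is invented):

```python
def label_propagation(neighbours, seeds, iters=10):
    """Semi-supervised label propagation on a graph: seed nodes keep fixed
    labels; every other node repeatedly adopts the most common label among
    its labelled neighbours (keeping its current label on ties)."""
    labels = dict(seeds)
    for _ in range(iters):
        updated = dict(labels)
        for node, nbrs in neighbours.items():
            if node in seeds:
                continue  # seed labels are fixed
            votes = {}
            for n in nbrs:
                if n in labels:
                    votes[labels[n]] = votes.get(labels[n], 0) + 1
            if votes:
                best = max(votes.values())
                top = sorted(l for l in votes if votes[l] == best)
                updated[node] = labels[node] if labels.get(node) in top else top[0]
        labels = updated
    return labels
```

In a social network setting, the seeds might be accounts known to spread (or debunk) a piece of news, with propagation estimating the likely alignment of the rest of the network.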
2. Large-scale learning system design and delivery.
Two barriers hinder the delivery of artificial intelligence to the public. One is the lack of computing infrastructure support for ‘applications in the wild’ (in terms of scalability and throughput) and the other is the computational weakness of today’s front-end devices. His lab is designing low-cost systems that can train on peta-scale data, and systems that can deliver high-throughput machine learning services.
In addition, to bridge the gap between theory and practice in machine learning, Zhenming collaborates with industry. His current and past efforts include collaborations with AT&T on customer-care analytics, work with Activision Blizzard on analysing players’ in-game behaviour, and the development of financial models for equity markets.
Anjali Mazumder
Anjali Mazumder is an Assistant Research Professor in the Department of Statistics and Data Science at Carnegie Mellon University (CMU). She has over 15 years’ experience at the interface of research, policy and practice in UK, US and Canadian institutions, having held positions at the Medical Research Council (UK), the University of Warwick, the Forensic Science Service (UK), and the Institute for Work & Health (Canada). At CMU, she coordinates the research activity of the CMU branch of the NIST-funded Center for Statistics and Applications in Forensic Evidence (CSAFE), and is an affiliate of the analytics and AI for social good initiative of the new CMU Heinz Block Center for Technology & Society. She was appointed to Canada’s National DNA Data Bank Advisory Committee in 2012; she actively contributes to initiatives that promote the understanding of uncertainty and data-science-informed decision-making in policy, and fosters collaboration to strengthen knowledge capacity and scientific innovation, particularly in data science for social good. Anjali holds a doctorate in statistics from the University of Oxford and two master’s degrees, one in measurement and evaluation and one in statistics, both from the University of Toronto.
Anjali is motivated by a desire to solve real and fundamental data problems of societal importance, particularly in justice (criminal and social), human rights and the law, global development, public safety and security, health and education. Her research interests are in probabilistic graphical models, latent variable models, causality, decision analysis, information theory, Bayesian inference, clustering and trajectory models, change-point analysis, social network analysis, and their applications. In particular, her research focuses on developing and applying statistical methods to quantify the value of evidence; determine the optimal subset of information for inference or decision-making; combine models involving different scientific processes or disparate sources of data; characterise and assess change in processes or scenarios; and build decision support systems. Her research involves using data-driven approaches, developing and applying statistical methods, and using probabilistic reasoning to inform decision-making in policy and practice.
Anjali aims to work closely with the Defence and Security programme researchers and others at the Turing to further her current research on building peace, security and trust in conflict areas. Perceptions of security, safety and trust in police and security forces can change rapidly in the midst of terrorist attacks, racially charged violent or criminal outbreaks, and ethnic conflicts. Poor governance and mishandling of such conflicts or events erode trust between security forces and communities, particularly when mob mentality or opinion spread takes hold, impeding peace-building efforts and giving rise to social, economic and political instability. With international collaborators, she aims to (1) understand the complex data structures and causal mechanisms of ongoing and potential conflict areas, and (2) develop data-driven approaches and decision-support systems to build peace, security and trust in those areas. The work will include accounting for complex spatial and temporal processes and causal mechanisms, identifying change-point events and interaction forces, and combining possibly disparate data sources in ways that inform actionable tasks for decision-making. She is interested in developing collaborations with other researchers on data science approaches (including understanding causal and spatial-temporal processes, mining unstructured data, combining data systems, and building decision support systems) that not only contribute to building security and trust for defence and public safety but can also help other government services.
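One textbook tool for the change-point identification mentioned above is the one-sided CUSUM detector, sketched here purely as an illustration (the parameters are hypothetical, and this is not the project’s actual methodology):

```python
def cusum(series, target, threshold, slack=0.0):
    """One-sided CUSUM change-point detector: accumulate deviations of the
    series above a target mean (minus an optional slack) and flag the first
    index where the cumulative excess crosses `threshold`.  Returns None if
    no change is detected."""
    s = 0.0
    for i, x in enumerate(series):
        s = max(0.0, s + (x - target - slack))
        if s > threshold:
            return i
    return None
```

Applied to, say, a weekly count of reported incidents in a region, such a detector flags when the process has shifted away from its historical baseline, one ingredient of a larger decision-support system.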
Ozan Öktem
Ozan Öktem received his PhD in pure mathematics (several complex variables) in 1999 from the Department of Mathematics, Stockholm University. During his graduate studies he decided to move towards more applied work, and in 1997 he began working full-time in industry (finance). After 12 years as an applied mathematician in several industrial sectors (finance, life science/medical technology, and engineering), he returned to academia in 2009, taking a full-time position at the Center for Industrial and Applied Mathematics within the Department of Mathematics, KTH Royal Institute of Technology (KTH), Stockholm.
In 2012 he became an associate professor in mathematics at KTH, and since 2014 he has led a group on mathematical imaging sciences in the department. The group currently comprises six senior faculty, one research associate, four postdoctoral fellows, and four PhD students. Since 2016, he has also been the Director of the KTH Life Science Technologies platform, one of five strategic initiatives at KTH for coordinating multidisciplinary research.
Ozan plans to work with Turing Fellow Dr Carola-Bibiane Schönlieb on 3D/4D image reconstruction methods relevant to phase-contrast x-ray synchrotron tomography and cryo-electron microscopy. This collaboration will draw on expertise from the UK’s Diamond Light Source and the MRC Laboratory of Molecular Biology, Cambridge. An important part of the planned research is to share implementations; to this end, his group established a set of APIs for inverse problems (ODL) that separates application-specific parts from more generic parts. This makes it easy to set up reconstruction algorithms that combine domain-specific, physics-driven simulators with generic software frameworks for machine learning.
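The design principle behind that separation can be sketched in plain Python (this is not ODL’s actual API; `MatrixOperator` and `landweber` are hypothetical stand-ins): the application supplies a forward operator exposing only `apply` and `adjoint`, and generic reconstruction algorithms are written against that interface alone.

```python
class MatrixOperator:
    """Application-specific part: a linear forward operator given by a
    matrix (a stand-in for, e.g., a ray transform), exposing only apply()
    and adjoint() so that solvers need no knowledge of the application."""
    def __init__(self, rows):
        self.rows = rows
    def apply(self, x):
        return [sum(a * xi for a, xi in zip(row, x)) for row in self.rows]
    def adjoint(self, y):
        n = len(self.rows[0])
        return [sum(self.rows[i][j] * y[i] for i in range(len(self.rows)))
                for j in range(n)]

def landweber(op, data, step, iters):
    """Generic part: Landweber iteration x <- x + step * A^T (b - A x),
    written purely against the operator interface."""
    x = [0.0 for _ in op.adjoint(data)]  # zero element of the domain
    for _ in range(iters):
        residual = [b - ax for b, ax in zip(data, op.apply(x))]
        update = op.adjoint(residual)
        x = [xi + step * ui for xi, ui in zip(x, update)]
    return x
```

Swapping in a different forward operator, or a learned regulariser inside the iteration, requires no change to the generic solver, which is the point of the API split.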
Thierry Poibeau
Thierry Poibeau has been a CNRS director of research and head of the LATTICE laboratory (Langues, Textes, Traitements informatiques et Cognition) since 2012. He is also an affiliated lecturer in the Department of Theoretical and Applied Linguistics (DTAL) at the University of Cambridge. He works mainly on natural language processing (NLP), especially information extraction, question answering, knowledge acquisition from text, and digital humanities. He is also interested in Finno-Ugric languages.
Thierry will focus on two topics: digital humanities on the one hand and computational social science on the other. In digital humanities, he is especially interested in how natural language processing techniques can help analyse large corpora in the humanities and social sciences. He is currently focusing on different kinds of literary texts: collections of poetry and novels now span several centuries, making it possible to study how the notion of style has evolved, and to characterise the style of an author or the specificities of a text genre. Within computational social science, he will focus on modelling language change using different corpora, such as newspapers (for example the New York Times corpus, spanning several decades) or Twitter (reflecting more recent changes and a more informal language register).
His main collaborator while at the Turing will be Barbara McGillivray.
Kathleen M. Vogel
Kathleen M. Vogel is an associate professor at the School of Public Policy, University of Maryland at College Park. She was previously an associate professor in the Department of Political Science, and Director of the Science, Technology, and Society Program at North Carolina State University (NC State). Vogel holds a PhD in biophysical chemistry from Princeton University. During 2016-2017 Vogel served as a Jefferson Science Fellow in the US Department of State’s Office to Monitor and Combat Trafficking in Persons. Prior to joining the NC State faculty, Vogel was a tenured associate professor at Cornell University. Her research has focused on studying the social and technical dimensions of bioweapons and emerging life science threats and the production of knowledge and big data in intelligence assessments. Her work has involved close engagement between academia, intelligence, and policy.
Her overall interests relate to the study of knowledge production on security and intelligence problems, and at the Turing she plans to explore: 1) the assessment of emerging biosecurity threats; 2) the application of knowledge-production techniques to intelligence problems; and 3) the understanding of human trafficking patterns. Since 1998, her research has applied social science methodologies and technical approaches to better assess security threats from the life sciences, primarily using qualitative research methodologies involving interviews and observations of detailed laboratory and technical practices. To expand this work, she would like to explore bringing big data analytics into her research to further the understanding of emerging biosecurity threats, such as gene editing. In addition, since 2014 she has been involved in a series of collaborative research projects at a big data research laboratory that aims to develop new technologies and tradecraft for the future of intelligence analysis. This research has involved interview and focus group data, surveys, and experimental data on prototype development and use. She would be interested to connect with Turing-affiliated researchers interested in these prototypes and in how to think about the social and ethical implications of such technologies.