Summit on machine learning meets formal methods

Organisers: Marta Kwiatkowska (University of Oxford, UK); Nathanael Fijalkow (The Alan Turing Institute and University of Warwick, UK); Stephen Roberts (University of Oxford, UK)

Date: 13 July 2018

Time: To be announced

Venue: University of Oxford

Abstracts and biographies

This workshop is part of the Federated Logic Conference (FLoC). For registration and further information, visit the main event website.

Machine learning has revolutionised computer science and AI: deep neural networks have been shown to match human ability in various tasks, and solutions based on machine learning are being deployed in real-world systems, from automation and self-driving cars to security and banking. Undoubtedly, the potential benefits of AI systems are immense and wide ranging. At the same time, recent accidents involving machine learning systems have caught the public’s attention, and as a result many researchers are beginning to question their safety. Traditionally, safety assurance methodologies are the realm of formal methods, understood broadly as the rigorous, mathematical underpinning of software and hardware systems. Rooted in logic and reasoning, they aim to provide guarantees that a system behaves correctly, which is necessary in safety-critical contexts. Such guarantees can be provided automatically for conventional software/hardware systems using verification technologies such as model checking or theorem proving. However, machine learning does not offer such guarantees, and the reasoning techniques necessary to justify the safety of its autonomous decisions are in their infancy.

The summit on machine learning meets formal methods will bring together academic and industrial leaders who will discuss the benefits and risks of machine learning solutions. The overall aim is to identify promising future directions for research and innovation of interest to The Alan Turing Institute and UK research councils and government agencies, which will be summarised in a written report that will be made public.

Confirmed speakers:

  • Transparency and accountability for machine learning
    Anupam Datta (Carnegie Mellon University, USA)
  • Programming from examples: PL meets ML
    Sumit Gulwani (Microsoft, USA)
  • Towards neural program synthesis and code repair
    Pushmeet Kohli (DeepMind, UK)
  • Provably beneficial artificial intelligence
    Stuart Russell (University of California at Berkeley, USA)
  • Data centric engineering: A new concept?
    Mark Girolami (Imperial College London and The Alan Turing Institute, UK)
  • Machine learning and logic: Fast and slow thinking
    Moshe Vardi (Rice University, USA)