The Alan Turing Institute

Summit on machine learning meets formal methods

This workshop is part of the Federated Logic Conference (FLoC).

The summit on machine learning meets formal methods will bring together academic and industrial leaders who will discuss the benefits and risks of machine learning solutions.

Friday 13 Jul 2018
Time: 09:00 - 21:30

Event type: Workshop

Audience type: Cross-disciplinary

Machine learning has revolutionised computer science and AI: deep neural networks have been shown to match human ability in various tasks, and solutions based on machine learning are being deployed in real-world systems, from automation and self-driving cars to security and banking.

About the event

Organisers: Marta Kwiatkowska (University of Oxford, UK); Nathanael Fijalkow (The Alan Turing Institute and University of Warwick, UK); Stephen Roberts (University of Oxford, UK)  

Time: 09:00 - 21:30

Venue: University of Oxford

Abstracts and biographies

Undoubtedly, the potential benefits of AI systems are immense and wide-ranging. At the same time, recent accidents involving machine learning systems have caught the public’s attention, and researchers are beginning to question their safety. Traditionally, safety assurance has been the realm of formal methods, understood broadly as the rigorous, mathematical underpinning of software and hardware systems. Rooted in logic and reasoning, formal methods aim to provide guarantees that a system behaves correctly, which is essential in safety-critical contexts. Such guarantees can be provided automatically for conventional software and hardware systems using verification technologies such as model checking or theorem proving. Machine learning, however, offers no such guarantees, and the reasoning techniques needed to justify the safety of its autonomous decisions are still in their infancy.

The summit on machine learning meets formal methods will bring together academic and industrial leaders who will discuss the benefits and risks of machine learning solutions. The overall aim is to identify promising future directions for research and innovation of interest to The Alan Turing Institute and UK research councils and government agencies, which will be summarised in a written report that will be made public.

Confirmed speakers:

  • Transparency and accountability for machine learning – Anupam Datta (Carnegie Mellon University, USA)
  • Programming from examples: PL meets ML – Sumit Gulwani (Microsoft, USA)
  • Towards neural program synthesis and code repair – Pushmeet Kohli (DeepMind, UK)
  • Provably beneficial artificial intelligence – Stuart Russell (University of California at Berkeley, USA)
  • Data centric engineering: A new concept? – Mark Girolami (Imperial College London and The Alan Turing Institute, UK)
  • Ethically aligned AI systems – Francesca Rossi (University of Padova, Italy)
  • Deep learning methods for large scale theorem proving – Christian Szegedy (Google, USA)
  • Machine learning and logic: Fast and slow thinking – Moshe Vardi (Rice University, USA)
  • AI2: AI safety and robustness with abstract interpretation – Martin Vechev (ETH Zurich, Switzerland)
  • What just happened in AI – Adnan Darwiche (University of California, Los Angeles, USA)
  • Safety verification for deep neural networks with provable guarantees – Marta Kwiatkowska (University of Oxford, UK)
  • Safe reinforcement learning via formal methods – André Platzer (Carnegie Mellon University, USA)
  • Program fairness – a formal methods perspective – Aditya Nori (Microsoft Research Cambridge, UK)

Location

University of Oxford

Oxford, UK
