Introduction

Our society is being permeated by inter-connected intelligent machines that are becoming an important part of our daily and most intimate lives. Furthermore, leaps in AI and machine learning innovation (e.g., in diagnosing cancer, predicting outcomes of court cases, or in robotic warehouses) mean that many tasks requiring specialised skills will soon be automated, enabling access to advanced services and products across the world at the tap of a screen. These novel AI-driven services will involve close partnerships between humans and machines operating in agile and flexible ways, in which neither humans nor machines may be in control of a system at all times (e.g., autonomous cars that relinquish control when sensor data is ambiguous, or teams of humans and machines working together in disaster relief operations).

There is an urgent need to develop models, frameworks, algorithms, and methodologies to ensure such teams of humans and machines are designed for the benefit of society. This means we need to ensure such systems are safe to use, act in a responsible way, and are able to balance the ethics of their actions against utility-maximising ones.

About the event

This one-day workshop will bring together leading UK researchers and innovators to explore the areas of Responsible Artificial Intelligence, Explainable AI, human-centred design, robotics, and human-robot interaction. The event will also include a keynote from a leading international speaker.

The event is jointly organised with DSTL's AI Lab. The workshop will aim to cover a mix of theoretical and practical research questions centred on a number of application areas, including but not limited to: AI for emergency response, defence and security, smart cities, and logistics.

The event will comprise two parts:
- An invitation-only workshop session, which will include talks and café-style discussion sessions.
- An open event with a high-profile keynote speaker and a panel on Responsible AI. This will be followed by a drinks reception.
 
This event will be suitable for individuals interested in inter-disciplinary and multi-disciplinary approaches to responsible human-machine teaming. Specifically, this event is relevant to:

- Researchers from computer science, human-machine interaction, systems sciences, and social sciences.
- Users and practitioners from diverse industries (e.g., defence, emergency response, energy systems, wildlife protection) where humans and autonomous systems work in close partnership.
- High-level decision-makers interested in the implications of human-machine teaming for their business or organisation.

Key topics to be covered include:
  * Explainability
  * Visualisation
  * Trust
  * Explainable AI
  * Accountability
  * Human-Agent Interaction
  * Responsible AI
  * Assurance
  * Flexible Autonomy
  * Agency Delegation
  * Self-organised Multi-robot Systems
  * Motion and Path Planning
  * Distributed Perception and Estimation

The AXA Research Fund is co-sponsoring the event.

Apply to attend

Applications for this event are now closed.

Do feel free to sign up to attend the keynote: The Moral Machine Experiment.

Speakers

Organisers

Location

The Alan Turing Institute

1st floor of the British Library, 96 Euston Road, London NW1 2DB