The software agent paradigm emerged in the late 1980s, when it was envisaged as a new mode of user interface. Instead of software acting as a dumb, passive recipient of our instructions, the dream was to build software applications that would act as proactive assistants, working with us on our everyday tasks. This approach is now mainstream: we have software agents on our phones and in our homes – Siri, Alexa, and Cortana are all manifestations of the software agent dream. A natural development of the software agent paradigm is the idea that these agents will interact not just with humans, but with each other, and it is this idea that gives rise to the paradigm of multi-agent systems.
Traditionally, AI focussed on attributes of intelligent behaviour such as learning, planning, and problem solving. Multi-agent systems (MAS) emphasise a different set of skills. Specifically, to make the dream of multi-agent systems a reality, we will need to build AI systems that have social skills – the ability to cooperate, coordinate, and negotiate with other software agents and with humans in order to autonomously achieve delegated goals. Considerable emphasis has recently been given to issues of AI safety, and multi-agent systems bring with them their own challenges – in particular, the dynamics of multi-agent systems can be unpredictable and difficult to understand. To make safe multi-agent systems a reality, we need to address these challenges head on. The objectives of this programme are therefore to:
- Understand how to engineer AI systems with social skills: the ability to autonomously cooperate, coordinate, and negotiate with each other.
- Understand how to engineer multi-agent systems with safe, predictable dynamics.
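To make the negotiation skill referred to above concrete, the following is a minimal sketch of an alternating-offers protocol in which two agents split a unit of surplus, each conceding over time until one accepts. All names and parameters here (`Agent`, `negotiate`, the reservation and concession values) are illustrative assumptions, not part of any specific framework or of this programme's methods.

```python
class Agent:
    """An agent bargaining over a unit of surplus, conceding over time."""

    def __init__(self, name: str, reservation: float, concession: float):
        self.name = name
        self.reservation = reservation  # smallest share it will accept
        self.demand = 1.0               # opening demand: the whole surplus
        self.concession = concession    # amount yielded after each rejection

    def propose(self) -> float:
        """Offer the counterpart everything beyond the current demand."""
        return 1.0 - self.demand

    def accepts(self, offer: float) -> bool:
        return offer >= self.reservation

    def concede(self) -> None:
        self.demand = max(self.reservation, self.demand - self.concession)


def negotiate(a: Agent, b: Agent, max_rounds: int = 100):
    """Alternate offers until one agent accepts or rounds run out.

    Returns (round, proposer_name, accepted_offer), or None on deadlock.
    """
    proposer, responder = a, b
    for round_no in range(max_rounds):
        offer = proposer.propose()
        if responder.accepts(offer):
            return round_no, proposer.name, offer
        proposer.concede()
        proposer, responder = responder, proposer
    return None  # no agreement reached


if __name__ == "__main__":
    alice = Agent("alice", reservation=0.3, concession=0.1)
    bob = Agent("bob", reservation=0.4, concession=0.05)
    print(negotiate(alice, bob))
```

Even this toy protocol hints at the safety challenge noted above: whether agreement is reached at all, and who captures most of the surplus, depends sensitively on the agents' concession strategies.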
Multi-agent systems are now a standard technique in many areas, for example in agent-based financial modelling, agent-based epidemiological modelling, automated trading, business process modelling, security resource allocation, and multi-robot systems. The work of this programme will have impacts across this range of applications.