Dr Paolo Turrini



Turing Fellow



Paolo Turrini is an Associate Professor in the Department of Computer Science at the University of Warwick, which he joined in 2017. He gained his PhD in Computer Science at the University of Utrecht, working on modal logics for game theory. He won a COFUND Marie Curie fellowship at the University of Luxembourg, where he worked on logical and game-theoretic models of trust in coalition formation. He then won an Intra-European Marie Curie Fellowship, moving to Imperial College London to work on models of distributed negotiation. At Imperial he was also a recipient of a Junior Research Fellowship. His research is in artificial intelligence, in particular models of strategic behaviour. He publishes in the top venues in AI and multi-agent systems, e.g., IJCAI, AAAI, AAMAS, JAAMAS and JAIR.

Research interests

In distributed artificial intelligence, where autonomous entities freely interact to pursue potentially conflicting objectives, regulating their decision-making is a critical concern. Automated trading, traffic control and resource allocation are all scenarios in which the decisions of self-interested agents, even when individually rational, might lead to outcomes that are detrimental to society. Paolo's research sits at the crossroads of computer science and game theory, and aims to construct algorithms for the regulation of complex strategic interaction, spelling out the theoretical and computational guarantees under which individually rational decisions lead to socially optimal outcomes. Agents are modelled as autonomous entities acting in a dynamic and unknown environment, which they can learn and reason about, and which is typically inhabited by other agents pursuing potentially conflicting objectives.

The research methodology builds on rigorous game-theoretic modelling and asks algorithmic questions. Instances of such questions, highly relevant to top publication venues in AI, algorithmic game theory and multi-agent systems, are:

- Can we devise a protocol for collective decision-making that cannot be manipulated? Think, for instance, of the problem of biased reviews in online social networks.
- Can we prevent external attackers from learning the structure of a social network? Think of a service provider interested in gathering information on potential customers.
- Can we deploy these programs and augment existing systems? Think of the changes that current recommender systems need to undergo in order to be trustworthy for their users.

Typical techniques are equilibrium analysis, complexity analysis and validation against human data.
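As a small illustration of equilibrium analysis, the sketch below enumerates the pure-strategy Nash equilibria of a two-player normal-form game by checking each profile for profitable unilateral deviations. The game used here is a standard Prisoner's Dilemma chosen purely for illustration; the payoff values and function names are not taken from Paolo's work.

```python
from itertools import product

# Hypothetical Prisoner's Dilemma payoffs, for illustration only.
# payoffs[(row_action, col_action)] = (row player's payoff, column player's payoff)
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}
actions = ["C", "D"]

def pure_nash_equilibria(payoffs, actions):
    """Return all action profiles from which neither player can
    strictly gain by unilaterally deviating."""
    equilibria = []
    for a1, a2 in product(actions, repeat=2):
        u1, u2 = payoffs[(a1, a2)]
        # Player 1 checks deviations in the row, player 2 in the column.
        best_for_1 = all(payoffs[(d, a2)][0] <= u1 for d in actions)
        best_for_2 = all(payoffs[(a1, d)][1] <= u2 for d in actions)
        if best_for_1 and best_for_2:
            equilibria.append((a1, a2))
    return equilibria

print(pure_nash_equilibria(payoffs, actions))  # [('D', 'D')]
```

The unique equilibrium, mutual defection, is individually rational yet socially suboptimal (both players would prefer mutual cooperation), which is exactly the gap between individual and collective rationality that this line of research aims to close.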