Reinforcement learning is a simple machine learning scenario: an agent placed in an unknown environment performs actions; actions yield rewards; the agent therefore wants to learn how to maximise rewards. This pervasive approach, inspired by how humans learn from experience, is nowadays used in a huge number of fields.
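The loop above (act, observe a reward, update) can be made concrete with tabular Q-learning, the textbook algorithm for this scenario. The following is a minimal sketch on a hypothetical two-state environment invented for illustration (it is not part of the project): action 1 moves between the two states and pays off only from state 1, so the agent must learn to alternate.

```python
import random

def step(state, action):
    """Toy two-state environment. Action 0 stays put for no reward;
    action 1 moves to the other state, paying 1 only from state 1."""
    if action == 0:
        return state, 0.0
    return 1 - state, (1.0 if state == 1 else 0.0)

def q_learning(steps=5000, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0], [0.0, 0.0]]  # q[state][action]
    state = 0
    for _ in range(steps):
        # Epsilon-greedy: mostly exploit the current estimates,
        # occasionally explore a random action.
        if rng.random() < epsilon:
            action = rng.randrange(2)
        else:
            action = 0 if q[state][0] >= q[state][1] else 1
        next_state, reward = step(state, action)
        # Temporal-difference update toward reward + discounted best future value.
        q[state][action] += alpha * (
            reward + gamma * max(q[next_state]) - q[state][action]
        )
        state = next_state
    return q
```

After training, the greedy policy read off `q` takes action 1 in both states, i.e. the agent has learned to keep alternating so as to collect the reward every other step.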
The typical limitation is that the environment is often very large, and classical computational methods do not scale up. Hence the need for new paradigms.
This project is about developing sound ways of reducing the size of the environment. In other words, we want to first construct a smaller environment that is, ideally, equivalent to the original one, and only then proceed to the reinforcement learning task itself on the reduced environment.
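One classical way to make "smaller but equivalent" precise is bisimulation: two states can be merged when, for every action, they receive the same reward and move with the same probabilities into the same equivalence classes. The sketch below is an illustrative partition-refinement implementation under the assumption that the environment is given as an explicit finite MDP; the example MDP at the end is hypothetical, with two interchangeable states `b` and `c`.

```python
def bisimulation_quotient(states, actions, trans, reward):
    """trans[s][a] maps next states to probabilities; reward[s][a] is the
    immediate reward. Returns the coarsest bisimulation partition as a
    list of sets of states."""
    block_of = {s: 0 for s in states}  # start with one block, then refine
    while True:
        # Signature of a state: per action, its reward and the probability
        # mass it sends into each current block.
        sigs = {}
        for s in states:
            sig = []
            for a in actions:
                mass = {}
                for t, p in trans[s][a].items():
                    b = block_of[t]
                    mass[b] = mass.get(b, 0.0) + p
                sig.append((a, reward[s][a], tuple(sorted(mass.items()))))
            sigs[s] = tuple(sig)
        # Regroup states by signature, numbering blocks in first-seen order.
        ids, new_block_of = {}, {}
        for s in states:
            if sigs[s] not in ids:
                ids[sigs[s]] = len(ids)
            new_block_of[s] = ids[sigs[s]]
        if new_block_of == block_of:  # partition is stable: done
            blocks = {}
            for s, b in block_of.items():
                blocks.setdefault(b, set()).add(s)
            return list(blocks.values())
        block_of = new_block_of

# Hypothetical 3-state example: b and c behave identically and get merged.
states = ['a', 'b', 'c']
actions = ['go']
trans = {'a': {'go': {'b': 0.5, 'c': 0.5}},
         'b': {'go': {'a': 1.0}},
         'c': {'go': {'a': 1.0}}}
reward = {'a': {'go': 0.0}, 'b': {'go': 1.0}, 'c': {'go': 1.0}}
blocks = bisimulation_quotient(states, actions, trans, reward)
```

Here the three-state environment collapses to two blocks, `{a}` and `{b, c}`, and reinforcement learning can then be run on the quotient without changing what is learned.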
Various heuristics are known to achieve model reduction, often through statistical methods. The goal of this project is to provide theoretical notions that make model reduction amenable to an algorithmic treatment.
A first application and test case for these techniques is playing games such as chess, Go, or poker. The state spaces of such games are far beyond computer capabilities, so model reduction can be decisive, as recent progress has shown.
The main challenge and long-term objective is to apply these techniques to program verification and synthesis.