Part 1: Reinforcement Learning in Algorithmic Trading
Reinforcement learning aims to solve certain stochastic control problems without making explicit assumptions on the dynamics of the environment or on the effect an agent's actions have on those dynamics. In this talk, I will provide an overview of two approaches to algorithmic trading: (i) double deep Q-learning, and (ii) reinforced deep Kalman filters. Deep Q-learning approximates the action-value function with a neural network and aims to solve the Bellman equation by acting in the environment and updating the network parameters from the observed transitions. Reinforced deep Kalman filters, on the other hand, take a batch reinforcement learning perspective and aim to maximize rewards directly by learning a latent model, updating that model as data arrive and the agent takes actions. Some sample results on real data will be shown.
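The key step in double deep Q-learning is that the online network selects the next action while a separate target network evaluates it, which reduces the overestimation bias of plain Q-learning. The following is a minimal sketch of that target computation; the tabular Q arrays, state/action sizes, and learning rate are illustrative stand-ins (not from the talk) for the actual neural networks and hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions = 5, 3  # toy problem sizes (illustrative only)
gamma = 0.99                # discount factor

# Two Q tables stand in for the online and target networks.
q_online = rng.normal(size=(n_states, n_actions))
q_target = q_online.copy()

def double_q_target(reward, next_state, done):
    """Double Q-learning target: the online net selects the action,
    the target net evaluates it."""
    if done:
        return reward
    a_star = int(np.argmax(q_online[next_state]))        # selection
    return reward + gamma * q_target[next_state, a_star]  # evaluation

def update(state, action, reward, next_state, done, lr=0.1):
    """One TD step moving the online estimate toward the double-Q target."""
    td_target = double_q_target(reward, next_state, done)
    q_online[state, action] += lr * (td_target - q_online[state, action])
```

In the full deep version, the TD step becomes a gradient update on the online network's parameters, and the target network is refreshed periodically by copying the online weights.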
Part 2: Mean-Field Games with Differing Beliefs for Algorithmic Trading
Even when confronted with the same data, agents often disagree on a model of the real world. Here, we address the question of how interacting heterogeneous agents, who disagree on what model the real world follows, optimize their trading actions. The market has latent factors that drive prices, and agents account for the permanent impact they have on prices. This leads to a large stochastic game, where each agent's performance criterion is computed under a different probability measure.
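Schematically, the disagreement enters through the measure under which each agent evaluates its objective. A generic form of such a criterion (an illustrative sketch; the talk's exact functional, state dynamics, and cost terms are not reproduced here) is

\[
J^i(\nu^i) \;=\; \mathbb{E}^{\mathbb{P}^i}\!\left[\, X_T^{\nu^i} \,\right],
\qquad i = 1, \dots, N,
\]

where \(\nu^i\) is agent \(i\)'s trading strategy, \(X_T^{\nu^i}\) its terminal wealth (net of impact costs), and \(\mathbb{P}^i\) the probability measure encoding agent \(i\)'s beliefs about the latent factors driving prices. Because the \(\mathbb{P}^i\) differ across agents, each agent solves a distinct optimization problem even though all observe the same price path.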