Towards Inverse Reinforcement Learning for Limit Order Book Dynamics
Multi-agent learning is a promising method to simulate aggregate competitive behaviour in finance. Learning expert agents' reward functions through their external demonstrations is hence particularly relevant for the subsequent design of realistic agent-based simulations. Inverse Reinforcement Learning (IRL) aims at acquiring such reward functions through inference, allowing the resulting policy to generalize to states not observed in the past. This paper investigates whether IRL can infer such rewards from agents within real financial stochastic environments: limit order books (LOB). We introduce a simple one-level LOB, where the interactions of a number of stochastic agents and an expert trading agent are modelled as a Markov decision process. We consider two cases for the expert's reward: either a simple linear function of state features, or a complex, more realistic non-linear function. Given the expert agent's demonstrations, we attempt to discover their strategy by modelling their latent reward function using linear and Gaussian process (GP) regressors from previous literature, and our own approach through Bayesian neural networks (BNN). While all three methods can learn the linear case, only the GP-based and our proposed BNN methods are able to recover the non-linear reward. Our BNN IRL algorithm outperforms the other two approaches as the number of samples increases. These results illustrate that complex behaviours, induced by non-linear reward functions amid agent-based stochastic scenarios, can be deduced through inference, encouraging the use of inverse reinforcement learning for opponent-modelling in multi-agent systems.
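The two reward settings described above can be sketched minimally as follows. This is an illustrative toy, not the paper's actual architecture: the LOB state features, the network size, and the use of MC-dropout as a crude Bayesian approximation to a BNN are all assumptions made for the example; the linear-reward case is fitted with plain full-batch gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-level LOB state features (illustrative, not from the paper):
# [inventory, bid/ask volume imbalance, mid-price change]
states = rng.normal(size=(500, 3))

# Linear expert-reward case: r(s) = w . s, with illustrative weights.
w_true = np.array([1.0, -0.5, 0.25])
rewards = states @ w_true + rng.normal(scale=0.05, size=500)

# Tiny one-hidden-layer reward network; dropout at prediction time
# gives a rough posterior over rewards (MC-dropout approximation).
H = 32
W1 = rng.normal(scale=0.3, size=(3, H))
W2 = rng.normal(scale=0.3, size=(H, 1))

def predict(s, drop_p=0.1):
    h = np.maximum(s @ W1, 0.0)           # ReLU hidden layer
    mask = rng.random(h.shape) > drop_p   # random dropout mask
    return (h * mask / (1 - drop_p)) @ W2 # one stochastic reward estimate

# Fit by full-batch gradient descent on squared error (no dropout while fitting).
for _ in range(2000):
    h = np.maximum(states @ W1, 0.0)
    err = (h @ W2).ravel() - rewards
    grad_W2 = h.T @ err[:, None] / len(states)
    grad_h = err[:, None] @ W2.T * (h > 0)
    W1 -= 0.1 * (states.T @ grad_h / len(states))
    W2 -= 0.1 * grad_W2

# Predictive mean and uncertainty from repeated stochastic forward passes.
samples = np.stack([predict(states[:5]).ravel() for _ in range(100)])
mean, std = samples.mean(axis=0), samples.std(axis=0)
```

The same fitting loop applies unchanged to a non-linear target reward; the point of the dropout passes is that the learned reward comes with an uncertainty estimate, which is what distinguishes a Bayesian reward model from the plain linear regressor.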