Variational Latent Branching Model for Off-Policy Evaluation

01/28/2023
by Qitong Gao et al.

Model-based methods have recently shown great potential for off-policy evaluation (OPE): transition models of Markov decision processes (MDPs) are fitted to offline trajectories induced by behavioral policies, and are then used to roll out simulated trajectories and estimate policy performance. Model-based OPE methods face two key challenges. First, because the offline trajectories are usually fixed, they tend to cover only a limited portion of the state and action space. Second, the performance of model-based methods can be sensitive to the initialization of their parameters. In this work, we propose the variational latent branching model (VLBM), which learns the transition function of MDPs by formulating the environmental dynamics as a compact latent space from which the next states and rewards are sampled. Specifically, VLBM leverages and extends the variational inference framework with recurrent state alignment (RSA), which is designed to capture as much information as possible from the limited training data by smoothing the information flow between the variational (encoding) and generative (decoding) parts of the VLBM. Moreover, we introduce a branching architecture to improve the model's robustness to randomly initialized weights. The effectiveness of the VLBM is evaluated on the deep OPE (DOPE) benchmark, whose training trajectories are designed to yield varied coverage of the state-action space. We show that the VLBM generally outperforms existing state-of-the-art OPE methods.
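To make the architecture described above more concrete, below is a minimal PyTorch sketch of one plausible reading of the design. Everything beyond the abstract is an assumption: the class names (`VLBMSketch`, `Branch`), the realization of RSA as an MSE penalty between encoder and decoder recurrent states, the per-branch reconstruction losses, and the omission of the usual KL regularizer (dropped here for brevity) are illustrative choices, not the authors' implementation.

```python
# Hypothetical sketch of a VLBM-style latent transition model (not the
# authors' code). Assumptions: latents evolve through GRU cells; RSA is
# realized as an MSE penalty aligning encoder/decoder hidden states;
# "branching" is realized as several independently initialized decoders.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Branch(nn.Module):
    """One generative (decoding) branch: recurrent latent dynamics with
    Gaussian latents plus next-state and reward heads."""

    def __init__(self, state_dim, action_dim, latent_dim, hidden=128):
        super().__init__()
        self.rnn = nn.GRUCell(latent_dim + action_dim, hidden)
        self.to_latent = nn.Linear(hidden, 2 * latent_dim)  # mu, log-var of z_{t+1}
        self.state_head = nn.Linear(latent_dim, state_dim)  # decode next state
        self.reward_head = nn.Linear(latent_dim, 1)         # decode reward

    def step(self, z, a, h):
        h = self.rnn(torch.cat([z, a], dim=-1), h)
        mu, logvar = self.to_latent(h).chunk(2, dim=-1)
        z_next = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return z_next, h, self.state_head(z_next), self.reward_head(z_next)


class VLBMSketch(nn.Module):
    """Variational (encoding) path plus several independently initialized
    decoder branches, trained jointly with an assumed RSA penalty."""

    def __init__(self, state_dim, action_dim, latent_dim=32, hidden=128, branches=4):
        super().__init__()
        self.enc_rnn = nn.GRUCell(state_dim + action_dim, hidden)
        self.enc_post = nn.Linear(hidden, 2 * latent_dim)  # posterior over z_t
        self.branches = nn.ModuleList(
            Branch(state_dim, action_dim, latent_dim, hidden) for _ in range(branches)
        )
        self.hidden, self.latent_dim = hidden, latent_dim

    def loss(self, states, actions, rewards):
        # states: (B, T+1, S); actions: (B, T, A); rewards: (B, T)
        B, T = actions.shape[:2]
        h_enc = states.new_zeros(B, self.hidden)
        h_dec = [states.new_zeros(B, self.hidden) for _ in self.branches]
        recon = rsa = 0.0
        for t in range(T):
            # Variational (encoding) path: infer z_t from the observed transition.
            h_enc = self.enc_rnn(torch.cat([states[:, t], actions[:, t]], -1), h_enc)
            mu, logvar = self.enc_post(h_enc).chunk(2, -1)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
            # Generative (decoding) path: every branch predicts the next step.
            for i, br in enumerate(self.branches):
                _, h_dec[i], s_hat, r_hat = br.step(z, actions[:, t], h_dec[i])
                recon = recon + F.mse_loss(s_hat, states[:, t + 1]) \
                              + F.mse_loss(r_hat.squeeze(-1), rewards[:, t])
                # Assumed RSA form: pull decoder hidden states toward the
                # encoder's, smoothing flow between the two paths.
                rsa = rsa + F.mse_loss(h_dec[i], h_enc.detach())
        n = T * len(self.branches)
        return recon / n + 0.1 * rsa / n  # 0.1 is an arbitrary RSA weight
```

Under these assumptions, a plausible evaluation loop would encode an initial latent from a short context, roll each branch forward under the target policy, and aggregate the predicted rewards across branches to score the policy.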

Related research

Factoring Exogenous State for Model-Free Monte Carlo (03/28/2017)
Policy analysts wish to visualize a range of policies for large simulato...

Adversarial Model for Offline Reinforcement Learning (02/21/2023)
We propose a novel model-based offline Reinforcement Learning (RL) frame...

Latent Neural ODEs with Sparse Bayesian Multiple Shooting (10/07/2022)
Training dynamic models, such as neural ODEs, on long trajectories is a ...

PerSim: Data-Efficient Offline Reinforcement Learning with Heterogeneous Agents via Personalized Simulators (02/13/2021)
We consider offline reinforcement learning (RL) with heterogeneous agent...

Model-based trajectory stitching for improved behavioural cloning and its applications (12/08/2022)
Behavioural cloning (BC) is a commonly used imitation learning method to...

Reward Conditioned Neural Movement Primitives for Population Based Variational Policy Optimization (11/09/2020)
The aim of this paper is to study the reward based policy exploration pr...

Optimizing Pessimism in Dynamic Treatment Regimes: A Bayesian Learning Approach (10/26/2022)
In this article, we propose a novel pessimism-based Bayesian learning me...
