Planning with Expectation Models

04/02/2019 · by Yi Wan, et al.

Distribution models and sample models are two popular model choices in model-based reinforcement learning (MBRL). However, learning these models can be intractable, particularly when the state and action spaces are large. Expectation models, on the other hand, are comparatively easy to learn due to their compactness, and have been widely used for deterministic environments. For stochastic environments, it is not obvious how expectation models can be used for planning, as they only partially characterize a distribution. In this paper, we propose a sound way of using approximate expectation models for MBRL. In particular, we 1) show that planning with an expectation model is equivalent to planning with a distribution model if the state value function is linear in state features, 2) analyze two common parametrization choices for approximating the expectation: linear and non-linear expectation models, 3) propose a sound model-based policy-evaluation algorithm and present its convergence results, and 4) empirically demonstrate the effectiveness of the proposed planning algorithm.
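The key equivalence claim rests on linearity: if the value function is linear in state features, v(s) = w·φ(s), then E[v(s')] = w·E[φ(s')], so predicting only the expected next feature vector loses nothing relative to sampling the full distribution. A minimal numerical sketch of this idea (all names, the Gaussian transition noise, and the linear dynamics matrix `A` are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: feature dimension d, linear value function v(s) = w @ phi(s).
d = 4
w = rng.normal(size=d)

# Assumed stochastic environment: next-state features are Gaussian
# with mean A @ phi (A is an arbitrary illustrative dynamics matrix).
A = rng.normal(size=(d, d))

def sample_next_features(phi, n=100_000):
    """Sample-model view: draw n next-feature vectors."""
    return A @ phi + rng.normal(scale=0.1, size=(n, d))

phi = rng.normal(size=d)

# Sample model: Monte Carlo estimate of E[v(s')].
mc_value = (sample_next_features(phi) @ w).mean()

# Expectation model: predicts only E[phi(s')] = A @ phi, then applies v.
exp_value = w @ (A @ phi)

# With v linear, the two planning targets coincide (up to MC error).
print(abs(mc_value - exp_value))  # small, shrinks as n grows
```

With a non-linear value function the second computation would instead give v(E[φ(s')]) ≠ E[v(s')] in general, which is why the stochastic case needs the care the paper provides.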


