Latent State Marginalization as a Low-cost Approach for Improving Exploration

10/03/2022
by Dinghuai Zhang et al.

While the maximum entropy (MaxEnt) reinforcement learning (RL) framework, often touted for its exploration and robustness capabilities, is usually motivated from a probabilistic perspective, the use of deep probabilistic models has not gained much traction in practice due to their inherent complexity. In this work, we propose the adoption of latent variable policies within the MaxEnt framework, which we show can provably approximate any policy distribution and, moreover, arise naturally when using world models with a latent belief state. We discuss why latent variable policies are difficult to train, show how naive approaches can fail, and then introduce a series of improvements centered around low-cost marginalization of the latent state, allowing us to make full use of the latent state at minimal additional cost. We instantiate our method under the actor-critic framework, marginalizing both the actor and the critic. The resulting algorithm, referred to as Stochastic Marginal Actor-Critic (SMAC), is simple yet effective. We experimentally validate our method on continuous control tasks, showing that effective marginalization leads to better exploration and more robust training.
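
The abstract only describes the method at a high level, but its central computational idea, a policy with a latent variable whose marginal action likelihood is estimated by sampling, can be illustrated with a short sketch. Everything below (the class and method names, the Gaussian parameterizations of p(z|s) and pi(a|s, z), and the number of samples K) is an illustrative assumption, not the authors' implementation:

```python
import math
import torch
import torch.nn as nn


class LatentPolicy(nn.Module):
    """Latent variable policy: pi(a|s) = E_{z ~ p(z|s)}[pi(a|s, z)].

    A minimal sketch of the kind of policy the abstract describes; all
    architectural choices here are assumptions for illustration only.
    """

    def __init__(self, state_dim, action_dim, latent_dim=8, hidden=64):
        super().__init__()
        # p(z|s): state-conditioned Gaussian over the latent z.
        self.prior_net = nn.Linear(state_dim, 2 * latent_dim)
        # pi(a|s, z): Gaussian action head conditioned on state and latent.
        self.action_net = nn.Sequential(
            nn.Linear(state_dim + latent_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 2 * action_dim),
        )

    def _prior(self, s):
        mu, log_std = self.prior_net(s).chunk(2, dim=-1)
        return torch.distributions.Normal(mu, log_std.clamp(-5, 2).exp())

    def _conditional(self, s, z):
        mu, log_std = self.action_net(torch.cat([s, z], dim=-1)).chunk(2, dim=-1)
        return torch.distributions.Normal(mu, log_std.clamp(-5, 2).exp())

    def sample(self, s):
        # Acting stays cheap: draw one latent, then one action.
        z = self._prior(s).rsample()
        return self._conditional(s, z).rsample()

    def marginal_log_prob(self, s, a, K=16):
        # Monte Carlo marginalization over the latent:
        #   log pi(a|s) ~= logsumexp_k log pi(a|s, z_k) - log K,  z_k ~ p(z|s).
        # By Jensen's inequality this is a stochastic lower bound on the true
        # marginal log-likelihood that tightens as K grows.
        z = self._prior(s).rsample((K,))            # (K, B, latent_dim)
        s_rep = s.unsqueeze(0).expand(K, *s.shape)  # (K, B, state_dim)
        a_rep = a.unsqueeze(0).expand(K, *a.shape)  # (K, B, action_dim)
        log_p = self._conditional(s_rep, z).log_prob(a_rep).sum(-1)  # (K, B)
        return torch.logsumexp(log_p, dim=0) - math.log(K)
```

In a MaxEnt actor-critic loop, an estimate like `marginal_log_prob` could stand in for the exact log pi(a|s) in the entropy bonus, and, in the spirit of the abstract's "marginalizing both the actor and critic", the same K latent samples could be reused to average a latent-conditioned critic Q(s, a, z_k) over k; the sketch makes no claim about how SMAC itself implements either step.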

Related research

06/06/2019 · Improving Exploration in Soft-Actor-Critic with Normalizing Flows Policies
Deep Reinforcement Learning (DRL) algorithms for continuous action space...

07/01/2019 · Stochastic Latent Actor-Critic: Deep Reinforcement Learning with a Latent Variable Model
Deep reinforcement learning (RL) algorithms can use high-capacity deep n...

01/29/2022 · Zeroth-Order Actor-Critic
Zeroth-order optimization methods and policy gradient based first-order ...

02/27/2017 · Reinforcement Learning with Deep Energy-Based Policies
We propose a method for learning expressive energy-based policies for co...

03/04/2023 · Wasserstein Actor-Critic: Directed Exploration via Optimism for Continuous-Actions Control
Uncertainty quantification has been extensively used as a means to achie...

12/17/2021 · Symmetry-aware Neural Architecture for Embodied Visual Navigation
Visual exploration is a task that seeks to visit all the navigable areas...

06/09/2021 · Bayesian Bellman Operators
We introduce a novel perspective on Bayesian reinforcement learning (RL)...
