Muesli: Combining Improvements in Policy Optimization

04/13/2021
by Matteo Hessel, et al.

We propose a novel policy update that combines regularized policy optimization with model learning as an auxiliary loss. The update (henceforth Muesli) matches MuZero's state-of-the-art performance on Atari. Notably, Muesli does so without using deep search: it acts directly with a policy network and has computation speed comparable to model-free baselines. The Atari results are complemented by extensive ablations, and by additional results on continuous control and 9x9 Go.
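The abstract only names the ingredients of the update, so as a rough illustration of how such a combined objective can look in code, the sketch below mixes an advantage-weighted policy term, a KL regularizer toward a prior policy, and a model-prediction loss used purely as an auxiliary signal. Everything here is an assumption made for this example rather than the paper's actual update: the function name muesli_style_loss, its arguments (policy_logits, prior_logits, actions, advantages, model_preds, model_targets), and the weights kl_weight and aux_weight are invented, and the individual terms are stand-ins for the paper's exact losses.

# Illustrative sketch only, not the authors' implementation. It combines
# (1) an advantage-weighted policy term, (2) a KL regularizer toward a
# prior policy, and (3) a model loss used purely as an auxiliary signal.
import jax
import jax.numpy as jnp

def muesli_style_loss(policy_logits, prior_logits, actions, advantages,
                      model_preds, model_targets,
                      kl_weight=1.0, aux_weight=1.0):
    log_pi = jax.nn.log_softmax(policy_logits)      # current policy
    log_prior = jax.nn.log_softmax(prior_logits)    # e.g. a target network
    # (1) Push up the log-probability of advantageous actions.
    logp_a = jnp.take_along_axis(log_pi, actions[:, None], axis=-1)[:, 0]
    pg_loss = -jnp.mean(jax.lax.stop_gradient(advantages) * logp_a)
    # (2) Regularize the policy update: stay close to the prior policy.
    kl = jnp.sum(jnp.exp(log_prior) * (log_prior - log_pi), axis=-1)
    # (3) Train the model (e.g. reward/value predictions) as an auxiliary
    # loss; it shapes the representation but is never used for deep search.
    aux = jnp.mean((model_preds - model_targets) ** 2)
    return pg_loss + kl_weight * jnp.mean(kl) + aux_weight * aux

# Dummy usage: batch of 4 states, 3 discrete actions.
key = jax.random.PRNGKey(0)
logits = jax.random.normal(key, (4, 3))
loss = muesli_style_loss(logits, logits, jnp.array([0, 1, 2, 0]),
                         jnp.ones(4), jnp.zeros(4), jnp.ones(4))

Consistent with the abstract's claim about computation speed, only the policy head would be evaluated at act time in a setup like this; the model terms appear only in the training loss.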


Related research:

06/25/2019 · Uncertainty-aware Model-based Policy Optimization
Model-based reinforcement learning has the potential to be more sample e...

10/12/2020 · Local Search for Policy Iteration in Continuous Control
We present an algorithm for local, regularized, policy improvement in re...

09/07/2019 · Regularized Anderson Acceleration for Off-Policy Deep Reinforcement Learning
Model-free deep reinforcement learning (RL) algorithms have been widely ...

01/27/2021 · OffCon^3: What is state of the art anyway?
Two popular approaches to model-free continuous control tasks are SAC an...

09/17/2018 · Policy Optimization via Importance Sampling
Policy optimization is an effective reinforcement learning approach to s...

07/13/2020 · Structured Policy Iteration for Linear Quadratic Regulator
Linear quadratic regulator (LQR) is one of the most popular frameworks t...
