Muesli: Combining Improvements in Policy Optimization

04/13/2021
by Matteo Hessel, et al.

We propose a novel policy update that combines regularized policy optimization with model learning as an auxiliary loss. The update (henceforth Muesli) matches MuZero's state-of-the-art performance on Atari. Notably, Muesli does so without using deep search: it acts directly with a policy network and has computation speed comparable to model-free baselines. The Atari results are complemented by extensive ablations, and by additional results on continuous control and 9x9 Go.

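The core of the update is compact enough to sketch. Below is a minimal, illustrative Python/NumPy rendering of a Muesli-style loss for a single state with discrete actions: a policy-gradient term plus a KL regularizer toward a clipped-advantage (CMPO) target policy. This is not the authors' implementation; the function name and constants are hypothetical, and the code assumes per-action advantage estimates are given, whereas the paper obtains them from a learned model and value network and adds a separate model-learning auxiliary loss, both omitted here.

```python
import numpy as np

def muesli_policy_loss(logits, prior_logits, action, advantages,
                       clip=1.0, lam=1.0):
    """Illustrative Muesli-style loss for one state with discrete actions.

    Hypothetical simplification: `advantages` holds a per-action advantage
    estimate; in the paper these come from a learned model and value network.
    """
    def softmax(x):
        z = np.exp(x - x.max())
        return z / z.sum()

    pi = softmax(logits)            # current policy
    prior = softmax(prior_logits)   # target-network ("prior") policy

    # Policy-gradient term on the taken action (importance weights omitted).
    pg_loss = -advantages[action] * np.log(pi[action])

    # CMPO target: prior policy re-weighted by clipped, exponentiated advantages.
    cmpo = prior * np.exp(np.clip(advantages, -clip, clip))
    cmpo = cmpo / cmpo.sum()

    # Regularizer: KL(cmpo || pi) pulls the policy toward the CMPO target.
    kl = np.sum(cmpo * (np.log(cmpo) - np.log(pi)))

    return pg_loss + lam * kl

# Toy usage with made-up numbers.
loss = muesli_policy_loss(np.zeros(4), np.zeros(4), action=0,
                          advantages=np.array([0.5, -0.2, 0.1, 0.0]))
```

The clipping step is the point of the construction: in the paper, bounding the exponentiated advantages bounds how far the CMPO target can move from the prior policy, which gives the regularized update its trust-region-like behavior without any deep search at acting time.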