Models, Pixels, and Rewards: Evaluating Design Trade-offs in Visual Model-Based Reinforcement Learning

12/08/2020
by Mohammad Babaeizadeh, et al.

Model-based reinforcement learning (MBRL) methods have shown strong sample efficiency and performance across a variety of tasks, including when faced with high-dimensional visual observations. These methods learn to predict the environment dynamics and expected reward from interaction and use this predictive model to plan and perform the task. However, MBRL methods vary in their fundamental design choices, and there is no strong consensus in the literature on how these design decisions affect performance. In this paper, we study a number of design decisions for the predictive model in visual MBRL algorithms, focusing specifically on methods that use a predictive model for planning. We find that many design decisions often considered crucial, such as the use of latent spaces, have little effect on task performance. A notable exception is that predicting future observations (i.e., images) leads to a significant improvement in task performance compared to predicting only rewards. We also empirically find that image prediction accuracy, somewhat surprisingly, correlates more strongly with downstream task performance than reward prediction accuracy. We show how this phenomenon is related to exploration and how some of the lower-scoring models on standard benchmarks (which require exploration) perform on par with the best-performing models when trained on the same data. At the same time, in the absence of exploration, models that fit the data better usually perform better on the downstream task as well; surprisingly, however, these are often not the same models that perform best when learning and exploring from scratch. These findings suggest that performance and exploration place important and potentially contradictory requirements on the model.
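To make the planning setup concrete, below is a minimal sketch (not the authors' implementation) of the model-predictive control loop that visual MBRL planning methods share: a learned predictive model rolls out candidate action sequences from the current observation, and the agent executes the first action of the highest-return sequence. The names (PredictiveModel, plan_with_model, the random-shooting planner, and the predict_images flag) are illustrative assumptions; the paper's models are trained neural networks and may use more sophisticated planners.

```python
# Illustrative sketch of a visual MBRL planning loop (random shooting / MPC).
# All names and the random "model" are assumptions for illustration only.

import numpy as np


class PredictiveModel:
    """Stand-in for a learned predictive model. Depending on the design
    choice studied in the paper, it may predict only future rewards, or
    future observations (images) as well."""

    def __init__(self, predict_images: bool = True):
        self.predict_images = predict_images

    def rollout(self, obs: np.ndarray, actions: np.ndarray) -> dict:
        # A real model would be a neural network trained on interaction
        # data; random outputs here just keep the sketch runnable.
        horizon = actions.shape[0]
        out = {"rewards": np.random.randn(horizon)}
        if self.predict_images:
            out["images"] = np.random.rand(horizon, *obs.shape)
        return out


def plan_with_model(model: PredictiveModel, obs: np.ndarray,
                    action_dim: int, horizon: int = 12,
                    num_candidates: int = 64) -> np.ndarray:
    """Random-shooting planner: sample candidate action sequences, score
    each by the model's predicted cumulative reward, and return the first
    action of the best sequence (replanning happens every step, MPC-style)."""
    candidates = np.random.uniform(-1.0, 1.0,
                                   size=(num_candidates, horizon, action_dim))
    returns = np.array([model.rollout(obs, seq)["rewards"].sum()
                        for seq in candidates])
    best = candidates[np.argmax(returns)]
    return best[0]


if __name__ == "__main__":
    obs = np.zeros((64, 64, 3))  # e.g., a 64x64 RGB observation
    model = PredictiveModel(predict_images=True)
    action = plan_with_model(model, obs, action_dim=2)
    print("first planned action:", action)
```

The predict_images flag mirrors the design choice the paper isolates: whether the model is trained to predict future frames in addition to rewards, even though only the predicted rewards are needed to score action sequences at planning time.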

