Towards Deeper Deep Reinforcement Learning

06/02/2021
by Johan Bjorck, et al.

In computer vision and natural language processing, innovations in model architecture that increase model capacity have reliably translated into gains in performance. In stark contrast with this trend, state-of-the-art reinforcement learning (RL) algorithms often use only small MLPs, and gains in performance typically originate from algorithmic innovations. It is natural to hypothesize that the small datasets in RL necessitate simple models to avoid overfitting; however, this hypothesis is untested. In this paper, we investigate how RL agents are affected by replacing the small MLPs with larger, modern networks that use skip connections and normalization, focusing specifically on soft actor-critic (SAC) algorithms. We verify empirically that naïvely adopting such architectures leads to instabilities and poor performance, which likely contributes to the popularity of simple models in practice. However, we show that dataset size is not the limiting factor, and instead argue that intrinsic instability from the actor in SAC taking gradients through the critic is the culprit. We demonstrate that a simple smoothing method can mitigate this issue, enabling stable training with large modern architectures. After smoothing, larger models yield dramatic performance improvements for state-of-the-art agents, suggesting that more "easy" gains may be had by focusing on model architectures in addition to algorithmic innovations.
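The abstract does not spell out the architecture or the smoothing step, so the following is a minimal, hypothetical PyTorch sketch of the general idea only: a SAC critic built from residual blocks with LayerNorm instead of a small two-layer MLP, with spectral normalization assumed as one plausible smoothing mechanism that bounds the critic's Lipschitz constant and thereby tames the gradients the actor takes through it. Layer sizes, depth, and class names are illustrative assumptions, not the authors' implementation.

```python
# Sketch (not the authors' code): a deeper SAC critic with skip connections,
# LayerNorm, and optional spectral normalization as an assumed smoothing step.
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm


class ResidualBlock(nn.Module):
    """Pre-norm residual MLP block: x + MLP(LayerNorm(x))."""

    def __init__(self, dim: int, smooth: bool = True):
        super().__init__()
        linear1 = nn.Linear(dim, dim)
        linear2 = nn.Linear(dim, dim)
        if smooth:
            # Spectral normalization caps each layer's largest singular value,
            # smoothing the critic landscape the actor differentiates through.
            linear1 = spectral_norm(linear1)
            linear2 = spectral_norm(linear2)
        self.block = nn.Sequential(nn.LayerNorm(dim), linear1, nn.ReLU(), linear2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.block(x)


class DeepCritic(nn.Module):
    """Q(s, a) network with skip connections and normalization
    in place of the usual small two-layer MLP."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256, depth: int = 4):
        super().__init__()
        self.embed = nn.Linear(obs_dim + act_dim, hidden)
        self.blocks = nn.Sequential(*[ResidualBlock(hidden) for _ in range(depth)])
        self.head = nn.Linear(hidden, 1)

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.embed(torch.cat([obs, act], dim=-1)))
        return self.head(self.blocks(h))


if __name__ == "__main__":
    critic = DeepCritic(obs_dim=17, act_dim=6)            # dimensions are arbitrary examples
    q = critic(torch.randn(32, 17), torch.randn(32, 6))   # -> shape (32, 1)
    print(q.shape)
```

In a SAC update, the actor's loss backpropagates through this critic, so keeping the critic's layers well-conditioned (here via spectral normalization) is what the sketch treats as the "smoothing" that allows much deeper critics to train stably.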


02/16/2021

Training Larger Networks for Deep Reinforcement Learning

The success of deep learning in the computer vision and natural language...
05/17/2021

Uncertainty Weighted Actor-Critic for Offline Reinforcement Learning

Offline Reinforcement Learning promises to learn effective policies from...
06/24/2022

Value Function Decomposition for Iterative Design of Reinforcement Learning Agents

Designing reinforcement learning (RL) agents is typically a difficult pr...
06/12/2021

Recomposing the Reinforcement Learning Building Blocks with Hypernetworks

The Reinforcement Learning (RL) building blocks, i.e. Q-functions and po...
12/17/2021

Symmetry-aware Neural Architecture for Embodied Visual Navigation

Visual exploration is a task that seeks to visit all the navigable areas...
10/21/2021

Is High Variance Unavoidable in RL? A Case Study in Continuous Control

Reinforcement learning (RL) experiments have notoriously high variance, ...