Improving Performance in Reinforcement Learning by Breaking Generalization in Neural Networks

03/16/2020
by Sina Ghiassian, et al.

Reinforcement learning systems require good representations to work well. For decades, practical success in reinforcement learning was limited to small domains. Deep reinforcement learning systems, on the other hand, are scalable, do not depend on domain-specific prior knowledge, and have been used successfully to play Atari, to navigate 3D environments from pixels, and to control high-degree-of-freedom robots. Unfortunately, the performance of deep reinforcement learning systems is sensitive to hyper-parameter settings and architecture choices. Even well-tuned systems exhibit significant instability, both within a trial and across experiment replications. In practice, significant expertise and trial and error are usually required to achieve good performance. One potential source of the problem is known as catastrophic interference: later training decreases performance by overriding previous learning. Interestingly, the powerful generalization that makes neural networks (NNs) so effective in batch supervised learning may explain the challenges of applying them to reinforcement learning tasks. In this paper, we explore how online NN training and interference interact in reinforcement learning. We find that simply re-mapping the input observations to a high-dimensional space improves learning speed and reduces parameter sensitivity. We also show that this preprocessing reduces interference in prediction tasks. More practically, we provide a simple approach to NN training that is easy to implement and requires little additional computation. We demonstrate that our approach improves performance in both prediction and control through an extensive batch of experiments in classic control domains.
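The abstract's core idea is re-mapping low-dimensional observations to a sparse, high-dimensional space before they reach the network. The paper does not reproduce its exact preprocessing here, so the sketch below illustrates one classic choice of such a mapping — tile coding, which turns a continuous observation into a high-dimensional binary feature vector with a few active units. Function names and parameters (`tile_code`, `n_tilings`, `n_tiles`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def tile_code(obs, low, high, n_tilings=8, n_tiles=8):
    """Map a low-dimensional observation to a sparse high-dimensional
    binary vector via tile coding. Each of the `n_tilings` grids is
    offset slightly, so nearby observations share some active features
    (local generalization) while distant ones share none.

    Illustrative sketch only; the paper's actual preprocessing may differ.
    """
    obs = np.asarray(obs, dtype=float)
    low = np.asarray(low, dtype=float)
    high = np.asarray(high, dtype=float)
    d = obs.size
    # Position of the observation measured in tile widths, in [0, n_tiles].
    scaled = (obs - low) / (high - low) * n_tiles
    features = np.zeros(n_tilings * n_tiles ** d)
    for t in range(n_tilings):
        offset = t / n_tilings            # shift each tiling by a fraction of a tile
        idx = np.clip((scaled + offset).astype(int), 0, n_tiles - 1)
        flat = np.ravel_multi_index(idx, (n_tiles,) * d)
        features[t * n_tiles ** d + flat] = 1.0  # one active unit per tiling
    return features
```

The resulting vector has exactly `n_tilings` active units out of `n_tilings * n_tiles**d`, so inputs that are far apart in the original space activate disjoint feature sets — the kind of reduced cross-input generalization the abstract links to less interference.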

Related research

- Deep Tile Coder: an Efficient Sparse Representation Learning Approach with applications in Reinforcement Learning (11/19/2019)
- Shallow Updates for Deep Reinforcement Learning (05/21/2017)
- Measuring and Mitigating Interference in Reinforcement Learning (07/10/2023)
- Testbeds for Reinforcement Learning (11/09/2020)
- Towards a practical measure of interference for reinforcement learning (07/07/2020)
- The Utility of Sparse Representations for Control in Reinforcement Learning (11/15/2018)
- Distilling Knowledge from Resource Management Algorithms to Neural Networks: A Unified Training Assistance Approach (08/15/2023)
