Two geometric input transformation methods for fast online reinforcement learning with neural nets

05/18/2018
by Sina Ghiassian, et al.

We apply neural nets with ReLU gates to online reinforcement learning. Our goal is to train these networks incrementally, without the computationally expensive experience replay. By studying how individual neural nodes behave in online training, we find that the global nature of ReLU gates can cause undesirable interference in each node's learning behavior. We propose reducing such interference with two efficient input transformation methods that are geometric in nature and well matched to the geometric properties of ReLU gates. The first is tile coding, a classic binary encoding scheme originally designed for local generalization based on the topological structure of the input space. The second (EmECS) is a new method we introduce; it is based on geometric properties of convex sets and topological embedding of the input space into the boundary of a convex set. We discuss the behavior of the network when it operates on the transformed inputs. We also compare it experimentally with neural nets that do not use these input transformations and with the classic combination of tile coding and a linear function approximator. On several online reinforcement learning tasks, we show that a neural net with tile coding or EmECS achieves not only faster learning but also more accurate approximations. Our results strongly suggest that geometric input transformations of this type can be effective for interference reduction, and they take us a step closer to fully incremental reinforcement learning with neural nets.
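To make the two kinds of transformation concrete, below is a minimal NumPy sketch. The tile_code function is a standard grid-style tile coder; the abstract does not specify the paper's exact tiling scheme, so the offsets and parameter names (n_tilings, tiles_per_dim) here are illustrative. The embed_on_paraboloid function shows one simple topological embedding of R^d into the boundary of a convex set, namely the paraboloid bounding the epigraph of f(x) = ||x||^2; the paper's EmECS may use a different convex set and embedding.

    import numpy as np

    def tile_code(x, low, high, n_tilings=8, tiles_per_dim=4):
        """Encode x in the box [low, high]^d as a binary feature vector
        using n_tilings overlapping grid tilings, each shifted by a
        fraction of a tile width. Exactly n_tilings entries are 1."""
        x, low, high = (np.asarray(a, dtype=float) for a in (x, low, high))
        d = x.size
        tile_width = (high - low) / tiles_per_dim
        tiles_per_tiling = tiles_per_dim ** d
        features = np.zeros(n_tilings * tiles_per_tiling)
        for t in range(n_tilings):
            offset = (t / n_tilings) * tile_width  # distinct shift per tiling
            coords = np.floor((x - low + offset) / tile_width).astype(int)
            coords = np.clip(coords, 0, tiles_per_dim - 1)
            idx = np.ravel_multi_index(coords, (tiles_per_dim,) * d)
            features[t * tiles_per_tiling + idx] = 1.0
        return features

    def embed_on_paraboloid(x):
        """Lift x to (x, ||x||^2), a point on the paraboloid bounding the
        convex epigraph of f(x) = ||x||^2 -- one example of embedding the
        input space into the boundary of a convex set."""
        x = np.asarray(x, dtype=float)
        return np.append(x, x @ x)

    # Usage: a 2-D state becomes a sparse binary code or a 3-D point.
    phi = tile_code([0.3, -0.7], low=[-1.0, -1.0], high=[1.0, 1.0])
    print(int(phi.sum()))                    # 8 active features, one per tiling
    print(embed_on_paraboloid([0.3, -0.7]))  # [ 0.3  -0.7   0.58]

Either transform can then be fed to the network in place of the raw state; tile coding yields a sparse, local binary code, while the convex-set embedding preserves the input's dimensionality up to one extra coordinate.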
