ANS: Adaptive Network Scaling for Deep Rectifier Reinforcement Learning Models

09/06/2018
by Yueh-Hua Wu, et al.

This work provides a thorough study of how reward scaling can affect the performance of deep reinforcement learning agents. In particular, we would like to answer the question: how does reward scaling affect non-saturating ReLU networks in RL? This question matters because ReLU is one of the most effective activation functions for deep learning models. We also propose an Adaptive Network Scaling framework that finds a suitable scale for the rewards during learning, leading to better performance. We conduct empirical studies to validate the proposed solution.
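The abstract does not spell out the mechanics of Adaptive Network Scaling, but a common way to rescale rewards adaptively during learning is to divide each reward by a running estimate of the reward standard deviation. The sketch below illustrates that general idea only; the class name AdaptiveRewardScaler and all implementation details are assumptions for illustration, not the paper's method.

```python
# Minimal sketch of online reward scaling (illustrative, not the ANS algorithm):
# rewards are divided by a running std estimate, maintained with Welford's method,
# so the scaled rewards fed to the agent have roughly unit variance.

class AdaptiveRewardScaler:
    """Tracks running mean/variance of rewards and rescales them on the fly."""

    def __init__(self):
        self.count = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, reward: float) -> None:
        # Welford's online update for mean and variance.
        self.count += 1
        delta = reward - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (reward - self.mean)

    def scale(self, reward: float) -> float:
        # Fall back to the raw reward while the variance estimate is degenerate.
        if self.count < 2:
            return reward
        std = (self.m2 / (self.count - 1)) ** 0.5
        return reward / std if std > 0.0 else reward


# Usage: update the statistics with each raw environment reward, then scale it.
scaler = AdaptiveRewardScaler()
for raw_reward in [1.0, 100.0, -50.0, 3.0]:  # stand-in for env rewards
    scaler.update(raw_reward)
    print(scaler.scale(raw_reward))
```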

Related research

10/14/2020
Effects of the Nonlinearity in Activation Functions on the Performance of Deep Learning Models
The nonlinearity of activation functions used in deep learning models ar...

05/11/2021
Return-based Scaling: Yet Another Normalisation Trick for Deep RL
Scaling issues are mundane yet irritating for practitioners of reinforce...

03/05/2023
Swim: A General-Purpose, High-Performing, and Efficient Activation Function for Locomotion Control Tasks
Activation functions play a significant role in the performance of deep ...

03/26/2019
Improved robustness of reinforcement learning policies upon conversion to spiking neuronal network platforms applied to ATARI games
Various implementations of Deep Reinforcement Learning (RL) demonstrated...

02/26/2019
Using Ternary Rewards to Reason over Knowledge Graphs with Deep Reinforcement Learning
In this paper, we investigate the practical challenges of using reinforc...

03/11/2021
On Finite-Sample Analysis of Offline Reinforcement Learning with Deep ReLU Networks
This paper studies the statistical theory of offline reinforcement learn...

12/06/2019
Does Knowledge Transfer Always Help to Learn a Better Policy?
One of the key approaches to save samples when learning a policy for a r...
