ANS: Adaptive Network Scaling for Deep Rectifier Reinforcement Learning Models

09/06/2018
by   Yeah-Hua Wu, et al.

This work provides a thorough study of how reward scaling can affect the performance of deep reinforcement learning agents. In particular, we aim to answer the question: how does reward scaling affect non-saturating ReLU networks in RL? This question matters because ReLU is one of the most effective activation functions for deep learning models. We also propose an Adaptive Network Scaling framework that finds a suitable scale for the rewards during learning, yielding better performance. We conduct empirical studies to validate the proposed framework.
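To make the idea of reward scaling concrete, the sketch below normalizes rewards by a running estimate of their standard deviation before they reach the learner. This is an illustrative, generic scheme only; the `RunningRewardScaler` class and its update rule are assumptions for demonstration and are not the paper's ANS algorithm, which adapts the scale during learning.

```python
class RunningRewardScaler:
    """Illustrative reward scaler (NOT the paper's ANS method):
    divides each reward by a running estimate of the reward
    standard deviation, computed online via Welford's algorithm."""

    def __init__(self, eps=1e-8):
        self.count = 0
        self.mean = 0.0
        self.m2 = 0.0   # running sum of squared deviations
        self.eps = eps  # avoids division by zero

    def _update(self, reward):
        # Welford's online update of mean and variance statistics.
        self.count += 1
        delta = reward - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (reward - self.mean)

    def scale(self, reward):
        """Return the reward divided by the current std estimate."""
        self._update(reward)
        if self.count < 2:
            return reward  # not enough data to estimate a scale yet
        std = (self.m2 / self.count) ** 0.5
        return reward / (std + self.eps)


scaler = RunningRewardScaler()
raw_rewards = [100.0, -50.0, 200.0, 10.0]
scaled = [scaler.scale(r) for r in raw_rewards]
```

Feeding `scaled` rather than `raw_rewards` into a ReLU value network keeps the targets in a bounded range regardless of the environment's native reward magnitude, which is the kind of effect the paper studies.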
