ES Is More Than Just a Traditional Finite-Difference Approximator

12/18/2017
by Joel Lehman, et al.

An evolution strategy (ES) variant recently attracted significant attention for its surprisingly good performance at optimizing neural networks in challenging deep reinforcement learning domains. It searches directly in the parameter space of neural networks: it generates perturbations of the current parameter vector, evaluates their performance, and moves in the direction of higher reward. Because this procedure resembles a traditional finite-difference approximation of the reward gradient in parameter space, it is natural to assume that ES is exactly that. However, this assumption is incorrect, and the aim of this paper is to demonstrate the point empirically. ES does approximate a gradient, but not simply the gradient of reward (especially when the magnitude of candidate perturbations is large). Instead, it ascends the gradient of the expected reward of the entire population of perturbed policies, thereby also favoring parameters that are robust to perturbation. This difference can channel ES into significantly different regions of the search space than gradient descent in parameter space, and consequently toward networks with significantly different properties. This unique robustness-seeking behavior, and its consequences for optimization, are demonstrated in several domains, including humanoid locomotion, where policies trained by policy-gradient reinforcement learning prove far less robust to parameter perturbation than ES-trained policies that solve the same task. While the implications of such robustness and robustness-seeking remain open to further study, the main contribution of this work is to highlight that these differences exist and deserve attention.
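To make the distinction concrete, below is a minimal NumPy sketch of an ES update of the kind described above (the function name es_step and its hyperparameter values are illustrative, not taken from the paper). The step it computes is a Monte Carlo estimate of the gradient of the Gaussian-smoothed objective E_{ε∼N(0,I)}[f(θ + σε)], i.e. the expected reward over the perturbed population, rather than a finite-difference estimate of ∇f(θ); the two coincide only in the limit σ → 0.

import numpy as np

def es_step(theta, reward_fn, sigma=0.1, n_samples=100, lr=0.01, rng=None):
    # One ES iteration: sample Gaussian perturbations, evaluate the
    # reward of each perturbed candidate, and move the parameters
    # along the reward-weighted average perturbation. This estimates
    # the gradient of the smoothed objective E[reward(theta + sigma*eps)],
    # not a finite-difference approximation of the reward gradient at theta.
    rng = rng or np.random.default_rng()
    eps = rng.standard_normal((n_samples, theta.size))
    rewards = np.array([reward_fn(theta + sigma * e) for e in eps])
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)  # scale-invariant step
    grad_est = eps.T @ rewards / (n_samples * sigma)  # smoothed-gradient estimate
    return theta + lr * grad_est

# Toy usage: climb a simple quadratic reward surface.
reward = lambda w: -float(np.sum(w ** 2))
w = np.ones(10)
for _ in range(300):
    w = es_step(w, reward)

The reward normalization inside es_step is a common stabilization practice rather than part of the estimator itself; it keeps the step size insensitive to the absolute scale of the rewards.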


Related research

Deep Neuroevolution: Genetic Algorithms Are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning (12/18/2017)
Deep artificial neural networks (DNNs) are typically trained via gradien...

An Experimental Comparison Between Temporal Difference and Residual Gradient with Neural Network Approximation (05/25/2022)
Gradient descent or its variants are popular in training neural networks...

Information Theoretically Aided Reinforcement Learning for Embodied Agents (05/31/2016)
Reinforcement learning for embodied agents is a challenging problem. The...

Cliff Diving: Exploring Reward Surfaces in Reinforcement Learning Environments (05/14/2022)
Visualizing optimization landscapes has led to many fundamental insights...

Accelerating Reinforcement Learning with a Directional-Gaussian-Smoothing Evolution Strategy (02/21/2020)
Evolution strategy (ES) has shown great promise in many challenging...

A Scalable Finite Difference Method for Deep Reinforcement Learning (10/14/2022)
Several low-bandwidth distributable black-box optimization algorithms ha...

PI-ARS: Accelerating Evolution-Learned Visual-Locomotion with Predictive Information Representations (07/27/2022)
Evolution Strategy (ES) algorithms have shown promising results in train...
