Evolution Strategies Converges to Finite Differences

12/27/2019
by John C. Raisbeck, et al.

Since the debut of Evolution Strategies (ES) as a tool for Reinforcement Learning by Salimans et al. (2017), there has been interest in determining the exact relationship between the Evolution Strategies gradient and the gradient of a similar class of algorithms, Finite Differences (FD) (Zhang et al. 2017; Lehman et al. 2018). Several investigations of the subject have examined the formal motivational differences between ES and FD (Lehman et al. 2018), as well as their empirical differences on a standard benchmark problem in Machine Learning, MNIST classification (Zhang et al. 2017). This paper proves that although the two gradients differ, they converge as the dimension of the vector under optimization increases.
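
To make the comparison concrete, the sketch below (not taken from the paper) places the two estimators side by side: the antithetic ES gradient estimator in the style of Salimans et al. 2017 and a coordinate-wise central finite-difference gradient. The test objective f, the noise scale sigma, the step size h, and the sample budget are all illustrative assumptions, not values from the paper.

```python
# Minimal sketch comparing an antithetic ES gradient estimate with a
# coordinate-wise central finite-difference gradient on a toy objective.
# All parameters below are illustrative assumptions.
import numpy as np

def es_gradient(f, theta, sigma=0.1, n_samples=1000, seed=None):
    """Antithetic ES estimate of grad E[f(theta + sigma * eps)],
    i.e. (1 / (2 * sigma * n)) * sum_i [f(theta + sigma*eps_i)
                                        - f(theta - sigma*eps_i)] * eps_i."""
    rng = np.random.default_rng(seed)
    grad = np.zeros_like(theta)
    for _ in range(n_samples):
        eps = rng.standard_normal(theta.shape)
        # Antithetic pairing reduces the variance of the estimator.
        grad += (f(theta + sigma * eps) - f(theta - sigma * eps)) * eps
    return grad / (2.0 * sigma * n_samples)

def fd_gradient(f, theta, h=1e-4):
    """Coordinate-wise central finite differences."""
    grad = np.zeros_like(theta)
    for j in range(theta.size):
        e = np.zeros_like(theta)
        e[j] = h
        grad[j] = (f(theta + e) - f(theta - e)) / (2.0 * h)
    return grad

if __name__ == "__main__":
    f = lambda x: np.sum(np.sin(x))  # smooth toy objective (assumption)
    for dim in (2, 20, 200):
        theta = np.linspace(0.0, 1.0, dim)
        g_es = es_gradient(f, theta, seed=0)
        g_fd = fd_gradient(f, theta)
        cos = g_es @ g_fd / (np.linalg.norm(g_es) * np.linalg.norm(g_fd))
        print(f"dim={dim:4d}  cosine similarity: {cos:.3f}")
```

Run as-is, the script prints the cosine similarity between the two gradient estimates at several dimensions. In this toy setting the agreement is governed by the sample budget and smoothing scale as much as by the dimension; the paper's contribution is the formal result about how the two gradients relate as the dimension grows.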


