Correspondence between neuroevolution and gradient descent

08/15/2020
by Stephen Whitelam, et al.

We show analytically that training a neural network by stochastic mutation or "neuroevolution" of its weights is equivalent, in the limit of small mutations, to gradient descent on the loss function in the presence of Gaussian white noise. Averaged over independent realizations of the learning process, neuroevolution is equivalent to gradient descent on the loss function. We use numerical simulation to show that this correspondence can be observed for finite mutations. Our results connect two distinct types of neural-network training and help justify the empirical success of neuroevolution.
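To illustrate the correspondence described in the abstract, here is a minimal sketch (not taken from the paper) comparing the two update rules on a toy least-squares problem: the neuroevolution step proposes a Gaussian mutation of the weights and keeps it only if the loss does not increase, while the gradient-descent step follows the explicit gradient of the same loss. The linear model, the data, the step sizes, and the accept-if-not-worse rule are illustrative assumptions; the paper's analysis applies to general losses and mutation schemes.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w, X, y):
    """Mean-squared error of a linear model; a stand-in for a general loss function."""
    return np.mean((X @ w - y) ** 2)

def neuroevolution_step(w, X, y, sigma=1e-2):
    """Propose a Gaussian mutation of the weights; keep it only if the loss does not increase."""
    trial = w + sigma * rng.standard_normal(w.shape)
    return trial if loss(trial, X, y) <= loss(w, X, y) else w

def gradient_descent_step(w, X, y, lr=1e-2):
    """Explicit gradient-descent step on the same loss, for comparison."""
    grad = 2.0 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

# Toy regression data. Averaged over many proposed mutations, the accepted
# neuroevolution moves track the negative gradient, so both rules drive the
# loss toward the same minimum.
X = rng.standard_normal((200, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.standard_normal(200)

w_ne = np.zeros(5)
w_gd = np.zeros(5)
for _ in range(5000):
    w_ne = neuroevolution_step(w_ne, X, y)
    w_gd = gradient_descent_step(w_gd, X, y)

print(f"neuroevolution loss:   {loss(w_ne, X, y):.4f}")
print(f"gradient descent loss: {loss(w_gd, X, y):.4f}")
```

With small mutation scale sigma, the average neuroevolution update is proportional to a noisy gradient step, which is the correspondence the paper establishes analytically.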

Related research

12/22/2022 - Langevin algorithms for Markovian Neural Networks and Deep Stochastic Control
Stochastic Gradient Descent Langevin Dynamics (SGLD) algorithms, which a...

05/15/2021 - Gradient Descent in Materio
Deep learning, a multi-layered neural network approach inspired by the b...

12/08/2020 - Convergence Rates for Multi-class Logistic Regression Near Minimum
Training a neural network is typically done via variations of gradient d...

05/16/2022 - Training neural networks using Metropolis Monte Carlo and an adaptive variant
We examine the zero-temperature Metropolis Monte Carlo algorithm as a to...

03/16/2021 - Learning without gradient descent encoded by the dynamics of a neurobiological model
The success of state-of-the-art machine learning is essentially all base...

09/08/2020 - Empirical Strategy for Stretching Probability Distribution in Neural-network-based Regression
In regression analysis under artificial neural networks, the prediction ...

07/27/2020 - Universality of Gradient Descent Neural Network Training
It has been observed that design choices of neural networks are often cr...
