Limited Evaluation Evolutionary Optimization of Large Neural Networks

06/26/2018
by Jonas Prellberg, et al.

Stochastic gradient descent is the most prevalent algorithm for training neural networks. However, other approaches, such as evolutionary algorithms (EAs), are also applicable to this task. Evolutionary algorithms bring unique trade-offs that are worth exploring, but their computational demands have so far restricted exploration to small networks with few parameters. We implement an evolutionary algorithm that executes entirely on the GPU, which allows us to efficiently batch-evaluate a whole population of networks. Within this framework, we explore the limited evaluation evolutionary algorithm for neural network training and find that its batch evaluation idea comes with a large accuracy trade-off. In further experiments, we explore crossover operators and find that unprincipled random uniform crossover performs extremely well. Finally, we train a network with 92k parameters on MNIST using an EA and achieve 97.6% test accuracy, compared to 98% with Adam. Code is available at https://github.com/jprellberg/gpuea.
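As a rough illustration of the two ideas the abstract highlights, the sketch below evolves a flat parameter vector on a toy objective using batched fitness evaluation of the whole population and random uniform crossover. It is a minimal NumPy sketch under assumed hyperparameters (population size, mutation scale, truncation selection), not the paper's actual GPU implementation; see the linked repository for that.

```python
import numpy as np

def uniform_crossover(parent_a, parent_b, rng):
    """Random uniform crossover: each parameter is copied from
    either parent with probability 0.5."""
    mask = rng.random(parent_a.shape) < 0.5
    return np.where(mask, parent_a, parent_b)

def evolve(batch_fitness, dim, pop_size=64, generations=200, sigma=0.05, seed=0):
    """Minimal EA with truncation selection, uniform crossover, and
    Gaussian mutation. The whole population is scored in one call to
    batch_fitness, mirroring the batch-evaluation idea (lower is better)."""
    rng = np.random.default_rng(seed)
    pop = rng.normal(size=(pop_size, dim))
    for _ in range(generations):
        fitness = batch_fitness(pop)           # one batched evaluation
        order = np.argsort(fitness)            # ascending: best first
        parents = pop[order[: pop_size // 2]]  # truncation selection
        children = []
        for _ in range(pop_size):
            i, j = rng.integers(len(parents), size=2)
            child = uniform_crossover(parents[i], parents[j], rng)
            child = child + rng.normal(scale=sigma, size=dim)  # mutation
            children.append(child)
        pop = np.array(children)
    fitness = batch_fitness(pop)
    return pop[np.argmin(fitness)]

# Toy objective standing in for a network's loss: the sphere function,
# computed for the entire population at once.
def sphere(pop):
    return np.sum(pop ** 2, axis=1)

best = evolve(sphere, dim=10)
print("best fitness:", np.sum(best ** 2))
```

In the real setting, `batch_fitness` would run a forward pass of every network in the population on the same minibatch in a single GPU kernel launch, which is what makes population-based training of larger networks affordable.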


