Lamarckian Evolution of Convolutional Neural Networks

06/21/2018
by Jonas Prellberg, et al.

Convolutional neural networks are among the most successful image classifiers, but adapting their network architecture to a particular problem is computationally expensive. We show that an evolutionary algorithm saves training time during network architecture optimization if learned network weights are inherited across generations through Lamarckian evolution. Experiments on typical image datasets show similar or significantly better test accuracies and improved convergence speeds compared to two different baselines without weight inheritance. On CIFAR-10 and CIFAR-100, a 75 % speed-up is observed.
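The core idea, weight inheritance in an evolutionary loop, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function name `lamarckian_inherit`, the dict-of-arrays weight representation, and the shape-matching rule are all assumptions made for the example. A mutated child keeps the trained weights of every layer whose shape is unchanged and freshly initializes only new or resized layers, so training does not restart from scratch each generation.

```python
import numpy as np

def lamarckian_inherit(parent_weights, child_shapes, rng=None):
    """Build a child's weights from a trained parent (Lamarckian inheritance).

    parent_weights: dict mapping layer name -> trained ndarray.
    child_shapes:   dict mapping layer name -> desired shape after mutation.
    Layers whose name and shape match the parent inherit its trained
    weights; new or reshaped layers get a small random initialization.
    """
    rng = rng or np.random.default_rng(0)
    child = {}
    for name, shape in child_shapes.items():
        w = parent_weights.get(name)
        if w is not None and w.shape == tuple(shape):
            child[name] = w.copy()                      # inherit learned weights
        else:
            child[name] = rng.standard_normal(shape) * 0.01  # fresh init
    return child

# Example: a mutation adds a layer "conv2"; "conv1" and "fc" are inherited.
parent = {"conv1": np.ones((3, 3)), "fc": np.ones((4, 2))}
child = lamarckian_inherit(parent, {"conv1": (3, 3), "conv2": (3, 3), "fc": (4, 2)})
```

Under this scheme, the baselines without weight inheritance correspond to reinitializing every layer of each child, which is what makes convergence slower.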


Related research

04/23/2014: One weird trick for parallelizing convolutional neural networks
I present a new way to parallelize the training of convolutional neural ...

03/03/2017: Large-Scale Evolution of Image Classifiers
Neural networks have proven effective at solving difficult problems but ...

09/10/2018: Finding Better Topologies for Deep Convolutional Neural Networks by Evolution
Due to the nonlinearity of artificial neural networks, designing topolog...

06/24/2022: Predicting the Stability of Hierarchical Triple Systems with Convolutional Neural Networks
Understanding the long-term evolution of hierarchical triple systems is ...

01/02/2019: Evolutionary Construction of Convolutional Neural Networks
Neuro-Evolution is a field of study that has recently gained significant...

05/16/2019: Joint Learning of Neural Networks via Iterative Reweighted Least Squares
In this paper, we introduce the problem of jointly learning feed-forward...

03/06/2015: Deep Clustered Convolutional Kernels
Deep neural networks have recently achieved state of the art performance...
