Lottery Tickets in Evolutionary Optimization: On Sparse Backpropagation-Free Trainability

05/31/2023
by Robert Tjarko Lange, et al.

Is the lottery ticket phenomenon an idiosyncrasy of gradient-based training, or does it generalize to evolutionary optimization? In this paper we establish the existence of highly sparse trainable initializations for evolution strategies (ES) and characterize qualitative differences compared to gradient descent (GD)-based sparse training. We introduce a novel signal-to-noise iterative pruning procedure, which incorporates loss curvature information into the network pruning step. This can enable the discovery of even sparser trainable network initializations when using black-box evolution as compared to GD-based optimization. Furthermore, we find that these initializations encode an inductive bias, which transfers across different ES, to related tasks, and even to GD-based training. Finally, we compare the local optima resulting from the different optimization paradigms and sparsity levels. In contrast to GD, ES explore diverse and flat local optima and do not preserve linear mode connectivity across sparsity levels and independent runs. The results highlight qualitative differences between evolution and gradient-based learning dynamics, which can be uncovered by the study of iterative pruning procedures.
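
The abstract only names the signal-to-noise pruning criterion, so the following is a minimal, hypothetical sketch of one plausible reading of it, not the authors' implementation. A separable ES (SNES-style) maintains a per-parameter search mean mu and perturbation scale sigma, where sigma loosely reflects how flat or curved the loss is along each coordinate; weights are then scored by |mu| / sigma, the lowest-scoring fraction is pruned, and the survivors are rewound to the original initialization before the next round. The function names (run_snes, snr_prune), the toy quadratic loss, and all hyperparameters are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 100  # toy parameter count

def loss(theta):
    # Toy quadratic stand-in for a network's training loss.
    return np.sum((theta - 1.0) ** 2)

def run_snes(theta_init, mask, sigma0=0.1, popsize=32, iters=300,
             eta_mu=1.0, eta_sigma=0.05):
    """SNES-style separable ES: adapts a per-parameter search mean `mu`
    and perturbation scale `sigma`, restricted to unpruned weights."""
    mu = theta_init * mask
    sigma = np.full(theta_init.shape, sigma0)
    for _ in range(iters):
        s = rng.standard_normal((popsize, theta_init.size))
        fit = np.array([loss((mu + sigma * si) * mask) for si in s])
        util = -(fit - fit.mean()) / (fit.std() + 1e-8)  # lower loss -> higher utility
        grad_mu = (util[:, None] * s).mean(axis=0)
        grad_sigma = (util[:, None] * (s ** 2 - 1.0)).mean(axis=0)
        mu = (mu + eta_mu * sigma * grad_mu) * mask
        sigma = sigma * np.exp(0.5 * eta_sigma * grad_sigma)
    return mu, sigma

def snr_prune(mu, sigma, mask, prune_frac=0.2):
    """Remove the `prune_frac` of surviving weights with the lowest
    signal-to-noise ratio |mu| / sigma (hypothetical criterion)."""
    snr = np.where(mask, np.abs(mu) / (sigma + 1e-8), np.inf)
    n_prune = int(prune_frac * mask.sum())
    new_mask = mask.copy()
    new_mask[np.argsort(snr)[:n_prune]] = False
    return new_mask

# Iterative pruning with weight rewinding (lottery-ticket style):
theta_init = 0.1 * rng.standard_normal(DIM)
mask = np.ones(DIM, dtype=bool)
for rnd in range(5):
    mu, sigma = run_snes(theta_init, mask)  # train the sparse network with ES
    mask = snr_prune(mu, sigma, mask)       # prune by signal-to-noise ratio
    print(f"round {rnd}: density {mask.mean():.2f}, loss {loss(mu * mask):.3f}")
    # Each round restarts ("rewinds") from theta_init under the new mask.
```

For comparison, iterative magnitude pruning under GD would rank weights by |theta| alone; in the sketch the adaptive per-parameter sigma is what injects (approximate) curvature information into the pruning score.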


