[Reproducibility Report] Rigging the Lottery: Making All Tickets Winners

03/29/2021
by Varun Sundar, et al.

RigL, a sparse training algorithm, claims to directly train sparse networks that match or exceed the performance of existing dense-to-sparse training techniques (such as pruning) for a fixed parameter count and compute budget. We implement RigL from scratch in PyTorch and reproduce its reported CIFAR-10 performance to within 0.1%. On both CIFAR-10 and CIFAR-100, the central claim holds: given a fixed training budget, RigL surpasses existing dynamic sparse training methods over a range of target sparsities. By training longer, its performance can match or exceed that of iterative pruning, while consuming constant FLOPs throughout training. We also show that there is little benefit in tuning RigL's hyperparameters for every sparsity-initialization pair: the reference hyperparameters are often close to optimal. Going beyond the original paper, we find that the optimal initialization scheme depends on the training constraint. While the Erdos-Renyi-Kernel distribution outperforms the Uniform distribution for a fixed parameter count, the latter performs better for a fixed FLOP count. Finally, redistributing layer-wise sparsity during training can bridge the performance gap between the two initialization schemes, but increases computational cost.
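
The report's PyTorch implementation is not reproduced here, but the mechanism that lets RigL keep sparsity constant while training is easy to sketch: at periodic update steps each layer drops its lowest-magnitude active weights and regrows an equal number of inactive connections with the largest dense-gradient magnitude. The following is a minimal, illustrative single-layer version, not the authors' code; the function name `rigl_update` and the fixed `drop_frac` argument are placeholders (the original paper anneals the update fraction over training with a cosine schedule).

```python
import torch

def rigl_update(weight: torch.Tensor, mask: torch.Tensor, grad: torch.Tensor,
                drop_frac: float = 0.3) -> torch.Tensor:
    """Sketch of one RigL-style drop/grow step for a single layer.

    Drops the `drop_frac` fraction of active weights with the smallest
    magnitude and regrows the same number of currently inactive connections
    with the largest gradient magnitude, leaving the layer's sparsity unchanged.
    """
    n_active = int(mask.sum().item())
    n_inactive = mask.numel() - n_active
    n_update = min(int(drop_frac * n_active), n_inactive)
    if n_update == 0:
        return mask

    # Drop: among active weights, pick the n_update smallest magnitudes.
    active_scores = torch.where(mask.bool(), weight.abs(),
                                torch.full_like(weight, float("inf")))
    drop_idx = torch.topk(active_scores.flatten(), n_update, largest=False).indices

    # Grow: among inactive weights, pick the n_update largest gradient magnitudes
    # (RigL scores growth with the dense gradient, i.e. masked-out weights too).
    grow_scores = torch.where(mask.bool(), torch.full_like(grad, -float("inf")),
                              grad.abs())
    grow_idx = torch.topk(grow_scores.flatten(), n_update, largest=True).indices

    new_mask = mask.flatten().clone()
    new_mask[drop_idx] = 0.0
    new_mask[grow_idx] = 1.0

    # Newly grown connections are initialized to zero in the original paper.
    weight.data.view(-1)[grow_idx] = 0.0
    return new_mask.view_as(mask)
```

In the paper this update is applied to every sparse layer at a fixed step interval, and the weight gradients are re-masked immediately afterwards so that only active connections are trained between updates.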
