[Reproducibility Report] Rigging the Lottery: Making All Tickets Winners

03/29/2021
by Varun Sundar, et al.

RigL, a sparse training algorithm, claims to directly train sparse networks that match or exceed the performance of existing dense-to-sparse training techniques (such as pruning) for a fixed parameter count and compute budget. We implement RigL from scratch in PyTorch and reproduce its performance on CIFAR-10 to within 0.1% of the reported value. On both CIFAR-10 and CIFAR-100, the central claim holds: given a fixed training budget, RigL surpasses existing dynamic sparse training methods over a range of target sparsities. By training longer, its performance can match or exceed that of iterative pruning, while consuming constant FLOPs throughout training. We also show that there is little benefit in tuning RigL's hyperparameters for every (sparsity, initialization) pair: the reference hyperparameters are often close to optimal. Going beyond the original paper, we find that the optimal initialization scheme depends on the training constraint. While the Erdos-Renyi-Kernel distribution outperforms the Uniform distribution for a fixed parameter count, the latter performs better for a fixed FLOP count. Finally, redistributing layer-wise sparsity during training can bridge the performance gap between the two initialization schemes, but increases computational cost.
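For concreteness, the connectivity update at the heart of RigL can be summarized in a few lines of PyTorch. The sketch below is ours, not the authors' code (see the linked repository for that): rigl_update and drop_fraction are illustrative names, and it assumes the inactive entries of the dense weight tensor are held at zero between updates.

import torch

def rigl_update(weight: torch.Tensor, grad: torch.Tensor, mask: torch.Tensor,
                drop_fraction: float = 0.3):
    """One RigL-style connectivity update for a single layer (illustrative sketch).

    Drops the lowest-magnitude active weights and regrows the same number of
    currently inactive connections with the largest gradient magnitude, so the
    layer's parameter count (and hence its sparsity) stays constant.
    """
    n_update = int(drop_fraction * mask.sum().item())

    # Drop: among active connections (mask == 1), pick the smallest |w|.
    active_scores = torch.where(mask.bool(), weight.abs(),
                                torch.full_like(weight, float("inf")))
    drop_idx = torch.topk(active_scores.flatten(), n_update, largest=False).indices

    # Grow: among inactive connections (mask == 0), pick the largest |grad|.
    inactive_scores = torch.where(mask.bool(),
                                  torch.full_like(grad, float("-inf")), grad.abs())
    grow_idx = torch.topk(inactive_scores.flatten(), n_update, largest=True).indices

    new_mask = mask.flatten().clone()
    new_mask[drop_idx] = 0.0
    new_mask[grow_idx] = 1.0
    new_mask = new_mask.view_as(mask)

    # Newly grown connections start at zero (their dense entries are assumed zero);
    # dropped connections are zeroed out by re-applying the mask.
    return weight * new_mask, new_mask

In the original paper this update is applied every few hundred steps, with the drop fraction annealed by a cosine schedule; the rest of training is ordinary SGD on the masked weights.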

Code Repositories

rigl-reproducibility

Reproducing RigL (ICML 2020) as part of the ML Reproducibility Challenge 2020

