Pruning Randomly Initialized Neural Networks with Iterative Randomization

06/17/2021
by Daiki Chijiwa, et al.

Pruning the weights of randomly initialized neural networks plays an important role in the context of the lottery ticket hypothesis. Ramanujan et al. (2020) empirically showed that simply pruning the weights of a randomly initialized network, without ever optimizing the weight values, can achieve remarkable performance. However, to match the performance of weight optimization, this pruning approach requires more parameters in the network before pruning, and hence more memory. To overcome this parameter inefficiency, we introduce IteRand, a novel framework for pruning randomly initialized neural networks while iteratively randomizing the weight values. Theoretically, we prove an approximation theorem in our framework, which indicates that the randomizing operations provably reduce the number of parameters required. We also empirically demonstrate the parameter efficiency in multiple experiments on CIFAR-10 and ImageNet.

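The following is a minimal sketch of the general idea described above, not the authors' implementation. It assumes an edge-popup-style setup in which a layer keeps its random weights frozen, learns a score per weight to decide which weights to keep, and periodically re-draws fresh random values for the weights that are currently pruned. The class `MaskedLinear`, the method `rerandomize_`, the `keep_ratio` parameter, and the re-randomization period are all illustrative choices.

```python
# Illustrative sketch only (not the authors' code): frozen random weights,
# trainable per-weight scores, and periodic re-randomization of pruned weights.
# `MaskedLinear`, `keep_ratio`, and `rerandomize_` are hypothetical names.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskedLinear(nn.Module):
    """Linear layer with frozen random weights; only the scores are trained."""

    def __init__(self, in_features, out_features, keep_ratio=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features),
                                   requires_grad=False)
        nn.init.kaiming_normal_(self.weight)                       # frozen random init
        self.scores = nn.Parameter(torch.rand_like(self.weight))   # trainable scores
        self.keep_ratio = keep_ratio

    def mask(self):
        # Binary mask keeping the top `keep_ratio` fraction of weights by score.
        k = int(self.scores.numel() * self.keep_ratio)
        threshold = self.scores.flatten().kthvalue(self.scores.numel() - k + 1).values
        return (self.scores >= threshold).float()

    def forward(self, x):
        # Straight-through estimator: the binary mask is used in the forward
        # pass, while gradients flow to the real-valued scores.
        m = self.mask().detach() + self.scores - self.scores.detach()
        return F.linear(x, self.weight * m)

    @torch.no_grad()
    def rerandomize_(self):
        # Re-draw fresh random values only for the currently pruned weights.
        pruned = self.mask() == 0
        fresh = torch.empty_like(self.weight)
        nn.init.kaiming_normal_(fresh)
        self.weight[pruned] = fresh[pruned]


# Usage sketch on random data: train the scores only, and re-randomize the
# pruned weights every few hundred steps (the period here is arbitrary).
layer = MaskedLinear(784, 10)
opt = torch.optim.SGD([layer.scores], lr=0.1)
for step in range(900):
    x, y = torch.randn(64, 784), torch.randint(0, 10, (64,))
    loss = F.cross_entropy(layer(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    if (step + 1) % 300 == 0:
        layer.rerandomize_()
```

The score-based top-k masking with a straight-through estimator follows the pruning style of Ramanujan et al. (2020); re-drawing values only for the pruned weights is one plausible reading of the iterative randomization step the abstract refers to.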
