
Pruning Randomly Initialized Neural Networks with Iterative Randomization

06/17/2021
by Daiki Chijiwa, et al.

Pruning the weights of randomly initialized neural networks plays an important role in the context of the lottery ticket hypothesis. Ramanujan et al. (2020) empirically showed that pruning alone, without optimizing the weight values, can achieve remarkable performance. However, to reach the same level of performance as weight optimization, the pruning approach requires more parameters in the network before pruning, and thus more memory. To overcome this parameter inefficiency, we introduce a novel framework that prunes randomly initialized neural networks while iteratively randomizing the weight values (IteRand). Theoretically, we prove an approximation theorem in our framework, which indicates that the randomizing operations are provably effective in reducing the required number of parameters. We also empirically demonstrate this parameter efficiency in multiple experiments on CIFAR-10 and ImageNet.
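The abstract describes pruning a randomly initialized network by learning which frozen weights to keep, while periodically re-randomizing the weights that are currently pruned. The following is a minimal NumPy sketch of that idea, based only on the abstract: the single linear layer, the `rand_every` schedule, the Kaiming-style re-sampling distribution, and the straight-through score update are illustrative assumptions, not the paper's exact IteRand algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def iterand_sketch(n_in=64, n_out=32, sparsity=0.5, steps=300, rand_every=50, lr=0.1):
    """Toy sketch: learn which frozen random weights to keep, and periodically
    re-sample the weights that are currently pruned."""
    # Frozen random weights and learnable pruning scores for one linear layer.
    W = rng.standard_normal((n_out, n_in)) * np.sqrt(2.0 / n_in)
    S = rng.standard_normal((n_out, n_in)) * 0.01

    # Toy regression data generated by a hidden "teacher" matrix.
    W_true = rng.standard_normal((n_out, n_in)) * np.sqrt(2.0 / n_in)
    X = rng.standard_normal((256, n_in))
    Y = X @ W_true.T

    k = int(sparsity * W.size)  # number of weights to keep
    for t in range(1, steps + 1):
        # Keep the top-k weights by score (edge-popup-style mask).
        thresh = np.partition(S.ravel(), W.size - k)[W.size - k]
        mask = (S >= thresh).astype(W.dtype)

        # Forward pass and gradient for the effective (masked) weights.
        err = X @ (mask * W).T - Y          # d(0.5*||err||^2)/d(pred)
        grad_eff = err.T @ X / len(X)       # gradient w.r.t. mask * W
        # Straight-through-style score update: pruned weights still receive a
        # score gradient, so they can re-enter the mask later.
        S -= lr * grad_eff * W

        # Iterative randomization (our reading of the abstract): every
        # `rand_every` steps, re-sample the currently pruned weights from the
        # initialization distribution, giving the mask fresh candidate values.
        if t % rand_every == 0:
            pruned = mask == 0
            W[pruned] = rng.standard_normal(int(pruned.sum())) * np.sqrt(2.0 / n_in)

        if t % 100 == 0:
            loss = 0.5 * (err ** 2).sum(axis=1).mean()
            print(f"step {t:4d}  loss {loss:.4f}")

    return mask * W  # the pruned random network

if __name__ == "__main__":
    iterand_sketch()
```

Re-sampling only the pruned weights leaves the currently kept subnetwork untouched while exposing the score-based search to fresh candidate values per connection, which matches the abstract's motivation for why randomization reduces the number of parameters needed before pruning.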


Related research:

Pruning Neural Networks at Initialization: Why are We Missing the Mark? (09/18/2020)
Recent work has explored the possibility of pruning neural networks at i...

Why is Pruning at Initialization Immune to Reinitializing and Shuffling? (07/05/2021)
Recent studies assessing the efficacy of pruning neural networks methods...

Data pruning and neural scaling laws: fundamental limitations of score-based algorithms (02/14/2023)
Data pruning algorithms are commonly used to reduce the memory and compu...

Weight Fixing Networks (10/24/2022)
Modern iterations of deep learning models contain millions (billions) of...

The Role of Regularization in Shaping Weight and Node Pruning Dependency and Dynamics (12/07/2020)
The pressing need to reduce the capacity of deep neural networks has sti...

RicciNets: Curvature-guided Pruning of High-performance Neural Networks Using Ricci Flow (07/08/2020)
A novel method to identify salient computational paths within randomly w...

DropNet: Reducing Neural Network Complexity via Iterative Pruning (07/14/2022)
Modern deep neural networks require a significant amount of computing ti...