Renormalized Sparse Neural Network Pruning

06/21/2022
by Michael G. Rawson, et al.

Large neural networks are heavily over-parameterized because over-parameterization improves training toward optimality. Once the network is trained, however, many parameters can be zeroed out, or pruned, leaving an equivalent sparse neural network. We propose renormalizing sparse neural networks to improve accuracy. We prove that our method's error converges to zero as the network parameters cluster or concentrate, and that without renormalization the error does not converge to zero in general. We evaluate our method on the real-world datasets MNIST, Fashion MNIST, and CIFAR-10 and confirm a large improvement in accuracy with renormalization over standard pruning.
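The abstract does not spell out the renormalization rule, so the sketch below is only illustrative: it applies standard magnitude pruning to a PyTorch linear layer and then rescales the surviving weights so that the layer's total L1 weight norm is preserved. The function name prune_and_renormalize, the norm-preserving rescaling rule, and the 90% sparsity level are assumptions made for this example, not details taken from the paper.

import torch
import torch.nn as nn


def prune_and_renormalize(layer: nn.Linear, sparsity: float = 0.9) -> None:
    # Zero the smallest-magnitude weights (standard magnitude pruning),
    # then rescale the survivors so the layer's total L1 weight norm is
    # preserved. The norm-preserving rescale is an assumed rule for
    # illustration, not necessarily the paper's renormalization.
    with torch.no_grad():
        w = layer.weight
        k = int(sparsity * w.numel())
        if k == 0:
            return
        threshold = w.abs().flatten().kthvalue(k).values
        mask = (w.abs() > threshold).to(w.dtype)
        norm_before = w.abs().sum()
        w.mul_(mask)                          # prune: zero out small weights
        norm_after = w.abs().sum()
        if norm_after > 0:
            w.mul_(norm_before / norm_after)  # renormalize surviving weights


# Usage: prune every linear layer of a trained model at 90% sparsity.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune_and_renormalize(module, sparsity=0.9)

Without the final rescaling step, the loop above reduces to ordinary magnitude pruning, which is the baseline the abstract compares against.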

Related research

11/11/2019
Structural Pruning in Deep Neural Networks: A Small-World Approach
Deep Neural Networks (DNNs) are usually over-parameterized, causing exce...

12/20/2022
Constructing Organism Networks from Collaborative Self-Replicators
We introduce organism networks, which function like a single neural netw...

04/27/2018
CompNet: Neural networks growing via the compact network morphism
It is often the case that the performance of a neural network can be imp...

03/27/2022
On the Neural Tangent Kernel Analysis of Randomly Pruned Wide Neural Networks
We study the behavior of ultra-wide neural networks when their weights a...

11/28/2020
FreezeNet: Full Performance by Reduced Storage Costs
Pruning generates sparse networks by setting parameters to zero. In this...

03/11/2021
Emerging Paradigms of Neural Network Pruning
Over-parameterization of neural networks benefits the optimization and g...

07/12/2021
Structured Directional Pruning via Perturbation Orthogonal Projection
Structured pruning is an effective compression technique to reduce the c...
