Neural Network Panning: Screening the Optimal Sparse Network Before Training

09/27/2022
by Xiatao Kang, et al.

Pruning neural networks before training not only compresses the original model but also accelerates the training phase, which has substantial practical value. Existing work focuses on fine-grained pruning, which uses metrics to compute weight scores for screening, and has extended from single-shot pruning to iterative pruning. Building on these works, we argue that network pruning can be viewed as a process of transferring expressive force among weights: the retained weights take on the expressive force of the removed ones so that the performance of the original network is maintained. To achieve optimal scheduling of this expressive force, we propose a before-training pruning scheme called Neural Network Panning, which guides expressive force transfer through multi-index, multi-process steps, and we design a reinforcement-learning-based panning agent to automate the process. Experimental results show that Panning outperforms various existing pruning-before-training methods.
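
To make the screening step concrete, the following is a minimal sketch, in PyTorch, of score-based iterative pruning before training, the setting that Panning extends. The saliency metric (a SNIP-style |w * dL/dw| score), the sparsity schedule, and the helper names weight_scores and iterative_prune_before_training are illustrative assumptions, not the authors' implementation; Panning itself combines multiple metrics and uses a reinforcement-learning agent to schedule the expressive force transfer, which is not reproduced here.

    # Sketch of iterative before-training pruning by weight-score screening.
    # Assumptions: PyTorch, a classification loss, SNIP-style saliency.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def weight_scores(model, data, targets):
        """Compute |w * grad| saliency for every prunable weight tensor."""
        model.zero_grad()
        loss = F.cross_entropy(model(data), targets)
        loss.backward()
        return {
            name: (p * p.grad).abs()
            for name, p in model.named_parameters()
            if p.grad is not None and p.dim() > 1  # skip biases / norm params
        }

    def iterative_prune_before_training(model, data, targets,
                                        final_sparsity=0.9, rounds=5):
        """Remove weights over several rounds, rescoring the survivors each time."""
        masks = {name: torch.ones_like(p)
                 for name, p in model.named_parameters() if p.dim() > 1}
        for r in range(1, rounds + 1):
            # Sparsity schedule: reach final_sparsity at the last round.
            sparsity = final_sparsity * r / rounds
            scores = weight_scores(model, data, targets)
            flat = torch.cat([(s * masks[n]).flatten() for n, s in scores.items()])
            k = int(flat.numel() * sparsity)
            threshold = torch.kthvalue(flat, k).values if k > 0 else flat.min() - 1
            for name, s in scores.items():
                masks[name] = (s * masks[name] > threshold).float()
            # Zero out removed weights so they no longer contribute.
            with torch.no_grad():
                for name, p in model.named_parameters():
                    if name in masks:
                        p.mul_(masks[name])
        return masks

In practice the returned masks would be kept fixed and re-applied to the parameters (or their gradients) throughout subsequent training, so that the screened-out weights stay at zero.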
