Layer-wise Model Pruning based on Mutual Information

08/28/2021
by Chun Fan, et al.

The proposed pruning strategy offers advantages over weight-based pruning techniques: (1) it avoids irregular memory access, since representations and matrices can be squeezed into smaller but dense counterparts, leading to greater speedup; (2) operating top-down, it prunes from a more global perspective, using training signals in the top layer and propagating their effect down through the layers, which yields better performance at the same sparsity level. Extensive experiments show that, at the same sparsity level, the proposed strategy offers both greater speedup and higher performance than weight-based pruning methods (e.g., magnitude pruning, movement pruning).
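
The sketch below illustrates the structural idea behind claim (1): scoring whole hidden units and slicing the surrounding weight matrices into smaller but still dense counterparts. It is not the paper's algorithm; the histogram-based mutual-information estimate, the `prune_layer` helper, the `keep_ratio` parameter, and the use of a single output column as the "top-layer signal" are all illustrative assumptions standing in for the paper's MI-based, top-down propagation.

```python
# Minimal sketch of layer-wise structured pruning (NOT the paper's method).
# Unit importance is approximated with a crude histogram-based MI estimate
# against a downstream signal; the paper's MI criterion is more involved.
import numpy as np

def mi_score(x, y, bins=16):
    """Rough MI estimate between one unit's activations x and a signal y."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                 # joint distribution over bins
    px = pxy.sum(axis=1, keepdims=True)       # marginal of x
    py = pxy.sum(axis=0, keepdims=True)       # marginal of y
    nz = pxy > 0                              # avoid log(0) terms
    return (pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum()

def prune_layer(W_in, W_out, acts, signal, keep_ratio=0.5):
    """Keep the hidden units with the highest MI to the downstream signal,
    then slice both weight matrices into smaller *dense* counterparts."""
    scores = np.array([mi_score(acts[:, j], signal)
                       for j in range(acts.shape[1])])
    k = max(1, int(keep_ratio * len(scores)))
    keep = np.sort(np.argsort(scores)[-k:])   # indices of surviving units
    return W_in[:, keep], W_out[keep, :]      # dense, shrunken matrices

# Toy usage: one hidden layer of an MLP, 8 units pruned to 4.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(10, 8)), rng.normal(size=(8, 3))
X = rng.normal(size=(256, 10))
H = np.maximum(X @ W1, 0.0)                   # hidden activations (ReLU)
signal = (H @ W2)[:, 0]                       # stand-in for a top-layer signal
W1_small, W2_small = prune_layer(W1, W2, H, signal, keep_ratio=0.5)
print(W1_small.shape, W2_small.shape)         # (10, 4) (4, 3)
```

Because whole units are removed rather than individual weights, the pruned matrices stay dense and contiguous, which is exactly what gives the regular memory access and speedup the abstract contrasts with unstructured weight-based pruning.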

