Learning Compact Representations of Neural Networks using DiscriminAtive Masking (DAM)

10/01/2021
by Jie Bu, et al.

A central goal in deep learning is to learn compact representations of features at every layer of a neural network, which is useful for both unsupervised representation learning and structured network pruning. While there is a growing body of work on structured pruning, current state-of-the-art methods suffer from two key limitations: (i) instability during training, and (ii) the need for an additional fine-tuning step, which is resource-intensive. At the core of these limitations is the lack of a systematic approach that jointly prunes and refines weights during training in a single stage, without requiring any fine-tuning upon convergence to achieve state-of-the-art performance. We present a novel single-stage structured pruning method termed DiscriminAtive Masking (DAM). The key intuition behind DAM is to discriminatively prefer some of the neurons to be refined during the training process while gradually masking out other neurons. We show that our proposed DAM approach achieves remarkably good performance across various applications, including dimensionality reduction, recommender systems, graph representation learning, and structured pruning for image classification. We also show theoretically that the learning objective of DAM is directly related to minimizing the L0 norm of the masking layer.
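To make the intuition of "discriminatively preferring some neurons while gradually masking out others" concrete, below is a minimal, illustrative PyTorch sketch of a masking layer in this spirit. The gate parameterization, the names (`DAMLayer`, `beta`, `mask_penalty`), and the regularization weight are assumptions chosen for illustration; they are not the authors' exact formulation.

```python
# Illustrative sketch only: a simplified masking layer that zeroes out an
# ordered suffix of neurons as a single learnable threshold shrinks, while the
# surviving neurons keep training. This is an assumed approximation of
# discriminative masking, not the DAM paper's exact implementation.
import torch
import torch.nn as nn


class DAMLayer(nn.Module):
    """Masks an ordered subset of neurons via a single learnable threshold.

    Each neuron i is assigned a fixed offset mu_i that increases with i, so
    neurons with larger indices are masked out first as the shared threshold
    beta decreases during training.
    """

    def __init__(self, num_features: int, beta_init: float = 2.0):
        super().__init__()
        # Fixed, evenly spaced offsets: later neurons need a larger beta to stay active.
        self.register_buffer("mu", torch.linspace(0.0, 1.0, num_features))
        # Single learnable threshold shared by all neurons (all gates open at init).
        self.beta = nn.Parameter(torch.tensor(beta_init))

    def gate(self) -> torch.Tensor:
        # Gate is exactly zero for neurons whose offset exceeds beta; smooth elsewhere.
        return torch.relu(torch.tanh(self.beta - self.mu))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gate()

    def mask_penalty(self) -> torch.Tensor:
        # Differentiable surrogate that pushes beta (and hence the number of
        # active neurons) down; a stand-in for an L0-related pruning objective.
        return self.beta


# Usage sketch: place the masking layer after a hidden layer and add the
# penalty to the task loss, so pruning happens during ordinary training.
if __name__ == "__main__":
    model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), DAMLayer(128), nn.Linear(128, 10))
    x = torch.randn(8, 64)
    logits = model(x)
    task_loss = logits.pow(2).mean()  # placeholder loss for the sketch
    lam = 1e-3  # assumed regularization strength
    loss = task_loss + lam * model[2].mask_penalty()
    loss.backward()
    active = int((model[2].gate() > 0).sum())
    print(f"active neurons: {active} / 128")
```

Because a neuron's gate stops receiving gradient once it reaches zero, masking is effectively one-directional: pruned neurons stay pruned while the remaining ones continue to be refined, which mirrors the single-stage prune-and-train behavior described in the abstract.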


