Structural Pruning in Deep Neural Networks: A Small-World Approach

11/11/2019
by Gokul Krishnan et al.

Deep Neural Networks (DNNs) are usually over-parameterized, incurring excessive memory and interconnection cost on hardware platforms. Existing pruning approaches remove less important parameters at the end of training to reduce the model size; however, because they do not exploit the intrinsic structure of the network, they still require the full interconnection fabric to train the network in the first place. Inspired by the observation that brain networks follow the Small-World model, we propose a novel structural pruning scheme that (1) hierarchically trims the network into a Small-World model before training, (2) trains the network on a given dataset, and (3) optimizes the network for accuracy. The new scheme reduces both the model size and the interconnection needed before training, yielding a locally clustered, globally sparse model. We demonstrate our approach on LeNet-5 for MNIST and VGG-16 for CIFAR-10, decreasing the number of parameters to as little as 2.3% of the baseline model.
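To make the pre-training trimming step concrete, here is a minimal sketch of one way to impose a Small-World (Watts-Strogatz-style) connectivity pattern on a fully connected layer before training. The function name small_world_mask, the parameters k and p, and the mask-by-multiplication scheme are illustrative assumptions, not the authors' hierarchical trimming procedure from the paper.

```python
import numpy as np

def small_world_mask(n_in, n_out, k=8, p=0.1, seed=0):
    """Binary connectivity mask in the spirit of Watts-Strogatz:
    each output unit gets k 'local' input connections, and each
    connection is rewired to a random input with probability p,
    giving a locally clustered yet globally sparse layer."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((n_out, n_in), dtype=np.float32)
    for j in range(n_out):
        # Local neighborhood: k inputs centered on this output's position.
        center = j * n_in // n_out
        for d in range(-(k // 2), k - k // 2):
            i = (center + d) % n_in
            # With probability p, rewire this edge to a random long-range input.
            if rng.random() < p:
                i = int(rng.integers(n_in))
            mask[j, i] = 1.0
    return mask

# Prune before training: only masked weights exist from initialization onward.
mask = small_world_mask(784, 300)                # sizes of a LeNet-style FC layer
weights = np.random.randn(300, 784).astype(np.float32) * mask
print(f"connection density: {mask.mean():.4f}")  # roughly k / n_in
```

In the full scheme, such masks would be applied hierarchically across layers and the surviving weights then trained and fine-tuned for accuracy (steps 2 and 3 above); this sketch simply freezes one layer's mask so that only the sparse, small-world subset of weights is ever instantiated.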

Related research:

- 05/27/2019 · CGaP: Continuous Growth and Pruning for Efficient Deep Learning
- 09/17/2020 · Holistic Filter Pruning for Efficient Deep Neural Networks
- 06/21/2022 · Renormalized Sparse Neural Network Pruning
- 05/27/2019 · Efficient Network Construction through Structural Plasticity
- 08/16/2017 · BitNet: Bit-Regularized Deep Neural Networks
- 09/30/2019 · Spread-gram: A spreading-activation schema of network structural learning
- 02/15/2019 · Parameter Efficient Training of Deep Convolutional Neural Networks by Dynamic Sparse Reparameterization
