Learning Instance-wise Sparsity for Accelerating Deep Models

07/27/2019
by   Chuanjian Liu, et al.

Exploring deep convolutional neural networks with high efficiency and low memory usage is essential for a wide variety of machine learning tasks. Most existing approaches accelerate deep models by manipulating parameters or filters without reference to the data, e.g., pruning and decomposition. In contrast, we study this problem from a different perspective by respecting the differences between input instances. An instance-wise feature pruning method is developed by identifying the informative features for each instance. Specifically, by investigating a feature decay regularization, we encourage the intermediate feature maps of each instance in deep neural networks to be sparse while preserving overall network performance. During online inference, subtle features of input images extracted by the intermediate layers of a well-trained neural network can then be eliminated to accelerate the subsequent calculations. We further take the coefficient of variation as a measure for selecting the layers that are appropriate for acceleration. Extensive experiments conducted on benchmark datasets and networks demonstrate the effectiveness of the proposed method.
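To make the three ingredients concrete, below is a minimal PyTorch-style sketch: an L1 feature decay penalty, instance-wise pruning of subtle activations at inference, and a coefficient-of-variation score for layer selection. The function names, the L1 form of the penalty, the hard threshold, and the per-channel statistic are illustrative assumptions; the abstract does not specify the paper's exact formulation.

import torch
import torch.nn.functional as F

def feature_decay_penalty(feature_maps, decay=1e-4):
    # L1 penalty on intermediate feature maps: pushing activations toward
    # zero yields instance-wise sparsity. The L1 form and the `decay`
    # weight are assumptions, not the paper's stated regularizer.
    return decay * sum(f.abs().mean() for f in feature_maps)

def prune_subtle_features(feature, threshold=1e-2):
    # Zero out low-magnitude ("subtle") activations of an input instance
    # at inference time so subsequent computation on them can be skipped.
    # The hard threshold is a hypothetical pruning rule.
    return feature * (feature.abs() > threshold).float()

def coefficient_of_variation(feature):
    # CV = std / mean of per-channel activation magnitudes for an NCHW
    # feature map; an illustrative score for judging whether a layer is
    # a good candidate for acceleration.
    mags = feature.abs().mean(dim=(0, 2, 3))  # mean magnitude per channel
    return (mags.std() / (mags.mean() + 1e-8)).item()

def training_loss(model, images, labels):
    # During training, the penalty is simply added to the task loss;
    # `model` is assumed to return (logits, list of intermediate features).
    logits, features = model(images)
    return F.cross_entropy(logits, labels) + feature_decay_penalty(features)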



Related research:

Filter Pruning via Filters Similarity in Consecutive Layers (04/26/2023)
Dynamic Runtime Feature Map Pruning (12/24/2018)
Energy Propagation in Deep Convolutional Neural Networks (04/12/2017)
Manifold Regularized Dynamic Network Pruning (03/10/2021)
Can Unstructured Pruning Reduce the Depth in Deep Neural Networks? (08/12/2023)
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon (05/22/2017)
