Pushing the Efficiency Limit Using Structured Sparse Convolutions

10/23/2022
by   Vinay Kumar Verma, et al.

Weight pruning is among the most popular approaches for compressing deep convolutional neural networks. Recent work suggests that in a randomly initialized deep neural network, there exist sparse subnetworks that achieve performance comparable to the original network. Unfortunately, finding these subnetworks involves iterative stages of training and pruning, which can be computationally expensive. We propose Structured Sparse Convolution (SSC), which leverages the inherent structure in images to reduce the parameters in the convolutional filter. This leads to improved efficiency of convolutional architectures compared to existing methods that perform pruning at initialization. We show that SSC is a generalization of commonly used layers (depthwise, groupwise and pointwise convolution) in “efficient architectures.” Extensive experiments on well-known CNN models and datasets show the effectiveness of the proposed method. Architectures based on SSC achieve state-of-the-art performance compared to baselines on CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet classification benchmarks.

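The abstract's claim that SSC generalizes depthwise, groupwise, and pointwise convolutions can be read as: each of these layers is a standard convolution whose weight tensor is constrained by a fixed structured sparsity pattern. Below is a minimal illustrative sketch of that viewpoint, not the authors' implementation; the MaskedConv2d class and the group_mask / pointwise_mask helpers are hypothetical names introduced here for illustration, and the specific structured pattern proposed in the paper is not reproduced.

    # Illustrative sketch (assumption: a fixed binary mask multiplied into a
    # standard convolution weight). A block-diagonal channel mask recovers
    # groupwise convolution (group size 1 gives a depthwise pattern), and a
    # mask keeping only the spatial center gives pointwise (1x1) convolution.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MaskedConv2d(nn.Module):
        def __init__(self, in_ch, out_ch, kernel_size, mask, padding=0):
            super().__init__()
            self.weight = nn.Parameter(
                torch.randn(out_ch, in_ch, kernel_size, kernel_size) * 0.01)
            self.bias = nn.Parameter(torch.zeros(out_ch))
            # Fixed binary mask with the same shape as the weight; masked
            # entries stay zero, so they contribute no effective parameters.
            self.register_buffer("mask", mask.float())
            self.padding = padding

        def forward(self, x):
            return F.conv2d(x, self.weight * self.mask, self.bias,
                            padding=self.padding)

    def group_mask(out_ch, in_ch, k, groups):
        # Block-diagonal channel mask: each output channel only sees the
        # input channels of its own group (groups == in_ch -> depthwise).
        mask = torch.zeros(out_ch, in_ch, k, k)
        out_per_g, in_per_g = out_ch // groups, in_ch // groups
        for g in range(groups):
            mask[g * out_per_g:(g + 1) * out_per_g,
                 g * in_per_g:(g + 1) * in_per_g] = 1.0
        return mask

    def pointwise_mask(out_ch, in_ch, k):
        # Keeps only the spatial center, i.e. an effective 1x1 convolution.
        mask = torch.zeros(out_ch, in_ch, k, k)
        mask[:, :, k // 2, k // 2] = 1.0
        return mask

    # Example: a 3x3 "groupwise" layer with 4 groups mapping 16 -> 32 channels.
    layer = MaskedConv2d(16, 32, 3, group_mask(32, 16, 3, groups=4), padding=1)
    y = layer(torch.randn(2, 16, 8, 8))   # -> shape (2, 32, 8, 8)

In this framing, what distinguishes an SSC layer is the particular structured sparsity pattern it imposes on the filter; the sketch above only shows how a fixed mask can reproduce the familiar special cases.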
