Dependency Aware Filter Pruning

05/06/2020
by Kai Zhao, et al.

Convolutional neural networks (CNNs) are typically over-parameterized, incurring considerable computational overhead and memory footprint at inference time. Pruning a proportion of unimportant filters is an efficient way to mitigate the inference cost. To this end, identifying unimportant convolutional filters is the key to effective filter pruning. Previous work prunes filters according to either their weight norms or the corresponding batch-norm scaling factors, while neglecting the sequential dependency between adjacent layers. In this paper, we further develop norm-based importance estimation by taking the dependency between adjacent layers into consideration. In addition, we propose a novel mechanism to dynamically control the sparsity-inducing regularization so as to achieve the desired sparsity. In this way, we can identify unimportant filters and search for the optimal network architecture within a given resource budget in a more principled manner. Comprehensive experimental results demonstrate that the proposed method performs favorably against strong existing baselines on the CIFAR, SVHN, and ImageNet datasets. The training source code will be made publicly available after the review process.
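To make the idea of dependency-aware, norm-based importance concrete, the sketch below is a minimal illustration (not the paper's released code): it scores each filter of a convolutional layer by combining its own weight norm with the norm of the kernels in the next layer that consume its output, so a filter whose output the following layer barely uses receives a low score. The function name `filter_importance`, the use of an L2 norm, and the product combination are illustrative assumptions.

```python
# Minimal sketch of dependency-aware filter importance (illustrative, not
# the paper's implementation). Assumes `conv_next` directly consumes the
# output channels of `conv_l` (no grouping, matching channel counts).
import torch
import torch.nn as nn


@torch.no_grad()
def filter_importance(conv_l: nn.Conv2d, conv_next: nn.Conv2d) -> torch.Tensor:
    """Return one importance score per output filter of `conv_l`."""
    # ||W_l[i]||: norm of each output filter in layer l          -> shape [C_out]
    out_norm = conv_l.weight.flatten(1).norm(p=2, dim=1)
    # ||W_{l+1}[:, i]||: norm of the layer-(l+1) kernels reading channel i
    in_norm = conv_next.weight.transpose(0, 1).flatten(1).norm(p=2, dim=1)
    # A filter is important only if both its own weights and the weights
    # consuming its output are large.
    return out_norm * in_norm


# Usage: rank filters by the joint score and keep, e.g., the top 75%.
conv1 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
conv2 = nn.Conv2d(32, 64, kernel_size=3, padding=1)
scores = filter_importance(conv1, conv2)
keep = scores.argsort(descending=True)[: int(0.75 * len(scores))]
```

A plain per-layer weight-norm criterion would use only `out_norm`; folding in `in_norm` is one simple way to encode the cross-layer dependency the abstract refers to.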

Related research

04/26/2023: Filter Pruning via Filters Similarity in Consecutive Layers
Filter pruning is widely adopted to compress and accelerate the Convolut...

12/02/2021: Batch Normalization Tells You Which Filter is Important
The goal of filter pruning is to search for unimportant filters to remov...

07/14/2020: REPrune: Filter Pruning via Representative Election
Even though norm-based filter pruning methods are widely accepted, it is...

08/08/2023: D-Score: A Synapse-Inspired Approach for Filter Pruning
This paper introduces a new aspect for determining the rank of the unimp...

03/15/2022: Interspace Pruning: Using Adaptive Filter Representations to Improve Training of Sparse CNNs
Unstructured pruning is well suited to reduce the memory footprint of co...

03/12/2020: SASL: Saliency-Adaptive Sparsity Learning for Neural Network Acceleration
Accelerating the inference speed of CNNs is critical to their deployment...

11/18/2019: Provable Filter Pruning for Efficient Neural Networks
We present a provable, sampling-based approach for generating compact Co...
