A New Measure of Model Redundancy for Compressed Convolutional Neural Networks

12/09/2021
by   Feiqing Huang, et al.

While many designs have recently been proposed to improve the model efficiency of convolutional neural networks (CNNs) under a fixed resource budget, theoretical understanding of these designs is still conspicuously lacking. This paper aims to provide a new framework for answering the question: Is there still any remaining model redundancy in a compressed CNN? We begin by developing a general statistical formulation of CNNs and compressed CNNs via tensor decomposition, such that the weights across layers can be summarized into a single tensor. Then, through a rigorous sample complexity analysis, we reveal an important discrepancy between the derived sample complexity and the naive parameter count, which serves as a direct indicator of model redundancy. Motivated by this finding, we introduce a new model redundancy measure for compressed CNNs, called the K/R ratio, which further allows for nonlinear activations. The usefulness of this new measure is supported by ablation studies on popular block designs and datasets.
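As a toy illustration of why naive parameter counting enters the picture (this is not the paper's construction), consider compressing a single convolutional kernel with a rank-R CP decomposition: each mode of the weight tensor is replaced by a factor matrix, so the parameter count drops from the product of the dimensions to R times their sum. The shapes and rank below are hypothetical.

```python
from math import prod

def full_param_count(shape):
    """Parameters in a dense conv kernel of the given shape."""
    return prod(shape)

def cp_param_count(shape, rank):
    """Parameters after a rank-R CP decomposition:
    one factor matrix of size (dim x R) per tensor mode."""
    return rank * sum(shape)

# Hypothetical 3x3 conv layer: (out_channels, in_channels, kH, kW)
shape = (128, 128, 3, 3)
print(full_param_count(shape))        # 128 * 128 * 3 * 3 = 147456
print(cp_param_count(shape, rank=8))  # 8 * (128 + 128 + 3 + 3) = 2096
```

The paper's point is that this kind of raw count can diverge from the sample complexity that actually governs learnability, and that the gap itself signals remaining redundancy.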


Related research

02/21/2019 — Jointly Sparse Convolutional Neural Networks in Dual Spatial-Winograd Domains
We consider the optimization of deep convolutional neural networks (CNNs...

03/16/2020 — SlimConv: Reducing Channel Redundancy in Convolutional Neural Networks by Weights Flipping
The channel redundancy in feature maps of convolutional neural networks ...

11/12/2019 — Tight Sample Complexity of Learning One-hidden-layer Convolutional Neural Networks
We study the sample complexity of learning one-hidden-layer convolutiona...

05/21/2018 — Compression of Deep Convolutional Neural Networks under Joint Sparsity Constraints
We consider the optimization of deep convolutional neural networks (CNNs...

11/08/2017 — Learning Non-overlapping Convolutional Neural Networks with Multiple Kernels
In this paper, we consider parameter recovery for non-overlapping convol...

08/13/2019 — Einconv: Exploring Unexplored Tensor Decompositions for Convolutional Neural Networks
Tensor decomposition methods are one of the primary approaches for model...

06/22/2020 — Exploiting Weight Redundancy in CNNs: Beyond Pruning and Quantization
Pruning and quantization are proven methods for improving the performanc...
