Spectral-Pruning: Compressing deep neural network via spectral analysis

08/26/2018
by   Taiji Suzuki, et al.

The model size of deep neural networks keeps growing to achieve superior performance on complicated tasks, which makes it difficult to deploy them on small edge-computing devices. To overcome this problem, model compression methods have been gathering much attention. However, there has been little theoretical background explaining what quantity determines the compression ability. To resolve this issue, we develop a new theoretical framework for model compression and propose a new method, Spectral-Pruning, based on the theory. Our theoretical analysis rests on the observation that the eigenvalues of the covariance matrix of the outputs from the nodes in an internal layer often show rapid decay. We define a "degree of freedom" that quantifies the intrinsic dimensionality of the model via this eigenvalue distribution, and show that the compression ability is essentially controlled by this quantity. Along with this, we give a generalization error bound for the compressed model. Unlike existing methods, our proposed method is applicable to a wide range of models, including those with complicated branches such as SegNet and ResNet. The method makes use of both the "input" and the "output" of each layer and is easy to implement. We apply it to several datasets to justify our theoretical analyses and show that it achieves state-of-the-art performance.
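The core quantities in the abstract, the eigenvalue spectrum of a layer's output covariance and the "degree of freedom" it induces, can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's implementation: the synthetic activations, the `degrees_of_freedom` helper, and the regularization levels are all assumptions; the helper mirrors the common ridge-style definition df(λ) = Σᵢ μᵢ/(μᵢ+λ), which saturates at the number of dominant eigenvalues when the spectrum decays rapidly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy activations of an internal layer: n samples, m nodes.
# A low-rank-plus-noise structure reproduces the rapidly decaying
# eigenvalue spectrum the abstract describes.
n, m, r = 1000, 64, 8
Z = rng.standard_normal((n, r)) @ rng.standard_normal((r, m)) \
    + 0.01 * rng.standard_normal((n, m))

# Empirical covariance of the layer output and its spectrum.
Sigma = Z.T @ Z / n
mu = np.linalg.eigvalsh(Sigma)[::-1]  # eigenvalues, descending

def degrees_of_freedom(mu, lam):
    """df(lam) = sum_i mu_i / (mu_i + lam): a ridge-style measure of
    intrinsic dimensionality at regularization level lam (hypothetical
    helper for illustration)."""
    return float(np.sum(mu / (mu + lam)))

# With rapid spectral decay, df stays close to the effective rank (here 8)
# over a wide range of lam, far below the nominal width m = 64.
for lam in (1e-1, 1e-2, 1e-3):
    print(f"lam={lam:g}  df={degrees_of_freedom(mu, lam):.1f}")
```

A small degree of freedom relative to the layer width is exactly the regime in which pruning to a few nodes should cost little accuracy, which is the intuition the theoretical framework formalizes.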

Related research

11/16/2021 · Neuron-based Pruning of Deep Neural Networks with Better Generalization using Kronecker Factored Curvature Approximation
Existing methods of pruning deep neural networks focus on removing unnec...

09/25/2019 · Compression based bound for non-compressed network: unified generalization error analysis of large compressible deep neural network
One of biggest issues in deep learning theory is its generalization abil...

06/07/2022 · Neural Network Compression via Effective Filter Analysis and Hierarchical Pruning
Network compression is crucial to making the deep networks to be more ef...

08/19/2021 · An Information Theory-inspired Strategy for Automatic Network Pruning
Despite superior performance on many computer vision tasks, deep convolu...

06/11/2022 · A Theoretical Understanding of Neural Network Compression from Sparse Linear Approximation
The goal of model compression is to reduce the size of a large neural ne...

11/10/2020 · Dirichlet Pruning for Neural Network Compression
We introduce Dirichlet pruning, a novel post-processing technique to tra...

03/24/2020 · A Comprehensive Analysis of Correlated Source Compression Using Edge Computing in Distributed Systems
This paper examines the theory pertaining to lossless compression of cor...
