A Partial Regularization Method for Network Compression

09/03/2020
by E Zhenqian, et al.

Deep Neural Networks have achieved remarkable success, relying on the growing availability of GPUs and large-scale datasets along with increasing network depth and width. However, because of their expensive computation and intensive memory consumption, researchers have concentrated on designing compression methods to make them practical for constrained platforms. In this paper, we propose partial regularization, which penalizes only a subset of the parameters rather than all of them (the original form, referred to as full regularization), in order to conduct model compression at higher speed. This is reasonable and feasible because of the permutation-invariance property of neural networks. Experimental results show that, as expected, computational complexity is reduced: we observe shorter running times in almost all cases, which we attribute to the fact that the partial regularization method involves fewer elements in the computation. Surprisingly, it also improves important metrics such as regression fitting quality and classification accuracy in both the training and test phases on multiple datasets, indicating that the pruned models have better performance and generalization ability. Moreover, we analyze the results and conclude that an optimal network structure must exist and depends on the input data.
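The abstract does not spell out which parameters the method selects for penalization. As a rough illustration only, the PyTorch sketch below (with hypothetical helper names such as partial_l1_penalty and an arbitrary choice of penalized subset) contrasts full regularization over all parameters with a partial penalty restricted to a subset of weights; it is not the paper's exact formulation.

```python
# Minimal sketch (not the paper's exact method): "full" regularization penalizes
# every weight, while "partial" regularization penalizes only a chosen subset.
# The subset used here (weights of the first k output units of each linear
# layer) is a hypothetical illustration.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

def full_l1_penalty(model, lam=1e-4):
    # Sum the L1 norm of every parameter in the model.
    return lam * sum(p.abs().sum() for p in model.parameters())

def partial_l1_penalty(model, lam=1e-4, k=64):
    # Penalize only the weights feeding the first k output units of each
    # linear layer; the remaining parameters are left unregularized.
    penalty = torch.tensor(0.0)
    for m in model.modules():
        if isinstance(m, nn.Linear):
            penalty = penalty + m.weight[:k, :].abs().sum()
    return lam * penalty

# Training step (sketch): add the chosen penalty to the task loss.
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(model(x), y) + partial_l1_penalty(model)
loss.backward()
```

Because the partial penalty touches fewer parameters, each regularization term costs fewer operations per step, which is consistent with the reduced running times reported in the abstract.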


