A Unified Approximation Framework for Deep Neural Networks

07/26/2018
by   Yuzhe Ma, et al.

Deep neural networks (DNNs) have achieved significant success in a variety of real-world applications. However, the sheer number of parameters in these networks limits their efficiency due to large model size and intensive computation. To address this issue, various compression and acceleration techniques have been investigated, among which low-rank filters and sparse filters are the most heavily studied. In this paper we propose a unified framework that compresses convolutional neural networks by combining these two strategies while taking the nonlinear activation into consideration. The filter of each layer is approximated by the sum of a sparse component and a low-rank component, both of which lend themselves to model compression. In particular, we constrain the sparse component to be structured sparse, which facilitates acceleration. The performance of the network is retained by minimizing the reconstruction error of each layer's feature maps after activation, using the alternating direction method of multipliers (ADMM). Experimental results show that our approach compresses VGG-16 and AlexNet by over 4x. In addition, 2.2x and 1.1x speedups are achieved on VGG-16 and AlexNet, respectively, at the cost of only a small increase in error rate.
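To make the core idea concrete, below is a minimal NumPy sketch of decomposing a weight matrix W into a low-rank component L plus a sparse component S. It uses a simple alternating scheme (truncated SVD for L, magnitude thresholding for S); this is an illustrative stand-in, not the paper's method, which uses ADMM, enforces structured sparsity, and minimizes the post-activation feature-map reconstruction error. The function name and the rank/sparsity parameters are hypothetical.

```python
import numpy as np

def low_rank_sparse_decompose(W, rank, sparsity, n_iter=50):
    """Approximate W as L + S with rank(L) <= rank and S keeping only
    the largest-magnitude entries. Alternating-minimization sketch only;
    the paper's ADMM solver also accounts for the nonlinear activation
    and constrains S to be structured sparse."""
    L = np.zeros_like(W)
    S = np.zeros_like(W)
    k = max(1, int(sparsity * W.size))  # number of nonzeros kept in S
    for _ in range(n_iter):
        # Low-rank step: best rank-`rank` approximation of W - S via SVD.
        U, sigma, Vt = np.linalg.svd(W - S, full_matrices=False)
        L = (U[:, :rank] * sigma[:rank]) @ Vt[:rank]
        # Sparse step: keep the k largest-magnitude entries of W - L.
        R = W - L
        thresh = np.partition(np.abs(R), -k, axis=None)[-k]
        S = np.where(np.abs(R) >= thresh, R, 0.0)
    return L, S

# Example: decompose a random "filter" matrix and check the fit.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 128))
L, S = low_rank_sparse_decompose(W, rank=8, sparsity=0.05)
print("relative error:", np.linalg.norm(W - L - S) / np.linalg.norm(W))
```

Both components are cheap to store and apply: L factors into two thin matrices (rank x dimensions instead of the full matrix), and S can use a sparse format, which is why the combination favors both compression and acceleration.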

Related research:

07/26/2018 · A Unified Approximation Framework for Non-Linear Deep Neural Networks
Deep neural networks (DNNs) have achieved significant success in a varie...

06/11/2020 · Convolutional neural networks compression with low rank and sparse tensor decompositions
Convolutional neural networks show outstanding results in a variety of c...

07/09/2019 · A Targeted Acceleration and Compression Framework for Low bit Neural Networks
1 bit deep neural networks (DNNs), of which both the activations and wei...

11/16/2014 · Efficient and Accurate Approximations of Nonlinear Convolutional Networks
This paper aims to accelerate the test-time computation of deep convolut...

05/09/2020 · GPU Acceleration of Sparse Neural Networks
In this paper, we use graphics processing units (GPU) to accelerate spars...

12/10/2018 · Accelerating Convolutional Neural Networks via Activation Map Compression
The deep learning revolution brought us an extensive array of neural net...

11/09/2022 · ViTALiTy: Unifying Low-rank and Sparse Approximation for Vision Transformer Acceleration with a Linear Taylor Attention
Vision Transformer (ViT) has emerged as a competitive alternative to con...