A Unified Approximation Framework for Non-Linear Deep Neural Networks

07/26/2018
by   Yuzhe Ma, et al.

Deep neural networks (DNNs) have achieved significant success in a variety of real-world applications. However, the enormous number of parameters in these networks limits their efficiency, due to both large model size and intensive computation. To address this issue, various compression and acceleration techniques have been investigated, among which low-rank filters and sparse filters are the most heavily studied. In this paper we propose a unified framework that compresses convolutional neural networks by combining these two strategies while taking the nonlinear activation into consideration. The filter of each layer is approximated by the sum of a sparse component and a low-rank component, both of which favor model compression. In particular, we constrain the sparse component to be structured sparse, which facilitates acceleration. The performance of the network is retained by minimizing the reconstruction error of the feature maps after activation in each layer, using the alternating direction method of multipliers (ADMM). Experimental results show that our approach compresses VGG-16 and AlexNet by over 4X. In addition, 2.2X and 1.1X speedups are achieved on VGG-16 and AlexNet, respectively, with only a small increase in error rate.


