Compressing deep neural networks by matrix product operators

04/11/2019
by Ze-Feng Gao, et al.

A deep neural network is a parameterization of a multi-layer mapping of signals in terms of many alternately arranged linear and nonlinear transformations. The linear transformations, which are generally used in the fully-connected as well as convolutional layers, contain most of the variational parameters that are trained and stored. Compressing a deep neural network to reduce its number of variational parameters, without reducing its prediction power, is an important but challenging step towards training these parameters efficiently and lowering the risk of overfitting. Here we show that this problem can be effectively solved by representing the linear transformations with matrix product operators (MPOs). We have tested this approach on five main neural networks, including FC2, LeNet-5, VGG, ResNet, and DenseNet, on two widely used datasets, namely MNIST and CIFAR-10, and found that the MPO representation indeed sets up a faithful and efficient mapping between input and output signals, keeping or even improving the prediction accuracy with a dramatically reduced number of parameters.
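The compression idea can be illustrated with a small, self-contained sketch. The snippet below is a minimal NumPy illustration, not the authors' code: the factorized dimensions, the bond dimension, and the helper names (random_mpo, mpo_to_matrix) are assumptions chosen for a 784-to-256 linear layer. It builds a weight matrix from a chain of local four-index tensors, the standard MPO form, and compares the resulting parameter count with the dense equivalent.

```python
import numpy as np

# Minimal sketch of an MPO-factorized linear map y = x W, assuming the input
# dimension factorizes as I1*I2*...*In and the output as J1*J2*...*Jn.
# Shapes and names here are illustrative, not taken from the paper's code.

def random_mpo(in_dims, out_dims, bond_dim, rng):
    """Random local tensors A[k] with shape (D_{k-1}, I_k, J_k, D_k), D_0 = D_n = 1."""
    n = len(in_dims)
    bonds = [1] + [bond_dim] * (n - 1) + [1]
    return [rng.standard_normal((bonds[k], in_dims[k], out_dims[k], bonds[k + 1]))
            for k in range(n)]

def mpo_to_matrix(cores):
    """Contract the MPO chain back into a dense matrix, to check what it represents."""
    result = cores[0]                      # shape (1, I1, J1, D1)
    for core in cores[1:]:
        # contract the right bond of `result` with the left bond of `core`
        result = np.tensordot(result, core, axes=([-1], [0]))
    result = result.squeeze(axis=(0, -1))  # drop the trivial boundary bonds
    n = result.ndim // 2                   # axes are ordered I1, J1, I2, J2, ...
    in_axes = list(range(0, 2 * n, 2))
    out_axes = list(range(1, 2 * n, 2))
    result = result.transpose(in_axes + out_axes)
    I = int(np.prod(result.shape[:n]))
    J = int(np.prod(result.shape[n:]))
    return result.reshape(I, J)

rng = np.random.default_rng(0)
in_dims, out_dims, bond_dim = (4, 7, 4, 7), (4, 4, 4, 4), 4   # 784 -> 256 layer
cores = random_mpo(in_dims, out_dims, bond_dim, rng)

mpo_params = sum(c.size for c in cores)
dense_params = int(np.prod(in_dims)) * int(np.prod(out_dims))
print(f"MPO parameters:   {mpo_params}")     # 880 with these illustrative sizes
print(f"Dense parameters: {dense_params}")   # 200704

W = mpo_to_matrix(cores)               # dense equivalent, shape (784, 256)
x = rng.standard_normal(W.shape[0])
y = x @ W                              # same linear map, far fewer trained parameters
```

In training one would keep only the local tensors as trainable parameters and contract them with the input on the fly; the dense reconstruction above is just a way to verify that the chain of small tensors encodes an ordinary fully-connected weight matrix.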

Related research

11/17/2016  Factorized Bilinear Models for Image Recognition
Although Deep Convolutional Neural Networks (CNNs) have liberated their ...

05/10/2023  Compressing neural network by tensor network with exponentially fewer variational parameters
Neural network (NN) designed for challenging machine learning tasks is i...

09/06/2018  ProdSumNet: reducing model parameters in deep neural networks via product-of-sums matrix decompositions
We consider a general framework for reducing the number of trainable mod...

01/06/2018  Generating Neural Networks with Neural Networks
Hypernetworks are neural networks that transform a random input vector i...

07/30/2015  Multilinear Map Layer: Prediction Regularization by Structural Constraint
In this paper we propose and study a technique to impose structural cons...

11/28/2018  Shared Representational Geometry Across Neural Networks
Different neural networks trained on the same dataset often learn simila...

11/12/2019  Trainable Spectrally Initializable Matrix Transformations in Convolutional Neural Networks
In this work, we investigate the application of trainable and spectrally...
