Tensorized Spectrum Preserving Compression for Neural Networks

05/25/2018
by Jiahao Su, et al.

Modern neural networks can have tens of millions of parameters, and are often ill-suited for smartphones or IoT devices. In this paper, we describe an efficient mechanism for compressing large networks by tensorizing network layers: i.e. mapping layers onto high-order matrices, for which we introduce new tensor decomposition methods. Compared to previous compression methods, some of which use tensor decomposition, our techniques preserve more of the network's invariance structure. Coupled with a new data reconstruction-based learning method, we show that tensorized compression outperforms existing techniques for both convolutional and fully-connected layers on state-of-the-art networks.
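To make the tensorizing idea concrete, below is a minimal sketch in NumPy. It is not the paper's spectrum-preserving decomposition or its reconstruction-based training; it only illustrates the generic mechanism the abstract describes: reshape a dense layer's weight matrix into a higher-order tensor, then compress it with a truncated higher-order SVD (a Tucker-style decomposition). All function names, shapes, and ranks here are illustrative assumptions.

import numpy as np

def unfold(tensor, mode):
    # Mode-n matricization: move `mode` to the front, flatten the rest.
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def fold(matrix, mode, shape):
    # Inverse of `unfold` for a target tensor of the given shape.
    full_shape = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(matrix.reshape(full_shape), 0, mode)

def hosvd_compress(weight, tensor_shape, ranks):
    # Tensorize the weight matrix, then truncate each mode to the given rank
    # using the leading left singular vectors of each unfolding (plain HOSVD,
    # not the paper's method).
    t = weight.reshape(tensor_shape)
    factors = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(t, mode), full_matrices=False)
        factors.append(u[:, :r])
    core = t
    for mode, u in enumerate(factors):
        core = fold(u.T @ unfold(core, mode), mode,
                    core.shape[:mode] + (u.shape[1],) + core.shape[mode + 1:])
    return core, factors

def reconstruct(core, factors):
    # Multiply the core tensor by each factor along its mode.
    t = core
    for mode, u in enumerate(factors):
        t = fold(u @ unfold(t, mode), mode,
                 t.shape[:mode] + (u.shape[0],) + t.shape[mode + 1:])
    return t

# Example: view a 512x512 dense layer as a 4th-order tensor (shapes/ranks
# chosen for illustration only).
w = np.random.randn(512, 512)
core, factors = hosvd_compress(w, (16, 32, 16, 32), ranks=(8, 8, 8, 8))
w_hat = reconstruct(core, factors).reshape(512, 512)
original = w.size
compressed = core.size + sum(f.size for f in factors)
print(f"params: {original} -> {compressed}, "
      f"rel. error {np.linalg.norm(w - w_hat) / np.linalg.norm(w):.3f}")

With these ranks the parameter count drops from 262,144 to 4,864. A random matrix compresses poorly, so the reported error is large here; trained layer weights typically have more low-rank structure, and the paper's contribution is a decomposition that preserves more of that structure together with a data reconstruction-based learning step.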


Related research

02/25/2018 · Wide Compression: Tensor Ring Nets
Deep neural networks have demonstrated state-of-the-art performance in a...

12/07/2021 · Low-rank Tensor Decomposition for Compression of Convolutional Neural Networks Using Funnel Regularization
Tensor decomposition is one of the fundamental techniques for model compr...

09/22/2015 · Tensorizing Neural Networks
Deep neural networks currently demonstrate state-of-the-art performance ...

06/29/2020 · Hybrid Tensor Decomposition in Neural Network Compression
Deep neural networks (DNNs) have enabled impressive breakthroughs in var...

09/29/2015 · Compression of Deep Neural Networks on the Fly
Thanks to their state-of-the-art performance, deep neural networks are i...

11/09/2017 · Compact Neural Networks based on the Multiscale Entanglement Renormalization Ansatz
The goal of this paper is to demonstrate a method for tensorizing neural...

03/14/2019 · Tucker Tensor Layer in Fully Connected Neural Networks
We introduce the Tucker Tensor Layer (TTL), an alternative to the dense ...
