Einconv: Exploring Unexplored Tensor Decompositions for Convolutional Neural Networks

08/13/2019
by   Kohei Hayashi, et al.

Tensor decomposition methods are among the primary approaches for model compression and fast inference of convolutional neural networks (CNNs). However, despite their potential diversity, only a few typical decompositions, such as CP decomposition, have been applied in practice; more importantly, no extensive comparisons have been performed among the available methods. This raises the simple questions of how many decompositions are possible and which of them is the best. In this paper, we first characterize a decomposition class specific to CNNs by adopting a flexible graphical notation. When combined with nonlinear activations, this class includes renowned CNN modules such as depthwise separable convolution and the bottleneck layer. In our experiments, we compare the tradeoff between prediction accuracy and time/space complexity by enumerating all possible decompositions. We also demonstrate, using neural architecture search, that we can find nonlinear decompositions that outperform existing ones.
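As a concrete illustration of the space-complexity side of this tradeoff (a minimal sketch, not code from the paper), one member of the decomposition class mentioned above, depthwise separable convolution, factorizes a standard K×K convolution into a per-channel K×K depthwise step followed by a 1×1 pointwise step. Comparing parameter counts shows the compression such a factorization buys; the channel and kernel sizes below are arbitrary example values:

```python
def conv_params(c_in, c_out, k):
    # Standard K x K convolution: one K x K x C_in filter per output channel.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # Depthwise step: one K x K filter per input channel,
    # followed by a pointwise (1 x 1) convolution mixing channels.
    return c_in * k * k + c_in * c_out

if __name__ == "__main__":
    c_in, c_out, k = 128, 256, 3  # example layer sizes (assumption)
    full = conv_params(c_in, c_out, k)                 # 294912 parameters
    sep = depthwise_separable_params(c_in, c_out, k)   # 33920 parameters
    print(f"compression ratio: {full / sep:.2f}x")
```

For these sizes the factorized form uses roughly 8.7 times fewer parameters than the dense convolution; whether the accuracy loss is acceptable is exactly the kind of question the enumeration experiments address.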


