ProdSumNet: reducing model parameters in deep neural networks via product-of-sums matrix decompositions

09/06/2018
by   Chai Wah Wu, et al.

We consider a general framework for reducing the number of trainable model parameters in deep learning networks by decomposing linear operators as a product of sums of simpler linear operators. Recently proposed deep learning architectures such as CNN, KFC, Dilated CNN, etc. are all subsumed in this framework, and we illustrate other types of neural network architectures within it. We show that good accuracy on MNIST and Fashion MNIST can be obtained using a relatively small number of trainable parameters. In addition, since the implementation of the convolutional layer is resource-heavy, we consider an approach in the transform domain that obviates the need for convolutional layers. One of the advantages of this general framework over prior approaches is that the number of trainable parameters is not fixed and can be varied arbitrarily. In particular, we illustrate the tradeoff between the number of trainable parameters and the corresponding error rate. As an example, by using this decomposition on a reference CNN architecture for MNIST with over 3×10^6 trainable parameters, we are able to obtain an accuracy of 98.44% using only 3554 trainable parameters.
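To make the product-of-sums idea concrete, below is a minimal PyTorch sketch of a linear layer whose weight matrix is parameterized as a product of sums of Kronecker products of small trainable factors (Kronecker products being one choice of "simpler linear operators", as in KFC). The module name, hyperparameters, and initialization are illustrative assumptions, not the paper's exact construction; the point is only that the dense weight is never stored as a free parameter, so the trainable-parameter count is set by the number of sum terms and product factors.

```python
import torch
import torch.nn as nn

class ProductOfSumsLinear(nn.Module):
    """Illustrative sketch (not the paper's exact construction):
    the weight is W = prod_k ( sum_i  kron(A[k,i], B[k,i]) ),
    with small trainable factors A (m x m) and B (n x n).
    Each factor of the product is (m*n) x (m*n), so the layer maps
    R^{m*n} -> R^{m*n} with far fewer trainable parameters than a
    dense (m*n) x (m*n) matrix.
    """
    def __init__(self, m, n, num_products=2, num_terms=2):
        super().__init__()
        self.A = nn.Parameter(0.1 * torch.randn(num_products, num_terms, m, m))
        self.B = nn.Parameter(0.1 * torch.randn(num_products, num_terms, n, n))
        self.num_products = num_products
        self.num_terms = num_terms

    def weight(self):
        d = self.A.shape[-1] * self.B.shape[-1]
        W = torch.eye(d)
        for k in range(self.num_products):
            # sum of Kronecker products of small factors
            S = sum(torch.kron(self.A[k, i], self.B[k, i])
                    for i in range(self.num_terms))
            W = S @ W  # accumulate the product over factors
        return W

    def forward(self, x):
        return x @ self.weight().t()

layer = ProductOfSumsLinear(m=28, n=28)   # 784 -> 784 map, MNIST-sized
x = torch.randn(32, 784)
print(layer(x).shape)                     # torch.Size([32, 784])
# trainable parameters: 2 * 2 * (28*28 + 28*28) = 6272,
# versus 784 * 784 = 614656 for an unconstrained dense weight.
```

Increasing num_terms or num_products grows the trainable-parameter count smoothly, which mirrors the abstract's claim that the parameter budget is not fixed and can be traded off against error rate.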

