LightLayers: Parameter Efficient Dense and Convolutional Layers for Image Classification

01/06/2021
by Debesh Jha, et al.

Deep Neural Networks (DNNs) have become the de-facto standard in computer vision, as well as in many other pattern recognition tasks. A key drawback of DNNs is that the training phase can be very computationally expensive. Organizations or individuals that cannot afford to purchase state-of-the-art hardware or tap into cloud-hosted infrastructure may face long waiting times before training completes, or might not be able to train a model at all. Investigating novel ways to reduce the training time is a potential solution to alleviate this drawback, enabling more rapid development of new algorithms and models. In this paper, we propose LightLayers, a method for reducing the number of trainable parameters in DNNs. LightLayers consists of the LightDense and LightConv2D layers, which are as efficient as regular Dense and Conv2D layers but use fewer parameters. We resort to matrix factorization to reduce the complexity of DNN models, resulting in lightweight models that require less computational power without much loss in accuracy. We have tested LightLayers on the MNIST, Fashion MNIST, CIFAR-10, and CIFAR-100 datasets. Promising results are obtained on MNIST, Fashion MNIST, and CIFAR-10, whereas on CIFAR-100 the method achieves acceptable performance while using fewer parameters.
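To illustrate the matrix-factorization idea behind a layer like LightDense, here is a minimal Keras sketch: the full weight matrix W is replaced by a product of two low-rank factors, A and B. The layer name follows the paper, but the rank hyperparameter k, the initializers, and the bias handling are assumptions for illustration, not the authors' exact implementation.

```python
# Minimal sketch of a low-rank, LightDense-style layer in Keras.
# W (in_dim x units) is approximated by A @ B, so the parameter count
# drops from in_dim*units to k*(in_dim + units), where k is the rank.
import tensorflow as tf


class LightDense(tf.keras.layers.Layer):
    def __init__(self, units, k=8, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.k = k  # rank of the factorization; controls the parameter budget

    def build(self, input_shape):
        in_dim = int(input_shape[-1])
        # Two small factors instead of one full in_dim x units weight matrix.
        self.a = self.add_weight(name="a", shape=(in_dim, self.k),
                                 initializer="glorot_uniform")
        self.b = self.add_weight(name="b", shape=(self.k, self.units),
                                 initializer="glorot_uniform")
        self.bias = self.add_weight(name="bias", shape=(self.units,),
                                    initializer="zeros")

    def call(self, inputs):
        # (batch, in_dim) @ (in_dim, k) @ (k, units) + (units,)
        return tf.matmul(tf.matmul(inputs, self.a), self.b) + self.bias
```

As a rough sense of the savings under this sketch: a regular Dense layer mapping 784 inputs to 128 units holds 784*128 + 128 = 100,480 parameters, while the factorized version with k=8 holds 8*(784 + 128) + 128 = 7,424, at the cost of constraining the weight matrix to rank k.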


research
03/05/2020

Permute to Train: A New Dimension to Training Deep Neural Networks

We show that Deep Neural Networks (DNNs) can be efficiently trained by p...
research
07/13/2020

Nested Learning For Multi-Granular Tasks

Standard deep neural networks (DNNs) are commonly trained in an end-to-e...
research
01/02/2020

Lightweight Residual Densely Connected Convolutional Neural Network

Extremely efficient convolutional neural network architectures are one o...
research
09/06/2018

ProdSumNet: reducing model parameters in deep neural networks via product-of-sums matrix decompositions

We consider a general framework for reducing the number of trainable mod...
research
05/29/2019

Less is More: An Exploration of Data Redundancy with Active Dataset Subsampling

Deep Neural Networks (DNNs) often rely on very large datasets for traini...
research
12/15/2022

Backdoor Attack Detection in Computer Vision by Applying Matrix Factorization on the Weights of Deep Networks

The increasing importance of both deep neural networks (DNNs) and cloud ...
research
09/06/2023

Adaptive Growth: Real-time CNN Layer Expansion

Deep Neural Networks (DNNs) have shown unparalleled achievements in nume...
