Deep Anchored Convolutional Neural Networks

04/22/2019
by Jiahui Huang, et al.

Convolutional Neural Networks (CNNs) have proven extremely successful at solving computer vision tasks. State-of-the-art methods favor deep network architectures for their accuracy, at the cost of a massive number of parameters and high weight redundancy. Previous works have studied how to prune CNN weights. In this paper, we go to the other extreme and analyze the performance of a network that stacks a single convolution kernel across layers, as well as other weight-sharing techniques. We name it the Deep Anchored Convolutional Neural Network (DACNN). Sharing the same kernel weights across layers reduces the model size tremendously: the network is compressed in memory by a factor of L, where L is the desired depth of the network, disregarding the fully connected layer used for prediction. The number of parameters in a DACNN barely increases as the network grows deeper, which allows us to build deep DACNNs without any concern about memory costs. We also introduce a partially shared-weights network (DACNN-mix) as well as an easy plug-in module, coined regulators, to boost the performance of our architecture. We validate our idea on three datasets: CIFAR-10, CIFAR-100, and SVHN. Our results show that our model saves massive amounts of memory while maintaining high accuracy.
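To make the core idea concrete, below is a minimal PyTorch sketch of anchored weight sharing: a single 3x3 convolution reused across the full depth of a stack. The name DACNNBlock, the ReLU placement, the fixed channel width, and the overall structure are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class DACNNBlock(nn.Module):
    """Sketch of the anchored weight-sharing idea: one convolution
    kernel reused across L layers (structure assumed, not the
    authors' exact architecture)."""

    def __init__(self, channels: int, depth: int):
        super().__init__()
        # The single "anchored" kernel shared by every layer in the stack.
        self.shared_conv = nn.Conv2d(channels, channels, kernel_size=3,
                                     padding=1, bias=False)
        self.act = nn.ReLU(inplace=True)
        self.depth = depth

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reapplying the same module reuses its weights, so the
        # parameter count stays constant regardless of depth.
        for _ in range(self.depth):
            x = self.act(self.shared_conv(x))
        return x

# A depth-20 stack holds the same 3x3x64x64 kernel as a depth-1 stack,
# i.e. roughly an L-fold memory saving over L independent conv layers.
block = DACNNBlock(channels=64, depth=20)
print(sum(p.numel() for p in block.parameters()))  # 36864, independent of depth
```

Because the same nn.Conv2d module is invoked at every step, increasing `depth` adds computation but no parameters, which is what yields the factor-of-L memory compression described above.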
