Memory Bounded Deep Convolutional Networks

12/03/2014
by Maxwell D. Collins et al.

In this work, we investigate the use of sparsity-inducing regularizers during the training of Convolutional Neural Networks (CNNs). These regularizers encourage fewer connections in the convolutional and fully connected layers to take non-zero values, in effect yielding sparse connectivity between hidden units in the deep network. This in turn reduces the memory and runtime cost of deploying the learned CNNs. We show that training with such regularization can still be performed with stochastic gradient descent, which means it can be adopted easily in existing codebases. Experimental evaluation of our approach on the MNIST, CIFAR, and ImageNet datasets shows that our regularizers can yield dramatic reductions in memory requirements. For instance, when applied to AlexNet, our method reduces memory consumption by a factor of four with minimal loss in accuracy.
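The abstract does not spell out the exact regularizer or update rule, but one standard way a sparsity-inducing penalty combines with stochastic gradient descent is a proximal step: after each SGD update, soft-threshold the weights so that small entries become exactly zero. The sketch below illustrates this on a toy PyTorch model; `SmallCNN`, `l1_prox_`, and the penalty strength `lam` are hypothetical names chosen for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    """Toy CNN standing in for the convolutional + fully connected
    layers whose connections the regularizer sparsifies."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)
        self.fc = nn.Linear(8 * 28 * 28, 10)

    def forward(self, x):
        x = F.relu(self.conv(x))
        return self.fc(x.flatten(1))

def l1_prox_(param, threshold):
    """In-place soft-thresholding: the proximal operator of the l1 norm.
    Weights whose magnitude falls below `threshold` become exactly zero."""
    with torch.no_grad():
        param.copy_(param.sign() * (param.abs() - threshold).clamp(min=0.0))

model = SmallCNN()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
lam = 1e-4  # hypothetical regularization strength, not from the paper

# One training step on random data (stand-in for MNIST-sized inputs).
x, y = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))
opt.zero_grad()
F.cross_entropy(model(x), y).backward()
opt.step()

# Proximal step: apply the sparsity-inducing penalty to the weights.
for p in (model.conv.weight, model.fc.weight):
    l1_prox_(p, lam * 0.1)  # threshold = lam * learning rate

sparsity = (model.fc.weight == 0).float().mean().item()
print(f"fraction of zero fully connected weights: {sparsity:.3f}")
```

Because the proximal step runs outside the gradient computation, it drops into an existing SGD training loop without changes to the loss or backward pass, which is consistent with the abstract's claim that the approach is easy to adopt in existing codebases.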
