Over the last few years, high-capacity models such as deep Convolutional Neural Networks (CNNs) have been used to produce state-of-the-art results for a wide variety of vision problems, including image classification and object detection. This success has been attributed in part to the feasibility of training models with large numbers of parameters and to the availability of large training datasets.
Recent work has made significant strides in the techniques used to train deep learning models, making it possible to optimize objective functions defined over millions of parameters. These techniques, however, require careful tuning of the optimization and initialization hyperparameters to ensure that the training procedure arrives at a reasonable model. In addition, simply performing the computations required for models with so many parameters has required leveraging high-performance parallel architectures such as GPUs [9, 11] or distributed clusters.
While the capacity of such deep models allows them to learn sophisticated mappings, it also introduces the need for good regularization techniques. Furthermore, these models suffer from high memory cost at deployment time due to their large size. The current generation of deep models [11, 14, 24] show a reasonable degree of generalization in part because of recent advances in regularization techniques such as Dropout. However, these and other approaches proposed in the literature fail to address the issue of high memory cost, which is particularly problematic in models that rely on multiple fully connected layers. The high memory cost becomes especially important when deploying applications on computationally constrained platforms such as mobile devices.
| Model | Top-1 accuracy | Top-5 accuracy | Model size |
|---|---|---|---|
| Caffe version | 57.28% | 80.44% | 233 MB |
| Sparse (ours) | 55.60% | 80.40% | 58 MB |
To overcome the aforementioned problems, we propose the use of sparsity-inducing regularizers for Convolutional Neural Networks. These regularizers encourage fewer connections in the convolutional and fully connected layers to take non-zero values, and in effect result in sparse connectivity between hidden units in the deep network. In doing so, the regularizers not only restrict the model capacity but also reduce the memory and runtime cost involved in deploying the learned CNNs.
We applied our method to the MNIST, CIFAR and ImageNet datasets. Our results show that one can produce models that reduce memory consumption by a factor of 3 or more with minimal loss in accuracy. We show how this can be used to improve the accuracy of vision classifiers using ensembles of deep networks without incurring the greater memory costs that would ordinarily result from having to store multiple models. Using sparsity regularization in this way significantly improves upon more typical ways researchers limit model complexity, e.g. changing the network topology by removing units or neurons. Finally, we show how the regularized training objectives can be efficiently optimized using stochastic gradient descent.
The key contributions of this paper are:
A set of sparsity-inducing regularization functions that we demonstrate are effective at reducing model complexity with no or minimal reduction in accuracy.
Updates for these regularizations that are easily implemented within standard existing stochastic-gradient-based deep network training algorithms.
Empirical validation of the effect of sparsity on CNNs on CIFAR and MNIST datasets.
1.1 Related Work
Regularization of Neural Networks
Weight decay was one of the first techniques for regularization of neural networks, and was shown to significantly improve the generalization of neural networks by Krogh and Hertz. It continues to be a key component of the training procedure for state-of-the-art deep learning methods.
Weight decay works by reducing the magnitude of the weight vector of the network. It yields a simple additional term in the weight updates at each iteration of the learning procedure: the update term is the gradient of the squared $\ell_2$-norm of the weights. An interesting observation is that the training procedure for a linear perceptron with weight decay is equivalent to learning a linear SVM using stochastic gradient descent, with the weight decay serving as the update that maximizes the margin of the classifier.
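The weight-decay update can be sketched in a few lines (a minimal NumPy illustration; the `lr` and `decay` values are hypothetical, not taken from the paper):

```python
import numpy as np

def weight_decay_update(w, grad, lr=0.1, decay=0.01):
    """One SGD step with L2 weight decay.

    The decay term is the gradient of (decay/2) * ||w||_2^2 up to a
    constant factor; it simply shrinks each weight toward zero in
    proportion to its magnitude.
    """
    return w - lr * (grad + decay * w)

# With a zero loss gradient, only the decay acts and every weight
# is scaled by the same factor (1 - lr * decay).
w = np.array([1.0, -2.0, 0.5])
w_new = weight_decay_update(w, np.zeros(3))
```

Note that the update never sets a weight exactly to zero; it only scales magnitudes down, which is why weight decay alone does not produce sparse models.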
While not common in CNNs or computer vision applications, building sparse networks was considered for more classical neural networks. These approaches were grouped under "pruning" methods, and include both regularization penalties and techniques for determining the importance of a given weight to the network's accuracy.
Hinton et al. proposed a regularization technique known as dropout, which now forms a basis for many state-of-the-art deep neural network models [11, 14, 24]. During training, some portion of the units in the network are "dropped": their output is fixed to zero and their weights are not updated during back-propagation. The output of these units is then multiplied by the dropout factor at test time. This is done in part as a computationally cheaper approximation to training an ensemble of sparse networks.
The sparse networks implicitly created by dropout are simplified in a direction orthogonal to the parameter regularization considered here. Dropout eliminates entire units or neurons, while we seek to reduce the number of parameters of each unit. Rather than having a smaller number of complex units, we try to train models that have a constant number of simpler units.
Comparison to Model Compression
A further method of constructing simpler models is the technique of model compression. Model compression relies on the availability of large amounts of unlabelled data, which it uses to build smaller and computationally cheaper models by teaching the compressed model from the output of a larger, more parameter-heavy model that has been trained to fit the target task. The unlabelled data is labelled using the original network and then used to train the smaller network, with the intuition that the smaller network can fit the larger network's outputs directly without needing strong regularization of its own. Furthermore, unlike our method, where sparsity is enforced explicitly through the use of an $\ell_1$ or $\ell_0$ norm on a large number of hidden units, the model compression work simply trains a network with a smaller number of hidden units. Like dropout regularization, it essentially maintains the dense connectivity structure between layers, while our method results in sparser connectivity between layers. In this work we target a similar setting, in which plenty of computational resources are available at training time, but we wish to use these to train a model that is suitable for deployment in a more constrained environment.
Some recent work [15, 30] has considered the same task but adopted a different approach: building networks with a set of low-rank filters in each layer. The low-rank network is constructed after training a network without this constraint, with the simpler network selected to approximate the original full-rank network.
Extensive work has also been done on training deep models that learn sparse representations of the data, while the learned parameters are themselves nonsparse. In an early work on models with this prior, Olshausen and Field used a basic neural network structure inspired by the V1 layer of the visual cortex to learn sparse representations of a set of images. More recent work has used the term sparse autoencoder to describe this type of structure, and in deep learning, autoencoders have been used to initialize multi-layer networks as a method of layer-wise pre-training.
Indeed, much recent work on designing novel architectures for deep learning in vision applications seeks to reduce or sparsify the parameter set. Convolutional layers themselves, in addition to enforcing an assumption of translation invariance, are meant in part simply to reduce the number of parameters. A convolution can be represented by a fully-connected layer, though such a layer is unlikely to learn this mapping with a finite amount of training data and computation time. Some prior work has investigated automatically determining CNN structure, though by searching the space of layers and layer sizes that may be used in place of simpler, higher-parameter-count choices.
2 Convolutional Neural Nets
Convolutional Neural Nets are a variety of deep learning methods in which convolutions form the early layers. These have become the standard technique for performing deep learning on Computer Vision problems, as they explicitly deal with vision-based primitives. In the first of these layers, learned filters are convolved with the input image. The output of each of these kernels, usually composed with some nonlinear mapping, is then convolved with another set of learned filters in the second layer. Each subsequent convolution layer then convolves another set of filters with the output of each convolution in the previous layer.
CNNs are frequently augmented with final "fully connected" layers that are more similar to classical neural networks. These provide a mapping from the features learned by the convolutional layers to the output labels. In most state-of-the-art networks the fully-connected layers are responsible for the majority of the parameter count of the network, though some recent work has considered models consisting entirely of convolutional layers with a novel architecture, or converted the fully-connected layers into convolutions. The test models we consider for image classification have in their final layer a number of units equal to the number of output classes for the benchmark task. Each unit corresponds to an output class, and the unit that produces the largest output on a given input image determines the class label the model predicts for that image.
In this work, the terms "weights" and "parameters" are used interchangeably to refer to the trained weighting on the signal sent from one unit to another connected unit in the next layer during forward propagation. The set of parameters of a CNN also includes the individual elements, per pixel and channel, of a learned convolution filter. In a classic "fully connected" or "inner product" layer, each unit in the layer takes the inner product of its weight vector and the output of all units in the previous layer. This may then be transformed by some nonlinear function, or simply output as-is to the next layer. Individual CNN units will frequently also have an additive "bias," which we also count in our parameter costs but which contributes little to the overall size and complexity of most networks.
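The inner-product computation just described can be sketched as follows (a minimal NumPy illustration; the shapes and values are hypothetical):

```python
import numpy as np

def inner_product_layer(x, W, b):
    """Fully connected ("inner product") layer: each output unit takes
    the inner product of its weight vector (one row of W) with the
    previous layer's output x, plus an additive per-unit bias."""
    return W @ x + b

# Two output units, each with its own 2-element weight vector and bias.
x = np.array([1.0, 2.0])
W = np.array([[1.0, 0.0],
              [0.0, 3.0]])
b = np.array([0.5, -0.5])
out = inner_product_layer(x, W, b)
```

Every entry of `W` is a trainable parameter, which is why fully-connected layers dominate the parameter count of most networks.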
2.1 Optimization Perspective on CNN Training
The standard methods for training deep learning models may be expressed as stochastic gradient descent on a given loss function between the output prediction and the training labels. The training procedure is given a set of input vectors $x_1, \dots, x_n$ and the corresponding ground-truth labels $y_1, \dots, y_n$. A basic feed-forward network can be treated as a function $f(x; W)$ of the input data $x$ and the collection $W$ of learned parameters for all layers. The overall optimization problem on these weights may then be posed as follows:

$$\min_W \; \frac{1}{n} \sum_{i=1}^{n} L\big(f(x_i; W), y_i\big) + \lambda R(W) \qquad (1)$$

Here, $L$ is a loss function between the true label $y_i$ and the prediction of the network on the $i$-th training example. The average over the training data is the empirical risk, a proxy for the expected risk under the true data distribution. The function $R$ is the regularization term, with a weighting hyperparameter $\lambda$, which seeks to reduce the hypothesis space. For ordinary weight decay, this is a squared norm: $R(W) = \|W\|_2^2$. For image classification the loss will be a soft-max loss.
The objective in (1), along with its gradients, is very expensive to compute for practical problems. The state of the art in Computer Vision considers very large networks with up to 22 layers and 60 million parameters. Further, these models are trained on large datasets such as ImageNet, with the 2012-2014 classification challenge set having 1.2 million training images. The optimization of this objective is made far more computationally tractable by using stochastic gradient descent, or stochastic variants of adaptive and accelerated gradient methods.
Instead of using the full objective (1), these methods approximate the gradient using a sample $i$ drawn uniformly at random from the training data, taking instead the gradient of the function:

$$g_i(W) = L\big(f(x_i; W), y_i\big) + \lambda R(W) \qquad (2)$$

This stochastic gradient has the key property that it is in expectation equal to the true gradient of the full objective function:

$$\mathbb{E}_i\big[\nabla g_i(W)\big] = \nabla \left( \frac{1}{n} \sum_{i=1}^{n} L\big(f(x_i; W), y_i\big) + \lambda R(W) \right) \qquad (3)$$
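As a sanity check of this unbiasedness property, the following sketch uses a simple least-squares loss on toy data (an illustrative stand-in, not the soft-max loss used in the paper) and verifies that averaging per-example gradients recovers the full gradient:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))  # toy design matrix (hypothetical data)
y = rng.normal(size=200)
w = rng.normal(size=5)

def full_grad(w):
    # Gradient of the mean squared error over the whole training set.
    # The regularizer is omitted: its gradient is identical in the full
    # and sampled objectives, so it does not affect unbiasedness.
    return 2 * X.T @ (X @ w - y) / len(X)

def sample_grad(w, i):
    # Gradient of the loss on the single sampled example i.
    return 2 * X[i] * (X[i] @ w - y[i])

# Averaging the per-example gradients over every example recovers the
# full gradient exactly, so a uniformly sampled gradient is unbiased.
avg = np.mean([sample_grad(w, i) for i in range(len(X))], axis=0)
```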
3 Regularization Updates
We encourage sparsity in the networks by applying simple updates to the set of weights in each layer during training. First, though, we consider the regularization functions themselves. Regularization is typically based on a norm function on the parameters of the model taken as a vector $w$. Define the $\ell_p$ norms as

$$\|w\|_p = \left( \sum_i |w_i|^p \right)^{1/p} \qquad (4)$$

for $p \in [1, \infty)$. This definition is typically extended to the $\ell_0$ and $\ell_\infty$ norms by taking limits on $p$. Of special interest to sparsity regularization is the $\ell_0$ "norm," as $\|w\|_0$ is the count of the number of nonzeros of $w$.
3.1 $\ell_1$ Regularization and the Shrinkage Operator
The most common classical technique for learning sparse models in machine learning is $\ell_1$ regularization. The $\ell_1$ norm of a vector is the tightest convex relaxation of the $\ell_0$ norm. It has been shown for some classes of machine learning models that regularization terms consisting of an $\ell_1$ norm can provide a provably tight approximation, or even an exact solution, to a corresponding $\ell_0$ regularization that directly penalizes or constrains the number of nonzero parameters of the model.
We can optimize an $\ell_1$ regularization term by updating the weights along a subgradient of the norm. A negative multiple of a subgradient gives a descent direction, and updating the weights a sufficiently small distance along this ray will reduce the regularization term. Its sum with the gradient of the loss terms in (1) gives a subgradient of the whole objective. Since the $\ell_1$ norm is differentiable almost everywhere, so that the subdifferential is almost everywhere a singleton set, it is usually not necessary in practice in CNN training to choose a particular subgradient. The choice of subgradient we investigate in this paper is therefore simply the sign operator, applied element-wise to the vector of weights: it can be seen that $\frac{\partial}{\partial w_i}|w_i| = \operatorname{sign}(w_i)$ for $w_i \neq 0$. The resulting update is, for some step size $\eta$ and regularization weight $\lambda$, the element-wise operator:

$$w_i \leftarrow w_i - \eta \lambda \operatorname{sign}(w_i) \qquad (5)$$
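This sign-based subgradient update can be sketched element-wise as follows (illustrative NumPy; the `lr` and `lam` values are hypothetical). Note how a weight already within `lr * lam` of zero overshoots and flips sign rather than landing exactly on zero:

```python
import numpy as np

def l1_subgradient_update(w, lr=0.1, lam=0.01):
    """Subgradient step on lam * ||w||_1: move each weight a fixed
    distance lr * lam toward zero (np.sign(0) is 0, so exact zeros
    stay put)."""
    return w - lr * lam * np.sign(w)

# The large weights shrink by a constant 0.001; the tiny weight
# overshoots zero and changes sign instead of becoming zero.
out = l1_subgradient_update(np.array([1.0, -1.0, 0.0005]))
```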
This update is currently implemented in Caffe as a type of weight decay that can be applied to the whole network.
This update has the significant shortcoming that, while it will produce a large number of weights that are very near zero, it will almost never output weights that are exactly zero. Since later layers can still receive and learn to magnify input carried by the resulting small nonzero weights, and earlier layers will receive backpropagated gradients along these weights, these near-zero parameters cannot be ignored and the model is not necessarily any simpler. When seeking to construct a sparse model, the natural technique is to threshold away these very small weights that an optimal solution would have set to zero. Finding a good threshold for this heuristic requires some attention to the schedule of learning rates and the level of regularization relative to the gradients near the end of training.
A regularization step with significantly better empirical and theoretical properties is the shrinkage operator:

$$w_i \leftarrow \operatorname{sign}(w_i)\,\big(|w_i| - \eta\lambda\big)_+ \qquad (6)$$

Here the notation $(x)_+ = \max(x, 0)$ refers to the positive component of a scalar $x$. The shrinkage operator is among the oldest techniques for sparsity-inducing regularization. A key property for deep network training is that this operator will not allow weights to change sign and "overshoot" zero in an update. The operator will output actual zero weights rather than small weights oscillating in sign at each iteration. For a suitable choice of $\lambda$, it eliminates the need to consider thresholding. Neighboring layers receive actual zero input and backpropagation along a zeroed connection, so they can properly learn with the zero weight.
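The shrinkage (soft-thresholding) operator can be sketched as follows (a minimal NumPy sketch; the threshold value in the example is illustrative):

```python
import numpy as np

def shrink(w, t):
    """Soft-thresholding (shrinkage) operator:
    sign(w) * max(|w| - t, 0), applied element-wise.

    Weights within t of zero become exactly zero; all others move
    a distance t toward zero without ever crossing it."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

# Unlike the sign-subgradient step, the small weight 0.05 is set to
# an exact zero rather than overshooting and flipping sign.
out = shrink(np.array([1.0, -0.3, 0.05]), 0.1)
```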
This shrinkage update is an example of a proximal mapping (also called a "proximal operator"), and alternating it with descent steps along the gradient of the loss would yield a proximal gradient method. These methods generalize and extend the LASSO, and have been applied to e.g. group sparsity regularizations. Proximal mappings are also effective when using approximate or stochastic gradients. This was demonstrated by Tsuruoka et al. for $\ell_1$-regularized regression, and Schmidt et al. provide convergence guarantees in the case of a convex problem for both stochastic gradient and accelerated variants.
3.2 Projection to $\ell_0$ balls
While $\ell_1$ regularization is known to induce or encourage sparsity in shallow machine learning models, the direct way to construct sparse models is to consider the $\ell_0$ norm.
We propose a simple regularization operator to train models under $\ell_0$ norm constraints. Under this update, every $T$ iterations we set to zero all but the $k$ largest-magnitude elements of the parameter vector. This imposes the hard constraint that $\|w\|_0 \le k$ for some integer $k$. This operator can be seen as a projection onto an $\ell_0$ ball, namely:

$$\operatorname{proj}_k(w) = \underset{\|v\|_0 \le k}{\operatorname{argmin}} \; \|v - w\|_2^2 \qquad (7)$$

This "projection" matches the operator of hard-thresholding away the least-magnitude elements, as the quantity the projection seeks to minimize in (7) is the total squared magnitude of the elements that are zero in the optimal $v$ but not in $w$.
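The projection can be sketched by keeping only the $k$ largest-magnitude entries (a minimal NumPy sketch):

```python
import numpy as np

def project_l0(w, k):
    """Project w onto the l0 ball {v : ||v||_0 <= k} by zeroing all
    but the k largest-magnitude entries. Among all vectors with at
    most k nonzeros this minimizes the squared distance to w."""
    v = np.zeros_like(w)
    if k > 0:
        # indices of the k largest |w_i|
        idx = np.argsort(np.abs(w))[-k:]
        v[idx] = w[idx]
    return v

# The two largest-magnitude weights survive; the rest are zeroed,
# and surviving weights are left unmodified.
out = project_l0(np.array([0.1, -2.0, 0.5, 0.0]), 2)
```

A design note: unlike the $\ell_1$-based updates, this operator never changes the values of the weights it keeps, only which weights survive.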
This $\ell_0$-regularization update is inspired by the "projected gradient" methods frequently used in convex optimization. In these methods, either the iterate or the gradient along which the next iterate is generated is projected onto a convex feasible set. Consider the former case, which is most similar to our update. If the optimum is not in the interior, this basic procedure will search along the boundary of the feasible set. A theoretical analysis of projected gradient algorithms can be gleaned from the more general analysis of proximal operators. Projection onto a convex set $C$ is the proximal operator for the extended-valued indicator function:

$$\iota_C(w) = \begin{cases} 0 & \text{if } w \in C \\ +\infty & \text{otherwise} \end{cases} \qquad (8)$$

Therefore, projection onto a convex feasible set can be seen as a special case of proximal stochastic gradient methods; see the proximal-methods literature for discussion specifically of projection within stochastic gradient methods.
Results from convex optimization provide theoretical guarantees of the efficacy of proximal stochastic gradient methods. In our case, however, neither the objective nor the feasible set onto which we are projecting is convex. For this reason, in shallow models the $\ell_0$ norm is typically eschewed as intractable to optimize, and optimization problems incorporating $\ell_0$-norm terms lack many of the guarantees available for convex models. When training deep learning models, these guarantees are already absent. Moreover, recent advances in training techniques for deep learning have made optimizing such models for Computer Vision tasks fairly robust. We empirically find that directly using the $\ell_0$ norm works, and stochastic gradient solvers successfully optimize the training error.
Our regularization updates were implemented as a modification to Caffe. The $\ell_1$ update matches the one already present, though we extended that implementation to allow mixing different types of regularization per-layer. Therefore, for example in Figure 6, we were able to retain ordinary weight decay for all but select portions of the model in order to analyze the effect of sparsity on particular parts of the model. As targets for our regularization we use the baseline networks provided with that distribution, LeNet and CIFAR-10 Quick.
For larger-scale experiments, we demonstrate the usefulness of these regularizers on a network developed for the ImageNet classification challenge. This network has five convolutional layers and three fully-connected layers, with a total of 60 million parameters. Upon its publication, it was one of the largest neural networks described in the literature. Due to the much larger scale of both the dataset and the network, we perform experiments using a computationally cheaper "fine-tuning" procedure, in which we use an existing non-sparse network as the initialization for sparsity-regularized training. The starting point for this training was a Caffe-based duplication of the "AlexNet" network (this can be downloaded using the Caffe tools from the "model zoo": http://caffe.berkeleyvision.org/model_zoo.html). The network was progressively sparsified in stages over 200,000 iterations, taking approximately one week on a GeForce GTX TITAN. Thresholding/projection was done every 100 iterations. The number of nonzeros remaining after thresholding was manually reduced in steps: we tightened the sparsity constraint each time the $\ell_0$-constrained training converged on a network with accuracy comparable to the original dense weights.
As a first experiment, we verify that the stochastic gradient descent solver is able to successfully optimize the problems given by our new regularizations. This can be seen by looking at the training loss and accuracy, plotted in Figure 3. In this optimization, we considered training a baseline CIFAR-10 model with sparsity penalties on the fully-connected layers.
Optimization with the $\ell_0$-norm projection was far more robust than we expected. We did not observe any case where a network with baseline regularization was successfully trained but an $\ell_0$-regularized variant of the model failed to find a model reasonably close to the optimum w.r.t. the training loss. We hypothesize that the techniques used by existing deep learning training methods to optimize their highly nonconvex objectives are already powerful enough, in practice, to cope with the additional complexity incurred by the nonconvex feasible region.
This held across all choices of layers, structured updates, and other hyperparameters. In addition to this experiment, the surprisingly high test accuracies for very sparse models seen in this work’s other experiments also suggest that the optimization was successful in finding a reasonable fit to the training data in those cases.
4.2 Layer-wise distribution of sparsity
We observed on standard experimental networks that each layer of the network performs very differently as we vary the level of sparsity-inducing regularization. It can for instance be seen in Figure 6 that significant decreases in accuracy are seen when sparsifying the first convolutional layer at nonzero ratios for which the fully-connected layers are still able to describe the target concept. The distribution of sparsity between different layers is the hyperparameter with the greatest effect on the performance of sparse deep models. The procedure below allows us to automatically determine sparsity hyperparameters incorporating some of the intuitions given by the layer-specific results in Figure 6.
The choice of which layers to sparsify was made by a greedy search over the parameter space, reducing the number of nonzeros while maximizing the accuracy on a validation set. The procedure was:

1. Separate a validation set from the training data.
2. Repeat until a model of the desired sparsity is found:
   (a) For each layer, reduce that layer's number of nonzeros by 20% and train a network.
   (b) Remove 20% of the nonzeros from the layer whose network in (a) produced the best validation accuracy.
   (c) Use this network as the start of the next iteration.
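This greedy search can be sketched as follows. The `train_and_validate` callback is a stand-in for actually training a network under the given per-layer $\ell_0$ budgets and reporting validation accuracy; the layer names and budgets are hypothetical:

```python
def greedy_sparsify(layer_nnz, train_and_validate, target_total):
    """Greedy layer-wise sparsity search (sketch).

    layer_nnz: dict mapping layer name -> current nonzero budget.
    train_and_validate: assumed callback that trains a network under
        the given per-layer budgets and returns validation accuracy.
    target_total: stop once the total budget is at or below this.
    """
    nnz = dict(layer_nnz)
    while sum(nnz.values()) > target_total:
        candidates = {}
        for layer in nnz:
            trial = dict(nnz)
            trial[layer] = int(trial[layer] * 0.8)  # cut this layer's budget by 20%
            candidates[layer] = (train_and_validate(trial), trial)
        # keep the candidate whose network validated best
        best = max(candidates, key=lambda l: candidates[l][0])
        nnz = candidates[best][1]
    return nnz
```

With a validation score that penalizes sparsifying the convolution layer, the search concentrates all of the pruning on the fully-connected budget, mirroring the skew we observed in practice.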
We noted that for a baseline CIFAR-10 network this procedure yielded nonzero distributions heavily skewed toward sparsifying the later layers of the network. The level of sparsity, and the number of nonzero parameters, chosen by this procedure for each layer of the best candidate networks is shown in Figure 4. From the normalized plot on the right of Figure 4, it is clear that the tightest sparsity constraints were imposed on the final convolution layer and the fully-connected layers, while the first two convolution layers were relatively untouched. In terms of the number of parameters set to zero, the greatest reduction in weights was in the final convolution and first fully-connected layers. We found that these networks significantly outperformed baselines that imposed sparsity constraints uniformly across all layers.
MNIST is a set of handwritten digits, with 60,000 training examples and 10,000 test examples. Each image is 28×28 with a single channel. MNIST is among the most commonly used datasets in machine learning, computer vision, and deep networks.
CIFAR-10 is a subset of the "80 million tiny images" dataset for which ground truth class labels have been provided. There are 10 classes, with 5,000 training images and 1,000 test images per class, for a total of 50,000 training images and 10,000 test images. Each image is RGB, with 32×32 pixels. Other than subtracting the mean of the training set, we do not consider whitening or data augmentation, obtaining higher absolute error rates but allowing us to compare against an easily reproducible baseline.
The AlexNet fine-tuning experiments used the ILSVRC 2012 training set. Simple data augmentation using random crops and horizontal mirroring was used during training, along with subtraction of the training-set mean image.
Convolutional Neural Nets trained with existing regularization methods based on weight decay will have many weights of very small magnitude, due at least in part to the number of redundant parameters that never receive large updates during back-propagation. These networks do not, however, approximate a sparse network, nor can they be made sparse without additional training using regularizations like those presented in this work. In Figure 5 we present the result of an experiment comparing models trained using our $\ell_0$ projection to a model using ordinary weight decay. In this experiment, we first threshold weights with magnitude less than $\epsilon$, for varying choices of $\epsilon$, on a weight-decay-regularized model trained on CIFAR-10. The distribution of nonzero parameters between layers in the thresholded model is then duplicated in a model trained with $\ell_0$ regularization. This is done by imposing a per-layer constraint through our projection operator, where the maximum number of nonzeros allowed is the same as in that layer of the thresholded model. The resulting test accuracies show that a far more useful model is obtained at high levels of sparsity if the thresholding is alternated with training, as in our projection method, rather than applied only as a post-processing step.
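The per-layer budget transfer from the thresholded model can be sketched as follows (illustrative NumPy; the layer names, weights, and threshold are hypothetical):

```python
import numpy as np

def layerwise_budgets(layers, eps):
    """Count the survivors of magnitude thresholding in each layer.

    layers: dict mapping layer name -> flat weight array.
    Returns a dict of per-layer nonzero counts; these counts become
    the per-layer l0 constraints for the retrained sparse model."""
    return {name: int(np.sum(np.abs(w) >= eps))
            for name, w in layers.items()}

# Hypothetical two-layer model: weights below eps=0.01 are pruned,
# and each layer's survivor count becomes its l0 budget.
layers = {'conv1': np.array([0.5, 0.001, -0.2]),
          'fc1': np.array([0.0001, 1.0])}
budgets = layerwise_budgets(layers, 0.01)
```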
5.2 Accuracy and Regularization Updates
[Figure 6 panels: Convolution Layer and Fully-connected results for each network.]
MNIST and CIFAR-10
A key empirical result of this work is that sparse models achieve surprisingly high accuracies even as the number of nonzero parameters gets quite small. In Figure 6 we look at limited experiments where particular layers of two baseline MNIST and CIFAR-10 models are sparsified with various penalties. We consider as baselines both thresholding weight-decay-regularized models and varying the network structure to directly reduce the number of parameters.
We note that, while the $\ell_0$ projection is the regularization that imposes sparsity most directly, the $\ell_1$-based regularizations perform comparably well for a "middle" range of regularization strength. This is as would be expected based on the literature on sparsity-inducing regularization for shallow models. The $\ell_0$ projection tends to outperform the subgradient and shrinkage updates as the number of nonzeros approaches within an order of magnitude of the dense model. This can be explained by the $\ell_1$-based updates' effect on the magnitude of the regularized weights, in contrast to the $\ell_0$ projection, which does not modify the largest-magnitude weights at all. This distinction becomes more important as less sparse models retain more small-magnitude weights. It is also less visible on MNIST, as all models quickly reach a "ceiling" accuracy. In the other direction, the hard-thresholding/$\ell_0$ update produces models of moderate accuracy at higher levels of sparsity. In this range, for higher regularization strength, the $\ell_1$ updates also frequently fail to produce very sparse models at all, due to a discontinuity in the regularization path between the sparser models shown in Figure 6 and models that are all zero.
Our later experiments primarily use the $\ell_0$ projection because it requires far less intensive hyperparameter tuning. The different sparsity levels for the $\ell_1$-based updates are only seen in a very narrow range of values for the regularization multiplier; outside this range we see either dense models or models that are completely zero. By contrast, good models are produced for nearly any choice of the number of nonzeros imposed by an $\ell_0$ constraint.
In the AlexNet experiments we focus on applying our $\ell_0$-projection regularization, and only to the fully-connected layers, as these are together responsible for 96% of the total number of weights in the network.
The Top-1 and Top-5 validation accuracies of the original networks, the Caffe duplication, and our sparse version of the Caffe duplication are shown in the table in Figure 1. Note that the Caffe duplication achieves slightly lower validation accuracies than the original networks trained by Krizhevsky et al. Initially, only the final fully-connected layer was sparsified, reduced to 400,000 nonzero weights out of 4,096,000 in the original dense network (plus 1,000 per-neuron biases); this is roughly ten times smaller than the original parameter set of this layer in the dense network. In the second stage, the other two fully-connected layers were also regularized, producing a network with 3 million parameters in each of these two layers (6 million total), out of a total of 54.5 million weights in these two layers of the original dense network. Overall, our sparse network has all but 14% of the weights of the network set to zero.
To compare, we also directly thresholded the network without additional training to have the same number of nonzeros in each layer. This yields a network that achieves only 38.92% Top-1 validation accuracy and 63.44% Top-5 validation accuracy. The accuracies reported here are done without test-time oversampling or any similar test-time data augmentation methods.
Our results suggest that the very high parameter counts in deep learning models include a number of redundant weights. Much of the computational resources and model complexity incurred by these very large models is spent to yield relatively little benefit in terms of test-time accuracy. Given a fixed budget of some computational resource (e.g. memory), this is much better spent on more effective ways to increase accuracy. A particularly powerful method for improving test-time accuracy, known to work quite well when maximizing accuracy on difficult machine learning benchmarks  is building ensembles that combine the output of multiple models.
| Ensemble size | Nonzeros per model | Total nonzeros | Top-1 accuracy |
|---|---|---|---|
| 2 | 71,770 (49.3%) | 143,540 | 77.40% ± 0.192% |
| 3 | 46,023 (31.6%) | 138,069 | 77.18% ± 0.215% |
| 4 | 36,333 (25.0%) | 145,332 | 75.96% ± 0.116% |
| 5 | 28,249 (19.4%) | 141,245 | 74.60% ± 0.155% |
Table 1: Building ensembles of sparse models under a parameter budget. The 1-model case was trained on the original dataset as a baseline; all others on bagged resamplings. Accuracies given are means with standard deviations across multiple trials, with different models and bagged datasets in each trial.
We construct ensembles using bagging, a classical machine learning technique in which each member of the ensemble is trained on a random resampling of the training data. Bagging is frequently used where the ensemble members are expected to overfit or produce unstable predictions. In Table 1 we show the resulting accuracy as we grow the ensemble under a parameter budget. For each ensemble we train, we seek to maintain a set of nonzero weights that does not grow in size even as we consider greater numbers of predictors in the ensemble. This can be done by sparsifying the individual members of the ensemble with the regularizations presented in this work. This targets, e.g., a setting such as a mobile device with limited memory, where building ensembles of the full model quickly becomes prohibitively expensive for CNNs. Using memory as a proxy for model capacity and power, this experiment further allows us to explore the tradeoffs in how we build predictors and where model capacity should be spent to get the best performance at deployment. To train an ensemble with $m$ members, we repeat the following steps $m$ times:
Resample the training data, with replacement, to get another training set of the same size.
Take a layer-wise distribution of nonzeros, given by the method in Section 4.2, that matches the desired total number of nonzeros.
Train a model on the resampled data, with a per-layer sparsity constraint enforcing this distribution.
At test time, we run each CNN as normal over the test dataset, average their output layers, and predict the class corresponding to the largest average output.
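The resample-train-average procedure above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `train_model` is a hypothetical stand-in for training a sparsified CNN under the fixed layer-wise nonzero budget (steps 2-3), and NumPy is used for the resampling and output averaging.

```python
import numpy as np

def bagged_resample(X, y, rng):
    """Step 1: resample the training data with replacement, at the same size."""
    idx = rng.integers(0, len(X), size=len(X))
    return X[idx], y[idx]

def train_bagged_ensemble(X, y, train_model, n_members, seed=0):
    """Repeat the resample-and-train steps once per ensemble member.
    `train_model` is a hypothetical stand-in for steps 2-3 (training a
    sparsified model under a fixed layer-wise nonzero budget)."""
    rng = np.random.default_rng(seed)
    return [train_model(*bagged_resample(X, y, rng)) for _ in range(n_members)]

def ensemble_predict(models, X):
    """Average the models' output layers, then predict the class with the
    largest average output."""
    mean_output = np.mean([m(X) for m in models], axis=0)
    return np.argmax(mean_output, axis=1)
```

Because each member sees a different resampling of the data, averaging their outputs reduces the variance of the combined predictor, which is the usual motivation for bagging unstable models.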
We observe a significant increase in accuracy for smaller ensembles. As the size of the ensemble grows and the models become sparser, however, this yields diminishing returns as the individual models can no longer sufficiently approximate the target task.
5.4 Reduced Training Data
A key property of regularization is its ability to improve the generalizability
of a learned model. If there is insufficient training data to properly estimate the true underlying distribution, the minimizer of the empirical risk will be different from the best model for the expected risk under the true distribution. The model will then “overfit” the training data rather than correctly learning the target concept.
In Figure 7, we test this assertion using models trained with our regularization and with weight decay. We randomly subsample the CIFAR-10 training dataset to produce a series of smaller datasets with increasingly insufficient training data. Two results from this experiment match what machine learning theory predicts. First, the differences between simpler and more complex models narrow with less training data, on the left-hand side of the plots; indeed, the simpler models begin to outperform the more parameter-heavy dense models in some cases when little training data is available. Second, for all models, as more training data becomes available the training accuracy decreases while the test accuracy on unseen data increases; this change is much less pronounced for simpler models.
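A sketch of this subsampling setup is below. The paper only states that the training set is randomly subsampled; drawing the subsets as nested prefixes of one random permutation is an assumption made here so that each smaller dataset is contained in the larger ones, isolating the effect of dataset size.

```python
import numpy as np

def subsample_schedule(X, y, fractions, seed=0):
    """Yield (fraction, X_sub, y_sub) for a series of random subsamples
    of the training set, e.g. fractions = [0.1, 0.25, 0.5, 1.0].
    Subsets are nested (smaller contained in larger), an assumption made
    here to isolate the effect of training-set size."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(X))  # one shared ordering -> nested subsets
    for frac in fractions:
        n = max(1, int(frac * len(X)))
        idx = order[:n]
        yield frac, X[idx], y[idx]
```

Each subsampled dataset would then be used to train both the regularized and the weight-decay baselines, recording train and test accuracy at every size.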
In this work we present a powerful technique for regularizing and constructing simpler deep models for vision. This may be a key tool for training deep learning vision models that are meant to be deployed in resource-constrained environments, or for increasing the generalizability of existing models. Sparsity-inducing regularization provides a significantly better way to reduce a model's parameter count than baselines such as reducing the number of units in the network or simple magnitude thresholding.
An interesting empirical observation we see is that very sparse models manage to do surprisingly well on benchmark image classification tasks. We leverage this not only to construct very simple models, but also to build ensembles of these sparse models that outperform the baseline dense models while still staying within fixed resources.
-  Robert M. Bell and Yehuda Koren. Lessons from the netflix prize challenge. SIGKDD Explor. Newsl., 9(2):75–79, December 2007.
-  Léon Bottou. Large-scale machine learning with stochastic gradient descent. In COMPSTAT, pages 177–186, 2010.
-  Leo Breiman. Bagging predictors. Machine Learning, 24(2):123–140, 1996.
-  Cristian Buciluǎ, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression. In SIGKDD, pages 535–541, 2006.
-  Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Marc’Aurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, Quoc V. Le, and Andrew Y. Ng. Large scale distributed deep networks. In NIPS, pages 1223–1231. 2012.
-  Jia Deng, Wei Dong, R. Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, pages 248–255, June 2009.
-  David L. Donoho. Compressed sensing. Information Theory, IEEE Transactions on, 52(4):1289–1306, April 2006.
-  Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv:1207.0580, 2012.
-  Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. In ACM Multimedia, pages 675–678, 2014.
-  Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, Computer Science Department, University of Toronto, 2009.
-  Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, pages 1097–1105, 2012.
-  Anders Krogh and John A. Hertz. A simple weight decay can improve generalization. In NIPS, pages 950–957. 1992.
-  Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, Nov 1998.
-  Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. arXiv:1312.4400, 2013.
-  Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. In BMVC, 2014.
-  Bruno A. Olshausen and David J. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37(23):3311 – 3325, 1997.
-  Benjamin Recht and Christopher Ré. Parallel stochastic gradient algorithms for large-scale matrix completion. Mathematical Programming Computation, 5(2):201–226, 2013.
-  Russell Reed. Pruning algorithms-a survey. Neural Networks, IEEE Transactions on, 4(5):740–747, Sep 1993.
-  Mark Schmidt, Nicolas L. Roux, and Francis R. Bach. Convergence rates of inexact proximal-gradient methods for convex optimization. In NIPS, pages 1458–1466. 2011.
-  Pierre Sermanet, David Eigen, Xiang Zhang, Michaël Mathieu, Rob Fergus, and Yann LeCun. Overfeat: Integrated recognition, localization and detection using convolutional networks. arXiv:1312.6229, 2014.
-  Shai Shalev-Shwartz, Yoram Singer, Nathan Srebro, and Andrew Cotter. Pegasos: Primal estimated sub-gradient solver for SVM. Mathematical programming, 127(1):3–30, 2011.
-  Jasper Snoek, Hugo Larochelle, and Ryan P. Adams. Practical bayesian optimization of machine learning algorithms. In NIPS, pages 2951–2959. 2012.
-  Ilya Sutskever, James Martens, George Dahl, and Geoffrey E. Hinton. On the importance of initialization and momentum in deep learning. In ICML, pages 1139–1147, 2013.
-  Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. arXiv:1409.4842, 2014.
-  Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), 58(1):267–288, 1996.
-  Antonio Torralba, Rob Fergus, and William T. Freeman. 80 million tiny images: A large data set for nonparametric object and scene recognition. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 30(11):1958–1970, Nov 2008.
-  Yoshimasa Tsuruoka, Jun'ichi Tsujii, and Sophia Ananiadou. Stochastic gradient descent training for L1-regularized log-linear models with cumulative penalty. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 477–485, 2009.
-  Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In ICML, pages 1096–1103, 2008.
-  Stephen J. Wright, Robert D. Nowak, and Mário A. T. Figueiredo. Sparse reconstruction by separable approximation. Signal Processing, IEEE Transactions on, 57(7):2479–2493, July 2009.
-  Xiangyu Zhang, Jianhua Zou, Xiang Ming, Kaiming He, and Jian Sun. Efficient and accurate approximations of nonlinear convolutional networks. arXiv:1411.4229, 2014.
Appendix A Memory Usage
To describe what the sparsity levels mean in concrete computational terms, we consider three storage formats, each of which allows fast computation with sparse networks. When calculating the memory usage of sparse networks, we assume the optimal choice among the following formats:
Dense: One can ignore any sparsity present in the network and store the original dense weights. For very sparse networks much of this storage is filled with zeros, but for networks with few zero weights the dense format is still cheaper, since sparse data structures carry additional overhead.
Bitmask: We keep a simple bitmask with one bit per parameter: if the bit corresponding to a parameter is one, then the value of that parameter is nonzero. The nonzero parameters are then stored in a flat array. Additional indices into the flat array, depending on how computation is done with these weights, can allow this format to be used directly in CNNs at runtime.
Indexed: Only the nonzero parameters are stored, each one as a pair consisting of an index into the original weight array alongside the weight value. This is the traditional way to handle sparse vectors.
In some types of layers more specific sparse formats such as Compressed Sparse Row (CSR) matrices may also be suitable for computation. These will have a memory cost approximately equal to the “indexed” case.
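The cost comparison between the three formats can be sketched as a small calculator. The 4-byte value width follows the single-precision assumption below; the 4-byte index width for the "indexed" format is an assumption made here for illustration.

```python
def sparse_storage_bytes(total_params, nonzeros, value_bytes=4, index_bytes=4):
    """Approximate storage cost, in bytes, of the three weight formats,
    assuming 32-bit values and (for the indexed format) 32-bit indices.
    Returns the per-format costs and the cheapest format, mirroring the
    'optimal choice among formats' used when reporting memory usage."""
    dense = total_params * value_bytes                   # every weight stored
    bitmask = total_params // 8 + nonzeros * value_bytes  # 1 bit per weight + values
    indexed = nonzeros * (index_bytes + value_bytes)      # (index, value) pairs
    costs = {"dense": dense, "bitmask": bitmask, "indexed": indexed}
    return costs, min(costs, key=costs.get)
```

For example, under these assumptions a network with one million weights and 10% nonzeros is stored most cheaply as a bitmask, while at 1% nonzeros the indexed format wins, since the fixed one-bit-per-parameter mask dominates at extreme sparsity.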
We assume that the weights are stored as single-precision (32-bit/4-byte) floating-point values. The relative overhead introduced by the sparse data structures is smaller for double-precision (64-bit/8-byte) floating-point values. Additional formats, such as half-precision floats or fixed-point weights, are not supported by standard hardware or CNN codes.
Using these formats, we plot in Figure 1 the memory required to store the weights of sparsified forms of the baseline test networks. Each point is a candidate network considered by a greedy search over the per-layer distribution of nonzeros. In the table in the same figure, we give the memory used for the "indexed" format on a sparse CNN for ImageNet. We use kibibytes ($2^{10}$ bytes) and mebibytes ($2^{20}$ bytes), abbreviated to "KB" and "MB" respectively.