I have made this letter longer than usual, only because I have not had the time to make it shorter. - Blaise Pascal (loosely translated from the French)
Aspiring writers are often given the following advice: produce a first draft, then remove unnecessary words and shorten phrases whenever possible. Can a similar recipe be followed while building deep networks? For large-scale tasks like object classification, the general practice (Krizhevsky et al., 2012; Simonyan and Zisserman, 2015; Szegedy et al., 2015) has been to use large networks with powerful regularizers (Srivastava et al., 2014). This implies that the overall model complexity is much smaller than the number of model parameters. A smaller model has the advantage of being faster to evaluate and easier to store - both of which are crucial for real-time and embedded applications.
Given such a large network, how do we make it smaller? A naive approach would be to remove weights which are close to zero. However, this intuitive idea does not seem to be theoretically well-founded. LeCun et al. proposed Optimal Brain Damage (OBD) (LeCun et al., 1989), a theoretically sound technique which they showed to work better than the naive approach. A few years later, Hassibi et al. came up with Optimal Brain Surgeon (OBS) (Hassibi et al., 1993), which was shown to perform much better than OBD, but was much more computationally intensive. This line of work focuses on pruning unnecessary weights in a trained model.
There has been another line of work in which a smaller network is trained to mimic a much larger network. Bucilua et al. (Bucilua et al., 2006) proposed a way to achieve this, and trained smaller models which had accuracies similar to larger networks. Ba and Caruana (Ba and Caruana, 2014) used the approach to show that shallower (but much wider) models can be trained to perform as well as deep models. Knowledge Distillation (KD) (Hinton et al., 2014) is a more general approach, of which Bucilua et al.'s is a special case. FitNets (Romero et al., 2014) use KD at several layers to learn networks which are deeper but thinner (in contrast to Ba and Caruana's shallow and wide), and achieve high levels of compression on trained models.
Many methods have been proposed to train models that are deep, yet have a lower parameterisation than conventional networks. Collins and Kohli (Collins and Kohli, 2014) propose a sparsity-inducing regularizer for backpropagation which promotes many weights to have zero magnitude. They achieve a reduction in memory consumption when compared to traditionally trained models. Denil et al. (Denil et al., 2013) demonstrate that most of the parameters of a model can be predicted given only a few parameters. At training time, they learn only a few parameters and predict the rest. Cireşan et al. (Cireşan et al., 2011) train networks with random connectivity, and show that they are more computationally efficient than densely connected networks.
Some recent works have focused on using approximations of weight matrices to perform compression. Jaderberg et al. (Jaderberg et al., 2014) and Denton et al. (Denton et al., 2014) use SVD-based low-rank approximations of the weight matrix. Gong et al. (Gong et al., 2014), on the other hand, use a clustering-based product quantization approach to build an indexing scheme that reduces the space occupied by the matrix on disk. Unlike the methods discussed previously, these do not need any training data to perform compression. However, they change the network structure in a way that prevents operations like fine-tuning from being performed easily after compression. One would need to 'uncompress' the network, fine-tune and then compress it again.
Similar to the methods discussed in the paragraph above, our pruning method does not need any training/validation data to perform compression. Unlike these methods, our method merely prunes parameters, which ensures that the network's overall structure remains the same - enabling operations like fine-tuning on the fly. The following section explains this in more detail.
2 Wiring similar neurons together
Given the fact that neural nets have many redundant parameters, how would the weights configure themselves to express such redundancy? In other words, when can weights be removed from a neural network, such that the removal has no effect on the net’s accuracy?
Suppose that there are weights which are exactly equal to zero. It is trivial to see that these can be removed from the network without any effect whatsoever. This was the motivation for the naive magnitude-based removal approach discussed earlier.
In this work we look at another form of redundancy. Let us consider a toy example of a NN with a single hidden layer and a single output neuron, as shown in Figure 1. Let $W_1, W_2, \ldots, W_n \in \mathbb{R}^{d+1}$ be vectors of weights (or 'weight-sets') which include the bias terms, and $a_1, a_2, \ldots, a_n \in \mathbb{R}$ be scalar weights in the next layer. Let $X \in \mathbb{R}^{d+1}$ denote the input, with the bias term absorbed. The output is given by

$$z = \sum_{i=1}^{n} a_i\, h(W_i^T X) \tag{1}$$

where $h(\cdot)$ is a monotonically increasing non-linearity, such as the sigmoid or ReLU.
Now let us suppose that $W_1 = W_2$. This means that $h(W_1^T X) = h(W_2^T X)$. Replacing $h(W_2^T X)$ by $h(W_1^T X)$ in (1), we get

$$z = (a_1 + a_2)\, h(W_1^T X) + \sum_{i=3}^{n} a_i\, h(W_i^T X) \tag{2}$$
This means that whenever two weight sets are equal ($W_1 = W_2$), one of them can effectively be removed. Note that we need to alter the coefficient $a_1$ to $a_1 + a_2$ in order to achieve this. We shall call this the 'surgery' step. This reduction also resonates with the well-known Hebbian principle, which roughly states that "neurons which fire together, wire together". If we find neurons that fire together ($W_1 = W_2$), we wire them together ($a_1 \leftarrow a_1 + a_2$). Hence we see that, along with single weights being equal to zero, equal weight vectors also contribute to redundancy in a NN. Note that this approach assumes that the same non-linearity is used for all neurons in a layer.
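The equal-weight-sets case and the accompanying surgery step can be sketched in a few lines. This is a toy illustration rather than the paper's implementation; the helper name `merge_equal_neurons` and the array shapes are our own choices.

```python
import numpy as np

def merge_equal_neurons(W, a, i, j):
    """Remove hidden neuron j when weight-sets W[i] and W[j] are equal,
    performing the 'surgery' step a[i] <- a[i] + a[j].

    W : (n, d+1) hidden-layer weight sets (bias absorbed)
    a : (n,) scalar coefficients in the next layer
    """
    a = a.copy()
    a[i] += a[j]                         # surgery: wire the neurons together
    keep = np.arange(len(a)) != j
    return W[keep], a[keep]

# Toy check: with W[0] == W[1], the output z is unchanged by the merge.
relu = lambda x: np.maximum(x, 0.0)
W = np.array([[1.0, -0.5], [1.0, -0.5], [0.3, 0.2]])
a = np.array([0.7, 0.2, -0.4])
X = np.array([2.0, 1.0])                 # input with bias term absorbed
z_before = a @ relu(W @ X)
W2, a2 = merge_equal_neurons(W, a, 0, 1)
z_after = a2 @ relu(W2 @ X)
```

The output is preserved exactly because the two merged neurons compute identical activations.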
3 The case of dissimilar neurons
Using the intuition presented in the previous section, let us try to formally derive a process to eliminate neurons in a trained network. We note that two weight sets may never be exactly equal in a NN. What do we do when $W_i \approx W_j$, i.e., when two weight sets are close but not equal?
As in the previous example, let $z_n$ be the output neuron when there are $n$ hidden neurons. Let us consider two similar weight sets $W_i$ and $W_j$ in $z_n$, and suppose that we have chosen to remove $W_j$ to give us $z_{n-1}$.
We know that the following is true:

$$z_n = \sum_{k=1}^{n} a_k\, h(W_k^T X), \qquad z_{n-1} = z_n - a_j\, h(W_j^T X) + a_j\, h(W_i^T X)$$

where $z_{n-1}$ includes the surgery step $a_i \leftarrow a_i + a_j$.
If $W_i = W_j$ (or $a_j = 0$), we would have $z_n = z_{n-1}$. However, since in general $W_i \neq W_j$, this need not hold true. Computing the squared difference $(z_n - z_{n-1})^2$, we have

$$(z_n - z_{n-1})^2 = a_j^2\, \big( h(W_j^T X) - h(W_i^T X) \big)^2$$
To perform further simplification, we use the following Lemma.
Lemma 1. Let $x \ge y$ and let $h$ be a monotonically increasing function such that $h(x) - h(y) \le x - y$ (both ReLU and sigmoid satisfy this). Then,

$$\big( h(x) - h(y) \big)^2 \le (x - y)^2$$

Applying this with $x = W_j^T X$ and $y = W_i^T X$ (swapping the labels if needed so that $x \ge y$), we get

$$(z_n - z_{n-1})^2 \le a_j^2\, \big( (W_j - W_i)^T X \big)^2$$
This can be further simplified using the Cauchy-Schwarz inequality:

$$a_j^2\, \big( (W_j - W_i)^T X \big)^2 \le a_j^2\, \|W_i - W_j\|_2^2\, \|X\|_2^2$$
Now, let us take the expectation over the random variable $X$ on both sides. Here, $X$ is assumed to belong to the input distribution represented by the training data. This gives

$$\mathbb{E}\big[ (z_n - z_{n-1})^2 \big] \le a_j^2\, \|W_i - W_j\|_2^2\; \mathbb{E}\big[ \|X\|_2^2 \big]$$
Note that $\mathbb{E}\big[\|X\|_2^2\big]$ is a scalar quantity, independent of the network architecture. Given the above expression, we ask which pair $(i, j)$ least changes the output activation. To answer this, we take the minimum over $(i, j)$ on both sides, yielding

$$\min_{i,j}\; \mathbb{E}\big[ (z_n - z_{n-1})^2 \big] \le \mathbb{E}\big[ \|X\|_2^2 \big]\; \min_{i,j}\; a_j^2\, \|W_i - W_j\|_2^2 \tag{3}$$
To minimize an upper bound on the expected value of the squared difference, we thus need to find indices $(i, j)$ such that $a_j^2\, \|W_i - W_j\|_2^2$ is the least. Note that we need not compute the value of $\mathbb{E}\big[\|X\|_2^2\big]$ to do this - making the criterion dataset independent. Equation (3) takes into consideration both the naive approach of removing near-zero weights (based on $a_j^2$) and the approach of removing similar weight sets (based on $\|W_i - W_j\|_2^2$).
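The bound can be checked numerically. The sketch below is our own illustration: it uses a ReLU non-linearity and random Gaussian samples standing in for the input distribution, removes weight set $W_j$ with the surgery $a_i \leftarrow a_i + a_j$, and compares the empirical mean of $(z_n - z_{n-1})^2$ against $a_j^2\,\|W_i - W_j\|_2^2\,\mathbb{E}[\|X\|_2^2]$.

```python
import numpy as np

# Numerical sanity check of the bound
#   E[(z_n - z_{n-1})^2] <= a_j^2 * ||W_i - W_j||^2 * E[||X||^2]
rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0.0)

n, d, num_samples = 5, 10, 1000
W = rng.normal(size=(n, d))            # weight sets (bias absorbed into X)
a = rng.normal(size=n)                 # next-layer coefficients
X = rng.normal(size=(num_samples, d))  # samples standing in for the input distribution

i, j = 0, 1                            # remove W_j, keep W_i
z_n = relu(X @ W.T) @ a                # original outputs
a_pruned = np.delete(a, j)
a_pruned[0] = a[i] + a[j]              # surgery a_i <- a_i + a_j (index 0 since i=0 < j)
z_n1 = relu(X @ np.delete(W, j, axis=0).T) @ a_pruned

lhs = ((z_n - z_n1) ** 2).mean()       # empirical E[(z_n - z_{n-1})^2]
rhs = a[j] ** 2 * ((W[i] - W[j]) ** 2).sum() * (X ** 2).sum(axis=1).mean()
```

Since the inequality holds pointwise for every sample, the empirical averages must satisfy `lhs <= rhs` as well.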
The above analysis was done for the case of a single output neuron. It can be trivially extended to consider multiple output neurons, giving us the following equation

$$\min_{i,j}\; \mathbb{E}\big[ \langle (z_n - z_{n-1})^2 \rangle \big] \le \mathbb{E}\big[ \|X\|_2^2 \big]\; \min_{i,j}\; \langle a_j^2 \rangle\, \|W_i - W_j\|_2^2 \tag{4}$$

where $\langle \cdot \rangle$ denotes the average of the quantity over all output neurons. This enables us to apply the method to intermediate layers in a deep network. For convenience, we define the saliency of two weight-sets $(i, j)$ as $s_{i,j} = \langle a_j^2 \rangle\, \|W_i - W_j\|_2^2$.
We elucidate our procedure for neuron removal here:
1. Compute the saliency $s_{i,j}$ for all possible pairs $(i, j)$ of weight sets. It can be stored as a square matrix $M$, with dimension equal to the number of neurons in the layer being considered.
2. Pick the minimum entry in the matrix. Let its indices be $(i', j')$. Delete the $j'$-th neuron, and update $a_{i'} \leftarrow a_{i'} + a_{j'}$.
3. Update $M$ by removing the $j'$-th column and row, and updating the $i'$-th column (to account for the updated $a_{i'}$).
The most computationally intensive step in the above algorithm is the computation of the saliency matrix $M$ upfront. Fortunately, this needs to be done only once before pruning starts, and only single rows and columns are updated at the end of pruning each neuron.
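A minimal sketch of the procedure follows. This is our own illustration, not the paper's code: for clarity it recomputes the full saliency matrix each iteration instead of performing the incremental row/column updates described above, and the helper name `prune_layer` is ours.

```python
import numpy as np

def prune_layer(W, A, num_remove):
    """Greedy data-free pruning of one fully connected layer (a sketch).

    W : (n, d+1) weight sets of the layer being pruned (bias absorbed)
    A : (m, n) weights of the next layer (m output neurons)
    Saliency: s[i, j] = <a_j^2> * ||W_i - W_j||^2, where neuron j is removed.
    """
    W, A = W.copy(), A.copy()
    for _ in range(num_remove):
        n = W.shape[0]
        a2 = (A ** 2).mean(axis=0)                           # <a_j^2> over output neurons
        d2 = ((W[:, None, :] - W[None, :, :]) ** 2).sum(-1)  # ||W_i - W_j||^2
        S = d2 * a2[None, :]                                 # saliency matrix M
        np.fill_diagonal(S, np.inf)                          # a neuron cannot absorb itself
        i, j = np.unravel_index(np.argmin(S), S.shape)
        A[:, i] += A[:, j]                                   # surgery step
        keep = np.arange(n) != j
        W, A = W[keep], A[:, keep]
    return W, A

# Demo: two duplicate hidden neurons; pruning one leaves the output intact.
relu = lambda x: np.maximum(x, 0.0)
W = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
A = np.array([[1.0, 2.0, 3.0]])
X = np.array([2.0, 1.0])
z_before = A @ relu(W @ X)
W2, A2 = prune_layer(W, A, num_remove=1)
z_after = A2 @ relu(W2 @ X)
```

The duplicate pair has zero saliency, so it is removed first and the output is exactly preserved by the surgery step.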
3.1 Connection to Optimal Brain Damage
For the toy model considered above, with the constraint that only weights of the hidden-to-output connections be pruned, let us analyze the OBD approach.
The OBD approach looks to prune those weights which have the least effect on the training/validation error. In contrast, our approach looks to prune those weights which change the output neuron activations the least. The saliency term in OBD is $\frac{1}{2} h_{jj}\, a_j^2$, where $h_{jj}$ is the $j$-th diagonal element of the Hessian matrix. The equivalent quantity in our case is the saliency $s_{i,j} = a_j^2\, \|W_i - W_j\|_2^2$. Note that both contain $a_j^2$. If the change in training error were proportional to the change in output activation, the two methods would be equivalent. However, this does not seem to hold in general. Hence it is not always necessary that the two approaches remove the same weights.
In general, OBD removes a single weight at a time, giving it finer control over weight removal than our method, which removes a set of weights at once. However, we perform an additional 'surgery' step ($a_i \leftarrow a_i + a_j$) after each removal, which is missing in OBD. Moreover, for large networks trained on a lot of data, computing the Hessian matrix (required for OBD) is very expensive. Our method provides a way to remove weights quickly.
3.2 Connection to Knowledge Distillation
Hinton et al. (Hinton et al., 2014) proposed to use the 'softened' output probabilities of a learned network for training a smaller network. They showed that as the softmax temperature $T \to \infty$, their procedure converges to the case of training using the output layer neurons (without the softmax). This reduces to Bucilua et al.'s (Bucilua et al., 2006) method. Given the larger network's output neurons $z$ and the smaller network's output neurons $v$, they train the smaller network so that $(z - v)^2$ is minimized.
In our case, $z$ corresponds to $z_n$ and $v$ to $z_{n-1}$. We minimize an upper bound on $\mathbb{E}\big[(z_n - z_{n-1})^2\big]$, whereas KD exactly minimizes $(z - v)^2$ over the training set. Moreover, in the KD case, the minimization is performed over all weights, whereas in our case it is only over the output layer neurons. Note that we have the expectation term (and the upper bound) because our method does not use any training data.
3.3 Weight normalization
In order for our method to work well, we need to ensure that we remove only those weights for which the RHS of (3) is small. Let $W_1 = \alpha W_2$, where $\alpha$ is a positive constant (say 0.9). Clearly, these two weight sets compute very similar features. However, we may not be able to eliminate this pair because of the difference in magnitudes. We hence propose to normalize all weight sets while computing their similarity.
For the ReLU non-linearity, defined by $h(x) = \max(0, x)$, and for any weight set $W$ and any $\alpha > 0$, we have the following result:

$$h\big( (\alpha W)^T X \big) = \alpha\, h(W^T X)$$

Using this result, we scale all weight sets ($W_i$) such that their $\ell_2$ norm is one. The factor $\|W_i\|_2$ is multiplied with the corresponding coefficient $a_i$ in the next layer, leaving the network's output unchanged. This helps us identify better weight sets to eliminate.
3.4 Some heuristics
While the mathematics in the previous section gives us a good way of thinking about the algorithm, we observed that certain heuristics can improve performance.
The usual practice in neural network training is to train the bias without any weight decay regularization. This causes the bias weights to have a much higher magnitude than the non-bias weights. For this reason, we normalize only the non-bias weights. We also make sure that the similarity measure takes ‘sensible-sized’ contributions from both weights and biases. This is accomplished for fully connected layers as follows.
Let $W_i = [w_i, b_i]$, where $b_i$ denotes the bias, and let $\hat{w}_i$ correspond to the normalized non-bias weights. Rather than using $\|W_i - W_j\|_2^2$ as the similarity term, we use $\|\hat{w}_i - \hat{w}_j\|_2^2 + (b_i - b_j)^2$.
Note that both are measures of similarity between weight sets. We have empirically found that the new similarity measure performs much better than simply using differences of the raw weight sets. We hypothesize that this could be a tighter upper bound on the quantity $\mathbb{E}\big[(z_n - z_{n-1})^2\big]$.
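One plausible reading of this heuristic is sketched below. The exact way the weight and bias terms are combined is an assumption on our part, and the helper name `heuristic_similarity` is ours.

```python
import numpy as np

def heuristic_similarity(w, b):
    """Bias-aware similarity sketch: normalize only the non-bias weights,
    then add the raw bias difference so both parts contribute sensibly.
    (The exact combination is our assumption.)

    w : (n, d) non-bias weights; b : (n,) biases.
    Returns an (n, n) matrix of pairwise distances (lower = more similar).
    """
    w_hat = w / np.linalg.norm(w, axis=1, keepdims=True)
    dw = ((w_hat[:, None, :] - w_hat[None, :, :]) ** 2).sum(-1)
    db = (b[:, None] - b[None, :]) ** 2
    return dw + db

# Rows 0 and 1 differ only by a factor of two and share a bias, so they are
# judged identical; row 2 points elsewhere and has a very different bias.
w = np.array([[1.0, 0.0], [2.0, 0.0], [0.0, 1.0]])
b = np.array([0.1, 0.1, 5.0])
S = heuristic_similarity(w, b)
```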
Similar heuristics can be employed for defining a similarity term for convolutional layers. In this work, however, we only consider fully connected layers.
4 How many neurons to remove?
One way to use our technique would be to keep removing neurons until the test accuracy drops below a certain level. However, this is quite laborious for large networks with multiple layers.
We now ask whether it is possible to determine the number of removals automatically. Is there some indication given by the removed weights that tells us when it is time to stop? To investigate this, we plot the saliency of the removed neuron as a function of the order of removal: the earlier pruned neurons have a low saliency value, while the later ones have higher values. The red line in Figure 2(a) shows this. We observe that most values are very small, while the neurons at the very end have comparatively high values, producing a distinct exponential-shaped curve towards the end.
One heuristic would be to place the cutoff point near the foot of the exponential curve. But is this justified? To answer this, we also compute the increase in test error (from baseline levels) at each stage of removal (given by the blue line). We see that the error stays constant for the most part, and starts increasing rapidly near the exponential region. Scaled appropriately, the saliency curve could thus be considered a proxy for the increase in test error. However, computing the scale factor requires information about the test error curve. Instead, we could use the slope of the saliency curve to estimate how densely we need to sample the test error: fewer measurements are needed near the flat region and more near the exponential region. This would be a data-driven way to determine the number of neurons to remove.
We also plot the histogram of saliency values. We see that the foot of the exponential (saliency ≈ 1.2) corresponds to the mode of the Gaussian-like curve (Figure 2(b)). If we require a data-free way of finding the number of neurons to remove, we simply find the saliency value of the mode in the histogram and use that as the cutoff. Experimentally, we see that this works well when the baseline accuracy is high to begin with. When it is low, using this method causes a substantial decrease in the accuracy of the resulting classifier. In this work, we use fractions (0.25, 0.5, etc.) of the number given by the above method for large networks, and choose the best among the different pruned models based on validation data. A truly data-free method, however, would not use any validation data to find the number of neurons to prune. Note that only our pruning method is data-free; the formulation of a completely data-free method for large networks demands further investigation.
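The data-free cutoff selection can be sketched as follows. This is our own illustration: the histogram bin count and the synthetic saliency values are arbitrary choices, and `datafree_cutoff` is a hypothetical helper name.

```python
import numpy as np

def datafree_cutoff(saliencies, bins=10):
    """Data-free stopping rule sketch: take the mode of the saliency
    histogram as the cutoff and count how many candidate removals fall
    below it. The bin count is an arbitrary choice of ours.

    saliencies : 1-D array of saliency values.
    Returns (cutoff_value, suggested_number_of_removals).
    """
    counts, edges = np.histogram(saliencies, bins=bins)
    mode_bin = np.argmax(counts)
    cutoff = 0.5 * (edges[mode_bin] + edges[mode_bin + 1])  # center of modal bin
    num_remove = int((saliencies <= cutoff).sum())
    return cutoff, num_remove

# Synthetic example: a small cluster of redundant (low-saliency) neurons
# plus a Gaussian-like bulk of useful ones.
saliencies = np.concatenate([np.full(10, 0.05), np.linspace(0.9, 1.1, 101)])
cutoff, num_remove = datafree_cutoff(saliencies)
```

On such data the cutoff lands inside the high-saliency bulk, so all the low-saliency (redundant) candidates fall below it.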
5 Experiments and Results
In most large-scale neural networks (Krizhevsky et al., 2012; Simonyan and Zisserman, 2015), the fully connected layers contain most of the parameters. As a result, reducing just the fully connected layers considerably compresses the network. We hence show experiments with only fully connected layers.
5.1 Comparison with OBS and OBD
Given that Optimal Brain Damage/Surgeon methods are very difficult to evaluate for mid-to-large-size networks, we compare them against our method on a toy problem. We use the SpamBase dataset (Asuncion and Newman, 2007), which comprises 4300 datapoints belonging to two classes, each having 57-dimensional features. We consider a small neural network architecture - a single hidden layer composed of 20 neurons. The network used a sigmoidal non-linearity (rather than ReLU), and was trained using Stochastic Gradient Descent (SGD). The NNSYSID package (http://www.iau.dtu.dk/research/control/nnsysid.html) was used to conduct these experiments.
Figure 3 plots the test error as a function of the number of neurons removed. A 'flatter' curve indicates better performance, since it means that more weights can be removed for very little increase in test error. We see that our method maintains a low test error as more weights are removed. The additional 'surgery' step in our method improves performance compared to OBD. Figure 4 shows the performance of our method when surgery is not performed; the method breaks down completely in that scenario. OBS performs worse than our method, presumably because it prunes away important weights early on, so that no amount of surgery can recover the original performance level. In addition, our method took seconds to run, whereas OBD took minutes and OBS took hours. This suggests that our method could scale well to large networks.
5.2 Experiments on LeNet
We evaluate our method on the MNIST dataset, using a LeNet-like architecture (LeCun et al., 1998) trained with Caffe (Jia et al., 2014). The network consists of two convolutional layers with 20 and 50 filters, and two fully connected layers with 500 and 10 (output layer) neurons. Noting that the third layer contains the vast majority of the total weights, we perform compression only on that layer.
| Neurons pruned | Naive method | Random removals | Ours | Compression (%) |
The results are shown in Table 1. We see that our method performs much better than the naive method of removing weights based on magnitude, as well as random removals - both of which are data-free techniques.
Our data-driven cutoff selection method predicts a cut-off of , for a decrease in accuracy. The data-free method, on the other hand, predicts a cut-off of . We see that immediately after that point, the performance starts decreasing rapidly.
5.3 Experiments on AlexNet
For networks like AlexNet (Krizhevsky et al., 2012), there exist two sets of fully connected layers, rather than one. We observe that pruning a given layer changes the weight sets of the next layer. To incorporate this, we first prune weights in earlier layers and then move on to later layers.
For our experiments, we use an AlexNet-like architecture called CaffeNet, provided with the Caffe deep learning framework (Jia et al., 2014). It is very similar to AlexNet, except that the order of the max-pooling and normalization layers is interchanged. We use the ILSVRC 2012 (Russakovsky et al., 2015) validation set to compute the accuracies in the following table.
| # FC6 pruned | # FC7 pruned | Accuracy (%) | Compression (%) | # weights removed |
We observe that using fractions (0.25, 0.5, 0.75) of the prediction made by our data-free method gives us competitive accuracies. Removing as many as 9.3 million parameters (the case of 700 removed neurons in FC6) reduces the base accuracy by only 0.2%. Our best method was able to remove up to 21.3 million weights, reducing the base accuracy by only 2.2%.
6 Conclusion
We proposed a data-free method to perform NN model compression. Our method relates weakly to both Optimal Brain Damage and a form of Knowledge Distillation. By minimizing an upper bound on the expected squared difference of logits, we were able to avoid using any training data for model compression. We also observed that the saliency curve has low values in the beginning and exponentially high values towards the end; this fact was used to decide the number of neurons to prune. Our method can be used on top of most existing model architectures, as long as they contain fully connected layers.
Proof of Lemma 1.
We are given that $h$ is monotonically increasing and that $h(x) - h(y) \le x - y$ for $x \ge y$. Monotonicity gives $h(x) - h(y) \ge 0$.
Since both $h(x) - h(y) \ge 0$ and $x - y \ge 0$, we can square both sides of the inequality, giving $\big( h(x) - h(y) \big)^2 \le (x - y)^2$.
We gratefully acknowledge the support of NVIDIA Corporation for the donation of the K40 GPU used for this research.
- Asuncion and Newman (2007) Arthur Asuncion and David Newman. UCI machine learning repository, 2007.
- Ba and Caruana (2014) Jimmy Ba and Rich Caruana. Do deep nets really need to be deep? In Advances in Neural Information Processing Systems, pages 2654–2662, 2014.
- Bucilua et al. (2006) Cristian Bucilua, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 535–541. ACM, 2006.
- Cireşan et al. (2011) Dan C Cireşan, Ueli Meier, Jonathan Masci, Luca M Gambardella, and Jürgen Schmidhuber. High-performance neural networks for visual object classification. arXiv preprint arXiv:1102.0183, 2011.
- Collins and Kohli (2014) Maxwell D. Collins and Pushmeet Kohli. Memory bounded deep convolutional networks. CoRR, abs/1412.1442, 2014. URL http://arxiv.org/abs/1412.1442.
- Denil et al. (2013) Misha Denil, Babak Shakibi, Laurent Dinh, Nando de Freitas, et al. Predicting parameters in deep learning. In Advances in Neural Information Processing Systems, pages 2148–2156, 2013.
- Denton et al. (2014) Emily L Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. In Advances in Neural Information Processing Systems, pages 1269–1277, 2014.
- Gong et al. (2014) Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014.
- Hassibi et al. (1993) Babak Hassibi, David G Stork, et al. Second order derivatives for network pruning: Optimal brain surgeon. Advances in Neural Information Processing Systems, pages 164–164, 1993.
- Hinton et al. (2014) Geoffrey E Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. In NIPS 2014 Deep Learning Workshop, 2014.
- Jaderberg et al. (2014) Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. In Proceedings of the British Machine Vision Conference. BMVA Press, 2014.
- Jia et al. (2014) Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
- Krizhevsky et al. (2012) Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
- LeCun et al. (1989) Yann LeCun, John S Denker, Sara A Solla, Richard E Howard, and Lawrence D Jackel. Optimal brain damage. In Advances in Neural Information Processing Systems, volume 2, pages 598–605, 1989.
- LeCun et al. (1998) Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
- Romero et al. (2014) Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. Fitnets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550, 2014.
- Russakovsky et al. (2015) Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 2015. doi: 10.1007/s11263-015-0816-y.
- Simonyan and Zisserman (2015) K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, 2015.
- Srivastava et al. (2014) Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014.
- Szegedy et al. (2015) Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.