Fine-Pruning: Joint Fine-Tuning and Compression of a Convolutional Network with Bayesian Optimization

07/28/2017
by   Frederick Tung, et al.

When approaching a novel visual recognition problem in a specialized image domain, a common strategy is to start with a pre-trained deep neural network and fine-tune it to the specialized domain. If the target domain covers a smaller visual space than the source domain used for pre-training (e.g. ImageNet), the fine-tuned network is likely to be over-parameterized. However, applying network pruning as a post-processing step to reduce the memory requirements has drawbacks: fine-tuning and pruning are performed independently; pruning parameters are set once and cannot adapt over time; and the highly parameterized nature of state-of-the-art pruning methods make it prohibitive to manually search the pruning parameter space for deep networks, leading to coarse approximations. We propose a principled method for jointly fine-tuning and compressing a pre-trained convolutional network that overcomes these limitations. Experiments on two specialized image domains (remote sensing images and describable textures) demonstrate the validity of the proposed approach.


1 Introduction


Figure 1: Consider the task of training a deep convolutional neural network on a specialized image domain (e.g. remote sensing images). (a) The conventional approach starts with a network pre-trained on a large, generic dataset (e.g. ImageNet) and fine-tunes it to the specialized domain. (b) Since the specialized domain spans a narrower visual space, the fine-tuned network is likely to be over-parameterized and can be compressed. A natural way to achieve this is to perform network pruning after fine-tuning; however, this approach has limitations (see discussion in Section 1). (c) Fine-pruning addresses these limitations by jointly fine-tuning and compressing the pre-trained network in an iterative process. Each iteration consists of network fine-tuning, pruning module adaptation, and network pruning.

When faced with a recognition task in a novel domain or application, a common strategy is to start with a CNN pre-trained on a large dataset, such as ImageNet [Russakovsky et al.(2014)Russakovsky, Deng, Su, Krause, Satheesh, Ma, Huang, Karpathy, Khosla, Bernstein, Berg, and Fei-Fei], and fine-tune the network to the new task (Fig. 1a). Fine-tuning involves adapting the structure of the existing network to enable the new task, while re-using the pre-trained weights for the unmodified layers of the network. For example, a common and simple form of fine-tuning involves replacing the final fully-connected layer of the pre-trained CNN, which has an output dimensionality based on the pre-training dataset (e.g. 1000 dimensions for ImageNet), with a new fully-connected layer with a dimensionality that matches the target dataset.

Fine-tuning allows powerful learned representations to be transferred to novel domains. Typically, we fine-tune complex network architectures that have been pre-trained on large databases containing millions of images. For example, we may fine-tune AlexNet [Krizhevsky et al.(2012)Krizhevsky, Sutskever, and Hinton] pre-trained on ImageNet’s 1.2 million images (61 million parameters). In this way, we adapt these complex architectures to smaller and more specialized domains, such as remote sensing images. However, the specialized domain may not span the full space of natural images on which the original network was pre-trained. This suggests that the network architecture may be over-parameterized, and therefore inefficient in terms of memory and power consumption, with respect to the more constrained novel domain, in which a much more lightweight network would suffice for good performance. In applications with tight constraints on memory and power, such as mobile devices or robots, a more lightweight network with comparable classification accuracy may be valuable.

Given a fine-tuned network, a straightforward way to obtain a more lightweight network is to perform network pruning [Guo et al.(2016)Guo, Yao, and Chen, Han et al.(2016b)Han, Mao, and Dally, Srinivas and Babu(2015)] (Fig. 1b). However, this strategy has drawbacks: (1) the fine-tuning and pruning operations are performed independently; (2) the pruning parameters are set once and cannot adapt after training has started; and (3) since state-of-the-art pruning methods are highly parameterized, manually searching for good pruning hyperparameters is often prohibitive for deep networks, leading to coarse pruning strategies (e.g. pruning convolutional and fully connected layers separately [Guo et al.(2016)Guo, Yao, and Chen]).

We propose a novel process called fine-pruning (Fig. 1c) that addresses these limitations:

  1. Fine-pruning obtains a lightweight network specialized to a target domain by jointly fine-tuning and compressing the pre-trained network. The compatibility between the target domain and the pre-training domain is not normally known in advance (e.g. how similar are remote sensing images to ImageNet?), making it difficult to determine a priori how effective knowledge transfer will be, how aggressively compression can be applied, and where compression efforts should be focused. The knowledge transfer and network compression processes are linked and inform each other in fine-pruning.

  2. Fine-pruning applies a principled adaptive network pruning strategy guided by Bayesian optimization, which automatically adapts the layer-wise pruning parameters over time as the network changes. For example, the Bayesian optimization controller might learn and execute a gradual pruning strategy in which network pruning is performed conservatively and fine-tuning restores the original accuracy in each iteration; or the controller might learn to prune aggressively at the outset and reduce the compression in later iterations (e.g. by splicing connections [Guo et al.(2016)Guo, Yao, and Chen]) to recover accuracy.

  3. Bayesian optimization enables efficient exploration of the pruning hyperparameter space, allowing all layers in the network to be considered together when making pruning decisions.

2 Related Work

Network pruning. Network pruning refers to the process of reducing the number of weights (connections) in a pre-trained neural network. The motivation behind this process is to make neural networks more compact and energy efficient for operation on resource constrained devices such as mobile phones. Network pruning can also improve network generalization by reducing overfitting. The earliest methods [Hassibi and Stork(1992), LeCun et al.(1990)LeCun, Denker, and Solla] prune weights based on the second-order derivatives of the network loss. Data-free parameter pruning [Srinivas and Babu(2015)] provides a data-independent method for discovering and removing entire neurons from the network. Deep compression [Han et al.(2016b)Han, Mao, and Dally] integrates the complementary techniques of weight pruning, scalar quantization to encode the remaining weights with fewer bits, and Huffman coding. Dynamic network surgery [Guo et al.(2016)Guo, Yao, and Chen] iteratively prunes and splices network weights. The novel splicing operation allows previously pruned weights to be reintroduced. Weights are pruned or spliced based on thresholding their absolute value. All weights, including pruned ones, are updated during backpropagation.
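The prune/splice rule can be sketched as a mask update over a weight tensor. This is an illustrative sketch; `t_low` and `t_high` are our notation for the method's magnitude thresholds, not identifiers from the original code.

```python
import numpy as np

def update_mask(weights, t_low, t_high, mask):
    """Dynamic-network-surgery-style mask update (illustrative sketch).

    Weights whose magnitude falls below t_low are pruned (mask -> 0);
    previously pruned weights whose magnitude rises above t_high are
    spliced back in (mask -> 1); magnitudes in between keep their mask.
    """
    mag = np.abs(weights)
    mask = np.where(mag < t_low, 0.0, mask)   # prune small weights
    mask = np.where(mag > t_high, 1.0, mask)  # splice large weights back
    return mask

# All weights, including masked-out ones, keep receiving gradient updates
# during backpropagation; only the forward pass uses weights * mask.
```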

Other network compression strategies. Network pruning is one way to approach neural network compression. Other effective strategies include weight binarization [Courbariaux et al.(2015)Courbariaux, Bengio, and David, Rastegari et al.(2016)Rastegari, Ordonez, Redmon, and Farhadi], architectural improvements [Iandola et al.(2016)Iandola, Han, Moskewicz, Ashraf, Dally, and Keutzer], weight quantization [Han et al.(2016b)Han, Mao, and Dally], sparsity constraints [Lebedev and Lempitsky(2016), Zhou et al.(2016)Zhou, Alvarez, and Porikli], guided knowledge distillation [Hinton et al.(2015)Hinton, Vinyals, and Dean, Romero et al.(2015)Romero, Ballas, Kahou, Chassang, Gatta, and Bengio], and replacement of fully connected layers with structured projections [Cheng et al.(2015)Cheng, Yu, Feris, Kumar, Choudhary, and Chang, Moczulski et al.(2016)Moczulski, Denil, Appleyard, and de Freitas, Yang et al.(2015)Yang, Moczulski, Denil, de Freitas, Smola, Song, and Wang]. Many of these network compression methods can train compact neural networks from scratch, or compress pre-trained networks for testing in the same domain. However, since they assume particular types of weights, mimic networks trained in the same domain, or modify the network structure, most of these methods are not easily extended to the task of fine-tuning a pre-trained network to a specialized domain.

In this paper, we consider joint fine-tuning and network pruning in the context of transferring the knowledge of a pre-trained network to a smaller and more specialized visual recognition task. Previous approaches for compressing pre-trained neural networks aim to produce a compact network that performs as well as the original network on the dataset on which the network was originally trained. In contrast, our focus is on the fine-tuning or transfer learning problem of producing a compact network for a small, specialized target dataset, given a network pre-trained on a large, generic dataset such as ImageNet. Our approach does not require the source dataset (e.g. ImageNet) on which the original network was trained.

3 Method

Each fine-pruning iteration comprises three steps: fine-tuning, adaptation of the pruning module, and network pruning (Fig. 1c). Fine-pruning can accommodate any parameterized network pruning module. In our experiments, we use the state-of-the-art dynamic network surgery method [Guo et al.(2016)Guo, Yao, and Chen] for the network pruning module, but fine-pruning does not assume a particular pruning method. Pruning module adaptation is guided by a Bayesian optimization [Gardner et al.(2014)Gardner, Kusner, Xu, Weinberger, and Cunningham, Snoek et al.(2012)Snoek, Larochelle, and Adams] controller, which enables an efficient search of the joint pruning parameter space, learning from the outcomes of previous exploration. This controller allows the pruning behaviour to change over time as connections are removed or formed.

Bayesian optimization is a general framework for solving global minimization problems involving blackbox objective functions:

x* = argmin_{x ∈ X} f(x)    (1)

where f : X → R is a blackbox objective function that is typically expensive to evaluate, non-convex, may not be expressed in closed form, and may not be easily differentiable [Wang et al.(2013)Wang, Zoghi, Hutter, Matheson, and de Freitas]. Eq. 1 is minimized by constructing a probabilistic model for f to determine the most promising candidate x to evaluate next. Each iteration of Bayesian optimization involves selecting the most promising candidate x, evaluating f(x), and using the data pair (x, f(x)) to update the probabilistic model for f.

In our case, x is a set of pruning parameters. For example, if the network pruning module is deep compression [Han et al.(2016b)Han, Mao, and Dally], x consists of the magnitude thresholds used to remove weights; if the network pruning module is dynamic network surgery [Guo et al.(2016)Guo, Yao, and Chen], x consists of magnitude thresholds as well as cooling function hyperparameters that control how often the pruning mask is updated. We define f by

f(x) = e(x) − w · s(x)    (2)

where e(x) is the top-1 error on the held-out validation set obtained by pruning the network according to the parameters x and then fine-tuning; s(x) is the sparsity (proportion of pruned connections) of the pruned network obtained using the parameters x; and w is an importance weight that balances accuracy and sparsity, which is set by held-out validation (we set w to maximize the achieved compression rate while maintaining the held-out validation error within a tolerance percentage, e.g. 2%).
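In code, evaluating one candidate x amounts to a single call of this blackbox. The sketch below assumes f(x) = e(x) − w·s(x), with `prune_and_finetune` as a hypothetical helper standing in for the expensive prune-then-fine-tune evaluation; both names are our own, not the paper's.

```python
def objective(x, prune_and_finetune, w=1.0):
    """Blackbox objective f(x) = e(x) - w * s(x) (Eq. 2, illustrative).

    prune_and_finetune(x) is assumed to prune the network with the
    candidate parameters x, fine-tune it, and return the held-out top-1
    validation error e and the sparsity s (fraction of pruned weights).
    """
    e, s = prune_and_finetune(x)
    # Lower error and higher sparsity both lower f, so minimizing f
    # trades off accuracy against compression via the weight w.
    return e - w * s
```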

0:  Pre-trained convolutional network, importance weight w
1:  Fine-tune network { Fig. 1a}
2:  repeat
3:     repeat { Bayesian optimization controller}
4:        Select next candidate parameters to evaluate as x = argmax EI(x)
5:        Evaluate f(x)
6:        Update Gaussian process model using (x, f(x))
7:     until converged or maximum iterations of Bayesian optimization reached
8:     Prune network using best x found
9:     Fine-tune network
10:  until converged or maximum iterations of fine-pruning reached
Algorithm 1 Fine-Pruning

We model the objective function f as a Gaussian process [Rasmussen and Williams(2006)]. A Gaussian process is an uncountable set of random variables, any finite subset of which is jointly Gaussian. Let f ∼ GP(μ, K), where μ : X → R is a mean function and K : X × X → R is a covariance kernel, such that for any finite set of candidates x_{1:n},

f(x_{1:n}) ∼ N(μ(x_{1:n}), K(x_{1:n}, x_{1:n}))    (3)

Given inputs x_{1:n} and function evaluations f(x_{1:n}), the posterior belief of f at a novel candidate x̂ can be computed in closed form. In particular,

f(x̂) | f(x_{1:n}) ∼ N(μ̂(x̂), σ̂²(x̂))    (4)

where

μ̂(x̂) = μ(x̂) + K(x̂, x_{1:n}) K(x_{1:n}, x_{1:n})⁻¹ (f(x_{1:n}) − μ(x_{1:n}))
σ̂²(x̂) = K(x̂, x̂) − K(x̂, x_{1:n}) K(x_{1:n}, x_{1:n})⁻¹ K(x_{1:n}, x̂)    (5)

The implication of the closed-form solution is that, given a collection of parameters and the objective function evaluated at those parameters, we can efficiently predict the posterior at unevaluated parameters.
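The closed-form posterior can be computed in a few lines of NumPy. This is a sketch under two assumptions of our own: a zero prior mean μ ≡ 0 and a squared-exponential kernel, which are common defaults rather than the paper's exact choices.

```python
import numpy as np

def rbf_kernel(A, B, length=1.0):
    """Squared-exponential covariance kernel (a common default choice)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length ** 2)

def gp_posterior(X, y, X_new, noise=1e-8):
    """Closed-form GP posterior mean and variance (Eqs. 4-5), zero prior mean."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))  # K(x_1:n, x_1:n)
    K_s = rbf_kernel(X_new, X)                     # K(x_hat, x_1:n)
    K_inv = np.linalg.inv(K)
    mean = K_s @ K_inv @ y                         # posterior mean
    var = rbf_kernel(X_new, X_new).diagonal() - np.einsum(
        "ij,jk,ik->i", K_s, K_inv, K_s)            # posterior variance
    return mean, var
```

At evaluated candidates the posterior collapses onto the observations; far from the data it reverts to the prior, which is what drives exploration in the controller.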

To select the most promising candidate to evaluate next, we use the expected improvement criterion. Let x⁺ denote the best candidate evaluated so far. The expected improvement of a candidate x̂ is defined as

EI(x̂) = E[max(f(x⁺) − f(x̂), 0)]    (6)

For a Gaussian process, the expected improvement of a candidate can also be efficiently computed in closed form. Specifically,

EI(x̂) = σ̂(x̂) (z Φ(z) + φ(z)),  with z = (f(x⁺) − μ̂(x̂)) / σ̂(x̂)    (7)

where Φ is the standard normal cumulative distribution function and φ is the standard normal probability density function. For a more detailed discussion on Gaussian processes and Bayesian optimization, we refer the interested reader to [Gardner et al.(2014)Gardner, Kusner, Xu, Weinberger, and Cunningham], [Rasmussen and Williams(2006)], and [Snoek et al.(2012)Snoek, Larochelle, and Adams]. We use the publicly available code of [Gardner et al.(2014)Gardner, Kusner, Xu, Weinberger, and Cunningham] and [Rasmussen and Williams(2006)] in our implementation.
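The closed form in Eq. 7 is cheap to evaluate; a minimal sketch for a minimization problem (using only the standard library, not the Gardner et al. code used in the paper):

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Closed-form expected improvement (Eq. 7) for minimization.

    mu, sigma: GP posterior mean and standard deviation at a candidate.
    f_best: best (lowest) objective value f(x+) observed so far.
    """
    if sigma <= 0.0:
        return 0.0  # no uncertainty left at this candidate
    z = (f_best - mu) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # normal pdf
    return sigma * (z * Phi + phi)
```

Candidates with low predicted mean or high predicted uncertainty both score well, which is the exploration/exploitation trade-off the controller relies on.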

The complete fine-pruning process is summarized in Algorithm 1.

4 Experiments

Figure 2: Sample images from the two specialized domain datasets used in our experiments: (a) Remote sensing images from the UCMerced Land Use Dataset [Yang and Newsam(2010)]; (b) Texture images from the Describable Textures Dataset [Cimpoi et al.(2014)Cimpoi, Maji, Kokkinos, Mohamed, and Vedaldi].

Datasets. We performed experiments on two specialized image domains:

  • Remote sensing images: The UCMerced Land Use Dataset [Yang and Newsam(2010)] is composed of public domain aerial orthoimagery from the United States Geological Survey. The dataset covers 21 land-use classes, such as agricultural, dense residential, golf course, and harbor. Each land-use class is represented by 100 images. We randomly split the images into 50% for training, 25% for held-out validation, and 25% for testing.

  • Describable textures: The Describable Textures Dataset [Cimpoi et al.(2014)Cimpoi, Maji, Kokkinos, Mohamed, and Vedaldi] was introduced as part of a study in estimating human-describable texture attributes from images, which can then be used to improve tasks such as material recognition and description. The dataset consists of 5,640 images covering 47 human-describable texture attributes, such as blotchy, cracked, crystalline, fibrous, and pleated. We use the ten provided training, held-out validation, and testing splits.
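The 50/25/25 random split used for UCMerced can be reproduced with a simple random partition per class. This is an illustrative sketch; the paper does not publish its split indices, and the seed here is arbitrary.

```python
import random

def split_indices(n, train=0.50, val=0.25, seed=0):
    """Randomly partition n item indices into train/val/test subsets."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_train = int(n * train)
    n_val = int(n * val)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# e.g. 100 images per land-use class -> 50 train, 25 val, 25 test
train_idx, val_idx, test_idx = split_indices(100)
```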

Fig. 2 shows examples of images from the two datasets.

Baselines. We compare fine-pruning with a fine-tuning only baseline (Fig. 1a) as well as independent fine-tuning followed by pruning (Fig. 1b), which for brevity we will refer to as the independent baseline. All experiments start from an ImageNet-pretrained AlexNet [Krizhevsky et al.(2012)Krizhevsky, Sutskever, and Hinton]. For a controlled comparison, we run the same state-of-the-art pruning method, dynamic network surgery [Guo et al.(2016)Guo, Yao, and Chen], in both the independent baseline and fine-pruning. In the original dynamic network surgery paper [Guo et al.(2016)Guo, Yao, and Chen], the authors prune convolutional and fully connected layers separately due to the prohibitive complexity of manually searching for layer-wise pruning parameters. To more fairly illustrate the benefit of fine-pruning, we set layer-wise pruning parameters for dynamic network surgery in the independent baseline using Bayesian optimization as well.

                                               Accuracy (Val.)  Accuracy (Test)  Parameters    Compression Rate
UCMerced Land Use Dataset [Yang and Newsam(2010)]
Fine-tuning only (Fig. 1a)                     94.7%            94.3%            57.0 M        –
Independent fine-tuning and pruning (Fig. 1b)  92.7±0.7%        93.8±0.7%        1.78±0.41 M   31.9×
Fine-pruning (Fig. 1c)                         92.5±0.9%        94.1±0.6%        1.17±0.39 M   48.8×
Describable Textures Dataset [Cimpoi et al.(2014)Cimpoi, Maji, Kokkinos, Mohamed, and Vedaldi]
Fine-tuning only (Fig. 1a)                     53.5±0.8%        53.7±0.9%        57.1 M        –
Independent fine-tuning and pruning (Fig. 1b)  52.8±1.2%        53.4±1.5%        3.62±0.54 M   15.8×
Fine-pruning (Fig. 1c)                         53.0±0.9%        52.8±0.8%        2.41±0.68 M   23.7×
Table 1: Experimental results on two specialized image domains: remote sensing images and describable textures. All experiments start with ImageNet-pretrained AlexNet [Krizhevsky et al.(2012)Krizhevsky, Sutskever, and Hinton] and use the state-of-the-art dynamic network surgery method [Guo et al.(2016)Guo, Yao, and Chen] for network pruning. For a fair comparison, the pruning parameters in the independent fine-tuning and pruning baseline are also tuned by Bayesian optimization. We average results over ten runs on the remote sensing dataset and the ten provided splits on the describable textures dataset.

Implementation details. We set all parameters by held-out validation on the two datasets. The importance weight w is set to 1 on both datasets. We warm-start both the independent baseline and fine-pruning with identical parameters obtained by random search. Fine-pruning is run to convergence or to a maximum of 10 iterations. In each fine-pruning iteration, Bayesian optimization considers up to 50 candidates, and network fine-tuning is performed with a fixed learning rate of 0.001 (the same learning policy used to obtain the initial fine-tuned network) for 10 epochs.

Results. Table 1 summarizes our experimental comparison of fine-tuning, independent fine-tuning and pruning, and fine-pruning, on the UCMerced Land Use and Describable Textures datasets. On UCMerced Land Use, the independent baseline produces sparse networks with 1.78 million parameters on average over ten runs, representing a reduction in the number of weights by 31.9-fold, while maintaining the test accuracy within 1% of the dense fine-tuned network. Fine-pruning achieves further improvements in memory efficiency, producing sparse networks of 1.17 million parameters on average, or a 48.8-fold reduction in the number of weights, while maintaining the test accuracy within 1% of the dense fine-tuned network. On Describable Textures, we average the results over the ten provided splits. Similar improvements are obtained on this harder dataset. The independent baseline reduces the number of weights by 15.8-fold while maintaining the test accuracy within 1% of the dense fine-tuned network. Fine-pruning lifts the compression rate to 23.7-fold while maintaining the test accuracy within 1% of the dense fine-tuned network.

Figure 3: Compression as a function of fine-pruning iteration. On both the (a) UCMerced Land Use Dataset and (b) Describable Textures Dataset, the pruning module adaptation, guided by Bayesian optimization, learns a policy of starting with a strong initial prune and tapering off in later iterations.

Fig. 3 shows how the compression rate varies with the fine-pruning iteration. We observe that, on both datasets, the pruning module adaptation learns to start with a strong initial prune and then taper off the pruning in later iterations as the network converges. This behavior can also be observed by examining the pruning parameters selected by Bayesian optimization.

Table 2 illustrates the average number of weights layer by layer after fine-pruning for both datasets. We observe that the original fine-tuned networks in both cases are highly over-parameterized, and a significant reduction in memory can be obtained by fine-pruning. A large proportion of the original network parameters reside in the fully connected layers fc6 and fc7. Provided that the underlying network pruning module allows for pruning parameters to be set on an individual layer basis, our Bayesian optimization controller automatically learns to prioritize the compression of these layers because they have the largest influence on the sparsity term s(x) in the objective function (Eq. 2).

       Parameters (Before)  Parameters (After)  Percentage Pruned
UCMerced Land Use Dataset [Yang and Newsam(2010)]
conv1  35 K                 26 K                26.1%
conv2  307 K                92 K                70.2%
conv3  885 K                261 K               70.5%
conv4  664 K                218 K               67.2%
conv5  443 K                181 K               59.1%
fc6    37.8 M               313 K               99.2%
fc7    16.8 M               60 K                99.6%
fc8    86 K                 17 K                80.3%
total  57.0 M               1.17 M              98.0%
Describable Textures Dataset [Cimpoi et al.(2014)Cimpoi, Maji, Kokkinos, Mohamed, and Vedaldi]
conv1  35 K                 32 K                8.3%
conv2  307 K                245 K               20.4%
conv3  885 K                343 K               61.3%
conv4  664 K                442 K               33.4%
conv5  443 K                216 K               51.2%
fc6    37.8 M               401 K               98.9%
fc7    16.8 M               661 K               96.1%
fc8    193 K                72 K                62.4%
total  57.1 M               2.41 M              95.8%
Table 2: Layer-wise compression results. Our Bayesian optimization controller automatically learns to prioritize the compression of the fc6 and fc7 layers, which have the most parameters.

5 Conclusion

In this paper we have presented a joint process for network fine-tuning and compression that produces a memory-efficient network tailored to a specialized image domain. Our process is guided by a Bayesian optimization controller that allows pruning parameters to adapt over time to the characteristics of the changing network. Fine-pruning is general and can accommodate any parameterized network pruning algorithm. In future we plan to study whether our technique can be applied to provide better time efficiency as well. For example, structured sparsity may result in more significant time savings on a GPU than unstructured sparsity [Wen et al.(2016)Wen, Wu, Wang, Chen, and Li]. Specialized hardware engines [Han et al.(2016a)Han, Liu, Mao, Pu, Pedram, Horowitz, and Dally] can also accelerate networks with unstructured sparsity while reducing energy consumption.

Acknowledgements. This work was supported by the Natural Sciences and Engineering Research Council of Canada.

References

  • [Arandjelovic et al.(2016)Arandjelovic, Gronat, Torii, Pajdla, and Sivic] R. Arandjelovic, P. Gronat, A. Torii, T. Pajdla, and J. Sivic. NetVLAD: CNN architecture for weakly supervised place recognition. In IEEE Conference on Computer Vision and Pattern Recognition, 2016.
  • [Cheng et al.(2015)Cheng, Yu, Feris, Kumar, Choudhary, and Chang] Y. Cheng, F. X. Yu, R. S. Feris, S. Kumar, A. Choudhary, and S.-F. Chang. An exploration of parameter redundancy in deep networks with circulant projections. In IEEE International Conference on Computer Vision, 2015.
  • [Cimpoi et al.(2014)Cimpoi, Maji, Kokkinos, Mohamed, and Vedaldi] M. Cimpoi, S. Maji, I. Kokkinos, S. Mohamed, and A. Vedaldi. Describing textures in the wild. In IEEE Conference on Computer Vision and Pattern Recognition, 2014.
  • [Courbariaux et al.(2015)Courbariaux, Bengio, and David] M. Courbariaux, Y. Bengio, and J.-P. David. BinaryConnect: Training deep neural networks with binary weights during propagations. In Advances in Neural Information Processing Systems, 2015.
  • [Gardner et al.(2014)Gardner, Kusner, Xu, Weinberger, and Cunningham] J. R. Gardner, M. J. Kusner, Z. Xu, K. Q. Weinberger, and J. P. Cunningham. Bayesian optimization with inequality constraints. In International Conference on Machine Learning, 2014.
  • [Guo et al.(2016)Guo, Yao, and Chen] Y. Guo, A. Yao, and Y. Chen. Dynamic network surgery for efficient DNNs. In Advances in Neural Information Processing Systems, 2016.
  • [Han et al.(2016a)Han, Liu, Mao, Pu, Pedram, Horowitz, and Dally] S. Han, X. Liu, H. Mao, J. Pu, A. Pedram, M. A. Horowitz, and W. J. Dally. EIE: Efficient inference engine on compressed deep neural network. In ACM/IEEE International Symposium on Computer Architecture, 2016a.
  • [Han et al.(2016b)Han, Mao, and Dally] S. Han, H. Mao, and W. J. Dally. Deep Compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. In International Conference on Learning Representations, 2016b.
  • [Hassibi and Stork(1992)] B. Hassibi and D. G. Stork. Second order derivatives for network pruning: optimal brain surgeon. In Advances in Neural Information Processing Systems, 1992.
  • [Hinton et al.(2015)Hinton, Vinyals, and Dean] G. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. arXiv:1503.02531, 2015.
  • [Iandola et al.(2016)Iandola, Han, Moskewicz, Ashraf, Dally, and Keutzer] F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. arXiv:1602.07360, 2016.
  • [Johns et al.(2016)Johns, Leutenegger, and Davison] E. Johns, S. Leutenegger, and A. J. Davison. Pairwise decomposition of image sequences for active multi-view recognition. In IEEE Conference on Computer Vision and Pattern Recognition, 2016.
  • [Kendall et al.(2015)Kendall, Grimes, and Cipolla] A. Kendall, M. Grimes, and R. Cipolla. PoseNet: A convolutional network for real-time 6-DOF camera relocalization. In IEEE International Conference on Computer Vision, 2015.
  • [Krizhevsky et al.(2012)Krizhevsky, Sutskever, and Hinton] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012.
  • [Lebedev and Lempitsky(2016)] V. Lebedev and V. Lempitsky. Fast ConvNets using group-wise brain damage. In IEEE Conference on Computer Vision and Pattern Recognition, 2016.
  • [LeCun et al.(1990)LeCun, Denker, and Solla] Y. LeCun, J. S. Denker, and S. A. Solla. Optimal brain damage. In Advances in Neural Information Processing Systems, 1990.
  • [Liu et al.(2016)Liu, Anguelov, Erhan, Szegedy, Reed, Fu, and Berg] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg. SSD: Single shot multibox detector. In European Conference on Computer Vision, 2016.
  • [Moczulski et al.(2016)Moczulski, Denil, Appleyard, and de Freitas] M. Moczulski, M. Denil, J. Appleyard, and N. de Freitas. ACDC: A structured efficient linear layer. In International Conference on Learning Representations, 2016.
  • [Qi et al.(2016)Qi, Su, Nießner, Dai, Yan, and Guibas] C. R. Qi, H. Su, M. Nießner, A. Dai, M. Yan, and L. J. Guibas. Volumetric and multi-view CNNs for object classification on 3D data. In IEEE Conference on Computer Vision and Pattern Recognition, 2016.
  • [Rasmussen and Williams(2006)] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
  • [Rastegari et al.(2016)Rastegari, Ordonez, Redmon, and Farhadi] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi. XNOR-Net: ImageNet classification using binary convolutional neural networks. In European Conference on Computer Vision, 2016.
  • [Redmon et al.(2016)Redmon, Divvala, Girshick, and Farhadi] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi. You only look once: unified, real-time object detection. In IEEE Conference on Computer Vision and Pattern Recognition, 2016.
  • [Romero et al.(2015)Romero, Ballas, Kahou, Chassang, Gatta, and Bengio] A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio. FitNets: hints for thin deep nets. In International Conference on Learning Representations, 2015.
  • [Russakovsky et al.(2014)Russakovsky, Deng, Su, Krause, Satheesh, Ma, Huang, Karpathy, Khosla, Bernstein, Berg, and Fei-Fei] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. arXiv:1409.0575, 2014.
  • [Simonyan and Zisserman(2014)] K. Simonyan and A. Zisserman. Two-stream convolutional networks for action recognition in videos. In Advances in Neural Information Processing Systems, 2014.
  • [Simonyan and Zisserman(2015)] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, 2015.
  • [Snoek et al.(2012)Snoek, Larochelle, and Adams] J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, 2012.
  • [Srinivas and Babu(2015)] S. Srinivas and R. V. Babu. Data-free parameter pruning for deep neural networks. In British Machine Vision Conference, 2015.
  • [Tran et al.(2015)Tran, Bourdev, Fergus, Torresani, and Paluri] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri. Learning spatiotemporal features with 3D convolutional networks. In IEEE International Conference on Computer Vision, 2015.
  • [Wang et al.(2013)Wang, Zoghi, Hutter, Matheson, and de Freitas] Z. Wang, M. Zoghi, F. Hutter, D. Matheson, and N. de Freitas. Bayesian optimization in high dimensions via random embeddings. In International Joint Conference on Artificial Intelligence, 2013.
  • [Wen et al.(2016)Wen, Wu, Wang, Chen, and Li] W. Wen, C. Wu, Y. Wang, Y. Chen, and H. Li. Learning structured sparsity in deep neural networks. In Advances in Neural Information Processing Systems, 2016.
  • [Yang and Newsam(2010)] Y. Yang and S. Newsam. Bag-of-visual-words and spatial extensions for land-use classification. In ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, 2010.
  • [Yang et al.(2015)Yang, Moczulski, Denil, de Freitas, Smola, Song, and Wang] Z. Yang, M. Moczulski, M. Denil, N. de Freitas, A. Smola, L. Song, and Z. Wang. Deep fried convnets. In IEEE International Conference on Computer Vision, 2015.
  • [Zhang et al.(2016)Zhang, Isola, and Efros] R. Zhang, P. Isola, and A. A. Efros. Colorful image colorization. In European Conference on Computer Vision, 2016.
  • [Zhou et al.(2014)Zhou, Lapedriza, Xiao, Torralba, and Oliva] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. Learning deep features for scene recognition using places database. In Advances in Neural Information Processing Systems, 2014.
  • [Zhou et al.(2016)Zhou, Alvarez, and Porikli] H. Zhou, J. M. Alvarez, and F. Porikli. Less is more: towards compact CNNs. In European Conference on Computer Vision, 2016.