Compressing Convolutional Neural Networks

06/14/2015 · Wenlin Chen et al. · Washington University in St. Louis; NVIDIA

Convolutional neural networks (CNN) are increasingly used in many areas of computer vision. They are particularly attractive because of their ability to "absorb" great quantities of labeled data through millions of parameters. However, as model sizes increase, so do the storage and memory requirements of the classifiers. We present a novel network architecture, Frequency-Sensitive Hashed Nets (FreshNets), which exploits inherent redundancy in both convolutional layers and fully-connected layers of a deep learning model, leading to dramatic savings in memory and storage consumption. Based on the key observation that the weights of learned convolutional filters are typically smooth and low-frequency, we first convert filter weights to the frequency domain with a discrete cosine transform (DCT) and use a low-cost hash function to randomly group frequency parameters into hash buckets. All parameters assigned to the same hash bucket share a single value learned with standard back-propagation. To further reduce model size we allocate fewer hash buckets to high-frequency components, which are generally less important. We evaluate FreshNets on eight data sets, and show that it leads to drastically better performance under compression than several relevant baselines.


1 Introduction

In recent years, convolutional neural networks (CNNs) have led to impressive results in object recognition [17], face verification [24] and audio classification [20]. Problems that seemed impossibly hard only five years ago can now be solved at better than human accuracy [15]. Although CNNs have been known for a quarter of a century [12], only recently have their superb generalization abilities been widely accepted across the machine learning and computer vision communities. This broad acceptance coincides with the release of very large collections of labeled data [9]. Deep networks and CNNs are particularly well suited to learn from large quantities of data, in part because they can have arbitrarily many parameters. As data sets grow, so do model sizes. In 2012, the first ImageNet competition winner to use a CNN already had 240MB of parameters, and the most recent winning model, in 2014, required 567MB [26].

Independently, there has been a parallel shift of computing from servers and workstations to mobile platforms. As of January 2014, more web searches were made through smart phones than through computers (http://tinyurl.com/omd58sq). Today, speech recognition is primarily used on cell phones with intelligent assistants such as Apple's Siri, Google Now or Microsoft's Cortana. As this trend continues, we expect machine learning applications to also shift increasingly towards mobile devices. However, the combination of deep learning's ever-increasing model sizes with mobile computing reveals an inherent dilemma. Mobile devices have tight memory and storage limitations. For example, even the most recent iPhone 6 features only 1GB of RAM, most of which must be used by the operating system or the application itself. In addition, developers must make their apps compatible with the most limited phone still in circulation, often restricting models to just a few megabytes of parameters.

In response, there has been a recent interest in reducing the model sizes of deep networks. Denil et al. [10] use low-rank decomposition of the weight matrices to reduce the effective number of parameters in the network. Buciluǎ et al. [4] and Ba et al. [1] show that complex models can be compressed into 1-layer neural networks. Independently, the model size of neural networks can be reduced effectively through reduced bit precision [7].

In this paper we propose a novel approach to neural network compression targeted especially at CNNs. We build on recent work by Chen et al. [5], who show that the weights of fully connected networks can be effectively compressed with the hashing trick [30]. Due to the nature of local pixel correlation in images (i.e. spatial locality), filters in CNNs tend to be smooth. We transform these filters into the frequency domain with the discrete cosine transform (DCT) [22]. In frequency space, the filters are naturally dominated by low-frequency components. Our compression takes this smoothness property into account and randomly hashes the frequency components of all CNN filters at a given layer into one common set of hash buckets. All components inside one hash bucket share the same value. As lower frequency components are more pronounced than higher frequencies, we allow collisions only between similar frequencies and allocate fewer hash buckets to the high frequencies (which are less important).

Our approach has several compelling properties: 1. The number of parameters in the CNN is independent of the number of convolutional filters; 2. During testing we only need to add a low-cost hash function and the inverse DCT transformation to any existing CNN code for filter reconstruction; 3. During training, the hashed weights can be learned with simple back-propagation [2]—the gradient of a hash bucket value is the sum of gradients of all hashed frequency components in that bucket.

We evaluate our compression scheme on eight deep learning image benchmark data sets and compare against four competitive baselines. Although all compression schemes lead to lower test accuracy as the compression increases, our FreshNets method is by far the most effective compression method and yields the lowest generalization error rates on almost all classification tasks.

2 Background

Feature Hashing (a.k.a. the hashing trick) [8, 25, 30] has been previously studied as a technique for reducing model storage size. In general, it can be regarded as a dimensionality reduction method that maps an input vector $x \in \mathbb{R}^d$ to a much smaller feature space via a mapping $\phi: \mathbb{R}^d \to \mathbb{R}^k$, where $k \ll d$. The mapping $\phi$ is a composite of two approximately uniform auxiliary hash functions $h: \mathbb{N} \to \{1, \dots, k\}$ and $\xi: \mathbb{N} \to \{-1, +1\}$. The $j$-th element of the $k$-dimensional hashed input is defined as

$$\phi_j(x) = \sum_{i:\, h(i) = j} \xi(i)\, x_i.$$

As shown in [30], a key property of feature hashing is its preservation of inner product operations, where inner products after hashing produce the correct pre-hash inner product in expectation:

$$\mathbb{E}\!\left[\phi(x)^\top \phi(y)\right] = x^\top y.$$

This property holds because of the bias-correcting sign factor $\xi$. With feature hashing, models are learned directly in the much smaller space $\mathbb{R}^k$, which not only speeds up training and evaluation but also significantly conserves memory. For example, a linear classifier in the original space would occupy $O(d)$ memory for model parameters, but when learned in the hashed space only requires $O(k)$ parameters. The information loss induced by hash collisions is much less severe for sparse feature vectors and can be counteracted through multiple hashing [25] or larger hash tables [30].
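The following minimal sketch (not the authors' Torch7 implementation; the CRC32-based hash functions are illustrative stand-ins for $h$ and $\xi$) hashes a dense vector into $k$ buckets and checks empirically that inner products are preserved in expectation:

```python
import numpy as np
import zlib

def phi(x, k, seed=0):
    """Hash a dense vector x into k buckets with a sign hash (hashing trick)."""
    out = np.zeros(k)
    for i, v in enumerate(x):
        bucket = zlib.crc32(repr((seed, "h", i)).encode()) % k
        sign = 1.0 if zlib.crc32(repr((seed, "xi", i)).encode()) % 2 == 0 else -1.0
        out[bucket] += sign * v
    return out

rng = np.random.default_rng(0)
x, y = rng.standard_normal(1000), rng.standard_normal(1000)
# Inner products are preserved in expectation; averaging over independent
# hash seeds approximates the original inner product.
est = np.mean([phi(x, 64, s) @ phi(y, 64, s) for s in range(200)])
print(round(est, 2), round(float(x @ y), 2))
```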

Discrete Cosine Transform (DCT) [22]. Methods built on the DCT are widely used for compressing images and movies, and form the basis of the JPEG standard [29]. The DCT expresses a function as a weighted combination of sinusoids of different phases/frequencies, where the weight of each sinusoid reflects the magnitude of the corresponding frequency in the input. When employed with sufficient numerical precision and without quantization or other compression operations, the DCT and inverse DCT (projecting frequency inputs back to the spatial domain) are lossless. Compression of images is made possible by the local smoothness of pixels (e.g. a blue sky), which can be well represented regionally by fewer non-zero frequency components. Though highly related to the discrete Fourier transform (DFT), the DCT is often preferable for compression tasks because of its spectral compaction property: the weights of most natural images tend to be concentrated in a few low-frequency components of the DCT [22]. Further, the DCT yields a real-valued representation, unlike the DFT, whose representation has imaginary components. Given an input matrix $V \in \mathbb{R}^{d \times d}$, the corresponding matrix $\mathcal{V}$ in the frequency domain after the DCT is defined as:

$$\mathcal{V}_{j_1 j_2} = s_{j_1} s_{j_2} \sum_{i_1=0}^{d-1} \sum_{i_2=0}^{d-1} V_{i_1 i_2}\, c(i_1, j_1)\, c(i_2, j_2), \qquad (1)$$

where $c(i, j) = \cos\!\left[\frac{\pi}{d}\left(i + \frac{1}{2}\right) j\right]$ is the cosine basis function, and $s_j = \sqrt{1/d}$ when $j = 0$ and $s_j = \sqrt{2/d}$ otherwise. We use the shorthand $\mathcal{F}$ to denote the DCT operation in Eq. (1), i.e. $\mathcal{V} = \mathcal{F}(V)$. The inverse DCT converts $\mathcal{V}$ from the frequency domain back to the spatial domain, reconstructing $V$ without loss:

$$V_{i_1 i_2} = \sum_{j_1=0}^{d-1} \sum_{j_2=0}^{d-1} s_{j_1} s_{j_2}\, \mathcal{V}_{j_1 j_2}\, c(i_1, j_1)\, c(i_2, j_2). \qquad (2)$$

We denote the inverse DCT function in Eq. (2) as $\mathcal{F}^{-1}$, i.e. $V = \mathcal{F}^{-1}(\mathcal{V})$.
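As a small numerical illustration of Eqs. (1) and (2), the sketch below relies on SciPy's orthonormal type-II DCT, which (under that assumption) matches the definition above; it checks that the transform pair is lossless and that a smooth filter is dominated by low-frequency components:

```python
import numpy as np
from scipy.fft import dctn, idctn

d = 5
xs = np.linspace(-1.0, 1.0, d)
# A smooth "filter": low-frequency structure plus a little noise.
V = np.exp(-(xs[:, None] ** 2 + xs[None, :] ** 2)) + 0.01 * np.random.randn(d, d)

F = dctn(V, norm="ortho")       # spatial -> frequency, F = DCT(V), Eq. (1)
V_rec = idctn(F, norm="ortho")  # frequency -> spatial, Eq. (2)

print(np.allclose(V, V_rec))    # True: the transform pair is lossless
print(np.round(np.abs(F), 2))   # energy concentrates at small index sums j1 + j2
```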

3 Frequency-Sensitive Hashed Nets

Figure 1: A schematic illustration of FreshNets. Two spatial filters are reconstructed from the frequency weights stored in the vector $w$. The frequency weights are accessed with two hash functions and then transformed to the spatial domain. The vector $w$ is partitioned into sub-vectors shared by all entries with similar frequency (corresponding to the index sum $j_1 + j_2$). Colors indicate which hash bucket was accessed.

Here we present FreshNets, a method for using weight sharing to reduce the model size (and memory demands) of convolutional neural networks. Similar to the work of Chen et al. [5], we achieve smaller models by randomly forcing weights throughout the network to share identical values. Unlike previous work, we implement the weight sharing and gradient updates of convolutional filters in the frequency domain. These sharing constraints are made prior to training, and we learn frequency weights under the sharing assignments. Since the assignments are made with a hash function, they incur no additional storage.

Filters in spatial and frequency domain. Let the matrix $V^{kl} \in \mathbb{R}^{d \times d}$ denote the weight matrix of the $d \times d$ convolutional filter that connects the $k$-th input plane to the $l$-th output plane. (For notational convenience we assume square filters and only consider the filters in a single layer of the network.) The weights of all filters in a convolutional layer can be denoted by a 4-dimensional tensor $\mathbf{V} \in \mathbb{R}^{m \times n \times d \times d}$, where $m$ and $n$ are the number of input planes and output planes, respectively, resulting in a total of $m n d^2$ parameters. Convolutional filters can be represented equivalently in either the spatial or the frequency domain, mapping between the two via the DCT and its inverse. We denote the filter in the frequency domain as $\mathcal{V}^{kl} = \mathcal{F}(V^{kl})$ and recover the original spatial representation through $V^{kl} = \mathcal{F}^{-1}(\mathcal{V}^{kl})$, as defined in Eq. (1) and (2), respectively. The tensor of all filters in the frequency domain is denoted $\boldsymbol{\mathcal{V}}$.

Random Weight Sharing by Hashing. We would like to reduce the number of model parameters to exactly $K$ values stored in a weight vector $w \in \mathbb{R}^K$, where $K \ll m n d^2$. To achieve this, we randomly assign a value from $w$ to each filter frequency weight in $\boldsymbol{\mathcal{V}}$. A naïve implementation of this random weight sharing would introduce an auxiliary matrix to track the weight assignments, using significant additional memory. To address this problem, Chen et al. [5] advocate use of the hashing trick to (pseudo-)randomly assign shared parameters. Using the hashing trick, we tie each filter weight to an element of $w$ indexed by the output of a hash function $h(\cdot)$:

$$\mathcal{V}^{kl}_{j_1 j_2} = \xi(k, l, j_1, j_2)\; w_{h(k, l, j_1, j_2)}, \qquad (3)$$

where $h(k, l, j_1, j_2) \in \{1, \dots, K\}$, and $\xi(k, l, j_1, j_2) \in \{-1, +1\}$ is a sign factor computed by a second hash function to preserve inner products in expectation, as described in Section 2. With the mapping in Eq. (3), we can implement shared parameter assignments with no additional storage cost. (For a schematic illustration, see Figure 1. The figure also incorporates the frequency-sensitive hashing scheme discussed later in this section.)
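A minimal sketch of the lookup in Eq. (3), assuming simple CRC32-based stand-ins for the hash functions $h$ and $\xi$ (the paper's implementation used Torch7; names here are illustrative):

```python
import numpy as np
import zlib

def h(key, K, seed=0):
    """Bucket hash: map (k, l, j1, j2) to an index in {0, ..., K-1}."""
    return zlib.crc32(repr((seed, key)).encode()) % K

def xi(key, seed=1):
    """Sign hash in {-1, +1} for bias correction."""
    return 1.0 if zlib.crc32(repr((seed, key)).encode()) % 2 == 0 else -1.0

def freq_filter(w, k, l, d):
    """Rebuild the d x d frequency-domain filter for input plane k and
    output plane l from the shared weight vector w, as in Eq. (3)."""
    F = np.empty((d, d))
    for j1 in range(d):
        for j2 in range(d):
            key = (k, l, j1, j2)
            F[j1, j2] = xi(key) * w[h(key, len(w))]
    return F

w = np.random.randn(50)              # K = 50 shared parameters for the whole layer
F_kl = freq_filter(w, k=0, l=3, d=5)
```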

Gradients over Shared Frequency Weights. Typical convolutional neural networks learn filters in the spatial domain. As our shared weights are stored in the frequency domain, we derive the gradient with respect to filter parameters in frequency space. Following Eq. (2), we express the gradient of parameters in the spatial domain w.r.t. their counterparts in the frequency domain:

$$\frac{\partial V_{i_1 i_2}}{\partial \mathcal{V}_{j_1 j_2}} = s_{j_1} s_{j_2}\, c(i_1, j_1)\, c(i_2, j_2). \qquad (4)$$

Let $L$ be the loss function adopted for training. Using standard back-propagation, we can derive the gradient of $L$ w.r.t. filter parameters in the spatial domain, $\frac{\partial L}{\partial V_{i_1 i_2}}$. By the chain rule with Eq. (4), we express the gradient of $L$ in the frequency domain:

$$\frac{\partial L}{\partial \mathcal{V}_{j_1 j_2}} = \sum_{i_1=0}^{d-1} \sum_{i_2=0}^{d-1} \frac{\partial L}{\partial V_{i_1 i_2}}\, \frac{\partial V_{i_1 i_2}}{\partial \mathcal{V}_{j_1 j_2}} = s_{j_1} s_{j_2} \sum_{i_1=0}^{d-1} \sum_{i_2=0}^{d-1} \frac{\partial L}{\partial V_{i_1 i_2}}\, c(i_1, j_1)\, c(i_2, j_2). \qquad (5)$$

Comparing with Eq. (1), we see that the gradient in the frequency domain is merely the DCT of the gradient in the spatial domain:

$$\frac{\partial L}{\partial \mathcal{V}} = \mathcal{F}\!\left(\frac{\partial L}{\partial V}\right). \qquad (6)$$

We compute the gradient for each shared weight $w_t$ by simply summing over the gradient at each filter parameter where the weight is assigned, i.e. over all $(k, l, j_1, j_2)$ with $h(k, l, j_1, j_2) = t$:

$$\frac{\partial L}{\partial w_t} = \sum_{(k, l, j_1, j_2):\, h(k, l, j_1, j_2) = t} \frac{\partial L}{\partial \mathcal{V}^{kl}_{j_1 j_2}}\; \xi(k, l, j_1, j_2), \qquad (7)$$

where $\mathcal{V}^{kl}_{j_1 j_2}$ denotes the $(j_1, j_2)$ entry in the matrix $\mathcal{V}^{kl}$.
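The backward pass of Eqs. (6) and (7) can be sketched as follows, again with illustrative CRC32-based stand-ins for $h$ and $\xi$: given the spatial-domain gradients of the loss for each filter, it takes their DCT and accumulates the results into the shared hash buckets:

```python
import numpy as np
import zlib
from scipy.fft import dctn

def h(key, K, seed=0):
    return zlib.crc32(repr((seed, key)).encode()) % K

def xi(key, seed=1):
    return 1.0 if zlib.crc32(repr((seed, key)).encode()) % 2 == 0 else -1.0

def shared_weight_grad(dL_dV, K, d):
    """dL_dV maps (k, l) -> d x d spatial-domain gradient for filter V^{kl}.
    Returns the gradient w.r.t. the K shared weights."""
    grad_w = np.zeros(K)
    for (k, l), g_spatial in dL_dV.items():
        g_freq = dctn(g_spatial, norm="ortho")  # Eq. (6): DCT of spatial gradient
        for j1 in range(d):
            for j2 in range(d):
                key = (k, l, j1, j2)
                grad_w[h(key, K)] += xi(key) * g_freq[j1, j2]  # Eq. (7)
    return grad_w

d, K = 5, 50
dL_dV = {(0, 0): np.random.randn(d, d), (0, 1): np.random.randn(d, d)}
print(shared_weight_grad(dL_dV, K, d).shape)    # (50,)
```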

Figure 2: An example of a filter in spatial (left) and frequency domain (right).

Frequency Sensitive Hashing. Figure 2 shows a filter in the spatial (left) and frequency (right) domains. In the spatial domain, CNN filters are smooth [17] due to the local pixel smoothness in natural images. In the frequency domain, this corresponds to components with large magnitudes in the low frequencies, depicted in the upper left half of the frequency-domain filter in Figure 2. Correspondingly, the high frequencies, in the bottom right half, have magnitudes near zero.

As components of different frequency groups tend to be of different magnitudes (and thereby of varying importance to the spatial structure of the filter), we want to avoid collisions between high and low frequency components. Therefore, we assign separate hash spaces to different frequency groups. In particular, we partition the values of $w$ into $2d - 1$ sub-vectors $w^0, \dots, w^{2d-2}$ of sizes $K_0, \dots, K_{2d-2}$, where $\sum_j K_j = K$. This partitioning allows parameters with the same frequency, corresponding to their index sum $j = j_1 + j_2$, to be hashed into a corresponding dedicated hash space $w^j$. We rewrite Eq. (3) with the new frequency-sensitive shared weight assignments:

$$\mathcal{V}^{kl}_{j_1 j_2} = \xi(k, l, j_1, j_2)\; w^{j}_{h_j(k, l, j_1, j_2)}, \quad \text{where } j = j_1 + j_2,$$

and where $h_j(\cdot)$ maps an input key to a natural number in $\{1, \dots, K_j\}$.

We define a compression rate $r_j$ for each frequency region $j$ and size the corresponding hash space $w^j$ in proportion to it. A smaller $r_j$ induces more collisions during hashing, leading to increased weight sharing. Since lower frequency components tend to be of higher importance, making collisions more hurtful, we commonly assign larger $r_j$ (fewer collisions) to low-frequency regions. Intuitively, given a size budget for the whole convolutional layer, we want to squeeze the hash space of the high-frequency regions to save space for the low-frequency regions. These compression rates can either be assigned by hand or determined programmatically by cross-validation, as demonstrated in Section 5.
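A minimal sketch of the frequency-sensitive assignment, assuming the same illustrative hash functions as above; weights with index sum $j = j_1 + j_2$ are drawn from a dedicated sub-vector, so low and high frequencies never collide:

```python
import numpy as np
import zlib

def h_j(key, K_j, seed=0):
    """Per-band bucket hash into {0, ..., K_j - 1}."""
    return zlib.crc32(repr((seed, key)).encode()) % K_j

def xi(key, seed=1):
    return 1.0 if zlib.crc32(repr((seed, key)).encode()) % 2 == 0 else -1.0

def freq_filter_fs(w_bands, k, l, d):
    """w_bands: list of 2d-1 sub-vectors, one per frequency band j = j1 + j2."""
    F = np.empty((d, d))
    for j1 in range(d):
        for j2 in range(d):
            j = j1 + j2
            key = (k, l, j1, j2)
            F[j1, j2] = xi(key) * w_bands[j][h_j(key, len(w_bands[j]))]
    return F

d = 5
sizes = [6, 6, 5, 4, 3, 2, 2, 1, 1]   # K_0 >= ... >= K_{2d-2}: favor low frequencies
w_bands = [np.random.randn(K_j) for K_j in sizes]
F_kl = freq_filter_fs(w_bands, k=0, l=3, d=d)
```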

4 Related Work

Several recent studies have confirmed that there is significant redundancy in the parameters learned in deep neural networks. Recent work by Denil et al. [10] learns the parameters of fully-connected layers after decomposition into two low-rank matrices, i.e. $W = AB$ where $W \in \mathbb{R}^{m \times n}$, $A \in \mathbb{R}^{m \times r}$ and $B \in \mathbb{R}^{r \times n}$. In this way, the original $mn$ parameters can be stored with $O(r(m + n))$ storage, where $r \ll \min(m, n)$. Several works apply related approaches to speed up evaluation time in convolutional neural networks. Two works propose to approximate convolutional filters by a weighted linear combination of basis filters [23, 16]. In this setting, the convolution operation only needs to be performed with the small set of basis filters. The desired output feature maps are computed by matrix multiplication as the weighted sum of these basis convolutions. Further speedup can be achieved by learning rank-one basis filters so that the convolution operations are very cheap to compute [11, 19]. Based on this idea, Denton et al. [11] advocate decomposing the four-dimensional tensor of filter weights into a sum of rank-one four-dimensional tensors. In addition, they adopt bi-clustering to group filters such that each subgroup can be better approximated by rank-one tensors.

In each of these works, evaluation time is the main focus, with any resulting storage reduction achieved merely as a side effect. Other works focus entirely on compressing the fully-connected layers of CNNs [13, 31]. However, with the trend toward architectures with fewer fully connected layers and additional convolutional layers [27], compression of filters is of increased importance. Another technique for speeding up convolutional neural network evaluation is computing convolutions in the Fourier frequency domain, as convolution in the spatial domain is equivalent to (comparatively lower-cost) element-wise multiplication in the frequency domain [21, 28]. Unlike FreshNets, for a small filter and a much larger image, Mathieu et al. [21] convert the filter to a frequency-domain representation of the same size as the image by oversampling the frequencies, which is necessary for element-wise multiplication with the larger image but also increases the memory overhead at test time. Training in the Fourier frequency domain may be advantageous for similar reasons, particularly when convolutions are being performed over large 3-D volumes [3].

Most relevant to this work is HashedNets [5] which compresses the fully connected layers of deep neural networks. This method uses the hashing trick to efficiently implement parameter sharing prior to learning, achieving notable compression with less loss of accuracy than the competing baselines which relied on low-rank decomposition or learning in randomly sparse architectures.

5 Experimental Results

In this section, we conduct several comprehensive experiments on benchmark datasets to evaluate the performance of FreshNets.

Datasets.

We experiment with eight benchmark datasets: cifar10, cifar100, svhn and five challenging variants of mnist. The cifar10 dataset contains 32x32 images with three color channels, drawn from ten classes with 6,000 images per class. The cifar100 dataset also contains 32x32 images, but is more challenging since the images are selected from 100 classes (each class has 600 images). For both cifar datasets, 50,000 images are designated for training and the remaining 10,000 images for testing. To improve accuracy on cifar100, we augment the training data by horizontal reflection and cropping [17], resulting in a substantially larger training set. The svhn dataset is a large collection of digits (10 classes) cropped from real-world scenes, consisting of 73,257 training images, 26,032 testing images and 531,131 less difficult images for additional training. In our experiments, we use all available training images, for a total of 604,388 training samples. For the mnist variants [18], each variation either reduces the training size (mnist-07) or amends the original digits by rotation (rot), background superimposition (bg-rand and bg-img), or a combination thereof (bg-rot). We preprocess all datasets with whitening (except cifar100 and svhn, which were prohibitively large).

Baselines.

We compare the proposed FreshNets with four baseline methods: HashedNets [5], low-rank decomposition (LRD) [10], filter dropping (DropFilt) and frequency dropping (DropFreq). HashedNets was originally proposed to compress fully-connected layers in deep neural networks via the hashing trick. In this baseline, we apply the hashing trick directly to the convolutional layer by hashing filter weights in the spatial domain, which induces random weight sharing across all filters in a single convolutional layer. Additionally, we compare against low-rank decomposition of the convolutional filters [10]. Following the method in [11], we unfold the four-dimensional filter tensor to form a two-dimensional matrix on which we apply the low-rank decomposition. The parameters of the decomposition are fine-tuned via back-propagation. DropFreq learns parameters in the DCT frequency domain but sets high-frequency components to zero to meet the compression requirement. DropFilt compresses simply by reducing the number of filters in each convolutional layer.

All methods were implemented using Torch7 [6] and run on NVIDIA GTX TITAN graphics cards with 2688 cores and 6GB of global memory. Model parameters are stored and updated as 32-bit floating-point values. (The compression rates of all methods could be further improved by learning and storing parameters in lower precision [7, 14].)

Layer 1: C, RL
Layer 2: C, MP, DO, RL
Layer 3: C, RL
Layer 4: C, MP, DO, RL
Layer 5: C, MP, DO, RL
Layer 6: FC, Softmax

Table 1: Network architecture. C: Convolution. RL: ReLU. MP: Max-pooling. DO: Dropout. FC: Fully-connected. The number of parameters in the fully-connected layer is specific to the input image size and varies with the number of classes (10 or 100, depending on the dataset).
Table 2: Test error rates (in %) on cifar10, cifar100, svhn, mnist-07, rot, bg-rot, bg-rand and bg-img at two compression factors. Panel (a) compares CNN, DropFilt, DropFreq, LRD, HashedNets and FreshNets at the lower compression factor; panel (b) compares CNN, LRD, HashedNets and FreshNets at the higher compression factor. Convolutional layers were compressed by the indicated methods, with no convolutional-layer compression applied to CNN. The fully connected layer is compressed by HashedNets for all methods, including CNN. The best result on each dataset (excluding the uncompressed CNN reference) is highlighted.

Comprehensive evaluation.

We adopt the network architecture shown in Table 1 for all datasets. The architecture is a deep convolutional neural network consisting of five convolutional layers and one fully-connected layer. Before convolution, input feature maps are zero-padded such that the output maps remain the same size as the (un-padded) input maps after convolution. Max-pooling is performed after the convolutions in layers 2, 4 and 5 with a 2x2 pooling region and stride 2, reducing both input map dimensions by half. Rectified linear units are adopted as the activation function throughout. The output of the network is a softmax over the class labels.

Figure 3: Test error rates at varying compression levels for datasets cifar10 (left) and rot (right).

In this architecture, the convolutional layers hold the majority of the parameters (millions of weights in the convolutional layers versus thousands in the fully connected layer). During training, we optimize parameters using mini-batch gradient descent with momentum. We use a portion of the training set as a validation set for early stopping. For FreshNets, we use a frequency-sensitive compression scheme which increases weight sharing among higher frequency components: we evaluate several frequency-sensitive schemes later in this section, but for this comprehensive evaluation we set the frequency compression rates by a rescaled beta distribution with fixed hyper-parameters for all layers. For all baselines, we apply HashedNets [5] to the fully connected layer at the corresponding level of compression. All error results are reported on the test set.

Table 2(a) and (b) show the comprehensive evaluation of all methods under the two compression ratios. We exclude DropFilt and DropFreq from Table 2(b) because neither supports the higher compression ratio in this architecture for all layers. For all methods, the fully connected layer (top layer) is compressed by HashedNets [5] at the corresponding compression rate. In this way, the final size of the entire network respects the specified compression ratio. For reference, we also show the error rate of a standard convolutional neural network (CNN, columns 2 and 8) with the fully-connected layer compressed by HashedNets and no compression in the convolutional layers. Excluding this reference, we highlight the method with the best test error on each dataset in blue bold.

Figure 4: Results with different frequency-sensitive compression schemes, each adopting a different beta distribution to set the compression rate of each frequency. The inner figure shows the normalized test error of each scheme on cifar10 along with the beta distribution hyper-parameters. The outer figure depicts the five beta distributions (with colors matching the inner figure).

We discern several general trends. In Table 2(a), we observe the performance of DropFilt and DropFreq at the lower compression ratio. At this compression rate, DropFilt corresponds to a network with proportionally fewer filters at each layer. This architecture yields particularly poor test accuracy, including essentially random predictions on three datasets. DropFreq, which at this compression rate parameterizes each filter in the original network by only a few low-frequency values in the DCT frequency space, performs with similarly poor accuracy. Low-rank decomposition (LRD) and HashedNets each yield similar performance at both compression ratios. Neither explicitly considers the smoothness inherent in learned convolutional filters, instead compressing the filters in the spatial domain. Our method, FreshNets, consistently outperforms all baselines, particularly at the higher compression rate, as shown in Table 2(b). Using the same model as in Table 1, Figure 3 shows more complete curves of test error over multiple compression factors on the cifar10 and rot datasets.

Varying compression by frequency.

As described in Section 3, we allow a higher collision rate in the high-frequency components than in the low-frequency components of each filter. To demonstrate the utility of this scheme, we evaluate several hash compression schemes. Systematically, we set the compression rate of the $j$-th frequency band with a parameterized function of the normalized frequency. In this experiment, we use the beta distribution: $r_j \propto z^{\alpha-1}(1-z)^{\beta-1}$, where $z = \frac{j}{2d-2}$ is a real number between 0 and 1, $d$ is the filter size, and the proportionality constant is a normalizing factor chosen such that the resulting allocation of parameters meets the target parameter budget $K$, i.e. $\sum_j K_j = K$. We adjust $\alpha$ and $\beta$ to control the compression rate for each frequency region. As shown in Figure 4, we use multiple pairs of $\alpha$ and $\beta$, each of which results in a different compression scheme. For example, if $\alpha \le 1$ and $\beta > 1$, the compression rate monotonically decreases as a function of component frequency, meaning more parameter sharing among high-frequency components (blue curve in Figure 4).
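One plausible way to turn this scheme into concrete per-band bucket counts is sketched below (the exact normalization and hyper-parameter values used in the paper may differ; the values here are illustrative):

```python
import numpy as np
from scipy.stats import beta as beta_dist

def band_budgets(d, K, a, b):
    """Allocate a budget of K shared weights across the 2d-1 frequency bands
    of a d x d filter, with per-band rates following a beta density over the
    normalized band index z = j / (2d - 2)."""
    j = np.arange(2 * d - 1)
    z = np.clip(j / (2 * d - 2), 1e-6, 1 - 1e-6)
    n_j = d - np.abs(j - (d - 1))            # components per band in one filter
    weights = beta_dist.pdf(z, a, b) * n_j
    K_j = np.maximum(1, np.round(weights * K / weights.sum())).astype(int)
    return K_j

# a = 1, b > 1 gives monotonically decreasing rates: more sharing (fewer
# buckets) at high frequencies. Hyper-parameter values here are illustrative.
print(band_budgets(d=5, K=40, a=1.0, b=2.5))
```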

To quickly evaluate the performance of each scheme, we use a simple four-layer FreshNets in which the first two layers are DCT-hashed convolutional layers and the last two layers are fully connected layers. We test FreshNets on cifar10 with each of the compression schemes shown in Figure 4. In each, weight sharing is limited to groups of similar frequencies, as described in Section 3; however, the number of unique weights shared within each group is varied. We denote the scheme with a uniform beta distribution ($\alpha = \beta = 1$, red curve) as the frequency-oblivious scheme since it produces a uniform compression independent of frequency. In the inset bar plot in Figure 4, we report test error normalized by the test error of the frequency-oblivious scheme and averaged over several compression rates. We can see that the proposed scheme with fewer shared weights allocated to high-frequency components (represented by the blue curve) outperforms all other compression schemes. An inverse scheme, in which the high-frequency regions have the lowest collision rate (purple curve), performs the worst. These empirical results fit our assumption that the low-frequency components of a filter are more important than the high-frequency components.

Figure 5: Visualization of filters learned on mnist in (a) an uncompressed CNN, (b) a CNN compressed with FreshNets, and (c) a CNN compressed with HashedNets (the same compression rate is used in both (b) and (c)). FreshNets preserves the smoothness of the filters, whereas HashedNets does not.

Filter visualization.

We investigate the smoothness of the learned convolutional filters in Figure 5 by visualizing the filter weights (first layer) of (a) a standard, uncompressed CNN, (b) FreshNets, and (c) HashedNets (with weight sharing in the spatial domain). For this experiment, we again apply a four-layer network with two convolutional layers but adopt larger filters for better visualization. All three networks are trained on mnist, and both FreshNets and HashedNets compress the first convolutional layer. When plotting, we scale the values in each filter matrix to the range [0, 1]; hence, white and black pixels stand for large positive and negative weights, respectively. We observe that, although more blurry due to the compression, the filter weights of FreshNets are still smooth, while the weights in HashedNets appear more chaotic.

6 Conclusion

In this paper we present FreshNets, a method for learning convolutional neural networks with dramatically compressed model storage. Harnessing the hashing trick for parameter-free random weight sharing and leveraging the smoothness inherent in convolutional filters, FreshNets compresses parameters in a frequency-sensitive fashion such that significant model parameters (e.g. low-frequency components) are better preserved. As such, FreshNets preserves prediction accuracy significantly better than competing baselines at high compression rates.

References