NeST: A Neural Network Synthesis Tool Based on a Grow-and-Prune Paradigm

11/06/2017 · Xiaoliang Dai et al. · Princeton University

Neural networks (NNs) have begun to have a pervasive impact on various applications of machine learning. However, the problem of finding an optimal NN architecture for large applications has remained open for several decades. Conventional approaches search for the optimal NN architecture through extensive trial-and-error. Such a procedure is quite inefficient. In addition, the generated NN architectures incur substantial redundancy. To address these problems, we propose an NN synthesis tool (NeST) that automatically generates very compact architectures for a given dataset. NeST starts with a seed NN architecture. It iteratively tunes the architecture with gradient-based growth and magnitude-based pruning of neurons and connections. Our experimental results show that NeST yields accurate yet very compact NNs for a wide range of seed architectures. For example, for the LeNet-300-100 (LeNet-5) NN architecture derived for the MNIST dataset, we reduce network parameters by 34.1x (74.3x) and floating-point operations (FLOPs) by 35.8x (43.7x). For the AlexNet NN architecture derived for the ImageNet dataset, we reduce network parameters by 15.7x and FLOPs by 4.6x. All these results are the current state of the art for these architectures.


1 Introduction

This work was supported by NSF Grant No. CNS-1617640.

Over the last decade, deep neural networks (DNNs) have begun to revolutionize myriad research domains, such as computer vision, speech recognition, and machine translation [speechlstm, translation, human_performance]. Their ability to distill intelligence from a dataset through multi-level abstraction can even lead to super-human performance [lecun2015deep]. Thus, DNNs are emerging as a new cornerstone of modern artificial intelligence.

Though critically important, how to efficiently derive an appropriate DNN architecture from large datasets has remained an open problem. Researchers have traditionally derived the DNN architecture by sweeping through its architectural parameters and training the corresponding architecture until the point of diminishing returns in its performance. This approach suffers from three major problems. First, the widely used back-propagation (BP) algorithm assumes a fixed DNN architecture and only trains weights. Thus, training cannot improve the architecture. Second, a trial-and-error methodology can be inefficient as DNNs get deeper and contain millions of parameters. Third, simply going deeper and larger may lead to large, accurate, but over-parameterized DNNs. For example, Han et al. [PruningHS] showed that the number of parameters in VGG-16 can be reduced by 13x with no loss of accuracy.

To address these problems, we propose a DNN synthesis tool (NeST) that trains both DNN weights and architectures. NeST is inspired by the learning mechanism of the human brain, where the number of synaptic connections increases upon the birth of a baby, peaks after a few months, and decreases steadily thereafter [spectrum]. NeST starts DNN synthesis from a seed DNN architecture (birth point). It allows the DNN to grow connections and neurons based on gradient information (baby brain) so that the DNN can adapt to the problem at hand. Then, it prunes away insignificant connections and neurons based on magnitude information (adult brain) to avoid redundancy. A combination of network growth and pruning algorithms enables NeST to generate accurate and compact DNNs. We used NeST to synthesize various compact DNNs for the MNIST [LeNet] and ImageNet [imageNetdataset] datasets. NeST leads to drastic reductions in the number of parameters and floating-point operations (FLOPs) relative to the DNN baselines, with no accuracy loss.

2 Related Work

An evolutionary algorithm provides a promising solution to DNN architecture selection through evolution of network architectures. Its search mechanism involves iterations over mutation, recombination, and, most importantly, evaluation and selection of network architectures [GoogleEvolve, neat]. Additional performance enhancement techniques include better encoding methods [rl4] and algorithmic redesign for DNNs [neat2]. All these assist with a more efficient search of the wide DNN architecture space.

Reinforcement learning (RL) has emerged as a powerful new tool to solve this problem [rl, rl3, rl2, baidurl]. Zoph et al. [rl] use a recurrent neural network controller to iteratively generate groups of candidate networks, whose performance is then used as a reward for enhancing the controller. Baker et al. [rl3] propose a Q-learning based RL approach that enables convolutional architecture search. A recent work [rl2] proposes the NASNet architecture, which uses RL to search for architectural building blocks and achieves better performance than human-invented architectures.

The structure adaptation (SA) approach exploits network clues (e.g., the distribution of weights) to incorporate architecture selection into the training process. Existing SA methods can be further divided into two categories: constructive and destructive. A constructive approach starts with a small network and iteratively adds connections/neurons [DNC, Tiling]. A destructive approach, on the other hand, starts with a large network and iteratively removes connections/neurons. This can effectively reduce model redundancy. For example, recent pruning methods, such as network pruning [PruningHS, pruning_work, pruning_work1, net_trim], layer-wise surgeon [layerwise], sparsity learning [nips3, sparseconv, pruning_work2, pruning_work3], and dynamic network surgery [nips2], can offer extreme compactness for existing DNNs with little or no accuracy loss.

3 Synthesis Methodology

In this section, we propose NeST that leverages both constructive and destructive SA approaches through a grow-and-prune paradigm. Unless otherwise stated, we adopt the notations given in Table 1 to represent various variables.

Label Description
L - DNN loss function
W^l - weights between the l-th and (l+1)-th layers
b^l - biases in the l-th layer
x_n^l - output value of the n-th neuron in the l-th layer
u_n^l - input value of the n-th neuron in the l-th layer
R^{M×N} - M-by-N matrix with real elements
Table 1: Notations and descriptions
Figure 1: An illustration of the architecture synthesis flow in NeST.

3.1 Neural Network Synthesis Tool

We illustrate the NeST approach in Fig. 1. Synthesis begins with an initial seed architecture, typically initialized as a sparse and partially connected DNN. We also ensure that all neurons are connected in the seed architecture. Then, NeST utilizes two sequential phases to synthesize the DNN: (i) a gradient-based growth phase, and (ii) a magnitude-based pruning phase. In the growth phase, the gradient information in the architecture space is used to gradually grow new connections, neurons, and feature maps to achieve the desired accuracy. In the pruning phase, the DNN inherits the synthesized architecture and weights from the growth phase and iteratively removes redundant connections and neurons based on their magnitudes. NeST terminates with a lightweight DNN model that incurs no accuracy degradation relative to a fully connected model.
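The overall flow can be summarized in a few lines of Python. This is a minimal sketch under our own assumptions: accuracy, train, grow_step, and prune_step are caller-supplied placeholders for the procedures detailed in Sections 3.2 and 3.3, not functions from the released implementation.

```python
import copy

def synthesize(model, data, target_acc, accuracy, train, grow_step, prune_step):
    """Sketch of the NeST grow-and-prune flow (illustrative only)."""
    # Phase (i): gradient-based growth until the desired accuracy is reached.
    while accuracy(model, data) < target_acc:
        grow_step(model, data)      # Policies 1-3: connections, neurons, feature maps
        train(model, data)          # retrain the enlarged architecture

    # Phase (ii): magnitude-based pruning while accuracy is preserved.
    while True:
        snapshot = copy.deepcopy(model)
        prune_step(model)           # Policy 4: remove the smallest weights/neuron outputs
        train(model, data)          # retrain to recover accuracy
        if accuracy(model, data) < target_acc:
            return snapshot         # roll back the last pruning step and stop
```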

3.2 Gradient-based Growth

In this section, we explain our algorithms to grow connections, neurons, and feature maps.

3.2.1 Connection Growth

The connection growth algorithm greedily activates useful, but currently ‘dormant,’ connections. We incorporate it in the following learning policy:

Policy 1: Add a connection iff it can quickly reduce the value of the loss function L.

The DNN seed contains only a small fraction of active connections to propagate gradients. To locate the ‘dormant’ connections that can reduce L effectively, we evaluate ∂L/∂w for all the ‘dormant’ connections w (computed either using the whole training set or a large batch). Policy 1 activates ‘dormant’ connections iff they are the most efficient at reducing L. This can also assist with avoiding local minima and achieving higher accuracy [dsd]. To illustrate this policy, we plot the connections grown from the input to the first layer of LeNet-300-100 [LeNet] (for the MNIST dataset) in Fig. 3. The image center has a much higher density of grown connections than the margins, consistent with the fact that the MNIST digits are centered.

From a neuroscience perspective, our connection growth algorithm coincides with the Hebbian theory: “Neurons that fire together wire together” [HebbRule]. We define the stimulation magnitude of the n-th presynaptic neuron in the l-th layer and the m-th postsynaptic neuron in the (l+1)-th layer as x_n^l and ∂L/∂u_m^{l+1}, respectively. The connections activated based on Hebbian theory would have a strong correlation between presynaptic and postsynaptic cells, and thus a large value of |x_n^l · ∂L/∂u_m^{l+1}|. This is also the magnitude of the gradient of L with respect to w_{m,n}^l (the weight that connects x_n^l and u_m^{l+1}):

|∂L/∂w_{m,n}^l| = |x_n^l · ∂L/∂u_m^{l+1}|.    (1)

Thus, this is mathematically equivalent to Policy 1.
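As a concrete illustration of Policy 1, the following PyTorch sketch activates the dormant connections with the largest gradient magnitude. It assumes the dense gradient ∂L/∂W has already been accumulated over a large batch with the sparsity mask temporarily lifted; the function name, the 0/1 mask convention, and the default growth ratio are our own assumptions, not the authors' released code.

```python
import torch

def grow_connections(weight, mask, grad, growth_ratio=0.01):
    """Activate the dormant connections with the largest |dL/dw| (Policy 1 sketch).

    weight, mask, grad: tensors of identical shape. mask is 1 for active and
    0 for dormant connections; grad holds dL/dW accumulated over a large batch.
    """
    dormant_grad = grad.abs() * (1 - mask)            # ignore already-active weights
    k = int(growth_ratio * mask.numel())
    if k == 0:
        return mask
    thresh = torch.topk(dormant_grad.flatten(), k).values.min()
    newly_active = ((dormant_grad >= thresh) & (dormant_grad > 0)).float()
    mask = torch.clamp(mask + newly_active, max=1.0)
    weight.data *= mask                               # keep still-dormant weights at zero
    return mask
```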

Figure 2: Major components of the DNN architecture synthesis algorithm in NeST.
Figure 3: Grown connections from the input layer to the first layer of LeNet-300-100.

3.2.2 Neuron Growth

Our neuron growth algorithm consists of two steps: (i) connection establishment and (ii) weight initialization. The neuron growth policy is as follows:

Policy 2: In the l-th layer, add a new neuron as a shared intermediate node between existing neuron pairs that have high postsynaptic (∂L/∂u_m^{l+1}) and presynaptic (x_n^{l-1}) neuron correlations (each pair contains one neuron from the (l-1)-th layer and the other from the (l+1)-th layer). Initialize the weights based on batch gradients to reduce the value of L.

  Input: β - birth strength, γ - growth ratio
  Denote: M - number of neurons in the (l-1)-th layer, N - number of neurons in the (l+1)-th layer, G ∈ R^{N×M} - bridging gradient matrix, avg - extracts the mean value of non-zero elements
  Add a neuron in the l-th layer; initialize its fan-in weights w_in = 0 ∈ R^{M} and fan-out weights w_out = 0 ∈ R^{N}
  for m = 1 to N, n = 1 to M do
      G_{m,n} = ∂L/∂u_m^{l+1} · x_n^{l-1}
  end for
  thres = (γ·M·N)-th largest element of |G|
  for m = 1 to N, n = 1 to M do
      if |G_{m,n}| ≥ thres then
          w_out[m] = √(η·|G_{m,n}|)
          w_in[n] = √(η·|G_{m,n}|)
      end if
  end for
  Scale w_in and w_out with the birth strength factor β (Eq. (7))
  Concatenate the network weights W with w_in and w_out
Algorithm 1: Neuron growth in the l-th layer

Algorithm 1 incorporates Policy 2 and illustrates a neuron growth iteration in detail. Before adding a neuron to the l-th layer, we evaluate the bridging gradient between the neurons in the previous and subsequent layers. We connect the top γ·M·N (γ is the growth ratio) most correlated neuron pairs through a new neuron in the l-th layer. We initialize the weights based on the bridging gradient to enable gradient descent, thus decreasing the value of L.

We implement a square root rule for weight initialization to imitate a BP update on the bridging connection w_b, which connects x_n^{l-1} and u_m^{l+1}. The BP update leads to a change in u_m^{l+1} of magnitude

|δu_m^{l+1}| = η · |G_{m,n} · x_n^{l-1}|,    (2)

where η is the learning rate. In Algorithm 1, when we connect the newly added neuron (in the l-th layer) with x_n^{l-1} and u_m^{l+1}, we initialize their weights to the square root of the magnitude of the bridging gradient:

w_in = w_out = √(η · |G_{m,n}|),    (3)

where w_in (w_out) is the initialized value of the weight that connects the newly added neuron with x_n^{l-1} (u_m^{l+1}). This weight initialization rule leads to a change in u_m^{l+1}:

δu_m^{l+1} = w_out · f(w_in · x_n^{l-1}),    (4)

where f is the neuron activation function. Suppose f = tanh is the activation function. Then,

δu_m^{l+1} = √(η · |G_{m,n}|) · tanh(√(η · |G_{m,n}|) · x_n^{l-1}).    (5)

Since √(η · |G_{m,n}|) and x_n^{l-1} are typically very small, tanh(z) ≈ z, and the approximation in Eq. (5) leads to Eq. (6):

|δu_m^{l+1}| ≈ η · |G_{m,n} · x_n^{l-1}|.    (6)

This is linearly proportional to the effect of a BP update (Eq. (2)). Thus, our weight initialization mathematically imitates a BP update. Though we have illustrated the algorithm with the tanh activation function, the weight initialization rule works equally well with other activation functions, such as the rectified linear unit (ReLU) and the leaky rectified linear unit (Leaky ReLU).
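A quick numerical check, using the notation above (eta, G, and x standing in for η, G_{m,n}, and x_n^{l-1}), shows why the square root rule imitates a BP update; the specific values below are arbitrary examples chosen for illustration.

```python
import torch

# Two-hop path w_out * tanh(w_in * x) vs. the BP-update effect eta * |G| * x
# (Eqs. (2)-(6)); the two agree when the involved quantities are small.
eta, G, x = 0.01, torch.tensor(0.3), torch.tensor(0.2)
w_in = w_out = (eta * G.abs()).sqrt()        # square root rule, Eq. (3)
two_hop = w_out * torch.tanh(w_in * x)       # Eq. (5)
bp_effect = eta * G.abs() * x                # Eq. (2)/(6), in magnitude
print(float(two_hop), float(bp_effect))      # both are approximately 6.0e-4
```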

We use a birth strength factor β to strengthen the connections of a newly grown neuron. This prevents these connections from becoming too weak to survive the pruning phase. Specifically, after the square root rule based weight initialization, we scale up the newly added weights w_new by

w_new ← β · avg(|W^l|) · w_new / avg(|w_new|),    (7)

where avg is an operation that extracts the mean value of all non-zero elements, and W^l denotes the existing weights of the corresponding layer. This strengthens the new weights. In practice, we find that a suitable range of β can be determined empirically.
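The following PyTorch sketch puts the neuron growth step together: the bridging gradient matrix is assumed to have been computed by the caller, the top-γ pairs are wired through the new neuron with the square root rule, and the birth strength factor β rescales the new weights. The sign handling and the avg(|W|)/avg(|w_new|) normalization are our own assumptions rather than the exact released recipe.

```python
import torch

def grow_neuron(G, W_prev, W_next, lr, gamma=0.05, beta=1.0):
    """Neuron growth sketch (Policy 2 / Algorithm 1, our illustration).

    G[m, n] = dL/du_m^(l+1) * x_n^(l-1) is the bridging gradient matrix.
    W_prev, W_next: existing weight matrices feeding and leaving the l-th layer,
    used only for the birth-strength scaling. Returns (w_in, w_out).
    """
    N, M = G.shape
    w_in, w_out = torch.zeros(M), torch.zeros(N)
    k = max(1, int(gamma * M * N))
    thresh = torch.topk(G.abs().flatten(), k).values.min()

    for m in range(N):
        for n in range(M):
            if G[m, n].abs() >= thresh:          # top-gamma correlated neuron pairs
                # square root rule: w_out * w_in has magnitude lr * |G[m, n]|
                w_in[n] = (lr * G[m, n].abs()).sqrt()
                # sign chosen (our assumption) so the new path reduces the loss
                w_out[m] = (lr * G[m, n].abs()).sqrt() * torch.sign(-G[m, n])

    def avg_nz(t):                               # mean magnitude of non-zero elements
        nz = t[t != 0]
        return nz.abs().mean() if nz.numel() > 0 else torch.tensor(1.0)

    # birth strength: scale new weights so they survive the pruning phase (Eq. (7))
    w_in = w_in * beta * avg_nz(W_prev) / avg_nz(w_in)
    w_out = w_out * beta * avg_nz(W_next) / avg_nz(w_out)
    return w_in, w_out
```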

3.2.3 Growth in the Convolutional Layers

Convolutional layers share the connection growth methodology of Policy 1. However, instead of neuron growth, we use a unique feature map growth algorithm for convolutional layers. In a convolutional layer, we convolve input images with kernels to generate feature maps. Thus, to add a feature map, we need to initialize the corresponding set of kernels. We summarize the feature map growth policy as follows:

Policy 3: To add a new feature map to a convolutional layer, randomly generate multiple sets of kernels, and pick the set of kernels that reduces L the most.

In our experiments, we observe that the percentage reduction in L for Policy 3 is approximately twice that of the naive approach, which initializes the new kernels with random values.
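A minimal sketch of Policy 3 is shown below. The eval_loss_with_kernels callback is a stand-in we introduce for illustration: it is assumed to return the training loss after temporarily appending the candidate kernels as one extra output feature map of the target convolutional layer; the candidate count and initialization scale are likewise assumptions.

```python
import torch

def grow_feature_map(eval_loss_with_kernels, kernel_shape, n_candidates=5):
    """Policy 3 sketch: pick, among random candidate kernel sets, the one that
    reduces the loss the most (our illustration, not the released code)."""
    best_kernels, best_loss = None, float("inf")
    for _ in range(n_candidates):
        kernels = 0.01 * torch.randn(kernel_shape)       # one candidate kernel set
        loss = eval_loss_with_kernels(kernels)           # loss with this set appended
        if loss < best_loss:
            best_kernels, best_loss = kernels, loss
    return best_kernels
```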

3.3 Magnitude-based Pruning

We prune away insignificant connections and neurons based on the magnitude of weights and outputs:

Policy 4: Remove a connection (neuron) iff the magnitude of the weight (neuron output) is smaller than a pre-defined threshold.

We next explain two variants of Policy 4: pruning of insignificant weights and partial-area convolution.

3.3.1 Pruning of Insignificant Weights

Han et al. [PruningHS] show that magnitude-based pruning can successfully cut down the memory and computational costs. We extend this approach to incorporate the batch normalization technique. Such a technique can reduce the internal covariate shift by normalizing layer inputs, and improve the training speed and behavior. Thus, it has been widely applied to large DNNs [bn]. Consider the batch normalization layer:

u^{l+1} = ((W^l x^l + b^l) − E) ⊘ V,    (8)

where E and V are batch normalization terms, and ⊘ depicts the Hadamard (element-wise) division operator. We define the effective weights and effective biases as:

(W_eff^l)_{m,n} = W_{m,n}^l / V_m,    b_eff^l = (b^l − E) ⊘ V.    (9)

We treat connections with small effective weights as insignificant. Pruning of insignificant weights is an iterative process. In each iteration, we only prune the most insignificant weights (e.g., the smallest 1%) in each layer, and then retrain the whole DNN to recover its performance.
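The sketch below illustrates Eq. (9) and one pruning iteration for a fully connected layer followed by batch normalization, using standard PyTorch modules (nn.Linear and nn.BatchNorm1d). The per-iteration step size and the helper names are our assumptions; the released implementation may organize this differently.

```python
import torch

def effective_params(linear, bn):
    """Effective weights/biases of Linear followed by BatchNorm1d (Eq. (9) in spirit)."""
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)   # per-output-neuron scale
    W_eff = linear.weight * scale.unsqueeze(1)                # row-wise scaling of W
    b_eff = (linear.bias - bn.running_mean) * scale + bn.bias
    return W_eff, b_eff

def prune_insignificant(linear, bn, mask, step=0.01):
    """Zero out the `step` fraction of active connections with the smallest
    effective-weight magnitude; the caller then retrains the network."""
    W_eff, _ = effective_params(linear, bn)
    active = mask.bool()
    vals = W_eff[active].abs()
    k = max(1, int(step * vals.numel()))
    thresh = torch.kthvalue(vals, k).values                   # k-th smallest magnitude
    mask[active & (W_eff.abs() <= thresh)] = 0.0
    linear.weight.data *= mask
    return mask
```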

3.3.2 Partial-area Convolution

In common convolutional neural networks (CNNs), the convolutional layers typically contain only about 5% of the parameters, but contribute 90%-95% of the total FLOPs [oneweirdtrick]. In a convolutional layer, kernels shift over and convolve with the entire input image. This process incurs redundancy, since not the whole input image is of interest to a particular kernel. Anwar et al. [featuremappruning] presented a method to prune all connections from a not-of-interest input image to a particular kernel. This method reduces FLOPs but incurs performance degradation [featuremappruning].

Instead of discarding an entire image, our proposed partial-area convolution algorithm allows kernels to convolve with only the image areas that are of interest to them. We refer to such an area as the area-of-interest. We prune away the connections to the other image areas. We illustrate this process in Fig. 4: the green area depicts the area-of-interest, whereas the red area depicts the parts that are not of interest. Thus, the green connections (solid lines) are kept, whereas the red ones (dashed lines) are pruned away.

Partial-area convolution pruning is an iterative process. We present one iteration in Algorithm 2. We first convolve the input images with the convolution kernels and generate the feature maps, which are stored in a four-dimensional feature map matrix C. We set the pruning threshold to the p-th percentile of all elements in |C|, where p is the pruning ratio, typically 1% in our experiments. We mark the elements whose values are below this threshold as insignificant, and prune away their input connections. We retrain the whole DNN after each pruning iteration. In our current implementation, we utilize a mask Msk to disregard the pruned convolution area.

  Input: I - input images, K - kernel matrix, Msk - feature map mask, p - pruning ratio
  Output: Msk, F - feature maps
  Denote: C - depthwise feature map matrix, ⊙ - Hadamard (element-wise) multiplication
  for each kernel K_k do
      C_k = conv(I, K_k) ⊙ Msk_k
  end for
  thres = p-th percentile of the elements of |C|
  for each element C_k(i, j) do
      if |C_k(i, j)| < thres then
          Msk_k(i, j) = 0
      end if
  end for
  F = C ⊙ Msk
Algorithm 2: Partial-area convolution

Partial-area convolution enables a substantial additional FLOPs reduction without any performance degradation; for example, it further reduces the FLOPs of LeNet-5 [LeNet] when applied to MNIST. Compared to conventional CNNs, which force a fixed square-shaped area-of-interest on all kernels, we allow each kernel to self-explore the preferred shape of its area-of-interest. Fig. 5 shows the areas-of-interest found by the layer-1 kernels in LeNet-5 when applied to MNIST. We observe significant overlaps in the central image area, which is of interest to most kernels.
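The following PyTorch sketch shows one partial-area convolution pruning iteration in the spirit of Algorithm 2. The per-feature-map spatial mask, the batch-averaged response used for ranking, and the default pruning ratio are our own simplifications, not the exact released implementation.

```python
import torch
import torch.nn.functional as F

def partial_area_prune(images, kernels, msk, p=0.01):
    """One partial-area convolution pruning iteration (sketch of Algorithm 2).

    msk: per-feature-map spatial mask of shape (C_out, H_out, W_out); zeroed
    locations correspond to image areas whose input connections are pruned.
    """
    fmap = F.conv2d(images, kernels) * msk       # convolve, keep only the area-of-interest
    resp = fmap.abs().mean(dim=0)                # average response per output location
    active = msk.bool()
    vals = resp[active]
    if vals.numel() == 0:
        return fmap, msk
    k = max(1, int(p * vals.numel()))
    thresh = torch.kthvalue(vals, k).values      # k-th smallest active response
    msk[active & (resp <= thresh)] = 0.0         # shrink the area-of-interest
    return fmap * msk, msk
```

As described above, the whole DNN would be retrained after each such iteration.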

Figure 4: Pruned connections (dashed red lines) and remaining connections (solid green lines) in partial-area convolution.
Figure 5: Area-of-interest for five different kernels in the first layer of LeNet-5.

4 Experimental Results

We implement NeST using TensorFlow [tensorflow] and PyTorch [pytorch] on Nvidia GTX 1060 and Tesla P100 GPUs. We use NeST to synthesize compact DNNs for the MNIST and ImageNet datasets. We select the DNN seed architectures based on clues (e.g., depth, kernel size, etc.) from the existing LeNet, AlexNet, and VGG-16 architectures, respectively. NeST exhibits two major advantages:

  • Wide seed range: NeST yields high-performance DNNs with a wide range of seed architectures. Its ability to start from a wide range of seed architectures alleviates reliance on human-defined architectures, and offers more freedom to DNN designers.

  • Drastic redundancy removal: NeST-generated DNNs are very compact. Compared to DNN architectures generated with pruning-only methods, DNNs generated through our grow-and-prune paradigm have far fewer parameters and require far fewer FLOPs.

4.1 LeNets on MNIST

We derive the seed architectures from the original LeNet-300-100 and LeNet-5 networks [LeNet]. LeNet-300-100 is a multi-layer perceptron with two hidden layers. LeNet-5 is a CNN with two convolutional layers and three fully connected layers. We use the affine-distorted MNIST dataset [LeNet], on which LeNet-300-100 (LeNet-5) can achieve an error rate of 1.3% (0.8%). We discuss our results next.

4.1.1 Growth Phase

First, we derive nine (four) seed architectures for LeNet-300-100 (LeNet-5). These seeds contain fewer neurons and connections per layer than the original LeNets. The number of neurons in each layer is the product of a ratio and the corresponding number in the original LeNets (e.g., the seed architecture for LeNet-300-100 becomes LeNet-120-40 if the ratio is 0.4). We randomly initialize only 10% of all possible connections in the seed architecture. Also, we ensure that all neurons in the network are connected.
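As an illustration of how such a sparse seed can be set up, the sketch below builds a random 10%-dense connection mask for one fully connected layer while keeping every neuron connected. The exact recipe (e.g., how connectivity of all neurons is guaranteed) is not specified in the text, so the details here are our assumptions.

```python
import torch

def seed_mask(n_in, n_out, density=0.10):
    """Random sparse connection mask for one fully connected layer of a seed."""
    mask = (torch.rand(n_out, n_in) < density).float()
    for m in range(n_out):                       # every output neuron gets >= 1 input
        if mask[m].sum() == 0:
            mask[m, torch.randint(n_in, (1,))] = 1.0
    for n in range(n_in):                        # every input neuron gets >= 1 output
        if mask[:, n].sum() == 0:
            mask[torch.randint(n_out, (1,)), n] = 1.0
    return mask

# e.g., masks for a LeNet-120-40 seed (ratio 0.4) on 28x28 MNIST inputs
masks = [seed_mask(784, 120), seed_mask(120, 40), seed_mask(40, 10)]
```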

We first sweep this ratio for LeNet-300-100 (LeNet-5) from 0.2 (0.5) to 1.0 (1.0) with a step size of 0.1 (0.17), and then grow the DNN architectures from these seeds. We study the impact of these seeds on the GPU time for growth and on the post-growth DNN sizes under the same target accuracy (this accuracy is typically a reference value for the architecture). We summarize the results for the LeNets in Fig. 6. We have two interesting findings for the growth phase:

Figure 6: Growth time vs. post-growth DNN size trade-off for various seed architectures for LeNet-300-100 (left) and LeNet-5 (right) to achieve a 1.3% and 0.8% error rate, respectively.
Figure 7: Compression ratio and final DNN size for different LeNet-300-100 (left) and LeNet-5 (right) seed architectures.
  1. Smaller seed architectures often lead to smaller post-growth DNN sizes, but at the expense of a higher growth time. We will later show that smaller seeds, and thus smaller post-growth DNN sizes, are better, since they also lead to smaller final DNN sizes.

  2. When the post-growth DNN size saturates due to the full exploitation of the synthesis freedom for a target accuracy, a smaller seed is no longer beneficial, as evident from the flat left ends of the dashed curves in Fig. 6.

4.1.2 Pruning Phase

Next, we prune the post-growth LeNet DNNs to remove their redundant neurons/connections. We show the post-pruning DNN sizes and compression ratios for LeNet-300-100 and LeNet-5 for the different seeds in Fig. 7. We have two major observations for the pruning phase:

  1. The larger the pre-pruning DNN, the larger its compression ratio. This is because larger pre-pruning DNNs have a larger number of weights and thus also higher redundancy.

  2. The larger the pre-pruning DNN, the larger its post-pruning DNN. Thus, to synthesize a more compact DNN, one should choose a smaller seed architecture (growth-phase finding 1) within an appropriate range (growth-phase finding 2).

Model Method Error #Param FLOPs
RBF network [LeNet] - 3.60% 794K 1588K
Polynomial classifier [LeNet] - 3.30% 40K 78K
K-nearest neighbors [LeNet] - 3.09% 47M 94M
SVMs (reduced set) [RsSVM] - 1.10% 650K 1300K
Caffe model (LeNet-300-100) [caffe] - 1.60% 266K 532K
LWS (LeNet-300-100) [layerwise] Prune 1.96% 4K 8K
Net pruning (LeNet-300-100) [PruningHS] Prune 1.59% 22K 43K
Our LeNet-300-100: compact Grow+Prune 1.58% 3.8K 6.7K
Our LeNet-300-100: accurate Grow+Prune 1.29% 7.8K 14.9K
Caffe model (LeNet-5) [caffe] - 0.80% 431K 4586K
LWS (LeNet-5) [layerwise] Prune 1.66% 4K 199K
Net pruning (LeNet-5) [PruningHS] Prune 0.77% 35K 734K
Our LeNet-5 Grow+Prune 0.77% 5.8K 105K
Table 2: Different inference models for MNIST
Model Method Top-1 err. Top-5 err. #Param (M) FLOPs (B)
Baseline AlexNet [AlexNet] - 0.0% 0.0% 61 1.5
Data-free pruning [data_free] Prune +1.62% - 39.6 1.0
Fastfood-16-AD [fastfood] - +0.12% - 16.4 1.4
Memory-bounded [memory_bounded] - +1.62% - 15.2 -
SVD [svd] - +1.24% +0.83% 11.9 -
LWS (AlexNet) [layerwise] Prune +0.33% +0.28% 6.7 0.5
Net pruning (AlexNet) [PruningHS] Prune -0.01% -0.06% 6.7 0.5
Our AlexNet Grow+Prune -0.02% -0.06% 3.9 0.33
Baseline VGG-16 [torch_vgg] - 0.0% 0.0% 138 30.9
LWS (VGG-16) [layerwise] Prune +3.61% +1.35% 10.3 6.5
Net pruning (VGG-16) [PruningHS] Prune +2.93% +1.26% 10.3 6.5
Our VGG-16: accurate Grow+Prune -0.35% -0.31% 9.9 6.3
Our VGG-16: compact Grow+Prune +2.31% +0.98% 4.6 3.6
Note: currently without partial-area convolution due to GPU memory limits.
Table 3: Different AlexNet and VGG-16 based inference models for ImageNet

4.1.3 Inference Model Comparison

We compare our results against related results from the literature in Table 2. Our results outperform other reference models from various design perspectives. Without any loss of accuracy, we are able to reduce the number of connections and FLOPs of LeNet-300-100 (LeNet-5) by 70.2x (74.3x) and 79.4x (43.7x), respectively, relative to the baseline Caffe model [caffe]. We include the model details in the Appendix.

4.2 AlexNet and VGG-16 on ImageNet

Next, we use NeST to synthesize DNNs for the ILSVRC 2012 image classification dataset [imageNetdataset]. We initialize slim and sparse seed architectures based on AlexNet [oneweirdtrick] and VGG-16 [VGG]. Our seed architecture for AlexNet contains only 60, 140, 240, 210, and 160 feature maps in the five convolutional layers, and 3200, 1600, and 1000 neurons in the fully connected layers. The seed architecture for VGG-16 uses a reduced number of feature maps in each of the first 13 convolutional layers, and has 3200, 1600, and 1000 neurons in the fully connected layers. We randomly activate 30% of all the possible connections in both seed architectures.
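For reference, a slim AlexNet-like seed with the layer widths quoted above might look as follows in PyTorch. Only the layer widths (60/140/240/210/160 feature maps and 3200/1600/1000 FC neurons) come from the text; the kernel sizes, strides, and pooling mirror the standard AlexNet layout and are our assumption, and the 30% sparsity masks are omitted for brevity.

```python
import torch.nn as nn

class SeedAlexNet(nn.Module):
    """AlexNet-like seed architecture (sketch; sparsity masks omitted)."""
    def __init__(self, num_classes=1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 60, kernel_size=11, stride=4, padding=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(60, 140, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(140, 240, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(240, 210, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(210, 160, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.classifier = nn.Sequential(
            nn.Linear(160 * 6 * 6, 3200), nn.ReLU(inplace=True),
            nn.Linear(3200, 1600), nn.ReLU(inplace=True),
            nn.Linear(1600, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))
```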

Table 3 compares the models synthesized by NeST with various AlexNet and VGG-16 based inference models. We include the model details in the Appendix. Our baselines are the AlexNet Caffe model (42.78% top-1 and 19.73% top-5 error rate) [PruningHS] and the VGG-16 PyTorch model (28.41% top-1 and 9.62% top-5 error rate) [torch_vgg]. Our grow-and-prune synthesis paradigm outperforms the pruning-only methods listed in Table 3. This may be explained by the observation that pruning methods potentially inherit a certain amount of redundancy associated with the original large DNNs. Network growth can alleviate this phenomenon.

Note that our current mask-based implementation of growth and pruning incurs a temporary memory overhead during training. If the model becomes deeper, as in the case of ResNet [ResNet] or DenseNet [densenet], using masks to grow and prune connections/neurons/feature maps may not be economical due to this temporary training memory overhead. We plan to address this aspect in our future work.

5 Discussions

Our synthesis methodology incorporates three inspirations from the human brain.

First, the number of synaptic connections in a human brain varies at different human ages. It rapidly increases upon the baby’s birth, peaks after a few months, and decreases steadily thereafter. A DNN experiences a very similar learning process in NeST, as shown in Fig. 8. This curve shares a very similar pattern with the evolution of the number of synapses in the human brain [neuronnum].

Figure 8: #Connections vs. synthesis iteration for LeNet-300-100.

Second, most learning processes in our brain result from the rewiring of synapses between neurons. Our brain grows and prunes away a large number (up to 40%) of synaptic connections every day [spectrum]. NeST wakes up new connections, thus effectively rewiring more neuron pairs in the learning process. Thus, it mimics the ‘learning through rewiring’ mechanism of the human brain.

Third, only a small fraction of neurons are active at any given time in human brains. This mechanism enables the human brain to operate at an ultra-low power (20 Watts). However, fully connected DNNs contain a substantial amount of insignificant neuron responses per inference. To address this problem, we include a magnitude-based pruning algorithm in NeST to remove the redundancy, thus achieving sparsity and compactness. This leads to huge storage and computation reductions.

6 Conclusions

In this paper, we proposed a synthesis tool, NeST, to synthesize compact yet accurate DNNs. NeST starts from a sparse seed architecture, adaptively adjusts the architecture through gradient-based growth and magnitude-based pruning, and finally arrives at a compact DNN with high accuracy. For LeNet-300-100 (LeNet-5) on MNIST, we reduced the number of network parameters by 34.1x (74.3x) and FLOPs by 35.8x (43.7x). For AlexNet and VGG-16 on ImageNet, we reduced the network parameters (FLOPs) by 15.7x (4.6x) and 30.2x (8.6x), respectively.


Appendix A Experimental details of LeNets

Our models will be released soon.

Tables 4(a) and 4(b) show the smallest DNN models we could synthesize for LeNet-300-100 and LeNet-5, respectively. In these tables, Conv% refers to the percentage of area-of-interest over a full image for partial-area convolution, and Act% refers to the percentage of non-zero activations (the average percentage of neurons with non-zero output values per inference; a short sketch of computing Act% follows Table 4).

Layer #Weights Act% FLOPs
fc1 7032 46% 14.1K
fc2 718 71% 0.7K
fc3 94 100% 0.1K
Total 7844 N/A 14.9K
(a) LeNet-300-100 (error rate 1.29%)
Layer #Weights Conv% Act% FLOPs
conv1 74 39% 89% 45.2K
conv2 749 41% 57% 54.4K
fc1 4151 N/A 79% 4.7K
fc2 632 N/A 58% 1.0K
fc3 166 N/A 100% 0.2K
Total 5772 N/A N/A 105K
(b) LeNet-5 (error rate 0.77%)
Table 4: Smallest synthesized LeNets
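For clarity, Act% as used in Tables 4-6 can be computed as in the short sketch below; the function name is ours.

```python
import torch

def act_percent(activations):
    """Act%: average percentage of neurons with a non-zero output per inference.
    `activations` is a (batch, neurons) tensor of post-activation layer outputs."""
    return 100.0 * (activations != 0).float().mean().item()

# example: Act% of a ReLU layer over one batch of hidden pre-activations z
# act = torch.relu(z); print(f"Act% = {act_percent(act):.1f}")
```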

Appendix B Experimental details of AlexNet

Table 5 illustrates the evolution of an AlexNet seed in the grow-and-prune paradigm as well as the final inference model. The AlexNet seed only contains 8.4M parameters. This number increases to 28.3M after the growth phase, and then decreases to 3.9M after the pruning phase. This final AlexNet-based DNN model only requires 325M FLOPs at a top-1 error rate of 42.76%.

Layer  #Parameters (Seed)  #Parameters (Post-Growth)  #Parameters (Post-Pruning)  Conv%  Act%  FLOPs
conv1 7K 21K 17K 92% 87% 97M
conv2 65K 209K 107K 91% 82% 124M
conv3 95K 302K 164K 88% 49% 40M
conv4 141K 495K 253K 86% 48% 36M
conv5 105K 355K 180K 87% 56% 25M
fc1 5.7M 19.9M 1.8M N/A 49% 2.0M
fc2 1.7M 5.3M 0.8M N/A 47% 0.8M
fc3 0.6M 1.7M 0.5M N/A 100% 0.5M
Total 8.4M 28.3M 3.9M N/A N/A 325M
Table 5: Synthesized AlexNet (error rate 42.76%)

Appendix C Experimental details of VGG-16

Table 6 illustrates the details of our final compact inference model based on the VGG-16 architecture. The final model contains only 4.6M parameters, which is 30.2x smaller than the original VGG-16.

Layer  #Param (Original VGG-16)  FLOPs (Original VGG-16)  #Param (Synthesized)  Act% (Synthesized)  FLOPs (Synthesized)
conv1_1 2K 0.2B 1K 64% 0.1B
conv1_2 37K 3.7B 10K 76% 0.7B
conv2_1 74K 1.8B 21K 73% 0.4B
conv2_2 148K 3.7B 39K 76% 0.7B
conv3_1 295K 1.8B 79K 53% 0.4B
conv3_2 590K 3.7B 103K 57% 0.3B
conv3_3 590K 3.7B 110K 56% 0.4B
conv4_1 1M 1.8B 205K 37% 0.2B
conv4_2 2M 3.7B 335K 37% 0.2B
conv4_3 2M 3.7B 343K 35% 0.2B
conv5_1 2M 925M 350K 33% 48M
conv5_2 2M 925M 332K 32% 43M
conv5_3 2M 925M 331K 24% 41M
fc1 103M 206M 1.6M 38% 0.8M
fc2 17M 34M 255K 41% 0.2M
fc3 4M 8M 444K 100% 0.4M
Total 138M 30.9B 4.6M N/A 3.6B
Table 6: Synthesized VGG-16 (error rate 30.72%)