SqueezeNext: Hardware-Aware Neural Network Design

03/23/2018 · Amir Gholami, et al. · UC Berkeley

One of the main barriers to deploying neural networks on embedded systems has been the large memory and power consumption of existing neural networks. In this work, we introduce SqueezeNext, a new family of neural network architectures whose design was guided by consideration of previous architectures such as SqueezeNet, as well as by simulation results on a neural network accelerator. This new network is able to match AlexNet's accuracy on the ImageNet benchmark with 112× fewer parameters, and one of its deeper variants is able to achieve VGG-19 accuracy with only 4.4 million parameters (31× smaller than VGG-19). SqueezeNext also achieves better top-5 classification accuracy with 1.3× fewer parameters as compared to MobileNet, but avoids using depthwise-separable convolutions that are inefficient on some mobile processor platforms. This wide range of accuracy gives the user the ability to make speed-accuracy tradeoffs, depending on the available resources on the target hardware. Using hardware simulation results for power and inference speed on an embedded system has guided us to design variations of the baseline model that are 2.59×/8.26× faster and 2.25×/7.5× more energy efficient as compared to SqueezeNet/AlexNet without any accuracy degradation.


1 Introduction

Figure 1: Illustration of a ResNet block on the left, a SqueezeNet block in the middle, and a SqueezeNext (SqNxt) block on the right. SqueezeNext uses a two-stage bottleneck module to reduce the number of input channels to the 3×3 convolution. The latter is further decomposed into separable 3×1 and 1×3 convolutions to further reduce the number of parameters (orange parts), followed by a 1×1 expansion module.

Deep Neural Networks have transformed a wide range of applications in computer vision. This has been made possible in part by novel neural net architectures, the availability of more training data, and faster hardware for both training and inference. The transition to Deep Neural Network based solutions started with AlexNet [19], which won the ImageNet challenge by a large margin. The ImageNet classification challenge started in 2010, with the first winning method achieving an error rate of 28.2%, followed by 26.2% in 2011. However, a clear improvement in accuracy was achieved by AlexNet with an error rate of 16.4%, roughly a 10% margin over the runner-up. AlexNet consists of five convolutional and three fully connected layers, and contains a total of 61 million parameters. Due to the large size of the network, the original model had to be trained on two GPUs with a model parallel approach, where the filters were distributed between these GPUs. Moreover, dropout was required to avoid overfitting with such a large model size. The next major milestone in ImageNet classification was the VGG-Net family [23], which exclusively uses 3×3 convolutions. The main ideas here were the use of stacked 3×3 convolutions to approximate the receptive field of larger filters, along with a deeper network. However, the model size of VGG-19, with 138 million parameters, is even larger than AlexNet and not suitable for real-time applications. Another step forward in architecture design was the ResNet family [9], which incorporates a repetitive structure of 1×1 and 3×3 convolutions along with a skip connection. By changing the depth of the networks, the authors showed competitive performance for multiple learning tasks.

As one can see, a general trend of neural network design has been to find larger and deeper models to get better accuracy, without considering the memory or power budget. One widely held belief has been that new hardware will provide adequate computational power and memory to allow these networks to run with real-time performance on embedded systems. However, the increase in transistor speed due to semiconductor process improvements has slowed dramatically, and it seems unlikely that mobile processors will meet computational requirements on a limited power budget. This has opened several new directions for reducing the memory footprint of existing neural network architectures using compression [7], or for designing new, smaller models from scratch. SqueezeNet is a successful example of the latter approach [15], which achieves AlexNet's accuracy with 50× fewer parameters without compression, and becomes over 500× smaller with deep compression. Models for other applications such as detection and segmentation have been developed based on SqueezeNet [25, 26]. Another notable work in this direction is the DarkNet Reference network [22], which achieves AlexNet's accuracy with fewer parameters (a 28MB model) while requiring fewer FLOPs per inference image. The same authors also proposed a smaller network called Tiny DarkNet, which matches AlexNet's performance with a model size of only 4.0MB. Another notable work is MobileNet [11], which uses depthwise convolutions for the spatial convolutions and is able to exceed AlexNet's performance with only 1.32 million parameters. A later work, ShuffleNet [10], extends this idea to pointwise group convolutions along with channel shuffling. More compact versions of Residual Networks have also been proposed; notable works here include DenseNet [13] and its follow-up CondenseNet [12].

Contributions

Aiming to design a family of deep neural networks for embedded applications with limited power and memory budgets, we present SqueezeNext. With the smallest version, we can achieve AlexNet's accuracy with only 0.5 million model parameters, 112× fewer than AlexNet (and more than 2× fewer than SqueezeNet). Furthermore, we show how variations of the network, in terms of width and depth, can span a wide range of accuracy levels. For instance, a deeper variation of SqueezeNext can reach VGG-19's baseline accuracy with only 4.4 million model parameters. SqueezeNext uses the SqueezeNet architecture as a baseline; however, we make the following changes. (i) We use a more aggressive channel reduction by incorporating a two-stage squeeze module. This significantly reduces the total number of parameters used in the 3×3 convolutions. (ii) We use separable 3×1 and 1×3 convolutions to further reduce the model size, and remove the additional 1×1 branch after the squeeze module. (iii) We use an element-wise addition skip connection similar to that of the ResNet architecture [9], which allows us to train much deeper networks without the vanishing gradient problem. In particular, we avoid DenseNet [13] style connections, as they increase the number of channels and require concatenating activations, which is costly both in terms of execution time and power consumption. (iv) We optimize the baseline SqueezeNext architecture by simulating its performance on a multi-processor embedded system. The simulation results give very interesting insight into where the performance bottlenecks are. Based on these observations, one can then perform variations on the baseline model and achieve higher performance both in terms of inference speed and power consumption, and sometimes even better classification accuracy.

Figure 2: Illustration of a SqueezeNext block. An input with C channels is passed through a two-stage bottleneck module. Each bottleneck stage consists of a 1×1 convolution reducing the input channel size by a factor of 2. The output is then passed through a separable 3×1/1×3 convolution. The order of the 3×1 and 1×3 convolutions is alternated throughout the network. The output of the separable convolution is finally passed through a 1×1 expansion module to match the skip connection's channels (the skip connection is not shown here).
Model             Top-1  Top-5  # Params  Comp.
AlexNet           57.10  80.30  60.9M     1×
SqueezeNet        57.50  80.30  1.2M      51×
1.0-SqNxt-23      59.05  82.60  0.72M     84×
1.0-G-SqNxt-23    57.16  80.23  0.54M     112×
1.0-SqNxt-23-IDA  60.35  83.56  0.9M      68×
1.0-SqNxt-34      61.39  84.31  1.0M      61×
1.0-SqNxt-34-IDA  62.56  84.93  1.3M      47×
1.0-SqNxt-44      62.64  85.15  1.2M      51×
1.0-SqNxt-44-IDA  63.75  85.97  1.5M      41×
Table 1: Performance of the baseline SqueezeNext on ImageNet. We report the top-1/top-5 accuracy, the number of parameters (# Params), and the compression relative to AlexNet (Comp.). The 23 module architecture (1.0-SqNxt-23) exceeds AlexNet's top-5 accuracy by a 2.3% margin with 84× fewer parameters. A more aggressive parameter reduction of this network using group convolutions (1.0-G-SqNxt-23) is able to match AlexNet's top-5 accuracy with 112× fewer parameters. We also show the performance of deeper variations of the base model with/without Iterative Deep Aggregation (IDA) [28]. The 1.0-SqNxt-44 model has the same number of parameters as SqueezeNet but achieves 4.85% better top-5 accuracy.

2 SqueezeNext Design

It has been found that many of the filters in a trained network contain redundant parameters, in the sense that compressing them would not hurt accuracy. Based on this observation, there have been many successful attempts to compress a trained network [17, 7, 21, 29]. However, some of these compression methods require variable bit-width ALUs for efficient execution. To avoid this, we aim to design a small network that can be trained from scratch with few model parameters to start with. To this end, we use the following strategies:

Low Rank Filters

We assume that the input to the i-th layer of the network is a feature map of size H×W×C_i, to which K×K convolution filters are applied, producing an output activation of size H×W×C_{i+1} (for ease of notation, we assume that the input and output activations have the same spatial size). Here, C_i and C_{i+1} are the input and output channel sizes. The total number of parameters in this layer will then be K²·C_i·C_{i+1}. Essentially, the filters consist of C_{i+1} tensors of size K×K×C_i.

In the case of post-training compression, one seeks to shrink the trained parameters, W, using a low rank basis. Possible candidates for this basis include CP or Tucker decompositions. The amount of reduction that can be achieved with these methods is proportional to the rank of the original weight tensor W. However, examining the trained weights for most networks, one finds that they do not have a low rank structure. Therefore, most works in this area have to perform some form of retraining to recover accuracy [7, 8, 20, 14, 21]. This includes pruning to reduce the number of non-zero weights, and reduced precision for the elements of W. An alternative to this approach is to re-design the network using a low rank decomposition, forcing it to learn the low rank structure from the beginning; this is the approach that we follow. The first change that we make is to decompose the K×K convolutions into two separable convolutions of size 1×K and K×1, as shown in Fig. 1. This effectively reduces the number of parameters from K² to 2K, and also increases the depth of the network. Both of these convolutions contain a ReLU activation as well as a batch norm layer [16].
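As a concrete illustration of this decomposition, the sketch below builds the separable replacement of a dense K×K convolution and compares weight counts (PyTorch is used purely for illustration; the channel sizes are hypothetical and this is not the authors' implementation):

import torch
import torch.nn as nn

class SeparableKxK(nn.Module):
    """Replace a dense KxK convolution with a Kx1 followed by a 1xK
    convolution, each followed by batch norm and ReLU, as described above."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv_kx1 = nn.Conv2d(in_ch, out_ch, kernel_size=(k, 1),
                                  padding=(k // 2, 0), bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv_1xk = nn.Conv2d(out_ch, out_ch, kernel_size=(1, k),
                                  padding=(0, k // 2), bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        x = torch.relu(self.bn1(self.conv_kx1(x)))
        x = torch.relu(self.bn2(self.conv_1xk(x)))
        return x

def conv_weight_count(module):
    # Count only 4-D convolution weights, excluding batch norm parameters.
    return sum(p.numel() for p in module.parameters() if p.dim() == 4)

# Comparison for 64 -> 64 channels (the K^2 vs 2K saving assumes equal
# input/output channel counts through the decomposition):
dense = nn.Conv2d(64, 64, kernel_size=3, padding=1, bias=False)
sep = SeparableKxK(64, 64, k=3)
print(conv_weight_count(dense))  # 36,864 weights (3*3*64*64)
print(conv_weight_count(sep))    # 24,576 weights (2*3*64*64)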

Bottleneck Module

Aside from the low rank structure, the multiplicative factor of C_i and C_{i+1} significantly increases the number of parameters in each convolution layer. Therefore, reducing the number of input channels would reduce the network size. One idea would be to use depthwise-separable convolutions to reduce this multiplicative factor, but this approach does not perform well on some embedded systems due to its low arithmetic intensity (ratio of compute to bandwidth). Another idea is the one used in the SqueezeNet architecture [15], where the authors used a squeeze layer before the 3×3 convolution to reduce its number of input channels. Here, we use a variation of the latter approach with a two-stage squeeze layer, as shown in Fig. 1. In each SqueezeNext block, we use two bottleneck modules, each reducing the channel size by a factor of 2, followed by the two separable convolutions. We also incorporate a final 1×1 expansion module, which further reduces the number of output channels needed from the separable convolutions.
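These pieces compose into a block like the following sketch (a PyTorch rendering of the block in Fig. 1/Fig. 2; the exact channel ratios, the alternation of the 3×1/1×3 order, and the handling of strided blocks are simplified here, so treat this as a schematic rather than the authors' implementation):

import torch
import torch.nn as nn

def conv_bn_relu(in_ch, out_ch, kernel, padding=0):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel, padding=padding, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class SqNxtBlock(nn.Module):
    """Two-stage 1x1 bottleneck -> separable 3x1/1x3 convolution -> 1x1
    expansion, with an element-wise (ResNet-style) skip connection."""
    def __init__(self, channels):
        super().__init__()
        c = channels
        self.reduce1 = conv_bn_relu(c, c // 2, 1)            # first 1x1 squeeze
        self.reduce2 = conv_bn_relu(c // 2, c // 4, 1)       # second 1x1 squeeze
        self.sep_a = conv_bn_relu(c // 4, c // 2, (3, 1), padding=(1, 0))
        self.sep_b = conv_bn_relu(c // 2, c // 2, (1, 3), padding=(0, 1))
        self.expand = conv_bn_relu(c // 2, c, 1)             # 1x1 expansion to match skip

    def forward(self, x):
        out = self.reduce1(x)
        out = self.reduce2(out)
        out = self.sep_a(out)
        out = self.sep_b(out)
        out = self.expand(out)
        return out + x                                       # element-wise skip connection

x = torch.randn(1, 64, 56, 56)
print(SqNxtBlock(64)(x).shape)  # torch.Size([1, 64, 56, 56])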

Fully Connected Layers

In the case of AlexNet, the majority of the network parameters are in the fully connected layers, accounting for 96% of the total model size. Follow-up networks such as ResNet or SqueezeNet consist of only one fully connected layer. Assuming that the input to this last layer has a size of H×W×C, the fully connected layer will contain H·W·C·L parameters, where L is the number of labels (1000 for ImageNet). SqueezeNext incorporates a final bottleneck layer to reduce the input channel size to the last fully connected layer, which considerably reduces the total number of model parameters. This idea was also used in Tiny DarkNet to reduce the number of parameters [22].
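The effect is easy to quantify (the channel counts below are illustrative, not the exact SqueezeNext values, and we assume the spatial size has been pooled down to 1×1 before the classifier):

labels = 1000                     # ImageNet classes (L)
h, w = 1, 1                       # assumed spatial size after pooling
wide_c, bottleneck_c = 1024, 128  # illustrative channel counts (C)

print(h * w * wide_c * labels)        # 1,024,000 classifier weights without a final bottleneck
print(h * w * bottleneck_c * labels)  #   128,000 classifier weights after reducing C to 128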

Figure 3: Illustration of the block arrangement in 1.0-SqNxt-23. Each color change corresponds to a change in the input feature map's resolution. The number of blocks after the first convolution/pooling layer is [6, 6, 8, 1], where the last number refers to the yellow box. This block is followed by a bottleneck module with average pooling to reduce the channel size and spatial resolution (green box), followed by a fully connected layer (black box). In optimized variations of the baseline, we change this depth distribution by decreasing the number of blocks in early stages (dark blue), and instead assign more blocks to later stages (Fig. 9). This increases hardware performance, as early layers have poor compute efficiency.

3 Hardware Performance Simulation

Up to now, hardware architectures have been designed to be optimal for a fixed neural network, for instance SqueezeNet. However, as we discuss later, there are important insights that can be gained from hardware simulation results. These can in turn be used to modify the neural network architecture to get better performance in terms of inference speed and power consumption, possibly without incurring any loss in generalization. In this section, we first explain how we simulate the performance of the network on a hypothetical neural network accelerator for mobile/embedded systems, and then discuss how the baseline model can be varied to get considerably better hardware performance.

The neural network accelerator is a domain specific processor designed to accelerate neural network inference and/or training tasks. It usually has a large number of computation units called processing elements (PEs) and a hierarchical structure of memories and interconnections to exploit the massive parallelism and data reusability inherent in convolutional layers.

input : Input feature map I, convolution parameters W
output: Output feature map O
for k = 0 to C_out - 1 do                  // Output Channels
    for y = 0 to H - 1 do                  // H: Height
        for x = 0 to W - 1 do              // W: Width
            O[k][y][x] = 0
            for c = 0 to C_in - 1 do       // Input Channels
                for j = 0 to K - 1 do      // Filter Size (rows)
                    for i = 0 to K - 1 do  // Filter Size (columns)
                        O[k][y][x] += I[c][y+j][x+i] * W[k][c][j][i]
Algorithm 1: Execution flow for computing a convolution kernel
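For reference, a direct and deliberately unoptimized Python transcription of this loop nest is shown below (NumPy is used only for array storage; the toy sizes are arbitrary, and the code mirrors the loops above rather than any accelerator behavior):

import numpy as np

def conv_layer(I, W):
    """I: input feature map [C_in, H, W]; W: weights [C_out, C_in, K, K].
    Returns O: output feature map [C_out, H-K+1, W-K+1] (no padding, stride 1)."""
    c_in, h, w = I.shape
    c_out, _, k, _ = W.shape
    O = np.zeros((c_out, h - k + 1, w - k + 1), dtype=I.dtype)
    for kk in range(c_out):                 # output channels
        for y in range(h - k + 1):          # height
            for x in range(w - k + 1):      # width
                acc = 0.0
                for c in range(c_in):       # input channels
                    for j in range(k):      # filter rows
                        for i in range(k):  # filter columns
                            acc += I[c, y + j, x + i] * W[kk, c, j, i]
                O[kk, y, x] = acc
    return O

I = np.random.rand(2, 5, 5).astype(np.float32)
W = np.random.rand(4, 2, 3, 3).astype(np.float32)
print(conv_layer(I, W).shape)  # (4, 3, 3)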

Eyeriss [2] introduced a taxonomy for classifying neural network accelerators based on their spatial architecture, according to the type of data reused at the lowest level of the memory hierarchy. This is related to the order of the six loops of the convolutional layer shown in Algorithm 1 (because the typical batch size in the inference task is one, the batch loop is omitted there). In our simulator, we consider two options for executing a convolution: Weight Stationary (WS) and Output Stationary (OS).

Figure 4: Conceptual diagram of the two data flows used in the experiment: Output Stationary (top) and Weight Stationary (bottom). A PE array performs a convolution on a 5x4x2 input (left), generating a 5x4x2 output (right). Here Ti denotes the i-th cycle. (Top) In the T0 and T1 cycles of the OS data flow, the shaded area on the left is read and convolved with different filter weights, and the results are stored in the corresponding output pixels. Then, in the T2 and T3 cycles, the data from the second input channel are read and a similar operation is performed to accumulate partial sums. (Bottom) In WS, an input pixel is first broadcast to the PEs. In the first cycle, PE0 and PE1 apply different convolutions to the first pixel of the first input channel, accumulate the results from PE2 and PE3, respectively, and store the results to the corresponding output pixels. In the following cycles, other input pixels are read and the same operation is performed.

Weight Stationary

WS is the most common method, used in many notable neural network accelerators [4, 3, 6, 18, 1]. In the WS method, each PE loads a convolution filter and executes the convolution at all spatial locations of the input. Once all the spatial positions have been iterated over, the PE loads the next convolution filter. In this method, the filter weights are stored in the PE register. For an H×W×C_i input feature map and C_{i+1} convolution filters of size K×K (where C_{i+1} is the number of filters), the execution process is as follows. The PE loads a single element from the convolution parameters into its local register and applies it to the whole input activation. Afterwards, it moves to the next element and so forth. For multiple PEs, we decompose the work onto a 2D grid of processors, where one dimension decomposes the input channels and the other dimension decomposes the output feature maps, as shown in Figure 4. In summary, in the WS mode the whole PE array keeps a sub-matrix of the weight tensor, and it performs matrix-vector multiplications on a series of input activation vectors.

Output Stationary

In the OS method, each PE exclusively works on one pixel of the output activation map at a time. In each cycle, it applies the parts of the convolution that contribute to that output pixel and accumulates the results. Once all the computations for that pixel are finished, the PE moves on to a new pixel. In the case of multiple PEs, each processor simply works on different pixels from multiple channels. In summary, in the OS mode the whole array computes a block of an output feature map over time. In each cycle, the new inputs and weights needed to compute the corresponding pixel are provided to each PE. There are multiple ways the OS method can be executed. These include Single Output Channel-Multiple Output Pixels (SOC-MOP), Multiple Output Channels-Single Output Pixel (MOC-SOP), and Multiple Output Channels-Multiple Output Pixels (MOC-MOP) [2]. Here we use the SOC-MOP format.

Accelerators that adopt the weight stationary (WS) data flow are designed to minimize the memory accesses for the convolution parameters by reusing the weights of the convolution filter over several activations. On the other hand, accelerators that use the output stationary (OS) data flow are designed to minimize the memory accesses for the output activations by accumulating the partial sums corresponding to the same output activation over time. Therefore, the spatial x and y loops form the innermost loops in the WS data flow, whereas the input channel and filter loops (c, j, and i) form the innermost loops in the OS data flow. Both data flows show good performance for convolutional layers with 3×3 or larger filters. However, a recent trend in mobile and embedded neural network architectures is the wide adoption of lightweight building blocks, e.g. 1×1 convolutions, which have limited parallelism and data reusability.
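To make the two reuse patterns concrete, the sketch below contrasts the loop orders on toy sizes (our own schematic in NumPy; the real accelerator maps these loops across the PE array and tiles them, rather than executing them serially):

import numpy as np

C_IN, C_OUT, H, W_OUT, K = 2, 4, 5, 5, 3     # toy sizes, illustrative only
inp = np.random.rand(C_IN, H + K - 1, W_OUT + K - 1)
flt = np.random.rand(C_OUT, C_IN, K, K)

# Weight Stationary order: the (k, c) weight plane stays "resident" and is
# reused across every spatial position before the next weights are fetched,
# so the spatial x/y loops are innermost.
out_ws = np.zeros((C_OUT, H, W_OUT))
for k in range(C_OUT):
    for c in range(C_IN):
        w_plane = flt[k, c]
        for y in range(H):
            for x in range(W_OUT):
                out_ws[k, y, x] += np.sum(w_plane * inp[c, y:y + K, x:x + K])

# Output Stationary order: each output pixel's partial sum stays resident and
# is fully accumulated before moving on, so the channel/filter loops are innermost.
out_os = np.zeros((C_OUT, H, W_OUT))
for k in range(C_OUT):
    for y in range(H):
        for x in range(W_OUT):
            acc = 0.0
            for c in range(C_IN):
                acc += np.sum(flt[k, c] * inp[c, y:y + K, x:x + K])
            out_os[k, y, x] = acc

assert np.allclose(out_ws, out_os)   # same result, different data reuse pattern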

Figure 5: Block diagram of the neural network accelerator used as the reference hardware for inference speed and energy estimation of various neural networks.

Hardware Simulation Setup

Figure 5 shows the block diagram of the neural network accelerator used as the reference hardware for the inference speed and energy estimation. It consists of an 8×8 or a 16×16 array of PEs, a 32KB or 128KB global buffer, respectively, and a DMA controller to transfer data between the DRAM and the buffer. Each PE has a 16-bit integer multiply-and-accumulate (MAC) unit and a local register file. To efficiently accelerate various configurations of convolutional layers, the accelerator supports the two operating modes explained above, WS and OS. To reduce the execution time and the energy consumption, the accelerator is also designed to exploit the sparsity of the filter weights [7, 27] in the OS mode. In this experiment, we conservatively assume 40% weight sparsity.
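For reference, the two simulated configurations described above can be summarized as follows (the field names and dictionary layout are ours, purely for illustration; the values are the ones stated in the text and in Table 3):

# Reference accelerator configurations used in the simulations.
ACCEL_CONFIGS = {
    "small": {"pe_array": (8, 8),   "global_buffer_bytes": 32 * 1024},
    "large": {"pe_array": (16, 16), "global_buffer_bytes": 128 * 1024},
}
COMMON = {
    "mac_precision_bits": 16,          # 16-bit integer MAC per PE
    "dataflows": ("WS", "OS"),         # operating mode chosen per layer
    "assumed_weight_sparsity": 0.40,   # exploited in the OS mode
}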

The accelerator processes the neural network one layer at a time, and the operating mode which gives the better inference speed is used for each layer. The memory footprint of the layers ranges from tens of kilobytes to a few megabytes. If the memory requirement of a layer is larger than the size of the global buffer, tiling is applied to the output channel, spatial, and input channel loops of the convolution in Algorithm 1. The order and the size of the tiled loops are selected using a cost function that takes into account the inference speed and the number of DRAM accesses.
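The following toy search illustrates the flavor of such a selection (entirely our own simplification, not the simulator's cost function: it scores candidate tile sizes for a single loop dimension by the number of DRAM transfer rounds they imply, subject to fitting in the buffer):

def choose_tile(loop_extent, buffer_limit_bytes, bytes_per_element=2):
    """Pick the tile size that fits in the buffer and minimizes the number
    of tiles (i.e., DRAM transfer rounds) for one loop dimension."""
    candidates = [t for t in range(1, loop_extent + 1)
                  if t * bytes_per_element <= buffer_limit_bytes]
    # illustrative cost: one DRAM transfer round per tile
    cost = {t: -(-loop_extent // t) for t in candidates}   # ceil division
    return min(cost, key=cost.get)

print(choose_tile(loop_extent=512, buffer_limit_bytes=256))  # -> 128 elements per tile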

The performance estimator computes the number of clock cycles required to process each layer and sums all the results. The cycles consumed by the PE array and the global buffer are calculated by modeling the timings of the internal data paths and the control logic, and the DRAM access time is approximated by using two numbers, the latency and the effective bandwidth, which are assumed to be 100 cycles and 16GB/s, respectively. For the energy estimation, we use a similar methodology to [2], but the normalized energy cost of components is slightly modified considering the architecture of the reference accelerator.
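A highly simplified sketch of this kind of per-layer estimate is shown below (our own toy model, not the authors' simulator: it treats a layer as either compute bound or bandwidth bound, uses the stated 100-cycle DRAM latency and 16 GB/s effective bandwidth, and assumes a clock frequency that the paper does not specify):

def estimate_layer_cycles(macs, dram_bytes, num_pes,
                          clock_hz=500e6,            # assumed clock, not from the paper
                          dram_bw_bytes_per_s=16e9,  # effective DRAM bandwidth (16 GB/s)
                          dram_latency_cycles=100):  # fixed DRAM latency
    """Toy per-layer cycle estimate: the larger of the compute-bound and
    bandwidth-bound cycle counts, plus the fixed DRAM latency."""
    compute_cycles = macs / num_pes
    dram_cycles = dram_bytes / dram_bw_bytes_per_s * clock_hz
    return max(compute_cycles, dram_cycles) + dram_latency_cycles

# Example: a layer with 10M MACs and 1MB of DRAM traffic on a 16x16 PE array.
print(estimate_layer_cycles(10e6, 1e6, 16 * 16))   # ~39,163 cycles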

Model Top-1 Top-5 Params
1.5-SqNxt-23 63.52 85.66 1.4M
1.5-SqNxt-34 66.00 87.40 2.1M
1.5-SqNxt-44 67.28 88.15 2.6M
VGG-19 68.50 88.50 138M
2.0-SqNxt-23 67.18 88.17 2.4M
2.0-SqNxt-34 68.46 88.78 3.8M
2.0-SqNxt-44 69.59 89.53 4.4M
MobileNet 67.50 (70.9) 86.59 (89.9) 4.2M
2.0-SqNxt-23v5 67.44 (69.8) 88.20 (89.5) 3.2M
Table 2: Performance of wider variations of the SqueezeNext architecture. The 1.5-SqNxt models use 1.5× wider channels, and the 2.0-SqNxt models use 2× wider channels, as compared to the respective baseline models. The 2.0-SqNxt-44 network is able to match VGG-19's performance with 31× fewer parameters. Furthermore, comparison with MobileNet-1.0-224 shows comparable performance with 1.3× fewer parameters (2.0-SqNxt-23v5).
Figure 6: Per-layer inference time (lower is better) is shown along the left y-axis for variants (v1-v5) of the 1.0-SqNxt-23 architecture. Acceleration efficiency (number of MAC operations divided by total cycle count) is shown by the dotted line and the right y-axis. The top and bottom graphs show the results for the 8×8 and 16×16 PE array configurations, respectively. Note the relatively poor efficiency in early layers for the 16×16 PE array due to the small number of filter channels.

4 Results

Embedded applications have a variety of constraints with regard to accuracy, power, energy, and speed. To meet these constraints we explore a variety of trade-offs. Results of these explorations are reported in this section.

Model Params(×1e6) MACs Top-1 Top-5 Depth [8x8 PE, 32KB: Time Energy] [16x16 PE, 128KB: Time Energy]
AlexNet 60.9 725M 57.10 80.30 x5.46 1.6E+10 x8.26 1.5E+10
SqueezeNet v1.0 1.2 837M 57.50 80.30 x3.42 6.7E+09 x2.59 4.5E+09
SqueezeNet v1.1 1.2 352M 57.10 80.30 x1.60 3.3E+09 x1.31 2.4E+09
Tiny Darknet 1.0 495M 58.70 81.70 x1.92 3.8E+09 x1.50 2.5E+09
1.0-SqNxt-23 0.72 282M 59.05 82.60 [6,6,8,1] x1.17 3.2E+09 x1.22 2.5E+09
1.0-SqNxt-23v2 0.74 228M 58.55 82.09 [6,6,8,1] x1.00 2.8E+09 x1.13 2.4E+09
1.0-SqNxt-23v3 0.74 228M 58.18 81.96 [4,8,8,1] x1.00 2.7E+09 x1.08 2.3E+09
1.0-SqNxt-23v4 0.77 228M 59.09 82.41 [2,10,8,1] x1.00 2.6E+09 x1.02 2.2E+09
1.0-SqNxt-23v5 0.94 228M 59.24 82.41 [2,4,14,1] x1.00 2.6E+09 x1.00 2.0E+09
MobileNet 4.2 574M 67.50(70.9) 86.59(89.9) x2.94 9.1E+09 x2.60 5.8E+09
2.0-SqNxt-23 2.4 749M 67.18 88.17 [6,6,8,1] x3.24 8.1E+09 x2.72 5.9E+09
2.0-SqNxt-23v4 2.56 708M 66.95 87.89 [2,10,8,1] x3.17 7.5E+09 x2.55 5.4E+09
2.0-SqNxt-23v5 3.23 708M 67.44 88.20 [2,4,14,1] x3.17 7.4E+09 x2.55 5.4E+09
Table 3: Simulated hardware performance results in terms of inference time and energy for the 8×8 and 16×16 PE array configurations. The time for each configuration is normalized by the cycle count of the fastest network for that configuration (smaller is better). Note how the variations of the baseline SqueezeNext model are able to achieve better inference speed and power consumption. For instance, the 1.0-SqNxt-23v5 model is 12% faster and 17% more energy efficient than the baseline model. This is achieved by an efficient redistribution of depth at each stage (see Fig. 3). Also note that 2.0-SqNxt-23v5 has better energy efficiency as compared to MobileNet.

Training Procedure

For training, we use a center crop of the input image and subtract the ImageNet mean from each input channel. We do not use any further data augmentation, and we use the same hyper-parameters for all experiments. Tuning hyper-parameters could increase the performance of some of the SqueezeNext variants, but it is not expected to change the general trend of our results. To accelerate training, we use a data parallel method, where the input batch is distributed among the processors. Unless otherwise noted, we use Intel Knights Landing (KNL) processors, each consisting of 68 1.4GHz cores with a single precision peak of 6 TFLOPS. All experiments were performed on the VLAB system, which consists of 256 KNLs with an Intel Omni-Path 100 series interconnect. We perform 120 epochs of training using Intel Caffe with a fixed batch size. In the data parallel approach, the model is replicated to all workers and each KNL gets an equal share of the input batch, randomly selected from the training data, and independently performs a forward and backward pass [5]. This is followed by a collective all-reduce operation, in which the sum of the gradients computed through the backward pass on each worker is formed. This sum is then broadcast to each worker and the model is updated.
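A minimal single-process simulation of this update scheme is sketched below (plain NumPy standing in for the actual MPI/Intel Caffe machinery; the worker count, model size, and the fake gradient function are arbitrary placeholders):

import numpy as np

N_WORKERS, MODEL_SIZE, LOCAL_BATCH = 4, 10, 8    # illustrative sizes
weights = np.random.randn(MODEL_SIZE)            # model replicated to every worker
lr = 0.1

def local_gradient(w, worker_seed):
    """Stand-in for the forward and backward pass on one worker's local batch."""
    rng = np.random.default_rng(worker_seed)
    return rng.standard_normal(w.shape)

# Each worker computes a gradient on its own share of the global batch ...
local_grads = [local_gradient(weights, seed) for seed in range(N_WORKERS)]
# ... the all-reduce sums the gradients and shares the result with every worker ...
summed_grad = np.sum(local_grads, axis=0)
# ... and every worker applies the same update, keeping the replicas in sync.
weights -= lr * summed_grad / (N_WORKERS * LOCAL_BATCH)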

Classification Performance Results

We report the performance of the SqueezeNext architecture in Table 1. Each network name is suffixed with its number of modules. The schematic of each module is shown in Fig. 1. Simply because AlexNet has been widely used as a reference architecture in the literature, we begin with a comparison to AlexNet. Our 23 module architecture exceeds AlexNet's top-5 performance by a 2.3% margin with 84× fewer parameters. Note that in the SqueezeNext architecture, the majority of the parameters are in the 1×1 convolutions. To explore how much further we can reduce the size of the network, we use group convolutions with a group size of two. Using this approach, we are able to match AlexNet's top-5 performance with a 112× smaller model.
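Group convolution splits the input channels into independent groups, so the weight count of a 1×1 layer drops by the group count; the quick check below uses illustrative channel sizes and PyTorch only for parameter counting (not the actual SqueezeNext layer widths):

import torch.nn as nn

dense_1x1 = nn.Conv2d(128, 128, kernel_size=1, bias=False)
group_1x1 = nn.Conv2d(128, 128, kernel_size=1, groups=2, bias=False)
print(dense_1x1.weight.numel())  # 16384 weights (128 * 128)
print(group_1x1.weight.numel())  # 8192 weights, halved with two groups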

Deeper variations of SqueezeNext cover a wide range of accuracies, as reported in Table 1. The deepest model we tested consists of 44 modules, 1.0-SqNxt-44, and achieves 4.85% better top-5 accuracy than AlexNet. SqueezeNext modules can also be used as building blocks for other types of network designs. One such possibility is to use a tree structure instead of the typical feed forward architecture, as proposed in [28]. Using the Iterative Deep Aggregation (IDA) structure, we can achieve better accuracy, although it increases the model size. Another variation for getting better performance is to increase the network width. We increase the baseline width by multipliers of 1.5× and 2× and report the results in Table 2. The version with twice the width and 44 modules (2.0-SqNxt-44) is able to match VGG-19's performance with a 31× smaller number of parameters.

A notable family of neural networks designed particularly for mobile/embedded applications is MobileNet, which uses depthwise convolutions. Depthwise convolutions reduce the parameter count, but also have poor arithmetic intensity. MobileNet's best reported accuracy results have benefited from data augmentation and extensive experimentation with the training regimen and hyper-parameter tuning (these results are reported in parentheses in Table 2). Aiming for a fairer comparison, we trained MobileNet under a training regimen similar to SqueezeNext's and report the results in Table 2. SqueezeNext is able to achieve similar top-1 and slightly better top-5 accuracy with half the model parameters. It may be possible to get better results for SqueezeNext with network-specific hyper-parameter tuning. However, our main goal is to show the general performance trend of SqueezeNext and not the maximum achievable performance for each individual version of it.

Hardware Performance Results

Figure 6 shows the per-layer cycle count estimation of 1.0-SqNxt-23, along with its optimized variations explained below. For better visibility, the results of layers with the same configuration are summed together, e.g. Conv8, Conv13, Conv18, and Conv23, and represented as a single item. In 1.0-SqNxt-23, the first convolutional layer accounts for 26% of the total inference time. This is due to its relatively large filter size being applied to a large input feature map. Therefore, the first optimization we make is replacing this 7×7 layer with a 5×5 convolution, which yields the 1.0-SqNxt-23v2 model. Moreover, we plot the accelerator efficiency in terms of MAC operations per cycle for each layer of the 1.0-SqNxt-23 model. Note the significant drop in efficiency for the layers in the first module. This drop is more significant for the 16×16 PE array configuration as compared to the 8×8 one. The reason for this drop is that the initial layers have a very small number of channels, which must be applied to a large input activation map, whereas later layers do not suffer from this, as they contain filters with a larger channel size. To resolve this issue, we explore changing the number of blocks per module to better distribute the workload. The baseline model has [6, 6, 8, 1] blocks before the activation map size is reduced at each stage, as shown in Fig. 3. We consider three possible variations on top of the v2 model. In the v3/v4 variations, we reduce the number of blocks in the first module by 2/4 and instead add them to the second module, respectively. In the v5 variation, we reduce the blocks of the first two modules and instead increase the blocks in the third module. The results are shown in Table 3. As one can see, the v5 variation (1.0-SqNxt-23v5) actually achieves better top-1/5 accuracy and has much better performance on the 16×16 PE array. It uses 17% lower energy and is 12% faster as compared to the baseline model (i.e. 1.0-SqNxt-23). In total, the latter network is 2.59×/8.26× faster and 2.25×/7.5× more energy efficient as compared to SqueezeNet/AlexNet without any accuracy degradation.

A very similar performance improvement is observed with the v4 variation, with only 50K more parameters than the baseline. We also show results comparing MobileNet and 2.0-SqNxt-23v5, which matches its classification performance. SqueezeNext has lower energy consumption, and achieves a better relative speedup when we use a 16×16 PE array as compared to an 8×8 one. The reason for this is the inefficiency of depthwise-separable convolutions in terms of hardware performance, which is due to their poor arithmetic intensity (ratio of compute to memory operations) [24]. This inefficiency becomes more pronounced as a larger number of processors is used, since the problem becomes more bandwidth bound. A comparative plot of the trade-offs between energy, inference speed, and accuracy for different networks is shown in Fig. 7. As one can see, SqueezeNext provides a family of networks that deliver superior accuracy with good power and inference speed.

Figure 7: The spectrum of accuracy versus energy and inference speed for SqueezeNext, SqueezeNet (v1.0 and v1.1), Tiny DarkNet, and MobileNet. SqueezeNext shows superior performance (in both plots, higher and to the left is better). The circle areas are proportional to the square root of the model size for each network.

5 Conclusions

In this work, we presented SqueezeNext, a new family of neural network architectures that is able to achieve AlexNet's top-5 performance with 112× fewer parameters. A deeper variation of the SqueezeNext architecture exceeds VGG-19's accuracy with 31× fewer parameters. MobileNet is a notable network for mobile applications, but SqueezeNext is able to exceed MobileNet's top-5 accuracy by 1.6%, with 1.3× fewer parameters. SqueezeNext accomplishes this without using depthwise-separable convolutions, which are troublesome for some mobile processor architectures. The baseline network consists of a two-stage bottleneck module to reduce the number of input channels to the 3×3 spatial convolutions, low rank separable 3×1 and 1×3 convolutions, and a 1×1 expansion module. We also restrict the number of input channels to the fully connected layer to further reduce the model parameters. More efficient variations of the baseline architecture were obtained by simulating the hardware performance of the model on the PE array of a realistic neural network accelerator, and we reported the results in terms of energy and inference cycle counts. Using per-layer simulation analysis, we proposed different variations of the baseline model that not only achieve better inference speed and energy consumption, but also better classification accuracy with only a negligible increase in model size. The tight coupling between neural net design and performance modeling on a neural net accelerator architecture was essential for obtaining these results. It allowed us to design a novel network that is 2.59×/8.26× faster and 2.25×/7.5× more energy efficient as compared to SqueezeNet/AlexNet without any accuracy degradation.

The resulting wide range of speed, energy, model-size, and accuracy trade-offs provided by the SqueezeNext family allows the user to select the right neural net model for a particular application.

Acknowledgements

We would like to thank Forrest Iandola for his constructive feedback, as well as the generous support from Intel (S. Buck, J. Ota, B. Forgeron, and S. Goswami).

References