One of the main barriers to deploying neural networks on embedded systems has been the large memory and power consumption of existing neural networks. In this work, we introduce SqueezeNext, a new family of neural network architectures whose design was guided both by previous architectures such as SqueezeNet and by simulation results on a neural network accelerator. This new network matches AlexNet's accuracy on the ImageNet benchmark with 112× fewer parameters, and one of its deeper variants achieves VGG-19 accuracy with only 4.4 million parameters (31× smaller than VGG-19). SqueezeNext also achieves better top-5 classification accuracy with 1.3× fewer parameters than MobileNet, while avoiding the depthwise-separable convolutions that are inefficient on some mobile processor platforms. This wide range of accuracy gives the user the ability to make speed-accuracy tradeoffs, depending on the available resources on the target hardware. Hardware simulation results for power and inference speed on an embedded system guided us to design variations of the baseline model that are 2.59×/8.26× faster and 2.25×/7.5× more energy efficient than SqueezeNet/AlexNet without any accuracy degradation.
Deep Neural Networks have transformed a wide range of applications in computer vision. This has been made possible in part by novel neural network architectures, the availability of more training data, and faster hardware for both training and inference. The transition to Deep Neural Network based solutions started with AlexNet, which won the ImageNet challenge by a large margin. The ImageNet classification challenge started in 2010, with the first winning method achieving an error rate of 28.2%, followed by 26.2% in 2011. AlexNet delivered a clear improvement in accuracy, with an error rate of 16.4%, roughly a 10% margin over the runner-up. AlexNet consists of five convolutional and three fully connected layers, and contains a total of 61 million parameters. Due to this large size, the original model had to be trained on two GPUs with a model-parallel approach, where the filters were distributed between the GPUs. Moreover, dropout was required to avoid overfitting with such a large model. The next major milestone in ImageNet classification was the VGG-Net family, which exclusively uses 3×3 convolutions. The main ideas here were the use of stacked 3×3 convolutions to approximate larger filters' receptive fields, along with a deeper network. However, the model size of VGG-19, with 138 million parameters, is even larger than AlexNet and not suitable for real-time applications. Another step forward in architecture design was the ResNet family, which incorporates a repetitive structure of 1×1 and 3×3 convolutions along with a skip connection. By changing the depth of the networks, the authors showed competitive performance for multiple learning tasks.
As one can see, the general trend in neural network design has been toward larger and deeper models to achieve better accuracy, without considering the memory or power budget. One widely held belief has been that new hardware will provide adequate computational power and memory to run these networks with real-time performance on embedded systems. However, the increase in transistor speed due to semiconductor process improvements has slowed dramatically, and it seems unlikely that mobile processors will meet computational requirements on a limited power budget. This has opened several new directions: reducing the memory footprint of existing neural network architectures through compression, or designing new, smaller models from scratch. SqueezeNet is a successful example of the latter approach, achieving AlexNet's accuracy with 50× fewer parameters without compression, and a considerably smaller footprint with deep compression. Models for other applications such as detection and segmentation have been developed based on SqueezeNet [25, 26]. Another notable work in this direction is the DarkNet Reference network, which achieves AlexNet's accuracy with a 28MB model while requiring fewer FLOPs per inference image. Its authors also proposed a smaller network called TinyDarkNet, which matches AlexNet's performance with only 4.0MB of parameters. Another notable work is MobileNet, which used depthwise convolution for the spatial convolutions and was able to exceed AlexNet's performance with only 1.32 million parameters. A later work, ShuffleNet, extends this idea to pointwise group convolution along with channel shuffling. More compact versions of Residual Networks have also been proposed; notable works here include DenseNet and its follow-up, CondenseNet.
Aiming to design a family of deep neural networks for embedded applications with limited power and memory budgets, we present SqueezeNext. With the smallest version, we can achieve AlexNet's accuracy with only 0.5 million model parameters, 112× fewer than AlexNet (and more than 2× fewer than SqueezeNet). Furthermore, we show how variations of the network, in terms of width and depth, can span a wide range of accuracy levels. For instance, a deeper variation of SqueezeNext can reach VGG-19's baseline accuracy with only 4.4 million model parameters. SqueezeNext uses the SqueezeNet architecture as a baseline, but we make the following changes. (i) We use a more aggressive channel reduction by incorporating a two-stage squeeze module. This significantly reduces the number of parameters used by the spatial convolutions. (ii) We use separable convolutions to further reduce the model size, and remove the additional branch after the squeeze module. (iii) We use an element-wise addition skip connection similar to that of the ResNet architecture
, which allows us to train much deeper networks without the vanishing gradient problem. In particular, we avoid DenseNet-type connections, as they increase the number of channels and require concatenating activations, which is costly both in execution time and power consumption. (iv) We optimize the baseline SqueezeNext architecture by simulating its performance on a multi-processor embedded system. The simulation results give very useful insight into where the performance bottlenecks are. Based on these observations, one can then perform variations on the baseline model and achieve higher performance both in terms of inference speed and power consumption, and sometimes even better classification accuracy.
It has been found that many of the filters in a network contain redundant parameters, in the sense that compressing them would not hurt accuracy. Based on this observation, there have been many successful attempts to compress a trained network [17, 7, 21, 29]. However, some of these compression methods require variable bit-width ALUs for efficient execution. To avoid this, we aim to design a small network which can be trained from scratch with few model parameters to start with. To this end, we use the following strategies:
We assume that the input to the i-th layer of the network, with K×K convolution filters, is x ∈ R^(H×W×C_i), producing an output activation y ∈ R^(H×W×C_o) (for ease of notation, we assume that the input and output activations have the same spatial size). Here, C_i and C_o are the input and output channel sizes. The total number of parameters in this layer is then K²·C_i·C_o. Essentially, the filters consist of C_o tensors of size K×K×C_i.
In the case of post-training compression, one seeks to shrink the parameters, W, using a low rank basis, W̃. Possible candidates for W̃ can be obtained via CP or Tucker decomposition. The amount of reduction that can be achieved with these methods is proportional to the rank of the original weight, W. However, examining the trained weights for most networks, one finds that they do not have a low rank structure. Therefore, most works in this area have to perform some form of retraining to recover accuracy [7, 8, 20, 14, 21]. This includes pruning to reduce the number of non-zero weights, and reduced precision for the elements of W. An alternative to this approach is to re-design the network using a low rank decomposition, forcing it to learn the low rank structure from the beginning, which is the approach that we follow. The first change that we make is to decompose the K×K convolutions into two separable convolutions of size 1×K and K×1, as shown in Fig. 1. This effectively reduces the number of parameters from K² to 2K, and also increases the depth of the network. These two convolutions each include a ReLU activation as well as a batch norm layer.
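The parameter reduction from the separable decomposition can be checked with a few lines of arithmetic. The sketch below (Python, for illustration only) compares a full K×K convolution against the 1×K plus K×1 pair; the choice that the 1×K stage already produces C_o channels is our assumption, made so the channel counts line up with the notation above.

```python
def conv_params(k, c_in, c_out):
    """Parameters of a k x k convolution layer (bias terms ignored)."""
    return k * k * c_in * c_out

def separable_params(k, c_in, c_out):
    """Parameters after decomposing k x k into 1 x k followed by k x 1.
    Assumption: the 1 x k stage already maps c_in -> c_out channels."""
    return k * c_in * c_out + k * c_out * c_out

# Example: a 3x3 layer with 64 input and 64 output channels.
full = conv_params(3, 64, 64)       # K^2 * Ci * Co = 36864
sep = separable_params(3, 64, 64)   # 2K * C^2 when Ci = Co = C
print(full, sep, full / sep)        # the K^2 -> 2K reduction gives 1.5x for K = 3
```

For equal input and output widths the saving is exactly K²/2K, i.e. 1.5× for 3×3 filters, on top of the extra depth the decomposition adds.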
Beyond the low rank structure, the multiplicative factor of C_i and C_o significantly increases the number of parameters in each convolution layer. Therefore, reducing the number of input channels reduces the network size. One idea would be to use depthwise-separable convolution to reduce this multiplicative factor, but this approach does not perform well on some embedded systems due to its low arithmetic intensity (ratio of compute to bandwidth). Another idea is the one used in the SqueezeNet architecture, where the authors used a squeeze layer before the spatial convolution to reduce the number of its input channels. Here, we use a variation of the latter approach: a two-stage squeeze layer, as shown in Fig. 1. In each SqueezeNext block, we use two bottleneck modules, each reducing the channel size by a factor of 2, followed by two separable convolutions. We also incorporate a final expansion module, which further reduces the number of output channels needed for the separable convolutions.
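To make the effect of the two-stage squeeze concrete, here is an illustrative parameter count for one block. The exact internal channel ratios (each bottleneck halving the width, the expansion restoring it) are our assumption for illustration, not the paper's exact specification.

```python
def squeezenext_block_params(c):
    """Illustrative parameter count for a SqueezeNext-style block with c
    input/output channels: two 1x1 bottlenecks (each halving the width),
    a 1x3 / 3x1 separable pair, and a 1x1 expansion back to c channels.
    Internal widths are assumed, not taken from the paper."""
    p = 0
    p += 1 * 1 * c * (c // 2)          # first squeeze: c -> c/2
    p += 1 * 1 * (c // 2) * (c // 4)   # second squeeze: c/2 -> c/4
    p += 1 * 3 * (c // 4) * (c // 2)   # 1x3 separable stage
    p += 3 * 1 * (c // 2) * (c // 2)   # 3x1 separable stage
    p += 1 * 1 * (c // 2) * c          # expansion: c/2 -> c
    return p

c = 128
block = squeezenext_block_params(c)
plain = 3 * 3 * c * c                  # a plain 3x3 convolution at width c
print(block, plain, plain / block)     # the block is several times smaller
```

Under these assumed ratios, a width-128 block uses roughly a quarter of the parameters of a single plain 3×3 convolution at the same width, which illustrates why the aggressive channel reduction dominates the savings.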
In the case of AlexNet, the majority of the network parameters are in the fully connected layers, accounting for 96% of the total model size. Follow-up networks such as ResNet or SqueezeNet contain only one fully connected layer. Assuming that its input has a size of H×W×C, a final fully connected layer will contain H·W·C·L parameters, where L is the number of labels (1000 for ImageNet). SqueezeNext incorporates a final bottleneck layer to reduce the input channel size to the last fully connected layer, which considerably reduces the total number of model parameters. This idea was also used in TinyDarkNet to reduce the number of parameters.
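As a concrete instance of the final-bottleneck idea (the channel numbers here are illustrative, not the paper's):

```python
def fc_params(h, w, c_last, labels=1000):
    """Parameters of a final fully connected layer fed by an
    h x w x c_last activation (after global pooling, h = w = 1)."""
    return h * w * c_last * labels

# Without a final bottleneck: a 1x1x1024 activation into 1000 classes.
no_bottleneck = fc_params(1, 1, 1024)
# With a 1x1 bottleneck first reducing 1024 channels to 128.
with_bottleneck = fc_params(1, 1, 128) + 1 * 1 * 1024 * 128
print(no_bottleneck, with_bottleneck)
```

Even counting the bottleneck's own weights, the classifier head shrinks by roughly 4× in this example, because the fully connected layer's cost is linear in its input channel count.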
Up to now, hardware architectures have been designed to be optimal for a fixed neural network, for instance SqueezeNet. However, as we discuss later, there are important insights to be gained from hardware simulation results. These can in turn be used to modify the neural network architecture to obtain better inference speed and power consumption, possibly without incurring generalization loss. In this section, we first explain how we simulate the performance of the network on a hypothetical neural network accelerator for mobile/embedded systems, and then discuss how the baseline model can be varied to obtain considerably better hardware performance.
A neural network accelerator is a domain-specific processor designed to accelerate neural network inference and/or training tasks. It usually has a large number of computation units called processing elements (PEs) and a hierarchical structure of memories and interconnections to exploit the massive parallelism and data reusability inherent in convolutional layers.
Prior work introduced a taxonomy for classifying neural network accelerators as spatial architectures, according to the type of data reused at the lowest level of the memory hierarchy. This is related to the order of the six loops of the convolutional layer shown in Algorithm 1 (because the typical batch size for inference is one, the batch loop is omitted here). In our simulator, we consider two options for executing a convolution: Weight Stationary (WS) and Output Stationary (OS).
WS is the most common method, used in many notable neural network accelerators [4, 3, 6, 18, 1]. In the WS method, each PE loads a convolution filter and executes the convolution at all spatial locations of the input. Once all the spatial positions have been iterated, the PE loads the next convolution filter. In this method, the filter weights are stored in the PE register. For an H×W input feature map and K×K×C_i×C_o convolutions (where C_o is the number of filters), the execution process is as follows. The PE loads a single element of the convolution parameters into its local register and applies it to the whole input activation, then moves on to the next element, and so forth. For multiple PEs, we decompose the work onto a 2D grid of processors, where one dimension decomposes the input channels and the other decomposes the output feature maps, as shown in Figure 4. In summary, in the WS mode the whole PE array keeps a sub-matrix of the weight tensor and performs matrix-vector multiplications on a series of input activation vectors.
In the OS method, each PE works exclusively on one pixel of the output activation map at a time. In each cycle, it applies the parts of the convolution that contribute to that output pixel and accumulates the results. Once all the computations for that pixel are finished, the PE moves on to a new pixel. With multiple PEs, each processor simply works on different pixels from multiple channels. In summary, in the OS mode the whole array computes a block of an output feature map over time. In each cycle, the new inputs and weights needed to compute the corresponding pixel are provided to each PE. There are multiple ways the OS method can be executed: Single Output Channel-Multiple Output Pixels (SOC-MOP), Multiple Output Channels-Single Output Pixel (MOC-SOP), and Multiple Output Channels-Multiple Output Pixels (MOC-MOP). Here we use the SOC-MOP format.
Accelerators that adopt the weight stationary (WS) data flow are designed to minimize memory accesses for the convolution parameters by reusing the weights of the convolution filter over several activations. On the other hand, accelerators that use the output stationary (OS) data flow are designed to minimize memory accesses for the output activations by accumulating the partial sums corresponding to the same output activation over time. Accordingly, the spatial loops over the output activation form the innermost loops in the WS data flow, whereas the input-channel and filter loops form the innermost loops in the OS data flow. Both data flows show good performance for convolutional layers with 3×3 or larger filters. However, a recent trend in mobile and embedded neural network architectures is the wide adoption of lightweight building blocks, e.g. 1×1 convolutions, which have limited parallelism and data reusability.
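The two schedules differ only in loop order. The following sketch writes the convolution as an explicit six-loop nest in the spirit of Algorithm 1 (valid convolution, no padding, batch loop omitted); it is an illustration of the two orderings, not the simulator itself, and both variants produce identical outputs.

```python
# Shapes: x[H][W][Ci], w[K][K][Ci][Co], y[Ho][Wo][Co], Ho = H-K+1, Wo = W-K+1.

def conv_ws(x, w, y, H, W, K, Ci, Co):
    # Weight stationary: weight indices in the outer loops, so each filter
    # element stays resident (e.g. in a PE register) while the spatial
    # loops sweep the whole activation map.
    for co in range(Co):
        for ci in range(Ci):
            for kh in range(K):
                for kw in range(K):
                    for oh in range(H - K + 1):
                        for ow in range(W - K + 1):
                            y[oh][ow][co] += x[oh + kh][ow + kw][ci] * w[kh][kw][ci][co]

def conv_os(x, w, y, H, W, K, Ci, Co):
    # Output stationary: output-pixel indices in the outer loops, so the
    # partial sum for one output element accumulates in place until done.
    for oh in range(H - K + 1):
        for ow in range(W - K + 1):
            for co in range(Co):
                for ci in range(Ci):
                    for kh in range(K):
                        for kw in range(K):
                            y[oh][ow][co] += x[oh + kh][ow + kw][ci] * w[kh][kw][ci][co]
```

Reading the nests makes the reuse pattern explicit: in `conv_ws` the weight element `w[kh][kw][ci][co]` is fixed across the two innermost loops, while in `conv_os` the accumulator `y[oh][ow][co]` is fixed across the three innermost loops.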
Figure 5 shows the block diagram of the neural network accelerator used as the reference hardware for the inference speed and energy estimation. It consists of an 8×8 or 16×16 array of PEs, a 32KB or 128KB global buffer, respectively, and a DMA controller to transfer data between DRAM and the buffer. Each PE has a 16-bit integer multiply-and-accumulate (MAC) unit and a local register file. For efficient acceleration of various configurations of the convolutional layer, the accelerator supports both the WS and OS operating modes, as explained before. To reduce execution time and energy consumption, the accelerator is also designed to exploit the sparsity of the filter weights [7, 27] in the OS mode. In this experiment, we conservatively assume 40% weight sparsity.
The accelerator processes the neural network one layer at a time, and for each layer the operating mode which gives the better inference speed is used. The memory footprint of the layers ranges from tens of kilobytes to a few megabytes. If the memory requirement of a layer is larger than the size of the global buffer, tiling is applied to the channel and spatial loops of the convolution in Algorithm 1. The order and the size of the tiled loops are selected using a cost function which takes into account the inference speed and the number of DRAM accesses.
The performance estimator computes the number of clock cycles required to process each layer and sums the results. The cycles consumed by the PE array and the global buffer are calculated by modeling the timings of the internal data paths and the control logic, and the DRAM access time is approximated by two numbers, the latency and the effective bandwidth, assumed to be 100 cycles and 16GB/s, respectively. For the energy estimation, we use a similar methodology to prior work, but the normalized energy cost of the components is slightly modified to reflect the architecture of the reference accelerator.
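The latency-plus-bandwidth approximation of DRAM access time can be written down directly. In the sketch below, the 100-cycle latency and 16GB/s bandwidth come from the text, while the 500MHz accelerator clock is our assumed value for illustration (the paper does not state the clock rate).

```python
def dram_cycles(n_bytes, clock_hz=500e6,
                latency_cycles=100, bandwidth=16e9):
    """Approximate DRAM access cost in accelerator cycles: a fixed
    latency plus a bandwidth-limited transfer term converted to cycles.
    Latency (100 cycles) and bandwidth (16 GB/s) follow the text; the
    500 MHz clock is an assumption for illustration."""
    return latency_cycles + (n_bytes / bandwidth) * clock_hz

# Fetching a 32 KB tile under the assumed clock:
print(dram_cycles(32 * 1024))
```

Under these numbers, a 32KB transfer costs about a thousand cycles of bandwidth time on top of the fixed latency, which is why the tiling cost function weighs DRAM accesses so heavily.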
[Table 2 excerpt] MobileNet: 67.50 (70.9) top-1, 86.59 (89.9) top-5, 4.2M parameters.
Embedded applications have a variety of constraints with regard to accuracy, power, energy, and speed. To meet these constraints we explore a variety of trade-offs. Results of these explorations are reported in this section.
[Table 1 header] Model | Params | MAC | Top-1 | Top-5 | Depth | 8×8, 32KB | 16×16, 128KB
For training, we use a center crop of the input image and subtract the ImageNet mean from each input channel. We do not use any further data augmentation, and we use the same hyper-parameters for all experiments. Tuning hyper-parameters could increase the performance of some SqueezeNext variants, but is not expected to change the general trend of our results. To accelerate training, we use a data-parallel method, where the input batch is distributed among the processors. Unless otherwise noted, we use Intel Knights Landing (KNL) processors, each consisting of 68 1.4GHz cores with a single-precision peak of 6 TFLOPS. All experiments were performed on the VLAB system, which consists of 256 KNLs with an Intel Omni-Path 100 series interconnect. We perform 120 epochs of training using Intel Caffe. In the data-parallel approach, the model is replicated to all workers, and each KNL receives an equal share of the input batch, randomly selected from the training data, and independently performs a forward and backward pass. This is followed by a collective all-reduce operation, in which the gradients computed in the backward pass are summed across workers. The result is then broadcast to each worker and the model is updated.
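One synchronous data-parallel update of the kind described above can be sketched in a few lines. Plain Python lists stand in for model tensors, and plain functions stand in for the collectives a real system would run over MPI/Omni-Path; everything here is illustrative.

```python
def all_reduce(worker_grads):
    """Element-wise sum of the per-worker gradient vectors (the result
    every worker would hold after the collective all-reduce)."""
    return [sum(g[i] for g in worker_grads) for i in range(len(worker_grads[0]))]

def apply_update(weights, grad_sum, lr, num_workers):
    """Each replica applies the same averaged-gradient SGD step, keeping
    all model copies identical."""
    return [w - lr * g / num_workers for w, g in zip(weights, grad_sum)]

weights = [0.5, -0.2]                      # replicated model parameters
worker_grads = [[0.1, 0.3], [0.3, 0.1]]    # gradients from two workers
summed = all_reduce(worker_grads)
weights = apply_update(weights, summed, lr=0.1, num_workers=2)
print(weights)
```

Because every replica sees the same gradient sum and applies the same deterministic update, the copies stay synchronized without ever exchanging the weights themselves.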
We report the performance of the SqueezeNext architecture in Table 1. The network name is appended with the number of modules; the schematic of each module is shown in Fig. 1. Because AlexNet has been widely used as a reference architecture in the literature, we begin with a comparison to AlexNet. Our 23-module architecture exceeds AlexNet's performance with a clear margin while using far fewer parameters. Note that in the SqueezeNext architecture, the majority of the remaining parameters are in the 1×1 convolutions. To explore how much further we can reduce the size of the network, we use group convolution with a group size of two. Using this approach, we are able to match AlexNet's top-5 performance with a 112× smaller model.
Deeper variations of SqueezeNext cover a wide range of accuracies, as reported in Table 1. The deepest model we tested consists of 44 modules: 1.0-SqNxt-44. This model achieves noticeably better top-5 accuracy than AlexNet. SqueezeNext modules can also be used as building blocks for other types of network designs. One such possibility is to use a tree structure instead of the typical feed-forward architecture. Using the Iterative Deep Aggregation (IDA) structure, we can achieve better accuracy, although it increases the model size. Another route to better performance is to increase the network width. We increase the baseline width by multiplier factors of 1.5 and 2.0 and report the results in Table 2. The version with twice the width and 44 modules (2.0-SqNxt-44) is able to match VGG-19's performance with a 31× smaller number of parameters.
A novel family of neural networks designed particularly for mobile/embedded applications is MobileNet, which uses depthwise convolution. Depthwise convolutions reduce the parameter count but have poor arithmetic intensity. MobileNet's best reported accuracy results have benefited from data augmentation and extensive experimentation with the training regimen and hyper-parameter tuning (these results are reported in parentheses in Table 2). To make a fairer comparison, we trained MobileNet under a training regimen similar to SqueezeNext's and report the results in Table 2. SqueezeNext achieves similar top-1 and slightly better top-5 accuracy with half the model parameters. It may be possible to get better results for SqueezeNext with network-specific hyper-parameter tuning; however, our main goal is to show the general performance trend of SqueezeNext, not the maximum achievable performance of each individual variant.
Figure 6 shows the per-layer cycle count estimation of 1.0-SqNxt-23, along with its optimized variations explained below. For better visibility, the results of layers with the same configuration are summed together, e.g. Conv8, Conv13, Conv18, and Conv23 are represented as a single item. In 1.0-SqNxt-23, the first convolutional layer accounts for 26% of the total inference time, due to its relatively large filter size applied to a large input feature map. Therefore, the first optimization we make is replacing this layer with a smaller convolution, yielding the 1.0-SqNxt-23-v2 model. Moreover, we plot the accelerator efficiency in terms of flops per cycle for each layer of the 1.0-SqNxt-23 model. Note the significant drop in efficiency for the layers in the first module. This drop is more significant for the 16×16 PE array configuration than for the 8×8 one. The reason for this drop is that the initial layers have a very small number of channels, which must be applied to a large input activation map; later layers in the network do not suffer from this, as they contain filters with larger channel sizes. To resolve this issue, we explore changing the number of modules to better distribute the workload. The baseline model assigns a fixed number of blocks to each stage before the activation map size is reduced, as shown in Fig. 3. We consider three possible variations on top of the v2 model. In the v3/v4 variation, we reduce the number of blocks in the first module by 2/4, respectively, and instead add them to the second module. In the v5 variation, we reduce the blocks of the first two modules and instead increase the blocks in the third module. The results are shown in Table 3. As one can see, the v5 variation (1.0-SqNxt-23v5) actually achieves better top-1/5 accuracy and has much better performance on the 16×16 PE array. It uses 17% lower energy and is 12% faster than the baseline model (i.e. 1.0-SqNxt-23).
In total, the latter network is 2.59×/8.26× faster and 2.25×/7.5× more energy efficient than SqueezeNet/AlexNet, respectively, without any accuracy degradation.
A very similar performance improvement is observed with the v4 variation, with only 50K more parameters. We also show results comparing MobileNet and 2.0-SqNxt-23v5, which matches its classification performance. SqueezeNext has lower energy consumption and achieves a better speedup on the 16×16 PE array than on the 8×8 one. The reason is the inefficiency of depthwise-separable convolution in terms of hardware performance, due to its poor arithmetic intensity (ratio of compute to memory operations). This inefficiency becomes more pronounced as more processors are used, since the problem becomes more bandwidth bound. A comparative plot of the trade-offs between energy, inference speed, and accuracy for different networks is shown in Fig. 7. As one can see, SqueezeNext provides a family of networks that offer superior accuracy with good power and inference speed.
In this work, we presented SqueezeNext, a new family of neural network architectures able to achieve AlexNet's top-5 performance with 112× fewer parameters. A deeper variation of the SqueezeNext architecture matches VGG-19's accuracy with 31× fewer parameters. MobileNet is a novel network for mobile applications, but SqueezeNext exceeds MobileNet's top-5 accuracy with 1.3× fewer parameters. SqueezeNext accomplishes this without using the depthwise-separable convolutions that are troublesome for some mobile processor architectures. The baseline network consists of a two-stage bottleneck module to reduce the number of input channels to the spatial convolutions, low rank separable convolutions, and an expansion module. We also restrict the number of input channels to the fully connected layer to further reduce the model parameters. More efficient variations of the baseline architecture were obtained by simulating the hardware performance of the model on the PE array of a realistic neural network accelerator, with results reported in terms of power and inference cycle counts. Using per-layer simulation analysis, we proposed variations of the baseline model that not only achieve better inference speed and energy consumption, but also better classification accuracy, with only a negligible increase in model size. The tight coupling between neural network design and performance modeling on a neural network accelerator architecture was essential to this result. It allowed us to design a novel network that is 2.59×/8.26× faster and 2.25×/7.5× more energy efficient than SqueezeNet/AlexNet, respectively, without any accuracy degradation.
The resulting wide range of speed, energy, model-size, and accuracy trade-offs provided by the SqueezeNext family allows the user to select the right neural network model for a particular application.
We would like to thank Forrest Iandola for his constructive feedback, as well as Intel (S. Buck, J. Ota, B. Forgeron, and S. Goswami) for their generous support.
Eyeriss: An energy-efficient reconfigurable accelerator for deep convolutional neural networks. IEEE Journal of Solid-State Circuits, 52(1):127–138, 2017.
In IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 109–116, 2011.
In International Conference on Machine Learning, pages 448–456, 2015.