Once for All: Train One Network and Specialize it for Efficient Deployment

08/26/2019 ∙ by Han Cai, et al. ∙ MIT

Efficient deployment of deep learning models requires specialized neural network architectures that best fit different hardware platforms and efficiency constraints (together defined as deployment scenarios). Traditional approaches either manually design or use AutoML to search for a specialized neural network and train it from scratch for each case. This is expensive and unscalable, since the training cost grows linearly with the number of deployment scenarios. In this work, we introduce Once for All (OFA), a new methodology for efficient neural network design that handles many deployment scenarios by decoupling model training from architecture search. Instead of training a specialized model for each case, we train a single once-for-all network that supports diverse architectural settings (depth, width, kernel size, and resolution). Given a deployment scenario, we can later obtain a specialized sub-network by selecting from the once-for-all network without any training. As such, the training cost of specialized models is reduced from O(N) to O(1). However, it is challenging to prevent interference between the many sub-networks. We therefore propose the progressive shrinking algorithm, which can train a once-for-all network to support more than 10^19 sub-networks while maintaining the same accuracy as independently trained networks, saving the non-recurring engineering (NRE) cost. Extensive experiments on various hardware platforms (Mobile/CPU/GPU) and efficiency constraints show that OFA consistently achieves the same level of (or better) ImageNet accuracy as SOTA neural architecture search (NAS) methods. Remarkably, OFA is orders of magnitude faster than NAS in handling multiple deployment scenarios (N). With N = 40, OFA requires 14x fewer GPU hours than ProxylessNAS, 16x fewer GPU hours than FBNet, and 1,142x fewer GPU hours than MnasNet. The more deployment scenarios, the greater the savings over NAS.


1 Introduction

Figure 1: Left: a single once-for-all network is trained to support versatile architectural configurations including depth, width, kernel size, and resolution. Given a deployment scenario, a specialized sub-network is directly selected from the once-for-all network without training. Middle: this approach reduces the cost of specialized deep learning deployment from O(N) to O(1). Right: once-for-all network followed by model selection can derive many accuracy-latency trade-offs by training only once, compared to conventional methods that require repeated training. See Table 2 for search cost comparison and Figure 6 for results on more hardware platforms.

Deep Neural Networks (DNNs) deliver state-of-the-art accuracy in many machine learning applications. However, the explosive growth in model size and computation cost gives rise to new challenges in efficiently deploying these models on diverse hardware platforms, since they have to meet different efficiency constraints (e.g., latency, energy consumption). For instance, a single mobile application on the App Store has to support a diverse range of hardware devices, from a high-end iPhone XS Max with a dedicated neural network accelerator to a 5-year-old iPhone 6 with a much slower processor. With different hardware resources (e.g., on-chip memory size, #arithmetic units), the optimal neural network architecture varies significantly. Even on the same hardware, the best model architecture differs under different battery conditions or workloads.

Given different hardware platforms and efficiency constraints, researchers either design compact models specialized for mobile Howard et al. (2017); Sandler et al. (2018); Zhang et al. (2018) or accelerate existing models by compression He et al. (2018) for efficient deployment. However, designing specialized DNNs for every deployment scenario is both engineer-expensive and computationally expensive, whether done by hand or with AutoML, since such methods must repeat the architecture design process and retrain the designed network from scratch for each case. Their total cost grows linearly as the number of deployment scenarios increases, which makes them unable to handle the vast number of hardware devices (23.14 billion IoT devices as of 2018, per https://www.statista.com/statistics/471264/iot-number-of-connected-devices-worldwide/) and highly dynamic deployment environments (different battery conditions, varied workloads, different latency requirements, etc.). The NRE cost is high.

This paper introduces a new solution to tackle this challenge: designing a once-for-all network that can be directly deployed under diverse architectural configurations. Inference is performed by selecting only part of the once-for-all network, which flexibly supports different depths, widths, kernel sizes, and resolutions without retraining. A simple example of Once for All (OFA) is illustrated in Figure 1 (left). Specifically, we decouple the model training stage from the model specialization stage. In the model training stage, we train a single once-for-all network, from which various sub-networks with different architectural configurations can be generated, and we focus on improving the accuracy of each sub-network without letting them interfere with each other. In the model specialization stage, we prebuild an accuracy table and a hardware efficiency (latency or energy) table for a subset of sub-networks. The weights of the sub-networks are directly derived from the once-for-all network without retraining, so this process is fast and computationally efficient. Furthermore, since the accuracy table can be shared among all hardware platforms, its cost is paid only once. At test time, given a deployment scenario, we only need to query the accuracy table and the hardware latency table to get a specialized sub-network, and the cost is negligible. As such, we reduce the cost of specialized neural network architecture design from O(N) to O(1) (Figure 1 middle).

However, training the once-for-all network is a non-trivial task, since it requires jointly optimizing the weights to maintain the accuracy of a large number of sub-networks (more than 10^19 in our experiments). It is computationally prohibitive to enumerate all sub-networks and train each one individually. Even more challenging, sub-networks share weights, yet they should not interfere with each other. To address these issues, we propose the progressive shrinking algorithm for training the once-for-all network.
We first train a neural network with maximum depth, width, and kernel size, then progressively train the network to support smaller sub-networks. This progressive shrinking scheme is crucial to prevent smaller sub-networks from hurting the accuracy of larger sub-networks. Moreover, progressive shrinking also allows us to provide good initialization and better supervision for small sub-networks with the help of large sub-networks rather than training them from scratch.
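This training flow can be pictured as a phase schedule that gradually enlarges the set of sub-network configurations sampled per training step. Below is a minimal sketch of such a sampler; the phase ordering matches the description above, but the function names and the way a configuration is represented are illustrative, not the exact implementation:

```python
import random

# Hypothetical phase schedule for progressive shrinking: each phase
# enlarges the set of architectural choices that may be sampled.
PHASES = [
    {"kernel": [7],       "depth": [4],       "width": [6]},        # full network
    {"kernel": [3, 5, 7], "depth": [4],       "width": [6]},        # + elastic kernel
    {"kernel": [3, 5, 7], "depth": [2, 3, 4], "width": [6]},        # + elastic depth
    {"kernel": [3, 5, 7], "depth": [2, 3, 4], "width": [4, 5, 6]},  # + elastic width
]

def sample_subnet_config(phase, num_stages=5, max_blocks_per_stage=4):
    """Randomly pick one sub-network configuration allowed in this phase."""
    max_blocks = num_stages * max_blocks_per_stage
    return {
        # elastic resolution is active throughout training
        "resolution": random.choice(range(128, 225, 4)),
        # one depth choice per stage
        "depth": [random.choice(phase["depth"]) for _ in range(num_stages)],
        # one kernel-size / width choice per (maximal) block
        "kernel": [random.choice(phase["kernel"]) for _ in range(max_blocks)],
        "width": [random.choice(phase["width"]) for _ in range(max_blocks)],
    }
```

At each training step, one such configuration would be sampled and only the corresponding part of the once-for-all network activated for the forward/backward pass.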

We evaluate the effectiveness of our proposed framework on ImageNet with various hardware platforms (Mobile/CPU/GPU) and efficiency constraints. Under all deployment scenarios, OFA consistently achieves the same level of (or better) ImageNet accuracy as state-of-the-art hardware-aware NAS methods while being orders of magnitude more efficient at handling diverse deployment scenarios.

2 Related Work

Efficient Deep Learning.

Improving the efficiency of deep neural networks is crucial for deploying deep learning algorithms on resource-constrained edge devices. Towards this goal, many efficient neural network architectures have been proposed, such as SqueezeNet Iandola et al. (2016), MobileNets Howard et al. (2017); Sandler et al. (2018), ShuffleNets Ma et al. (2018); Zhang et al. (2018), etc. Orthogonal to directly designing efficient architectures, model compression Han et al. (2016) is another very effective technique for efficient deep learning, which focuses on improving the efficiency of a given neural network without affecting its accuracy. Specifically, network pruning approaches achieve this by removing redundant units Han et al. (2015) or redundant channels He et al. (2018); Liu et al. (2017) in a neural network, while quantization approaches improve efficiency by representing the weights and activations with low-bit representations Han et al. (2016); Courbariaux et al. (2015); Zhu et al. (2017).

Neural Architecture Search.

Manually designing neural network architectures requires tremendous human effort, which is expensive and sub-optimal. Neural architecture search (NAS) focuses on automating the architecture design process Zoph and Le (2017); Zoph et al. (2018); Real et al. (2018); Cai et al. (2018a); Liu et al. (2019). Early NAS methods Zoph et al. (2018); Real et al. (2018) search for high-accuracy neural network architectures without taking hardware efficiency into consideration. Therefore, their searched architectures (e.g., NASNet, AmoebaNet) are not efficient when deployed on hardware platforms. Recent hardware-aware NAS methods Tan et al. (2018); Cai et al. (2019); Wu et al. (2019) directly incorporate the hardware feedback into the architecture search process as a part of the reward signal Tan et al. (2018); Cai et al. (2019) or a loss regularization term Cai et al. (2019); Wu et al. (2019) that makes latency differentiable. As such, they are able to design specialized neural networks for different hardware platforms and efficiency constraints, showing significant improvements over non-specialized baselines (e.g., MobileNetV2). However, when a new inference hardware platform appears, these methods need to repeat the architecture search process and retrain the model. They are not scalable to a large number of deployment scenarios.

Dynamic Neural Networks.

The idea of training a single model to support different architectural configurations is related to dynamic neural network approaches that skip parts of an existing model (e.g., ResNet-50) based on the input image. For example, Wu et al. (2018); Liu and Deng (2018); Wang et al. (2018) propose to learn an additional controller or gating modules to adaptively drop blocks of a given neural network; Huang et al. (2018) introduces early-exit branches in the computation graph, allowing inference to exit in the middle based on the current prediction confidence; Lin et al. (2017) proposes to adaptively prune channels based on the input feature map at runtime; Slimmable Nets Yu et al. (2019) train a model to support multiple width multipliers (specifically 4 different global width multipliers), building upon existing human-designed neural networks (e.g., MobileNetV2 0.35, 0.5, 0.75, 1.0). Such methods save computation while maintaining accuracy by skipping more on easy inputs and less on difficult ones. However, they inherit a pre-designed neural network, which limits their performance in new deployment scenarios where that network is not optimal. The degree of flexibility is also limited (e.g., only the global width multiplier can adapt), and only a small number of architectural configurations are supported (e.g., 4).

3 Method

3.1 Problem Formalization

We start by formalizing the problem of training a once-for-all network that supports versatile architectural configurations. We denote the weights of the once-for-all network as W_o and the architectural configurations as {arch_i}. The problem can then be formalized as

    min_{W_o} Σ_i L_val( C(W_o, arch_i) ),    (1)

where C(W_o, arch_i) denotes a selection scheme that selects part of the model from the once-for-all network W_o and forms a sub-network with architectural configuration arch_i. For example, to get the weights of a 3-layer sub-network from a 4-layer once-for-all network, one possible C could be "taking the weights of the first three layers".
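As a toy illustration of such a selection scheme (a sketch with hypothetical names; real weights would be convolution kernels rather than strings):

```python
# Toy selection scheme: the once-for-all weights for one stage are a
# list of per-layer weight tensors, and a shallower sub-network takes
# the weights of the first `depth` layers.
def select_subnet_weights(stage_weights, depth):
    """Select the first `depth` layers' weights from a deeper stage."""
    if not 1 <= depth <= len(stage_weights):
        raise ValueError("depth out of range")
    return stage_weights[:depth]
```

For instance, selecting a 3-layer sub-network from a 4-layer stage keeps the first three layers' weights unchanged.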

In this work, we explore four important dimensions of convolutional neural network architectures, i.e., depth, width, kernel size, and resolution. Other dimensions such as dilation and #groups can be naturally incorporated, and we leave them for future work. The overall objective is to train the once-for-all network so that each supported sub-network achieves the same level of accuracy as independently training a network with the same architectural configuration.

Figure 2: An example of the progressive shrinking process. We cover four important dimensions of CNN architectures (depth D, width W, kernel size K, and resolution R), resulting in a large space comprising more than 10^19 diverse sub-networks.

3.2 Training the Once-for-all Network

Preliminary.

A convolutional neural network (CNN) typically consists of several stages, where a stage is a sequence of building blocks with the same resolution. At the network level, we allow the model to be executed under different input image sizes (i.e., elastic resolution). At the stage level, we allow each stage to skip different numbers of blocks (i.e., elastic depth). At the block level, we allow each block to use different numbers of channels (i.e., elastic width) and different kernel sizes (i.e., elastic kernel size). Therefore, unlike previous methods that inherit a given neural network architecture (e.g., ResNet, MobileNetV2) Yu et al. (2019); Wu et al. (2018); He et al. (2018), we have a much more diverse architecture space that supports a significantly larger number of architectural configurations (more than 10^19 vs. 4 in Yu et al. (2019)). Thanks to this diversity and the large design space, we can derive new specialized neural networks for many different deployment scenarios rather than working on top of existing pre-designed networks: if a pre-designed network is inefficient, the optimization headroom from working directly on top of it is small (Figure 6 and Figure 1 right). However, given such a large number of sub-networks to support, training the once-for-all network becomes challenging: the training cost should be affordable, and different sub-networks should not interfere with each other. In the following section, we introduce an effective progressive shrinking approach to solve this problem.

Figure 3: Kernel transformation matrix for elastic kernel size. We support 7x7, 5x5, and 3x3 kernels. Weight sharing makes this more parameter-efficient than maintaining independent kernels for each setting.
Figure 4: An overview of the training process for elastic depth. Instead of skipping each block independently, we keep the first D blocks and skip the last ones. The weights of the blue and green blocks are shared across all depth settings; the orange block is shared only across the deeper settings.
Figure 5: An overview of the progressive shrinking process for elastic width. In this example, we progressively support 4-, 3-, and 2-channel settings. Smaller channel settings are initialized with the most important channels (large L1 norm) after channel sorting.
A Progressive Shrinking Approach.

Instead of directly training the once-for-all network to support all sub-networks from scratch based on Eq. (1), which is difficult to optimize, we propose to decompose the optimization into a sequence of sub-tasks. An example of the progressive shrinking process is provided in Figure 2. Specifically, we start by training a full neural network with the maximum depth, width, and kernel size under elastic resolution. We then fine-tune the network to support both the full and partial settings of each dimension in a progressive manner (from large sub-networks to small sub-networks).

This progressive shrinking scheme offers three unique advantages. First, it makes the once-for-all network easier to optimize, since each sub-task is much simpler than the full task. Second, small models are easier to train with the help of large models. Progressive shrinking allows us to provide good initialization for small sub-networks by keeping the most important weights of the large sub-networks (Figure 5) and to provide better supervision via knowledge distillation (Figure 2), which is better than training small sub-networks from scratch. Third, progressive shrinking gives an ordering to the shared weights and prevents the smaller sub-networks from hurting the performance of larger sub-networks. We describe the details of the training flow as follows:


  • Elastic Resolution (Figure 2). Theoretically, we can feed images of any resolution into a trained CNN, since the image size does not affect the weights of the model. In practice, however, accuracy drops significantly if images are fed at resolutions never seen during training. Therefore, to support elastic resolution, we sample a different image size for each batch of training data, which is implemented by modifying the data loader.

  • Elastic Kernel Size (Figure 3). If trained properly, the center of a 7x7 convolution kernel can also serve as a 5x5 kernel, the center of which can in turn serve as a 3x3 kernel; the kernel size thus becomes elastic. The challenge is that the centered sub-kernels (e.g., 3x3 and 5x5) are shared and need to play multiple roles (an independent kernel and part of a larger kernel). The weights of the centered sub-kernels may need different distributions or magnitudes for these different roles, and forcing them to be identical may degrade the performance of some sub-networks. Therefore, we introduce kernel transformation matrices when sharing the kernel weights. Concretely, we use separate kernel transformation matrices for different blocks, while within each block the kernel transformation matrices are shared among different channels. As such, we only need a small number of extra parameters per block to store the kernel transformation matrices (e.g., a 25x25 matrix to derive the 5x5 kernel from the center of the 7x7 kernel), which is negligible.

  • Elastic Depth (Figure 4). Elastic depth is supported at the stage level, where a stage corresponds to a sequence of building blocks with the same output resolution. Each building block consists of one depth-wise convolution and two point-wise convolutions. To derive a sub-network that has D blocks in a stage that originally has N blocks, we keep the first D blocks and skip the last N − D blocks, rather than allowing any combination of D blocks as done in current NAS methods Cai et al. (2019); Wu et al. (2019). As such, one depth setting corresponds to exactly one combination of blocks, and the weights of the first D blocks are shared between large and small models.

  • Elastic Width (Figure 5). Width means the number of channels. We give each layer the flexibility to choose a different channel expansion ratio. Following the progressive shrinking scheme, we first train a full-width neural network. Then we introduce a channel sorting operation to support partial widths: it reorganizes the channels according to their importance, calculated as the L1 norm of each channel's weights (a larger L1 norm means more important). For example, when shrinking from a 4-channel layer to a 3-channel layer, we select the 3 most important channels, whose weights are shared with the 4-channel layer (Figure 5 left and middle). Thereby, the smaller sub-networks are initialized with the most important channels of the already well-trained once-for-all network. Notably, this channel sorting operation does not hurt the performance of larger sub-networks.

  • Knowledge Distillation (Figure 2). We use both the hard labels given by the training data and the soft labels Hinton et al. (2015) given by the trained full network when training the once-for-all network. The two loss terms are combined with a scaling factor λ:

        Loss = Loss_hard + λ · Loss_soft.    (2)
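The elastic kernel and elastic width mechanics above can be sketched in a few lines. This is a minimal sketch assuming depthwise convolution weights stored as NumPy arrays of shape (C_out, C_in, k, k); the transformation matrix `T` here is a placeholder for the learned kernel transformation matrix, and the helper names are illustrative:

```python
import numpy as np

def center_crop_kernel(kernel, target_size):
    """Take the centered target_size x target_size sub-kernel of a
    (..., k, k) conv weight, e.g. the 5x5 center of a 7x7 kernel."""
    k = kernel.shape[-1]
    start = (k - target_size) // 2
    return kernel[..., start:start + target_size, start:start + target_size]

def apply_kernel_transform(cropped, T):
    """Apply a (t*t, t*t) transformation matrix T to each centered
    sub-kernel, standing in for the learned transformation."""
    t = cropped.shape[-1]
    flat = cropped.reshape(*cropped.shape[:-2], t * t)
    return (flat @ T.T).reshape(*cropped.shape[:-2], t, t)

def sort_channels_by_l1(weight):
    """Reorder output channels of a (C_out, C_in, k, k) conv weight by
    descending L1 norm; shrinking to c channels then keeps weight[:c]."""
    importance = np.abs(weight).reshape(weight.shape[0], -1).sum(axis=1)
    return weight[np.argsort(-importance)]
```

With these pieces, a smaller sub-network's layer is initialized by channel-sorting the full layer, keeping the top channels, and (for a smaller kernel) cropping and transforming the kernel center.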

3.3 Specialized Model Deployment with Once-for-all Network

Having trained a once-for-all network, the next stage is to derive a specialized sub-network for a given deployment scenario. The goal is to find a neural network that satisfies the efficiency constraints (e.g., latency, energy) of the target hardware platform while optimizing accuracy. Because the "Once for All" methodology decouples model training from architecture search, there is no training cost in this stage.

Generally, OFA can be combined with any search algorithm, such as reinforcement learning Zoph and Le (2017); Cai et al. (2018b), evolutionary algorithms Real et al. (2018), gradient descent Liu et al. (2019); Wu et al. (2019), etc. However, these algorithms incur a search cost for each deployment scenario, leading to linear growth of the total cost (Table 2). In this work, we present a simple solution that eliminates the linear term: we randomly sample a subset of sub-networks and build their accuracy table and latency table. Given a target hardware platform and latency constraint, we can then directly query the accuracy table and the corresponding latency table to get the best sub-network within the table. The cost of querying tables is negligible, thereby avoiding linear growth of the total cost.
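This table-based selection amounts to a constrained lookup. A minimal sketch (the table contents and sub-network names below are made up for illustration):

```python
# Given prebuilt accuracy and latency tables keyed by sub-network id,
# return the most accurate sub-network that meets the latency budget.
def specialize(acc_table, latency_table, latency_budget_ms):
    feasible = [net for net in acc_table if latency_table[net] <= latency_budget_ms]
    if not feasible:
        return None  # no sub-network in the table meets the budget
    return max(feasible, key=lambda net: acc_table[net])
```

In practice one table of accuracies is shared across hardware platforms, while a separate latency table is measured per platform.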

Specifically, we sample 16K sub-networks in our experiments and build the accuracy table on 10K validation images (sampled from the original training set). Additionally, since we support elastic resolution, the same sub-network is measured under multiple input image sizes. Empirically, we find that the accuracy of a sub-network grows smoothly as the resolution increases. Therefore, to save cost, we measure the accuracy of sub-networks under a subset of resolutions with a stride of 16 (e.g., 128, 144, ..., 224). For an unmeasured image size r (e.g., 164) between two measured resolutions r_lo and r_hi, we predict its accuracy by linear interpolation weighted by FLOPs:

    acc(arch, r) = acc(arch, r_lo) + [acc(arch, r_hi) − acc(arch, r_lo)] × [F(arch, r) − F(arch, r_lo)] / [F(arch, r_hi) − F(arch, r_lo)],    (3)

where acc(arch, r) and F(arch, r) denote the accuracy and FLOPs of sub-network arch under input image size r, respectively. Since all sub-networks directly grab weights from the once-for-all network without training, this process takes only 200 GPU hours to complete. More importantly, this cost is paid only once.
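The interpolation can be written as a small helper. This is a sketch; `flops` is a stand-in callable (below instantiated with FLOPs growing quadratically in resolution, as is typical for CNNs), since real FLOPs are computed from the actual sub-network architecture:

```python
# Predict accuracy at an unmeasured resolution r by FLOPs-weighted
# linear interpolation between the two nearest measured resolutions
# r_lo < r < r_hi with measured accuracies acc_lo and acc_hi.
def predict_accuracy(r, r_lo, r_hi, acc_lo, acc_hi, flops):
    weight = (flops(r) - flops(r_lo)) / (flops(r_hi) - flops(r_lo))
    return acc_lo + (acc_hi - acc_lo) * weight
```

For example, predicting the accuracy at r = 164 from measurements at 160 and 176 yields a value between the two measured accuracies, slightly closer to the one at 160.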

| Sub-network (D, W, K)       | (2,4,3) | (2,4,7) | (2,6,3) | (2,6,7) | (4,4,3) | (4,4,7) | (4,6,3) | (4,6,7) |
|-----------------------------|---------|---------|---------|---------|---------|---------|---------|---------|
| Parameters                  | 2.8M    | 2.9M    | 3.3M    | 3.5M    | 3.7M    | 4.0M    | 4.7M    | 5.1M    |
| FLOPs                       | 191M    | 233M    | 266M    | 328M    | 329M    | 419M    | 473M    | 607M    |
| Independent top1 (%)        | 68.7    | 70.5    | 70.9    | 72.6    | 72.8    | 74.3    | 74.6    | 75.4    |
| W/o progressive shrink (Δ)  | -1.4    | -0.7    | -1.7    | -1.3    | -1.4    | -1.6    | -2.0    | -1.9    |
| Progressive shrink (Δ)      | 0.0     | +0.7    | +0.1    | +0.7    | +0.5    | +0.5    | +0.2    | +0.5    |

Table 1: ImageNet top1 accuracy (%) of sub-networks under the same input resolution. "(D = d, W = w, K = k)" denotes a sub-network with d blocks in each stage, where each block has width expansion ratio w and kernel size k. "Independent" indicates that the sub-networks are trained independently. The last two rows report the accuracy difference between sub-networks derived from the once-for-all network and independently trained sub-networks with the same architecture. Progressive shrinking consistently achieves the same level of (or better) ImageNet accuracy as independent training.

4 Experiments

In this section, we first apply the progressive shrinking algorithm to train the once-for-all network on ImageNet Deng et al. (2009). Then we demonstrate the effectiveness of our trained once-for-all network on various hardware platforms (Samsung Note8, Google Pixel1, Pixel2, NVIDIA 1080Ti, 2080Ti, V100 GPUs, and Intel Xeon CPU) with different latency constraints.

4.1 Training the Once-for-all Network on ImageNet

Training Details.

For a fair comparison, we use the same architecture space as ProxylessNAS Cai et al. (2019), without SE Hu et al. (2018) or the Swish activation function Ramachandran et al. (2017), which are orthogonal methods for boosting accuracy Tan and Le (2019). We train a once-for-all network that supports elastic depth (the number of blocks in each stage can be 2, 3, or 4), elastic width (the expansion ratio in each block can be 4, 5, or 6), and elastic kernel size (the kernel size of each depthwise-separable convolution layer can be 3, 5, or 7). Therefore, with 5 stages, we have roughly 2 x 10^19 sub-networks. Additionally, the input image size is also elastic, ranging from 128 to 224 with a stride of 4.
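The size of this space follows from a short count. The sketch below assumes the simple combinatorial model implied by the description (each block independently picks kernel size and expansion ratio; each stage independently picks its depth; elastic resolution is ignored), which is consistent with the "more than 10^19" figure:

```python
# Back-of-the-envelope count of the sub-network space: per stage,
# depth d in {2, 3, 4}, and each of the d blocks independently picks
# one of 3 kernel sizes and one of 3 expansion ratios.
choices_per_block = 3 * 3                                     # kernel x width
configs_per_stage = sum(choices_per_block ** d for d in (2, 3, 4))
num_subnets = configs_per_stage ** 5                          # 5 stages
# num_subnets is on the order of 2 x 10^19
```

Resolution is excluded from the count because it is an input-time choice rather than a distinct set of weights.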

We use the standard stochastic gradient descent (SGD) optimizer with Nesterov momentum 0.9 and weight decay to train models on ImageNet. The initial learning rate is 0.4, and we use the cosine schedule Loshchilov and Hutter (2016) for learning rate decay. The independent models are trained for 150 epochs with batch size 2048 on 32 GPUs. For training the once-for-all network, we use the same training setting with a larger training cost (roughly 8x), taking around 1,200 GPU hours on V100 GPUs. This is a one-time training cost that can be amortized over many deployment scenarios. Conventional models, even when trained longer, cannot achieve the same accuracy (2nd row of Table 2).

Results.

The top1 accuracy of both independently trained models and the once-for-all network under the same architectural configurations is reported in Table 1. Due to space limits, we take 8 sub-networks for comparison, each denoted as "(D = d, W = w, K = k)", representing a sub-network that has d blocks in all stages with expansion ratio w and kernel size k for all blocks. We also report the FLOPs and #parameters of these sub-networks for reference. Compared to independently trained models, the once-for-all network trained by the progressive shrinking (PS) algorithm maintains the same level of (or better) accuracy under all architectural configurations. We hypothesize that knowledge is transferred from larger sub-networks to smaller ones through progressive shrinking and distillation, enabling them to learn better jointly. In contrast, without PS (i.e., training the once-for-all network from scratch following Eq. 1), the once-for-all network cannot maintain the accuracy of its sub-networks: the maximum top1 accuracy drop reaches 2.0% on ImageNet. This shows the benefits and effectiveness of the progressive shrinking algorithm.

4.2 Specialized Sub-networks for Different Hardware Platforms and Constraints

We apply our trained once-for-all network to derive specialized sub-networks for different hardware platforms, aiming to optimize the trade-off between accuracy and latency. We use 7 different hardware platforms. For the GPU platforms, latency is measured with batch sizes 32 and 64 on NVIDIA 1080Ti, 2080Ti, and V100 with PyTorch 1.0 + cuDNN. CPU latency is measured with batch size 1 on an Intel Xeon E5-2690 v4, using the MKL-DNN library (https://github.com/intel/mkl-dnn) to speed up CPU inference. To measure mobile latency, we use a Samsung Note8, a Google Pixel1, and a Pixel2 with TF-Lite at batch size 1. On all hardware platforms, we fuse the batch normalization layers into the convolution layers, and no quantization is applied. In total, we have 40 deployment scenarios in the experiments (Figure 6, Figure 1 right): ten hardware settings (the three GPUs are counted twice, once per batch size), with four latency requirements each.

| Model | ImageNet Top1 (%) | FLOPs | Mobile latency | Search cost (GPU hours) | Training cost (GPU hours) | Total GPU hours (N=1 / N=40) |
|---|---|---|---|---|---|---|
| MobileNetV2 Sandler et al. (2018) | 72.0 | 300M | 106ms | 0 | 0.15K | 0.15K / 6K |
| MobileNetV2 #1200 Sandler et al. (2018) | 73.5 | 300M | 106ms | 0 | 1.2K | 1.2K / 48K |
| NASNet-A Zoph et al. (2018) | 74.0 | 564M | 234ms | 48K | - | 48K / 1,920K |
| DARTS Liu et al. (2019) | 73.1 | 595M | - | - | - | 0.346K / 13.84K |
| MnasNet Tan et al. (2018) | 74.0 | 317M | 108ms | 40K | - | 40K / 1,600K |
| FBNet-C Wu et al. (2019) | 74.9 | 375M | 129ms | - | - | 0.576K / 23.04K |
| ProxylessNAS-Mobile Cai et al. (2019) | 74.6 | 320M | 110ms | - | - | 0.5K / 20K |
| SinglePathNAS Guo et al. (2019) | 74.7 | 328M | - | 288 | - | 0.696K / 16.608K |
| Once for All w/o PS | 72.9 | 321M | 109ms | 200 | 1200 | 1.4K / 1.4K |
| Once for All w/ PS | 75.0 | 327M | 112ms | 200 | 1200 | 1.4K / 1.4K |
| Once for All w/ PS #25 | 75.3 | 327M | 112ms | 200 | - | 1.425K / 2.4K |

Table 2: Comparison with state-of-the-art hardware-aware NAS methods on Samsung Note8. OFA decouples model training from architecture search; both the search cost and the training cost stay constant as the number of deployment scenarios N grows (N = 40 in our experiments). "#25" denotes that the specialized sub-networks are fine-tuned for 25 epochs after grabbing weights from the once-for-all network. We cite the results of MnasNet without SE for a fair comparison of the search methodology.
Comparison with NAS on Mobile.

Table 2 reports the comparison between OFA and state-of-the-art hardware-aware NAS methods on the mobile platform (Samsung Note8). OFA is much more efficient than NAS when handling multiple deployment scenarios, since the cost of OFA is constant while that of the others grows linearly with the number of deployment scenarios N. With N = 40, the training time of OFA is 14x faster than ProxylessNAS, 16x faster than FBNet, and 1,142x faster than MnasNet. Without retraining, OFA achieves 75.0% top1 accuracy on ImageNet, which is 1.0% higher than MnasNet, 0.4% higher than ProxylessNAS, and 0.1% higher than FBNet, while maintaining similar (or lower) mobile latency. By fine-tuning the specialized sub-network for 25 epochs, we can further improve the accuracy to 75.3%. We also observe that OFA with progressive shrinking (PS) can achieve 2.4% better accuracy than without PS, which shows the effectiveness of PS.

| Model | Latency | Top1 (%) |
|---|---|---|
| MobileNetV2 0.35 | 28ms | 60.3 |
| MnasNet 0.35 | 27ms | 62.4 (+2.1) |
| Once for All (ours) | 31ms | 66.3 (+6.0) |
| Once for All #25 | 31ms | 69.0 (+8.7) |
| MobileNetV2 0.5 | 40ms | 65.4 |
| MnasNet 0.5 | 41ms | 67.8 (+2.4) |
| ProxylessNAS 0.5 | 41ms | 68.2 (+2.8) |
| Once for All (ours) | 43ms | 69.6 (+4.2) |
| Once for All #25 | 43ms | 71.1 (+5.7) |
| MobileNetV2 0.75 | 77ms | 69.8 |
| MnasNet 0.75 | 75ms | 71.5 (+1.7) |
| Once for All (ours) | 79ms | 73.7 (+3.9) |
| Once for All #25 | 79ms | 74.3 (+4.5) |

Table 3: ImageNet accuracy results on Samsung Note8 under various latency constraints.
Results under Different Efficiency Constraints.

Table 3 summarizes the results on the mobile platform under different latency constraints. Benefiting from the OFA framework, we can design specialized neural networks for all scenarios without additional training cost, whereas previous methods typically rescale an existing model with a width multiplier to fit different latency constraints Sandler et al. (2018); Cai et al. (2019); Tan et al. (2018). As shown in Table 3, we therefore achieve much larger improvements over the baselines in such cases. Specifically, with similar latency as MobileNetV2 0.35, we improve ImageNet top1 accuracy from the MobileNetV2 baseline's 60.3% to 66.3% (+6.0%) without retraining, and to 69.0% (+8.7%) after fine-tuning for 25 epochs.

Results on More Hardware Platforms.

Figure 6 shows the detailed results on the other six hardware platforms (GPUs have two rows, one per batch size). OFA consistently improves the trade-off between accuracy and latency by a significant margin, especially on CPU and GPUs, since previous work on compact model design emphasized the edge and overlooked the cloud; OFA excels at both with low NRE cost. Specifically, with similar latency as MobileNetV2 0.35, "OFA #25" improves ImageNet top1 accuracy from MobileNetV2's 60.3% to 70.9% (+10.6%) on the Intel CPU and to over 70.4% (+10.1%) on NVIDIA GPUs. This reveals the insight that reusing the same model across deployment scenarios with only the width multiplier modified offers limited headroom for efficiency improvement: accuracy drops quickly as the latency constraint tightens. We instead provide an efficient way to specialize models at the architectural level, decoupling model training from architecture search, which offers a large design space and achieves better accuracy.

Figure 6: Specialized deployment results on mobile devices, Intel CPU and GPUs. On mobile, we can achieve up to 8.7% higher ImageNet top1 accuracy (60.3% -> 69.0%, upper middle) than MobileNetV2. On NVIDIA GPUs and Intel CPU, we can achieve up to 10+% higher ImageNet top1 accuracy than MobileNetV2. Specializing for a new hardware platform does not add the training cost.

5 Conclusion

We proposed Once for All (OFA), a new methodology that decouples model training from architecture search for efficient deep learning deployment under a large number of deployment scenarios. Unlike previous approaches that design and train a neural network for each deployment scenario, we designed a once-for-all network that supports diverse architectural configurations, including elastic depth, width, kernel size, and resolution. This greatly reduces the training cost (GPU hours) compared to conventional methods. To prevent sub-networks of different sizes from interfering with each other, we proposed a progressive shrinking algorithm that enables each sub-network to achieve the same level of accuracy as training it independently. Experiments on a diverse range of hardware platforms and efficiency constraints demonstrated the effectiveness of our approach.

Acknowledgments

We thank MIT Quest for Intelligence, MIT-IBM Watson AI Lab, MIT-SenseTime Alliance, Samsung, Intel, ARM, Xilinx, SONY, the AWS Machine Learning Research Award, and the Google AR/VR Research Award for supporting this research. We thank Samsung and Google for donating mobile phones.

References

  • [1] H. Cai, T. Chen, W. Zhang, Y. Yu, and J. Wang (2018) Efficient architecture search by network transformation. In AAAI, Cited by: §2.
  • [2] H. Cai, J. Yang, W. Zhang, S. Han, and Y. Yu (2018) Path-level network transformation for efficient architecture search. In ICML, Cited by: §3.3.
  • [3] H. Cai, L. Zhu, and S. Han (2019) ProxylessNAS: direct neural architecture search on target task and hardware. In ICLR, External Links: Link Cited by: §2, 3rd item, §4.1, §4.2, Table 2.
  • [4] M. Courbariaux, Y. Bengio, and J. David (2015) BinaryConnect: training deep neural networks with binary weights during propagations. In NeurIPS, Cited by: §2.
  • [5] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei (2009) ImageNet: a large-scale hierarchical image database. In CVPR, Cited by: §4.
  • [6] Z. Guo, X. Zhang, H. Mu, W. Heng, Z. Liu, Y. Wei, and J. Sun (2019) Single path one-shot neural architecture search with uniform sampling. arXiv preprint arXiv:1904.00420. Cited by: Table 2.
  • [7] S. Han, H. Mao, and W. J. Dally (2016) Deep compression: compressing deep neural networks with pruning, trained quantization and Huffman coding. In ICLR, Cited by: §2.
  • [8] S. Han, J. Pool, J. Tran, and W. Dally (2015) Learning both weights and connections for efficient neural network. In NeurIPS, Cited by: §2.
  • [9] Y. He, J. Lin, Z. Liu, H. Wang, L. Li, and S. Han (2018) AMC: automl for model compression and acceleration on mobile devices. In ECCV, Cited by: §1, §2, §3.2.
  • [10] G. Hinton, O. Vinyals, and J. Dean (2015) Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531. Cited by: 5th item.
  • [11] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam (2017) MobileNets: efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861. Cited by: §1, §2.
  • [12] J. Hu, L. Shen, and G. Sun (2018) Squeeze-and-excitation networks. In CVPR, Cited by: §4.1.
  • [13] G. Huang, D. Chen, T. Li, F. Wu, L. van der Maaten, and K. Q. Weinberger (2018) Multi-scale dense networks for resource efficient image classification. In ICLR, Cited by: §2.
  • [14] F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer (2016) SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. arXiv preprint arXiv:1602.07360. Cited by: §2.
  • [15] J. Lin, Y. Rao, J. Lu, and J. Zhou (2017) Runtime neural pruning. In NeurIPS, Cited by: §2.
  • [16] H. Liu, K. Simonyan, and Y. Yang (2019) DARTS: differentiable architecture search. In ICLR, Cited by: §2, §3.3, Table 2.
  • [17] L. Liu and J. Deng (2018) Dynamic deep neural networks: optimizing accuracy-efficiency trade-offs by selective execution. In AAAI, Cited by: §2.
  • [18] Z. Liu, J. Li, Z. Shen, G. Huang, S. Yan, and C. Zhang (2017) Learning efficient convolutional networks through network slimming. In ICCV, Cited by: §2.
  • [19] I. Loshchilov and F. Hutter (2016) SGDR: stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983. Cited by: §4.1.
  • [20] N. Ma, X. Zhang, H. Zheng, and J. Sun (2018) ShuffleNet v2: practical guidelines for efficient cnn architecture design. In ECCV, Cited by: §2.
  • [21] P. Ramachandran, B. Zoph, and Q. V. Le (2017) Searching for activation functions. arXiv preprint arXiv:1710.05941. Cited by: §4.1.
  • [22] E. Real, A. Aggarwal, Y. Huang, and Q. V. Le (2018) Regularized evolution for image classifier architecture search. arXiv preprint arXiv:1802.01548. Cited by: §2, §3.3.
  • [23] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L. Chen (2018) MobileNetV2: inverted residuals and linear bottlenecks. In CVPR, Cited by: §1, §2, §4.2, Table 2.
  • [24] M. Tan, B. Chen, R. Pang, V. Vasudevan, and Q. V. Le (2018) MnasNet: platform-aware neural architecture search for mobile. arXiv preprint arXiv:1807.11626v1. Cited by: §2, §4.2, Table 2.
  • [25] M. Tan and Q. Le (2019) EfficientNet: rethinking model scaling for convolutional neural networks. In ICML, Cited by: §4.1.
  • [26] X. Wang, F. Yu, Z. Dou, T. Darrell, and J. E. Gonzalez (2018) SkipNet: learning dynamic routing in convolutional networks. In ECCV, Cited by: §2.
  • [27] B. Wu, X. Dai, P. Zhang, Y. Wang, F. Sun, Y. Wu, Y. Tian, P. Vajda, Y. Jia, and K. Keutzer (2019) FBNet: hardware-aware efficient convnet design via differentiable neural architecture search. In CVPR, Cited by: §2, 3rd item, §3.3, Table 2.
  • [28] Z. Wu, T. Nagarajan, A. Kumar, S. Rennie, L. S. Davis, K. Grauman, and R. Feris (2018) Blockdrop: dynamic inference paths in residual networks. In CVPR, Cited by: §2, §3.2.
  • [29] J. Yu, L. Yang, N. Xu, J. Yang, and T. Huang (2019) Slimmable neural networks. In ICLR, Cited by: §2, §3.2.
  • [30] X. Zhang, X. Zhou, M. Lin, and J. Sun (2018) ShuffleNet: an extremely efficient convolutional neural network for mobile devices. In CVPR, Cited by: §1, §2.
  • [31] C. Zhu, S. Han, H. Mao, and W. J. Dally (2017) Trained ternary quantization. In ICLR, Cited by: §2.
  • [32] B. Zoph and Q. V. Le (2017) Neural architecture search with reinforcement learning. In ICLR, Cited by: §2, §3.3.
  • [33] B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le (2018) Learning transferable architectures for scalable image recognition. In CVPR, Cited by: §2, Table 2.