FNA++: Fast Network Adaptation via Parameter Remapping and Architecture Search

06/21/2020 ∙ by Jiemin Fang, et al. ∙ Huazhong University of Science & Technology

Deep neural networks achieve remarkable performance in many computer vision tasks. Most state-of-the-art (SOTA) semantic segmentation and object detection approaches reuse neural network architectures designed for image classification as the backbone, commonly pre-trained on ImageNet. However, performance gains can be achieved by designing network architectures specifically for detection and segmentation, as shown by recent neural architecture search (NAS) research for detection and segmentation. One major challenge though is that ImageNet pre-training of the search space representation (a.k.a. super network) or the searched networks incurs huge computational cost. In this paper, we propose a Fast Network Adaptation (FNA++) method, which can adapt both the architecture and parameters of a seed network (e.g. an ImageNet pre-trained network) to become a network with different depths, widths, or kernel sizes via a parameter remapping technique, making it possible to use NAS for segmentation/detection tasks a lot more efficiently. In our experiments, we conduct FNA++ on MobileNetV2 to obtain new networks for semantic segmentation, object detection, and human pose estimation that clearly outperform existing networks designed both manually and by NAS. We also implement FNA++ on ResNets and NAS networks, which demonstrates a great generalization ability. The total computation cost of FNA++ is significantly less than SOTA segmentation/detection NAS approaches: 1737x less than DPC, 6.8x less than Auto-DeepLab, and 8.0x less than DetNAS. The code will be released at https://github.com/JaminFong/FNA.


1 Introduction

Deep convolutional neural networks have achieved great successes in computer vision tasks such as image classification [1, 2, 3], semantic segmentation [4, 5, 6], object detection [7, 8, 9] and pose estimation [10, 11]. Image classification has always served as a fundamental task for neural architecture design. It is common to use networks designed and pre-trained on the classification task as the backbone and fine-tune them for segmentation or detection tasks. However, the backbone plays an important role in the performance on these tasks, and the differences between the tasks call for different backbone design principles. For example, segmentation tasks require high-resolution features, and object detection tasks need to make both localization and classification predictions from each convolutional feature. Such distinctions make neural architectures designed for classification tasks fall short. Some attempts [12, 13] have been made to tackle this problem by manually modifying the architectures designed for classification to better accommodate the characteristics of new tasks.

Handcrafted neural architecture design is inefficient, requires a lot of human expertise, and may not find the best-performing networks. Recently, neural architecture search (NAS) methods [14, 15, 16] have seen a rise in popularity. Automatic deep learning methods aim at freeing engineers from tremendous trial and error in architecture design and at further promoting the performance of architectures over manually designed ones. Early NAS works [14, 17, 18] explore the search problem on classification tasks. As NAS methods develop, some works [19, 20, 21] propose to use NAS to specialize the backbone architecture design for semantic segmentation or object detection tasks. Nevertheless, backbone pre-training remains an inevitable but costly procedure. Though some works like [22] recently demonstrate that pre-training is not always necessary for accuracy considerations, training from scratch on the target task still takes more iterations than fine-tuning from a pre-trained model. For NAS methods, the pre-training cost is non-negligible for evaluating the networks in the search space. One-shot search methods [23, 24, 21] integrate all possible architectures into one super network, but pre-training the super network and the searched network still bears a huge computation cost.

Fig. 1: The framework of our proposed FNA++. Firstly, we select a pre-trained network as the seed network and expand it to a super network, which is the representation of the search space. The parameters of the seed network are remapped to the super network and architecture adaptation is performed. Then we derive the target architecture based on the architecture parameter distribution in the super network. Before parameter adaptation, we remap the parameters of the super network to the target architecture. Finally, we adapt the parameters of the target architecture to obtain the target network.

As ImageNet [25] pre-training has been a standard practice for many computer vision tasks, there are lots of models trained on ImageNet available in the community. To take full advantage of these pre-trained models, we propose a Fast Network Adaptation (FNA++) method based on a novel parameter remapping paradigm. Our method can adapt both the architecture and parameters of one network to a new task with negligible cost. Fig. 1 shows the whole framework. The adaptation is performed on both the architecture- and parameter-level. We adopt NAS methods [14, 26, 27] to implement the architecture-level adaptation. We select a manually designed network as the seed network, which is pre-trained on ImageNet. Then, we expand the seed network to a super network, which is the representation of the search space in FNA++. We initialize new parameters in the super network by mapping those from the seed network using the proposed parameter remapping mechanism. Compared with previous NAS methods [28, 19, 21] for segmentation or detection tasks that search from scratch, our architecture adaptation is much more efficient thanks to the parameter-remapped super network. With architecture adaptation finished, we obtain a target architecture for the new task. Similarly, we remap the parameters of the super network, which are trained during architecture adaptation, to the target architecture. Then we fine-tune the parameters of the target architecture on the target task with no need for backbone pre-training on a large-scale dataset.

We demonstrate FNA++’s effectiveness and efficiency via experiments on semantic segmentation, object detection and human pose estimation tasks. We adapt the manually designed network MobileNetV2 [29] to the semantic segmentation framework DeepLabv3 [6], the object detection frameworks RetinaNet [9] and SSDLite [8, 29], and the human pose estimation framework SimpleBaseline [10]. Networks adapted by FNA++ surpass both manually designed and NAS networks in terms of both performance and model MAdds. Compared to NAS methods, FNA++ costs 1737× less than DPC [28], 6.8× less than Auto-DeepLab [19] and 8.0× less than DetNAS [21]. To demonstrate the generalizability of our method, we implement FNA++ on diverse networks, including ResNets [2] and NAS networks, i.e., FBNet [30] and ProxylessNAS [18], which are searched on the ImageNet classification task. Experimental results show that FNA++ can further promote the performance of ResNets and NAS networks on the new task (object detection in our experiments).

Our main contributions can be summarized as follows:

  • We propose FNA++, a method that automatically fine-tunes both the architecture and the parameters of an ImageNet pre-trained network on target tasks. FNA++ is based on a novel parameter remapping mechanism which is performed for both architecture adaptation and parameter adaptation.

  • FNA++ promotes the performance on semantic segmentation, object detection and human pose estimation tasks with much lower computation cost than previous NAS methods, e.g., 1737× less than DPC, 6.8× less than Auto-DeepLab and 8.0× less than DetNAS.

  • FNA++ is generally helpful for various visual recognition tasks and improves diverse pre-trained networks, e.g., MobileNets, ResNets and NAS networks (FBNet [30] and ProxylessNAS [18]).

Our preliminary version of this manuscript was previously published as a conference paper [31]. We make the following improvements over the preliminary version. First, we generalize the paradigm of parameter remapping so that it is applicable to more architectures, e.g., ResNet [2] and NAS networks, with various depths, widths and kernel sizes. Second, we improve the remapping mechanism for parameter adaptation and achieve better results than our former version over different frameworks and tasks with no increase in computation cost. Third, we implement FNA++ on one more task (SimpleBaseline for human pose estimation) and achieve great performance.

The remaining part of the paper is organized as follows. In Sec. 2, we describe the related works from three aspects: neural architecture search, backbone design and parameter remapping. We then introduce our method in Sec. 3, including the proposed parameter remapping mechanism and the detailed adaptation process. In Sec. 4, we evaluate our method on different tasks and frameworks, implement it on various networks, and perform a series of experiments to study the proposed method comprehensively. We finally conclude in Sec. 5.

2 Related Work

2.1 Neural Architecture Search

Early NAS works automate network architecture design by applying reinforcement learning (RL) [32, 14, 17] or evolutionary algorithms (EA) [33, 26] to the search process. The RL/EA-based methods obtain architectures with better performance than handcrafted ones but usually bear tremendous search cost. Afterwards, ENAS [15] proposes to use parameter sharing to decrease the search cost, but the sharing strategy may introduce inaccuracy in evaluating the architectures. NAS methods based on the one-shot model [23, 24, 34] lighten the search procedure by introducing a super network as a representation of all possible architectures in the search space. Recently, differentiable NAS [27, 18, 30, 35, 36] has attracted great attention in this field, achieving remarkable results with far lower search cost than previous methods. Differentiable NAS assigns architecture parameters to the super network and updates the architecture parameters by gradient descent. The final architecture is derived based on the distribution of the architecture parameters. We use the differentiable NAS method to implement network architecture adaptation, which adjusts the backbone architecture automatically for new tasks, with the remapped seed parameters accelerating the search. In experiments, we also perform random search and still achieve great performance, which demonstrates that FNA++ is agnostic to the NAS method and can be equipped with diverse NAS methods.

2.2 Backbone Design

As deep neural network design [37, 38, 2] develops, the backbones of semantic segmentation or object detection networks evolve accordingly. Most previous methods [7, 9, 8, 6] directly reuse the networks designed for classification tasks as the backbones. However, the reused architecture may not meet the demands of the new task characteristics. Some works improve the backbone architectures by modifying existing networks. PeleeNet [39] proposes a variant of DenseNet [40] for more real-time object detection on mobile devices. DetNet [12] applies dilated convolutions [41] in the backbone to enlarge the receptive field, which helps to detect objects more precisely. BiSeNet [42] and HRNet [13] design multiple paths to learn both high- and low-resolution representations for better dense prediction. Recently, some works propose to use NAS methods to redesign the backbone networks automatically. Auto-DeepLab [19] searches for architectures with cell structures of diverse spatial resolutions under a hierarchical search space; the searched resolution change patterns benefit dense image prediction problems. CAS [20] proposes to search for the semantic segmentation architecture under a lightweight framework while taking the inference speed into account. DetNAS [21] searches for the backbone of the object detection network under a ShuffleNet [43, 44]-based search space, using the one-shot NAS method to decrease the search cost. However, pre-training the super network and the final searched network on ImageNet bears a huge cost. Benefiting from the proposed parameter remapping mechanism, our FNA++ adapts the architecture to new tasks with negligible cost.

2.3 Parameter Remapping

Net2Net [45] proposes function-preserving transformations to remap the parameters of one network to a new deeper or wider network. This remapping mechanism accelerates the training of the new larger network and achieves great performance. Following this manner, EAS [46] uses the function-preserving transformations to grow the network depth or layer width for architecture search, saving computation cost by reusing the weights of previously validated networks. Moreover, some NAS works [15, 47, 48] apply parameter sharing on child models to accelerate the search process; the sharing strategy is intrinsically a form of parameter remapping. Our parameter remapping paradigm extends the mapping dimensions to the depth-, width- and kernel-level. Compared to Net2Net, which focuses on mapping parameters to a deeper and wider network, the remapping mechanism in FNA++ is more flexible and can be performed on architectures with various depths, widths and kernel sizes. The remapping mechanism helps both the architecture and parameter adaptation achieve great performance at low computation cost.

(a) Depth level
(b) Width level
(c) Kernel level
Fig. 2: Parameters are remapped on three levels. (a) shows the depth-level remapping: the parameters of the new network are remapped from the corresponding layers in the seed network, and the parameters of new layers are remapped from the last layer in the seed network. (b) shows the width-level remapping: for parameters with fewer channels (left), the seed parameters are remapped to the new network with the corresponding channels; for parameters with more channels (right), the seed parameters are remapped to the corresponding channels and the parameters in the new channels are assigned with 0 (denoted as the white cuboid). (c) shows the kernel-level remapping: for parameters in a smaller kernel (left), the central part of the seed parameters is remapped to the new kernel; for parameters in a larger kernel (right), the seed parameters are remapped to the central part of the new kernel and the values of the surrounding part are assigned with 0.

3 Method

In this section, we first introduce the proposed parameter remapping paradigm, which is performed on three levels, i.e., network depth, layer width and convolution kernel size. Then we explain the whole procedure of the network adaptation including three main steps, network expansion, architecture adaptation and parameter adaptation. The parameter remapping paradigm is applied before architecture and parameter adaptation.

Fig. 3: Inverted residual block (MBConv) [29]. The normal block (upper) inputs and outputs tensors with the same channel number and spatial resolution; the residual connection is used in the normal block. The reduction block (bottom) performs the down-sampling operation with stride 2 and transforms the channel number.

3.1 Parameter Remapping

We define parameter remapping as a paradigm which maps the parameters of a seed network to those of a new network. The remapping paradigm is illustrated in the following three aspects. The remapping on the depth-level is carried out first, and then the remapping on the width- and kernel-level is conducted simultaneously. Moreover, we study different remapping strategies in the experiments (Sec. 4.9).

3.1.1 Remapping on Depth-level

We introduce diverse depth settings in our architecture adaptation process. Specifically, we adjust the number of MobileNetV2 [29] or ResNet [2] blocks in every stage of the network. Consider one stage of the seed network and the corresponding (possibly deeper) stage of the new network. The remapping process on the depth-level is shown in Fig. 2(a). Layers of the new network that also exist in the seed stage directly copy the parameters of the corresponding seed layers, while the parameters of any additional layers are all copied from the last layer of the seed stage. Parameter remapping at each layer of the stage is formulated as

(1)
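As a concrete illustration, the following PyTorch sketch implements the depth-level remapping described above; the function name and the per-stage list representation of the weights are our own assumptions for illustration, not the released implementation.

def remap_depth(seed_stage_weights, new_depth):
    # seed_stage_weights: list of per-layer weight tensors of one stage in the seed network
    # new_depth: number of layers of the corresponding stage in the new network
    remapped = []
    for i in range(new_depth):
        # existing layers copy the corresponding seed layer;
        # extra layers all copy the last layer of the seed stage
        src = seed_stage_weights[min(i, len(seed_stage_weights) - 1)]
        remapped.append(src.clone())
    return remapped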

3.1.2 Remapping on Width-level

As shown in Fig. 3, in the MBConv block of the MobileNetV2 [29] network, the first point-wise convolution expands the low-dimensional features to a high dimension. This practice can be used for expanding the width and capacity of a neural network. We allow diverse expansion ratios for architecture adaptation. The width-level remapping is illustrated in Fig. 2(b). If the new convolution has fewer (output or input) channels than the seed one, the first channels of the seed parameters are directly remapped to the new convolution. If the new convolution has more channels than the seed one, the seed parameters are remapped to its first channels and the parameters of the remaining channels are initialized with 0. The above remapping process can be formulated as follows (a code sketch is given after the two cases).

  1. The new convolution has no more channels than the seed convolution:

    (2)
  2. The new convolution has more channels than the seed convolution:

    (3)
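A minimal PyTorch sketch of this width-level remapping, assuming the channel dimension to remap is given by dim (0 for output channels, 1 for input channels); the names are ours, not from the released code.

import torch

def remap_width(seed_w, new_channels, dim=0):
    # seed_w: a convolution weight of the seed network, e.g. of shape [C_out, C_in, k, k]
    c_seed = seed_w.shape[dim]
    if new_channels <= c_seed:
        # fewer channels in the new network: keep the first new_channels seed channels
        return seed_w.narrow(dim, 0, new_channels).clone()
    # more channels: copy all seed channels and zero-initialize the remaining ones
    pad_shape = list(seed_w.shape)
    pad_shape[dim] = new_channels - c_seed
    return torch.cat([seed_w, seed_w.new_zeros(pad_shape)], dim=dim)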

In our ResNet [2] adaptation, we allow architectures with larger receptive fields by introducing grouped convolutions with larger kernel sizes, which do not introduce many additional MAdds. For architecture adaptation, the parameters of the plain convolution in the seed network need to be remapped to the parameters of the grouped convolution in the super network. With a grouped convolution, the input channel number of each group is reduced by a factor equal to the group number compared with the plain convolution. The parameters of the plain convolution are therefore remapped to the grouped convolution along the corresponding input dimensions. This process can be formulated as,

(4)
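The sketch below illustrates one way to remap a plain convolution to a grouped convolution as described above; mapping each output channel to the input slice of its own group follows the standard grouped-convolution layout and is our assumption about the exact correspondence.

def remap_plain_to_grouped(seed_w, groups):
    # seed_w: plain convolution weight [C_out, C_in, k, k] of the seed network
    # returns a grouped convolution weight [C_out, C_in // groups, k, k]
    c_out, c_in = seed_w.shape[:2]
    in_per_group = c_in // groups
    out_per_group = c_out // groups
    new_w = seed_w.new_zeros(c_out, in_per_group, *seed_w.shape[2:])
    for o in range(c_out):
        g = o // out_per_group                  # group that output channel o belongs to
        start = g * in_per_group                # corresponding input slice of the plain conv
        new_w[o] = seed_w[o, start:start + in_per_group]
    return new_w

If the grouped convolution also uses a larger kernel, the kernel-level remapping of Sec. 3.1.3 is applied on top of this result.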

3.1.3 Remapping on Kernel-level

The kernel size is commonly set as 3×3 in most manually designed networks [2, 29]. However, the optimal kernel size settings may not be restricted to a fixed one. In a neural network, a larger kernel size can be used to expand the receptive field and capture more abundant contextual features in segmentation or detection tasks, but takes more computation cost than a smaller one. How to allocate the kernel sizes in a network more flexibly is explored in our method. We introduce the parameter remapping on the kernel-size level and show it in Fig. 2(c). If the kernel of the new network is smaller than that of the seed network, the parameters of the new kernel are remapped from the central region of the seed kernel. Otherwise, the seed parameters are assigned to the central region of the new kernel and the values of the surrounding region are assigned with 0. The remapping process on the kernel-level is formulated as follows.

  1. The new kernel is no larger than the seed kernel:

    (5)
  2. The new kernel is larger than the seed kernel:

    (6)

where the indices in the formulas range over the spatial dimensions of the kernels.
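A minimal sketch of this kernel-level remapping, under the assumption of odd, square kernels; F.pad with its default zero padding realizes the zero assignment of the surrounding region.

import torch.nn.functional as F

def remap_kernel(seed_w, new_k):
    # seed_w: convolution weight [C_out, C_in, k_s, k_s] of the seed network
    # new_k: kernel size of the new convolution
    k_s = seed_w.shape[-1]
    if new_k <= k_s:
        # smaller (or equal) kernel: take the central region of the seed kernel
        off = (k_s - new_k) // 2
        return seed_w[..., off:off + new_k, off:off + new_k].clone()
    # larger kernel: place the seed kernel at the center and zero the surrounding values
    pad = (new_k - k_s) // 2
    return F.pad(seed_w, (pad, pad, pad, pad))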

3.2 Fast Network Adaptation

We divide our neural network adaptation into three steps. Fig. 1 illustrates the whole adaptation procedure. Firstly, we expand the seed network to a super network, which is the representation of the search space in the latter architecture adaptation process. Secondly, we perform the differentiable NAS method to implement network adaptation on the architecture-level and obtain the target architecture. Finally, we adapt the parameters of the target architecture and obtain the target network. The aforementioned parameter remapping mechanism is deployed before the two stages, i.e., architecture adaptation and parameter adaptation.

Block  chs  n (seg/det/pose)  s (seg/det/pose)
Conv  32  1 / 1 / 1  2 / 2 / 2
MBConv(k3e1)  16  1 / 1 / 1  1 / 1 / 1
SBlock  24  4 / 4 / 4  2 / 2 / 2
SBlock  32  4 / 4 / 4  2 / 2 / 2
SBlock  64  6 / 4 / 4  2 / 2 / 2
SBlock  96  6 / 4 / 4  1 / 1 / 1
SBlock  160  4 / 4 / 4  1 / 2 / 2
SBlock  320  1 / 1 / 1  1 / 1 / 1
TABLE I: Search space with MobileNetV2 [29] as the seed network. “chs”: the number of output channels. “n”: the number of layers. “s”: the stride of the convolution. “seg”, “det” and “pose” denote the tasks of semantic segmentation, object detection and human pose estimation respectively. “SBlock” denotes the block for search.

3.2.1 Network Expansion

We expand the seed network to a super network by introducing more options of architecture elements. For every MBConv layer, we allow more kernel size settings and more expansion ratios. As most differentiable NAS methods [27, 18, 30] do, we construct a super network as the representation of the search space. In the super network, we relax every layer by assigning each candidate operation an architecture parameter. The output of each layer is computed as a weighted sum of the output tensors from all candidate operations.

(7)

where the weighted sum is taken over the candidate operation set, each operation is applied to the input tensor of the layer, and the weight of each operation is derived from its architecture parameter in that layer. We set more layers in one stage of the super network and add the identity connection to the candidate operations for depth search. The structure of the search space is detailed in Tab. I. After expanding the seed network to the super network, we remap the parameters of the seed network to the super network based on the paradigm illustrated in Sec. 3.1. As shown in Fig. 1, the parameters of different candidate operations (except the identity connection) in one layer of the super network are all remapped from the same layer of the seed network. This remapping strategy avoids the huge cost of ImageNet pre-training for the search space representation, i.e., the super network in differentiable NAS. A minimal sketch of such a relaxed layer is given below.
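In the sketch, the softmax weighting over the architecture parameters is a common choice in differentiable NAS and is our assumption here; the class and variable names are ours.

import torch
import torch.nn as nn

class RelaxedLayer(nn.Module):
    # one searchable layer of the super network: candidate operations weighted by
    # their architecture parameters
    def __init__(self, candidate_ops):
        super().__init__()
        self.ops = nn.ModuleList(candidate_ops)
        self.alpha = nn.Parameter(torch.zeros(len(candidate_ops)))  # architecture parameters

    def forward(self, x):
        weights = torch.softmax(self.alpha, dim=0)
        # weighted sum of the output tensors from all candidate operations
        return sum(w * op(x) for w, op in zip(weights, self.ops))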

3.2.2 Architecture Adaptation

We start the differentiable NAS process with the expanded super network directly on the target task, i.e., semantic segmentation, object detection and human pose estimation in our experiments. As NAS works commonly do, we split a part of the data from the original training dataset as the validation set. In the preliminary search epochs, as the operation weights are not sufficiently trained, the architecture parameters cannot be updated towards a clear and correct direction. We therefore first train the operation weights of the super network for some epochs on the training dataset, as is also done in some previous differentiable NAS works [30, 19]. After the weights get sufficiently trained, we start alternating the optimization of the operation weights and the architecture parameters: the operation weights are updated on the training dataset and the architecture parameters are optimized on the validation dataset, both by gradient descent. To control the computation cost (MAdds in our experiments) of the searched network, we define the loss function as follows.

(8)

where the coefficient of the second term controls the magnitude of the MAdds optimization. The MAdds term during search is computed as

(9)

where the cost of each candidate operation in a layer is measured beforehand, the total cost of a layer is computed as a weighted sum of all its operation costs, and the total cost of the network is obtained by summing the costs of all layers. To accelerate the search process and decouple the parameters of different sub-networks, we only sample one path in each iteration according to the distribution of architecture parameters for operation weight updating. When the search process terminates, we use the architecture parameters to derive the target architecture. The final operation type in each searched layer is determined as the one with the maximum architecture parameter. A sketch of the cost term and the architecture derivation is given below.
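The sketch assumes a linear penalty with coefficient lam and a softmax weighting of the architecture parameters; op_costs holds the pre-measured MAdds of each candidate operation. These are illustrative assumptions rather than the exact form of Eq. 8 and Eq. 9.

import torch

def expected_cost(arch_params, op_costs):
    # arch_params: list of per-layer architecture-parameter tensors
    # op_costs: list of per-layer tensors with the measured MAdds of each candidate op
    total = 0.0
    for alpha, costs in zip(arch_params, op_costs):
        weights = torch.softmax(alpha, dim=0)
        total = total + (weights * costs).sum()   # weighted sum of operation costs per layer
    return total                                  # summed over all layers

def search_loss(task_loss, arch_params, op_costs, lam):
    # multi-objective loss: task loss plus a MAdds penalty scaled by lam
    return task_loss + lam * expected_cost(arch_params, op_costs)

def derive_architecture(arch_params):
    # pick the operation with the maximum architecture parameter in each searched layer
    return [int(torch.argmax(alpha)) for alpha in arch_params]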

Method OS iters Params MAdds mIOU(%)
MobileNetV2 [29] DeepLabv3 16 100K 2.57M 24.52B 75.5
DPC [28] 2.51M 24.69B 75.4(75.7)
FNA [31] 2.47M 24.17B 76.6
FNA++ 2.47M 24.17B 77.1
Auto-DeepLab-S [19] DeepLabv3+ 8 500K 10.15M 333.25B 75.2
FNA [31] 16 100K 5.71M 210.11B 77.2
FNA++ 16 100K 5.71M 210.11B 78.2
FNA [31] 8 100K 5.71M 313.87B 78.0
FNA++ 8 100K 5.71M 313.87B 78.4
TABLE II: Semantic segmentation results on the Cityscapes validation set. “OS”: output stride, the spatial resolution ratio of the input image to the backbone output. “iters”: the number of total training iterations. The result of DPC in the brackets is our re-implemented version under the same settings as FNA++. The MAdds of the models are computed with the input resolution.
Method Total Cost ArchAdapt Cost ParamAdapt Cost
DPC [28] 62.2K GHs 62.2K GHs 30.0 GHs
Auto-DeepLab-S [19] 244.0 GHs 72.0 GHs 172.0 GHs
FNA++ 35.8 GHs 1.4 GHs 34.4 GHs
TABLE III: Comparison of computational cost on the semantic segmentation task. “ArchAdapt”: architecture adaptation. “ParamAdapt”: parameter adaptation. “GHs”: GPU hours. denotes the computation cost computed under our reproducing settings. denotes the cost estimated according to the description in the original paper [19].

3.2.3 Parameter Adaptation

We obtain the target architecture from architecture adaptation. To accommodate the new task, the target architecture becomes different from that of the seed network (which is originally designed for the image classification task). Unlike the conventional training strategy, we discard the cumbersome pre-training of the target architecture on ImageNet. Instead, we remap the parameters of the super network to the target architecture before parameter adaptation. As shown in Fig. 1, the parameters of every searched layer in the target architecture are remapped from the operation with the same type in the corresponding layer of the super network. As the parameter shapes are identical for the same operation type, the remapping here reduces to a pure collection of the corresponding weights. All the other layers of the target architecture, including the input convolution and the head part of the network, are directly remapped from the super network as well. In our former conference version [31], the parameters of the target architecture are remapped from the seed network instead. We find that remapping from the super network achieves better performance than remapping from the seed network. We further study the remapping mechanism for parameter adaptation in experiments (Sec. 4.6). With the parameter remapping finished, we fine-tune the parameters of the target architecture on the target task and obtain the final target network. A minimal sketch of this collection-style remapping is given below.
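Since the selected operation in the super network and the corresponding layer of the target architecture share the same parameter shapes, the remapping reduces to copying the chosen operation's weights; the module and variable names below are hypothetical.

def remap_from_supernet(supernet_layers, selected_ops, target_layers):
    # supernet_layers: per-layer containers (e.g. nn.ModuleList) of candidate ops in the super network
    # selected_ops: per-layer index of the operation chosen during architecture adaptation
    # target_layers: modules of the target architecture with matching operation types
    for layer_ops, idx, target in zip(supernet_layers, selected_ops, target_layers):
        # identical shapes for the same operation type, so this is a pure copy
        target.load_state_dict(layer_ops[idx].state_dict())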

4 Experiments

In this section, we first select the ImageNet pre-trained model MobileNetV2 [29] as the seed network and apply our FNA++ method on three common computer vision tasks in Sec. 4.1 - 4.3, i.e., semantic segmentation, object detection and human pose estimation. We implement FNA++ on more network types to demonstrate the generalization ability, including ResNets [2] in Sec. 4.4 and NAS networks in Sec. 4.5. We study the remapping mechanism for parameter adaptation in Sec. 4.6 by comparing and analyzing two remapping mechanisms. Then in Sec. 4.7, we evaluate the effectiveness of parameter remapping for the two adaptation stages. Random search experiments in Sec. 4.8 are performed to demonstrate our method can be used as a NAS-method agnostic mechanism. Finally we study different remapping strategies in Sec. 4.9.

4.1 Network Adaptation on Semantic Segmentation

4.1.1 Implementation Details

The semantic segmentation experiments are conducted on the Cityscapes [49] dataset. In the architecture adaptation process, we map the seed network to the super network, which is used as the backbone of DeepLabv3 [6]. We randomly sample images from the training set as the validation set for architecture parameter updating; the original validation set is not used in the search process. The image is first rescaled and patches are then randomly cropped as the input data. The output feature maps of the backbone are down-sampled according to the output stride. Depthwise separable convolutions [3] are used in the ASPP module [50, 6]. As shown in Tab. I, the stages where the expansion ratio of MBConv is 6 in the original MobileNetV2 are searched and adjusted, and the maximum number of layers in each searched stage of the super network is set as Tab. I shows. We set a warm-up stage in the first epochs to linearly increase the learning rate, which then decays with the cosine annealing schedule [51]. We use the SGD optimizer with momentum and weight decay for the operation weights and the Adam optimizer [52] with weight decay and a fixed learning rate for the architecture parameters. For the loss function defined in Eq. 8, we set the coefficient of the cost term to optimize the MAdds of the searched network. The architecture optimization starts after the operation weights have been trained for some epochs. The whole search process is conducted on a single V100 GPU and takes only 1.4 hours in total.

Method Params MAdds mAP(%)
ShuffleNetV2-20 [21] RetinaNet 13.19M 132.76B 32.1
MobileNetV2 [29] 11.49M 133.05B 32.8
DetNAS [21] 13.41M 133.26B 33.3
FNA [31] 11.73M 133.03B 33.9
FNA++ 11.91M 132.99B 34.7
MobileNetV2 [29] SSDLite 4.3M 0.8B 22.1
Mnasnet-92 [17] 5.3M 1.0B 22.9
FNA [31] 4.6M 0.9B 23.0
FNA++ 4.4M 0.9B 24.0
TABLE IV: Object detection results on MS-COCO. The MAdds are computed with the respective input resolutions for RetinaNet and SSDLite.
Method Total Cost Super Network Target Network
Pre-training Finetuning Search Pre-training Finetuning
DetNAS [21] 68 GDs 12 GDs 12 GDs 20 GDs 12 GDs 12 GDs
FNA++ (RetinaNet) 8.5 GDs - - 5.3 GDs - 3.2 GDs
FNA++ (SSDLite) 21.0 GDs - - 5.7 GDs - 15.3 GDs
TABLE V: Comparison of computational cost on the object detection task. All our experiments on object detection are conducted on TITAN-Xp GPUs. “GDs”: GPU days.

In the parameter adaptation process, we remap the parameters of the super network to the target architecture obtained in the aforementioned architecture adaptation. The training data is cropped as a patch from the rescaled image, with the rescaling factor randomly selected, and random left-right flipping is used. We update the statistics of batch normalization (BN) [53] for a number of iterations before the parameter fine-tuning process. We use the same SGD optimizer as in the search process; the learning rate first increases linearly and then decays with the polynomial schedule. The whole parameter adaptation process is conducted on TITAN-Xp GPUs and takes 100K iterations, which costs only 34.4 GPU hours in total (Tab. III).

4.1.2 Experimental Results

Our semantic segmentation results are shown in Tab. II. The FNA++ network achieves 77.1% mIOU on Cityscapes with the DeepLabv3 [6] framework, 1.6% better than the manually designed seed network MobileNetV2 [29] with fewer MAdds. Compared with the NAS method DPC [28] (with MobileNetV2 as the backbone), which searches a multi-scale module for semantic segmentation tasks, FNA++ obtains a clear mIOU promotion with fewer MAdds. For a fair comparison with Auto-DeepLab [19], which searches the backbone architecture on DeepLabv3 and retrains the searched network on DeepLabv3+ [54], we adapt the parameters of the target architecture to the DeepLabv3+ framework. Compared with Auto-DeepLab-S, FNA++ achieves far better mIOU with fewer MAdds, Params and training iterations. With the output stride of 16, FNA++ promotes the mIOU by 3.0% with far fewer MAdds than Auto-DeepLab-S. With the improved remapping mechanism for parameter adaptation, FNA++ achieves better performance than our former version [31]. We compare the computation cost in Tab. III. With the remapping mechanism, FNA++ greatly decreases the computation cost of adaptation, taking only 35.8 GPU hours, 1737× less than DPC and 6.8× less than Auto-DeepLab.

Fig. 4: Architectures adapted by FNA++ on different tasks and frameworks. Each MBConv block is denoted as a colored rectangle. Different colors and shapes of rectangles represent different settings of convolution blocks. “Conv” denotes the conventional convolution followed by a batch normalization layer and a ReLU layer (ReLU6 is commonly used for MobileNetV2-based networks). “KxEy” denotes the MBConv with kernel size “x” and expansion ratio “y”. “dila” denotes the dilation ratio of the convolution, which is commonly used in the last stage of semantic segmentation networks [41, 6]. MBConv blocks whose output tensors have the same number of channels are grouped in a dashed box.

4.2 Network Adaptation on Object Detection

4.2.1 Implementation Details

We further implement our FNA++ method on object detection tasks. We adapt the MobileNetV2 seed network to two commonly used detection systems, RetinaNet [9] and SSDLite [8, 29], on the MS-COCO dataset [55]. Our implementation is based on the PyTorch [56] framework and the MMDetection [57] toolkit. In the search process of architecture adaptation, we randomly sample data from the original trainval35k set as the validation set.

RetinaNet. We describe the details of the search process in architecture adaptation as follows. The maximum layer numbers in each searched stage are set as Tab. I shows. For the input image, the short side is resized to a fixed length while the maximum long side is constrained. For the operation weights, we use the SGD optimizer with weight decay and momentum. We set a warm-up stage in the first iterations to linearly increase the learning rate, and then decay the learning rate at the 8th and 11th epoch. For the architecture parameters, we use the Adam optimizer [52] with weight decay and a fixed learning rate. For the multi-objective loss function, the coefficient in Eq. 8 is set to control the MAdds. We begin optimizing the architecture parameters after several epochs. All the other training settings are the same as the RetinaNet implementation in MMDetection [57]. For the fine-tuning in parameter adaptation, we use the SGD optimizer with weight decay and momentum; the same warm-up procedure is set in the first iterations and the learning rate then decays at the 8th and 11th epoch. The whole architecture search is conducted on 8 TITAN-Xp GPUs with a batch size of 16, and the whole parameter fine-tuning takes 12 epochs on 8 TITAN-Xp GPUs with a batch size of 32.

SSDLite. We resize the input images to a fixed resolution. For the operation weights in the search process, we use the standard RMSProp optimizer with weight decay. A warm-up stage in the first iterations linearly increases the learning rate, which is then decayed step-wise at fixed epochs. The architecture optimization starts after several epochs, and the coefficient in Eq. 8 is set for the loss function. The other search settings are the same as in the RetinaNet experiment. For parameter adaptation, the learning rate follows a similar step decay schedule, and the other training settings follow the SSD [8] implementation in MMDetection [57]. Both the search and the parameter adaptation are conducted on TITAN-Xp GPUs; the corresponding costs are listed in Tab. V.

4.2.2 Experimental Results

We show the results on the MS-COCO dataset in Tab. IV. For the RetinaNet framework, compared with the two manually designed networks ShuffleNetV2-20 [44, 21] and MobileNetV2 [29], FNA++ achieves higher mAP with similar MAdds. Compared with DetNAS [21], which searches the backbone of the detection network, FNA++ achieves a 1.4% higher mAP with fewer Params and MAdds. As shown in Tab. V, our total computation cost on RetinaNet is only 1/8 of that of DetNAS. For SSDLite in Tab. IV, FNA++ surpasses both the manually designed network MobileNetV2 and the NAS-searched network MnasNet-92 [17], while MnasNet takes around 3.8K GPU days to search for the backbone network on ImageNet [25]. The total computation cost of MnasNet is far larger than ours and is unaffordable for most researchers or engineers. The specific cost FNA++ takes on SSDLite is shown in Tab. V. Small and simplified networks are more difficult to train [58], so the experiments on SSDLite need longer training schedules and take a larger computation cost than RetinaNet. The experimental results further demonstrate the efficiency and effectiveness of direct adaptation on the target task with parameter remapping and architecture search.

Method Params MAdds PCKh@0.5
MobileNetV2 5.23M 6.09B 85.9
FNA++ 5.25M 6.14B 86.9
TABLE VI: Human pose estimation results on the MPII validation set.

4.3 Network Adaptation on Human Pose Estimation

4.3.1 Implementation Details

We apply FNA++ to the human pose estimation task. The experiments are performed on the MPII dataset [59] with the SimpleBaseline framework [10]. The MPII dataset contains around 25K images with about 40K annotated people. For the search process in architecture adaptation, we randomly sample data from the original training set as the validation set for architecture parameter optimization; the remaining data is used as the training set for search. For the architecture parameters, we use the Adam optimizer [52] with a fixed learning rate and weight decay. The coefficient in Eq. 8 is set for MAdds optimization. The input image is cropped and then resized following the standard training settings [10, 11]. The batch size is set as 32. All the other training hyper-parameters are the same as in SimpleBaseline. The architecture parameter updating starts after 70 search epochs. For parameter adaptation, we use the same training settings as SimpleBaseline. PCKh@0.5 [59] is used as the evaluation metric.

4.3.2 Experimental Results

The architecture adaptation takes 16 hours and the parameter adaptation takes 5.5 hours, each on a single TITAN X GPU, i.e., 21.5 GPU hours in total. As shown in Tab. VI, FNA++ promotes the PCKh@0.5 by 1.0 with similar model MAdds. As we aim at validating the effectiveness of FNA++ on networks, we do not tune the training hyper-parameters and simply follow the default ResNet-50 [2] training settings in SimpleBaseline for training both MobileNetV2 and the FNA++ network.

Fig. 5: The searchable ResNet blocks in FNA++, including the basic block (left) and the bottleneck block (right). The first convolution in the basic block and the middle convolution in the bottleneck block are searchable; the kernel size is denoted as “k” and the group number as “g”.
block type kernel size group number
k3g1 3 1
k5g2 5 2
k5g4 5 4
k7g4 7 4
k7g8 7 8
TABLE VII: Optional block types in ResNet search space. The block type “k3g1” is equivalent to the original basic block or bottleneck block in ResNet [2]. The type with the group number of 1 represents the plain convolution.
Model Params MAdds mAP(%)
ResNet-18 21.41M 160.28B 32.1
FNA++ 20.66M 159.64B 32.9
ResNet-50 37.97M 202.84B 35.5
FNA++ 36.27M 200.33B 36.8
TABLE VIII: Object detection results of RetinaNet on MS-COCO with ResNets [2] as the seed networks.

4.4 Network Adaptation on ResNet

To evaluate the generalization ability on different network types, we apply our method to ResNets [2], including ResNet-18 and ResNet-50. As ResNets are composed of plain convolutions, enlarging the kernel size would cause a huge MAdds increase. We propose to search for diverse kernel sizes in ResNets without much MAdds increase by introducing grouped convolutions [1]. The searchable ResNet blocks are shown in Fig. 5. We allow the first convolution in the basic block and the second convolution in the bottleneck block to be searched. All the optional block types in the designed ResNet search space are shown in Tab. VII. As the kernel size enlarges, we set more groups in the convolution block to maintain the MAdds.

We perform the adaptation of ResNet-18 and -50 to the RetinaNet [9] framework. For ResNet-18, the input image for search is resized with fixed short-side and maximum long-side lengths, following the resizing convention in MMDetection [57]. The SGD optimizer with weight decay is used for the operation weights, and the coefficient in Eq. 8 is set to optimize the MAdds. All the other search and training settings are the same as in the MobileNetV2 experiments on RetinaNet. The total adaptation cost, covering both the search and the parameter adaptation on TITAN-Xp GPUs, remains small. For ResNet-50, the input image is resized in the same way; the SGD optimizer is used with its own initial learning rate and weight decay, and the other hyper-parameters for search are the same as those for ResNet-18. For the training in parameter adaptation, we first recalculate the running statistics of BN for a number of iterations with synchronized batch normalization across GPUs (SyncBN). Then we freeze the BN layers (freezing BN means using the running statistics of BN during training and not updating the BN parameters; it is implemented by calling .eval() on the BN modules in PyTorch [56]) and train the target architecture on MS-COCO using the same hyper-parameters as ResNet-50 training in MMDetection. Both the architecture adaptation and the parameter adaptation are conducted on TITAN-Xp GPUs. The results are shown in Tab. VIII. Compared with the original ResNet-18 and -50, FNA++ further promotes the mAP by 0.8% and 1.3% respectively, with fewer Params and MAdds.
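A small sketch of the BN freezing described above; it follows standard PyTorch usage rather than the exact FNA++ code, and the function name is ours.

import torch.nn as nn

def freeze_bn(model):
    # use running statistics during training and stop updating BN parameters
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm2d, nn.SyncBatchNorm)):
            m.eval()                        # use running mean/var instead of batch statistics
            for p in m.parameters():
                p.requires_grad = False     # freeze the affine parameters

Note that calling model.train() switches BN modules back to training mode, so freeze_bn needs to be re-applied after every such call.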

Block  chs (FB/Proxy)  n  s
Conv  16 / 32  1  2
MBConv(k3e1)  16 / 16  1  1
SBlock  24 / 32  4  2
SBlock  32 / 40  4  2
SBlock  64 / 80  4  2
SBlock  112 / 96  4  1
SBlock  184 / 192  4  2
SBlock  352 / 320  1  1
TABLE IX: Search space with NAS networks as the seed networks. “FB” denotes the network FBNet-C and “Proxy” denotes Proxyless (mobile). The other abbreviations are the same as Tab. I.
Model Params MAdds mAP(%)
MobileNetV2 [29] 11.49M 133.05B 32.8
FBNet-C [30] 12.65M 134.17B 34.9
FNA++ 12.51M 134.20B 35.5
Proxyless (mobile) [18] 12.07M 133.42B 34.6
FNA++ 12.10M 133.23B 35.3
TABLE X: Object detection results of RetinaNet on MS-COCO with NAS networks as the seed networks.
Fig. 6: Architectures of NAS networks and ones adapted by FNA++. All the abbreviations and definitions are the same as that in Fig. 4.

4.5 Network Adaptation on NAS networks

Our proposed parameter remapping paradigm can be implemented on various types of networks. We further apply FNA++ to two popular NAS networks, i.e., FBNet-C [30] and Proxyless (mobile) [18]. The search space is constructed as Tab. IX shows. FBNet and ProxylessNAS search for architectures on the ImageNet classification task. To compare with the seed networks FBNet-C and Proxyless (mobile), we re-implement the two NAS networks and deploy them in the RetinaNet [9] framework. Then we train them on the MS-COCO [60] dataset with the ImageNet pre-trained parameters using the same training hyper-parameters as ours. The results are shown in Tab. X. Though the NAS networks already achieve far better performance than the handcrafted MobileNetV2 on the detection task, our FNA++ networks further promote the mAP while costing similar MAdds to the NAS seed networks. This experiment demonstrates that FNA++ can not only promote the performance of manually designed networks, but also improve NAS networks that were not searched on the target task. In real applications, if there is a demand for a new task, FNA++ helps to adapt the network at a low cost, avoiding the cumbersome cost of extra pre-training and the huge cost of searching from scratch. We visualize the architectures in Fig. 6.

Method MAdds Params mAP(%)
from seed RetinaNet 132.99B 11.91M 33.7
from sup 132.99B 11.91M 34.7
from seed () 132.99B 11.91M 35.6
from sup () 132.99B 11.91M 36.0
from seed SSDLite 0.9B 4.4M 24.0
from sup 0.9B 4.4M 24.0
TABLE XI: Comparison results with different remapping mechanisms for object detection on MS-COCO. “from seed” denotes that the parameters of the target architecture are remapped from the seed network. “from sup” denotes that they are remapped from the super network. “2×” denotes the longer training schedule for RetinaNet, i.e., 24 epochs in MMDetection [57].
Method Params MAdds mIOU(%)
from seed 2.47M 24.17B 76.6
from sup 2.47M 24.17B 77.1
TABLE XII: Comparison results with different remapping mechanisms for semantic segmentation on Cityscapes. The experiments are performed on DeepLabv3. All the abbreviation definitions are the same as Tab. XI.

4.6 Study the Remapping Mechanism for Parameter Adaptation

In our preliminary version [31], with the target architecture obtained by architecture adaptation, we remap the parameters of the seed network to the target architecture for the latter parameter adaptation. As we explore the mechanism of parameter remapping, we find that parameters remapped from the super network can bring further performance promotion for parameter adaptation. However, the batch normalization (BN) parameters during search may cause instability and damage the training performance of the sub-architectures in the super network; the parameters of BN are therefore usually disabled during search in many differentiable/one-shot NAS methods [27, 24]. We enable BN parameter updating in the search process, including the learnable affine parameters and the global mean/variance statistics, so as to completely reuse the parameters of the super network for parameter adaptation. Experiments show that BN parameter updating has little effect on the search performance.

(a) RetinaNet
(b) RetinaNet (2×)
(c) SSDLite
Fig. 7: Training loss (upper) and mAP (bottom) comparisons between the two remapping mechanisms, i.e., remapping parameters from the seed network (red) and from the super network (blue). Remapping from the super network greatly accelerates the training convergence in early epochs and achieves higher performance. With the training schedule lengthened (2×), the performance gap between the two remapping mechanisms narrows. As SSDLite training takes long epochs, the two remapping mechanisms achieve the same results.
Row Num Method MAdds mIOU(%)
(1) Remap → ArchAdapt → RemapSuper → ParamAdapt (FNA++) 24.17B 77.1
(2) Remap → ArchAdapt → Remap → ParamAdapt (FNA [31]) 24.17B 76.6
(3) RandInit → ArchAdapt → Remap → ParamAdapt 24.29B 76.0
(4) Remap → ArchAdapt → RandInit → ParamAdapt 24.17B 73.0
(5) RandInit → ArchAdapt → RandInit → ParamAdapt 24.29B 72.4
(6) Remap → ArchAdapt → Pretrain → ParamAdapt 24.17B 76.5
TABLE XIII: Effectiveness evaluation of parameter remapping. The experiments are conducted with DeepLabv3 on Cityscapes. “Remap”: parameter remapping. “ArchAdapt”: architecture adaptation. “RemapSuper”: parameter remapping from the super network. “ParamAdapt”: parameter adaptation. “RandInit”: random initialization. “Pretrain”: ImageNet pre-training.

As shown in Tab. XI and Tab. XII, remapping from the super network demonstrates better performance on both the object detection framework RetinaNet [9] and the semantic segmentation framework DeepLabv3 [6]. However, for SSDLite [8, 29], remapping parameters from the super network achieves the same mAP as remapping from the seed network. We deduce this is due to the long training schedule of SSDLite, i.e., 60 epochs. We further perform a long training schedule on RetinaNet (2×, i.e., 24 epochs in MMDetection [57]). The results in Tab. XI show that the performance promotion that remapping from the super network brings over remapping from the seed network decays from 1.0% to 0.4% with the training schedule set to 2×. It indicates that remapping from the super network for parameter adaptation is more effective in short training scenarios. This conclusion is somewhat similar to that in [22], which demonstrates that longer training schedules from scratch can achieve comparable results with training from a pre-trained model. We compare the training loss and mAP of the different remapping mechanisms in Fig. 7. Models with initial parameters remapped from the super network converge much faster than those remapped from the seed network in early epochs and achieve a higher final mAP in short training schedules. The two remapping mechanisms achieve similar results in long training schedules, e.g., SSDLite training. We therefore suggest remapping the parameters from the super network when computation resources are constrained.

4.7 Effectiveness of Parameter Remapping

To evaluate the effectiveness of the parameter remapping paradigm in our method, we attempt to optionally remove the parameter remapping process before the two stages, i.e. architecture adaptation and parameter adaptation. The experiments are conducted with the DeepLabv3 [6] semantic segmentation framework on the Cityscapes dataset [49].

Tab. XIII shows the complete experiments we perform on parameter remapping. Row (1) denotes the procedure of FNA++ and Row (2) denotes the former version, which remaps the seed parameters for parameter adaptation. In Row (3) we remove the parameter remapping process before architecture adaptation; in other words, the search is performed from scratch without using the pre-trained network. The mIOU in Row (3) drops by 0.6% compared to Row (2). Then we remove the parameter remapping before parameter adaptation in Row (4), i.e., training the target architecture from scratch on the target task. The mIOU decreases by 3.6% compared with Row (2). When we remove the parameter remapping before both stages in Row (5), we get the worst performance. In Row (6), we first pre-train the searched architecture on ImageNet and then fine-tune it on the target task. It is worth noting that FNA achieves a higher mIOU than the ImageNet pre-trained one in Row (6) by a narrow margin (0.1%). We conjecture that this may benefit from the regularization effect of parameter remapping before the parameter adaptation stage.

All the experiments are conducted using the same searching and training settings for fair comparisons. With parameter remapping applied to both stages, the adaptation achieves the best results in Tab. XIII. In particular, the remapping before parameter adaptation tends to provide greater performance gains than the remapping before architecture adaptation. All the experimental results demonstrate the importance and effectiveness of the proposed parameter remapping scheme.

Row Num Method MAdds(B) mAP(%)
(1) DetNAS [21] 133.26 33.3
(2) Remap → DiffSearch → Remap → ParamAdapt 133.03 33.9
(3) Remap → RandSearch → Remap → ParamAdapt 133.11 33.5
(4) RandInit → RandSearch → Remap → ParamAdapt 133.08 31.5
(5) Remap → RandSearch → RandInit → ParamAdapt 133.11 25.3
(6) RandInit → RandSearch → RandInit → ParamAdapt 133.08 24.9
TABLE XIV: Results of random search experiments with the RetinaNet framework on MS-COCO. “DiffSearch”: differentiable NAS. “RandSearch”: random search. The other abbreviation definitions are the same as Tab. XIII.
Method Width-BN Width-Std Width-L1 Kernel-Dilate FNA FNA++
mIOU(%) 75.8 75.8 75.3 75.6 76.6 77.1
TABLE XV: Study of the parameter remapping strategies. “Width-BN” denotes remapping with BN statistics on the width-level. “Width-Std” and “Width-L1” denote remapping with std- and L1-norm-based weight importance on the width-level. “Kernel-Dilate” denotes remapping in a dilation manner on the kernel-level.

4.8 Random Search Experiments

We carry out the random search (RandSearch) experiments with the RetinaNet [9] framework on the MS-COCO [60] dataset. All the results are shown in Tab. XIV. In Row (3), we purely replace the original differentiable NAS (DiffSearch) method in FNA++ with the random search method. The random search takes the same computation cost as the search in FNA++ for fair comparison. We observe that FNA++ with RandSearch achieves comparable results with our original method. It further confirms that FNA++ is a general framework for network adaptation and has great generalization ability: NAS is only an implementation tool for architecture adaptation, and the whole framework of FNA++ can be treated as a NAS-method agnostic mechanism. It is worth noting that even with random search, FNA++ still outperforms DetNAS [21] with a 0.2% higher mAP and 150M fewer MAdds.

We further conduct similar ablation studies with experiments in Sec. 4.7 about the parameter remapping scheme in Row (4) - (6). All the experiments further support the effectiveness of the parameter remapping scheme.

4.9 Study Parameter Remapping Strategies

We explore more strategies for the parameter remapping paradigm. All the experiments are conducted with the DeepLabv3 [6] framework on the Cityscapes dataset [49]. We explore the following aspects. For simplicity, we consider the weights of the seed network and the new network along the remapping dimension (the output/input channel).

4.9.1 Remapping with BN Statistics on Width-level

We review the formulation of batch normalization [53] as follows,

y = γ · (x − μ) / sqrt(σ² + ε) + β,    (10)

where x denotes the input tensor of the layer, μ and σ² are the batch mean and variance, and γ denotes the learnable parameter which scales the normalized data on the channel dimension. We take the absolute values |γ| as a measure of channel importance. When remapping the parameters on the width-level, we sort these values and map the parameters with the sorted top-k indices. More specifically, we define a weights remapping function in Algo. 1, where the reference vector is set to |γ|.

import torch

def remap_weights(seed_w, new_w, ref):
    # Input: seed weights seed_w, new weights new_w, and the reference vector ref
    # (one entry of ref per output channel of seed_w)
    k = new_w.shape[0]
    # get indices of the top-k values of the reference vector
    _, idx = torch.topk(ref, k)
    # sort the indices to preserve the original channel order
    idx, _ = torch.sort(idx)
    for i in range(k):
        new_w[i] = seed_w[idx[i]]  # copy the selected seed channels
    # Output: new_w with remapped values
    return new_w

Algorithm 1 Weights Remapping Function

Fig.: Parameter remapping on the kernel-level in a dilation manner.

4.9.2 Remapping with Weight Importance on Width-level

We attempt to use a canonical form of the convolution weights to measure the importance of parameters. Then we remap the seed network parameters with the greatest importance to the new network. The remapping operation is conducted based on Algo. 1 as well. We experiment with two canonical forms of the weights to compute the reference vector: the per-channel standard deviation and the per-channel L1 norm.
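The three width-level strategies only differ in how the reference vector for the remapping function of Algo. 1 is computed; a sketch with hypothetical tensor names is given below.

def reference_vectors(conv_w, bn_gamma):
    # conv_w: seed convolution weight [C_out, C_in, k, k]; bn_gamma: BN scale of that convolution
    ref_bn = bn_gamma.abs()                        # Width-BN: |gamma| per output channel
    ref_std = conv_w.flatten(1).std(dim=1)         # Width-Std: per-channel standard deviation
    ref_l1 = conv_w.flatten(1).abs().sum(dim=1)    # Width-L1: per-channel L1 norm
    return ref_bn, ref_std, ref_l1

Any of the three vectors can then be passed as ref to the remap_weights function of Algo. 1.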

4.9.3 Remapping with Dilation on Kernel-level

We experiment with another strategy of parameter remapping on the kernel-level. Different from the method defined in Sec. 3.1, we remap the parameters in a dilation manner as shown in Fig. 4.9.1. The values in the convolution kernel that are not remapped are all assigned with 0. It is formulated as

(11)

where the seed kernel values are placed at dilated spatial positions of the new, larger kernel.
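A sketch of this dilation-style kernel remapping; spreading the seed values evenly so that they span the new kernel is our assumption about the exact placement, based on the figure.

import torch

def remap_kernel_dilated(seed_w, new_k):
    # seed_w: seed convolution weight [..., k_s, k_s]; new_k: larger kernel size of the new network
    k_s = seed_w.shape[-1]
    new_w = seed_w.new_zeros(*seed_w.shape[:-2], new_k, new_k)
    step = (new_k - 1) // (k_s - 1)          # spacing between remapped positions
    idx = torch.arange(k_s) * step
    # place the seed values at dilated positions; all other positions stay 0
    new_w[..., idx[:, None], idx[None, :]] = seed_w
    return new_w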

Tab. XV shows the experimental results; all the searched models hold similar MAdds. The network adaptation with the parameter remapping paradigm defined in Sec. 3.1 achieves the best results. Furthermore, the remapping operation of FNA++ is easier to implement than the several aforementioned ones. We only explore a limited number of ways to implement the parameter remapping paradigm; how to conduct the remapping strategy more effectively remains meaningful future work.

5 Conclusion

In this paper, we propose a fast network adaptation method (FNA++) based on a novel parameter remapping paradigm and architecture search. We adapt the manually designed network MobileNetV2 to semantic segmentation, object detection and human pose estimation tasks on both the architecture- and parameter-level. The generalization ability of FNA++ is further demonstrated on both ResNets and NAS networks. The parameter remapping paradigm takes full advantage of the seed network parameters, which greatly accelerates both the architecture search and the parameter fine-tuning process. With FNA++, researchers and engineers can quickly adapt more pre-trained networks to various frameworks on different tasks. As there are lots of ImageNet pre-trained models available in the community, adaptation can be conducted at low cost for more applications, e.g., face recognition, depth estimation, etc. In real scenarios with dynamic dataset or task demands, FNA++ is a good solution to adapt or update the network with negligible cost. For researchers with constrained computation resources, FNA++ can be an efficient tool for exploration on computation-consuming tasks.

Acknowledgement

We thank Liangchen Song for the discussion and assistance.

Fig. 8: Visualization of semantic segmentation results on the Cityscapes validation dataset. upper: the input images. bottom: the prediction results.
Fig. 9: Visualization of object detection results on the MS-COCO validation dataset. upper: results of RetinaNet. bottom: results of SSDLite.
Fig. 10: Visualization of human pose estimation results on the MPII validation dataset.

References