Accelerating Multi-Objective Neural Architecture Search by Random-Weight Evaluation

10/08/2021
by   Shengran Hu, et al.

For the goal of automated design of high-performance deep convolutional neural networks (CNNs), Neural Architecture Search (NAS) methodology is becoming increasingly important for both academia and industry. Due to the costly stochastic gradient descent (SGD) training of CNNs for performance evaluation, most existing NAS methods are computationally expensive for real-world deployments. To address this issue, we first introduce a new performance estimation metric, named Random-Weight Evaluation (RWE), to quantify the quality of CNNs in a cost-efficient manner. Instead of fully training the entire CNN, RWE only trains its last layer and leaves the remainder with randomly initialized weights, which results in a single network evaluation in seconds. Second, a complexity metric is adopted for multi-objective NAS to balance the model size and performance. Overall, our proposed method obtains a set of efficient models with state-of-the-art performance in two real-world search spaces. The results obtained on the CIFAR-10 dataset are then transferred to the ImageNet dataset to validate the practicality of the proposed algorithm. Moreover, ablation studies on the NAS-Bench-301 dataset reveal the effectiveness of the proposed RWE in estimating the performance compared with existing methods.



1 Introduction

In recent years, deep convolutional neural networks (CNNs) have been widely studied and have achieved astonishing performance in various computer vision tasks. One crucial component of these studies is the design of dedicated network architectures, which significantly affects the performance and generalization ability of CNNs across tasks krizhevsky2012imagenet; he2016deep; muxconv. Along with the architectural milestones, from the original AlexNet krizhevsky2012imagenet to ResNet he2016deep, the performance of CNNs across extensive datasets and tasks keeps improving. However, it still takes researchers enormous effort to achieve these architectural advancements through manual trial-and-error tuning. Therefore, Neural Architecture Search (NAS) has emerged as an alternative way to design CNNs in an automated manner. Although NAS alleviates the laborious experiments by researchers, existing NAS algorithms still suffer from substantial computational overheads, leading to challenges in real-world deployment Zoph2018; Real2019.

The expensive evaluation of architecture performance accounts for the dominant computational cost of NAS algorithms. A brute-force training of a single network can cost days to weeks on a single GPU, depending on the complexity of the dataset and task. Therefore, several approaches have been proposed to approximate the true performance with less computation and, as a result, lower fidelity. These works can be roughly divided into three categories.

The first category includes methods that reduce training budgets by decreasing the network size (e.g., the number of layers and channels), which are widely adopted in early NAS works Zoph2018; Real2019; he2021efficient. Nevertheless, their effectiveness was not systematically studied until recently zela2018; Zhou_2020_CVPR, and it can be limited under inappropriate parameter settings. Moreover, these methods remain computationally expensive because every single network must still be trained thoroughly. Finally, these methods require the CNN architectures in the search space to be modular, i.e., the networks are constructed by repeatedly stacking modular blocks. Several state-of-the-art search spaces, for instance those in Liu2018a; lu2020nsganetv2, do not satisfy this constraint, and extending these methods to new search spaces is not trivial.

The second category is often known as the supernet-based approach, which avoids training every architecture from scratch tan2020relativenas; lu2020nsganetv2; lu2021neural. This technique typically decouples NAS into two stages so that weights can be shared during searching. In the first stage, a supernet that contains all possible architectures in the search space is constructed, such that each architecture becomes a subnet of the supernet. In the second stage, the search process begins, each architecture inherits its weights from the supernet, and the evaluation of an architecture reduces to a simple inference pass on the validation set. Although this technique speeds up the searching process, the construction of the supernet can be more time-consuming than a complete search cai2020once. Besides, the search space requires substantial modifications to accommodate the construction of the supernet Liu2019.

The third category consists of studies known as zero-cost proxies abdelfattah2021zerocost; mellor2020neural, which estimate the performance with only a few mini-batches of forward/backward propagation. More specifically, these methods analyze information such as gradients, activations, or parameter magnitudes to obtain estimations and drastically reduce the computational cost. Notably, most of these techniques validate their effectiveness on NAS-Bench datasets siems2020bench; dong2020nasbench201, which are public architecture datasets constructed by exhaustively evaluating search spaces. Nevertheless, some of them perform well only on certain NAS-Bench datasets abdelfattah2021zerocost, or they are not validated on real-world search spaces mellor2020neural.

On top of these performance estimation methods, a branch of works known as predictor-based NAS lu2020nsganetv2; lu2021neural has been proposed to further improve sampling efficiency. In these works, a regression model, i.e., a performance predictor, is trained to fit the mapping from architectures to their corresponding performance. Once the predictor is established, the performance estimates in the searching stage are obtained from the predictor instead of from the expensive estimation methods, which improves the sampling efficiency of NAS. The predictor can be built upon different performance estimation methods, e.g., training with reduced budgets 8744404; wen2020neural or evaluations of a supernet lu2020nsganetv2; lu2021neural. Several works also explore different encoding methods for the architectures Sun2021; ning2020generic; yan2020does and different machine learning models as the predictor ning2020generic; 8744404; wen2020neural.

In this work, we propose a Random-Weight Evaluation (RWE) approach. Compared to existing methods, it is less expensive, more flexible, and more thoroughly validated. In detail, by training only the last classification layer and keeping the others with randomly initialized weights, RWE saves orders of magnitude in computational resources compared with conventional methods. At the same time, RWE is conceptually compatible with any search space and does not require any modifications to it. Moreover, the effectiveness of RWE is validated by searches on two modern real-world search spaces and by ablation studies on the NAS-Bench-301 dataset. We briefly summarize our main contributions below:

  • We propose a novel performance estimation metric, namely RWE, for efficiently quantifying the quality of CNNs. RWE is highly efficient compared with conventional methods, reducing the wall-clock evaluation time from hours to seconds. Extensive experiments on both real-world search spaces and the NAS-Bench-301 benchmark search space further validate the effectiveness of RWE.

  • Paired with a multi-objective evolutionary algorithm, our RWE-based NAS algorithm obtains a set of efficient networks in one run, where both the performance and the efficiency of models are considered. For instance, the proposed algorithm achieves state-of-the-art performance on the CIFAR-10 dataset, yielding networks ranging from the largest, with 2.98% Top-1 error and 1.5M parameters, to the smallest, with 4.05% Top-1 error and 0.9M parameters. The transferability experiments on ImageNet further demonstrate the competitiveness of our method. With such competitive performance, the whole searching procedure costs less than two hours on a single GPU card, making the algorithm highly practical for real-world applications.

The rest of this paper is organized as follows. In Section 2, related work on multi-objective NAS algorithms and on the expressive power of randomly initialized convolution filters is introduced. We then present our proposed approach in Section 3, covering the Random-Weight Evaluation, the search strategy, and the search space and encoding. Comparative studies are shown in Section 4, and conclusions are drawn in Section 5.

2 Related Works

In this section, we briefly discuss two topics related to the technicalities of our approach, i.e., multi-objective NAS and randomly initialized convolution filters.

2.1 Multi-Objective NAS

Single-objective optimization algorithms dominated early research in NAS Zoph2018; Liu2019; Real2019, which mainly seeks architectures that maximize performance on certain datasets and tasks. Although these NAS algorithms have shown their practicality in solving benchmark tasks, they cannot meet the demands of deployment scenarios that vary from GPU servers to edge devices Howard2017. Thus, NAS algorithms are expected to balance multiple conflicting objectives, such as inference latency, memory footprint, and power consumption. Recent attempts often convert the multiple competing objectives into a single objective in a weighted-sum manner Tan2019; Cai2019, but they may miss the global optima of the problem. As a result, multiple runs of the algorithm could be required in real-world applications, due to the difficulty of choosing the best coefficients for the weighted sum. Also, the search strategies these works adopt are primarily based on gradient methods or reinforcement learning, which cannot approximate the Pareto front in a single run.

There are also several works that adopt multi-objective evolutionary algorithms as search strategies for NAS Lu2019; lu2020nsganetv2; lu2021neural. The population-based strategies introduce natural parallelism, which increases their practicality in large-scale applications, and the conflicting nature of multiple objectives helps enhance the diversity of the population. Most of these works aim to trade off between the performance and the complexity of networks Lu2019; lu2020nsganetv2, while some others attempt to exploit the performance across different datasets, similar to the concepts in multi-task learning lu2021neural. Following these successful practices, we adopt a classic multi-objective evolutionary algorithm, namely NSGA-II deb2002fast. We aim to obtain a set of efficient architectures in one run, where the proposed performance metric and a complexity metric, FLOPs, are the two conflicting objectives to be optimized.

2.2 Expressive Power of Randomly Initialized Convolution Filters

RWE is inspired by the fact that convolution filters are surprisingly powerful in extracting features from input images, even with randomly initialized weights Jarrett2009. It is indicated in Jarrett2009; wann2019 that, with a proper architecture, convolution filters with randomly initialized weights can be as competitive as fully trained ones on both visual and control tasks. It has also been shown that the structure itself introduces enough prior knowledge to capture the features needed for visual tasks Adebayo2018; ulyanov2018deep. Similarly, the local binary convolutional neural network achieves performance comparable to CNNs with fully trained convolution filters by learning a linear combination of randomly initialized filters lbcnn.

Some early works in the literature conceptually explore the potential of estimating the performance of networks from randomly initialized weights. In detail, Saxe et al. mathematically proved that convolution filters with random weights retain their key properties, namely frequency selectivity and translation invariance, and used these characteristics to rank shallow neural networks with different configurations Saxe2011. Rosenfeld and Tsotsos successfully predicted the performance ranking of several widely used CNN architectures by training only a fraction of the weights in the convolution filters Rosenfeld2019.

Although previous works show the potential of randomly initialized convolution filters, those methods are not scalable to real-world applications. In this work, we randomly initialize and freeze the weights of the convolution kernels in a CNN, training only the last classification layer. Using the resulting predictive performance as a performance metric, we demonstrate the scalability of our approach on complex datasets and modern CNN search spaces that contain deep yet powerful CNNs.

3 Proposed Approach

The multi-objective NAS problem for a target dataset can be formulated as the following bilevel optimization problem lu2020:

$$
\begin{aligned}
\min_{\alpha} \quad & \left\{\, f_1(\alpha; w^*(\alpha)),\; f_2(\alpha),\; \ldots,\; f_m(\alpha) \,\right\} \\
\text{s.t.} \quad & w^*(\alpha) \in \operatorname*{argmin}_{w} \; \mathcal{L}(w; \alpha), \\
& \alpha \in \Omega_\alpha, \quad w \in \Omega_w,
\end{aligned}
$$

where the upper-level variable $\alpha$ defines an architecture in the search space $\Omega_\alpha$, and the lower-level variable $w$ represents the corresponding weights. $\mathcal{L}(w; \alpha)$ is the loss function on the training set for the architecture $\alpha$ with weights $w$. The first objective $f_1$ represents the classification error on the validation set, which depends on both the architecture and its weights. The other objectives depend only on the architecture, such as the number of parameters, floating-point operations (FLOPs), and latency.

In our approach, we simplify this complex bilevel optimization by using the proposed performance metric RWE as a proxy of $f_1$. In addition, we adopt the complexity metric FLOPs as the second objective to optimize. As a result, the multi-objective formulation of this work becomes

$$
\begin{aligned}
\min_{\alpha} \quad & \left\{\, \mathrm{RWE}(\alpha),\; \mathrm{FLOPs}(\alpha) \,\right\} \\
\text{s.t.} \quad & \alpha \in \Omega_\alpha,
\end{aligned}
$$

where $\mathrm{RWE}(\alpha)$ and $\mathrm{FLOPs}(\alpha)$ denote the values of these metrics for architecture $\alpha$.
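To make the second objective concrete, the snippet below counts FLOPs for a candidate network in PyTorch. The paper does not name a specific FLOPs counter, so fvcore is used here as one possible choice, and the candidate is a small stand-in model rather than a decoded architecture; treat this as a minimal sketch under those assumptions.

```python
import torch
import torch.nn as nn
from fvcore.nn import FlopCountAnalysis  # one possible FLOPs counter; not specified by the paper

# Stand-in candidate backbone; a real candidate would be decoded from its architecture encoding.
candidate = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)

# Count multiply-adds for one CIFAR-10-sized input; this value serves as FLOPs(alpha).
flops = FlopCountAnalysis(candidate, torch.randn(1, 3, 32, 32)).total()
print(f"FLOPs(alpha) ~= {flops / 1e6:.1f}M")
```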

3.1 Random-Weight Evaluation

Algorithm 1: Random-Weight Evaluation

Input: An architecture, a training set, and a validation set.
Output: The performance metric RWE of the architecture.

1.  net <- decode the architecture into a CNN backbone.
2.  Randomly initialize net and a linear classifier clsfr.
3.  Freeze the weights of net throughout the whole algorithm.
4.  features <- infer the training set on net.
5.  Train the linear classifier clsfr with features as input.
6.  foreach (image, target) in the validation set do
7.      prediction <- the label output by clsfr for image inferred through net.
8.      Compare prediction with target and record the result.
9.  end foreach
10. RWE <- the resulting classification error rate.
11. return RWE.

As mentioned in Section 2.2, randomly initialized convolution filters are surprisingly powerful in extracting features from images, owing to the frequency selectivity and translation invariance preserved under random weights Saxe2011. Inspired by this characteristic, this work judges the quality of an architecture by the ability of that architecture, with random weights, to extract “good” features. We quantify the quality of the features by training a linear classifier that takes them as input and computing the classification error of that classifier.

We detail the proposed performance metric Random-Weight Evaluation (RWE) as follows; the overall procedure is shown in Algorithm 1. First, we decode the encoding of a candidate architecture into a CNN backbone net, which refers to all layers before the last classification layer. Second, we initialize net and a linear classifier clsfr with random weights; the latter acts as the last classification layer of a complete CNN, and its structure is identical for all candidate CNNs in the search space. Here, a modified version of the Kaiming initialization He (the default setting in PyTorch) is adopted to initialize net. The weights of the backbone remain frozen throughout the algorithm. Third, we infer the training set on net and use the output features to train clsfr. Finally, after assembling net and the trained clsfr into a complete CNN, this CNN is tested on the validation set, and the resulting error rate becomes the value of RWE.
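A minimal PyTorch sketch of this procedure is given below. It assumes the backbone outputs a flat feature vector and takes the data loaders as arguments; the optimizer settings here are illustrative defaults rather than the exact values used in the experiments (see Section 4.1 for those).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def random_weight_evaluation(backbone, feat_dim, num_classes, train_loader, valid_loader,
                             epochs=30, lr=0.25, device="cuda"):
    """RWE sketch: freeze a randomly initialized backbone, train only a linear
    classifier on its features, and return the validation error rate."""
    backbone = backbone.to(device).eval()
    for p in backbone.parameters():              # freeze the randomly initialized backbone
        p.requires_grad_(False)

    clsfr = nn.Linear(feat_dim, num_classes).to(device)
    opt = torch.optim.SGD(clsfr.parameters(), lr=lr, momentum=0.9)
    sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=epochs)

    for _ in range(epochs):                      # train the linear classifier only
        for images, targets in train_loader:
            images, targets = images.to(device), targets.to(device)
            with torch.no_grad():
                feats = backbone(images)         # features from random weights
            loss = F.cross_entropy(clsfr(feats), targets)
            opt.zero_grad()
            loss.backward()
            opt.step()
        sched.step()

    correct, total = 0, 0                        # error rate on the validation split
    with torch.no_grad():
        for images, targets in valid_loader:
            images, targets = images.to(device), targets.to(device)
            preds = clsfr(backbone(images)).argmax(dim=1)
            correct += (preds == targets).sum().item()
            total += targets.numel()
    return 1.0 - correct / total
```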

3.2 Search Strategy

We adopt the classic multi-objective evolutionary algorithm NSGA-II deb2002fast in our approach; the searching process is detailed below.

First, we randomly initialize the population, the individuals of which are evaluated with RWE and FLOPs as the two objectives. Second, we apply binary tournament selection to select the parents for reproduction. Third, two-point crossover and polynomial mutation are applied to generate the offspring, followed by their evaluation. Finally, we apply environmental selection based on nondominated sorting and the crowding distance deb2002fast, and the process is repeated until the maximum number of generations is reached.
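The paper does not state which NSGA-II implementation is used. The sketch below reproduces the loop described above with pymoo as one possible choice; the import paths follow pymoo 0.6 and may differ in other versions. The encoding is a real-valued stand-in and the two objectives are placeholders standing in for RWE and a FLOPs counter; a real run would decode each genotype into an architecture before evaluation.

```python
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.operators.crossover.pntx import TwoPointCrossover
from pymoo.operators.mutation.pm import PolynomialMutation
from pymoo.optimize import minimize


class NASProblem(ElementwiseProblem):
    """Bi-objective problem: minimize RWE(alpha) and FLOPs(alpha)."""

    def __init__(self, n_var=20):
        # Real-valued stand-in encoding; the actual algorithm searches a discrete
        # encoding of cells/connections with a matching repair step.
        super().__init__(n_var=n_var, n_obj=2, xl=0.0, xu=1.0)

    def _evaluate(self, x, out, *args, **kwargs):
        rwe = float(np.mean(x))            # placeholder for random_weight_evaluation(...)
        flops = float(np.sum(x)) * 1e6     # placeholder for a FLOPs counter
        out["F"] = [rwe, flops]


algorithm = NSGA2(
    pop_size=20,                            # population size used in the paper
    crossover=TwoPointCrossover(),          # two-point crossover
    mutation=PolynomialMutation(eta=20),    # polynomial mutation
    eliminate_duplicates=True,
)

res = minimize(NASProblem(), algorithm, ("n_gen", 30), seed=1, verbose=False)
print("non-dominated objective values:\n", res.F)
```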

3.3 Search Space and Encoding

Figure 1: The micro Zoph2018 and macro Xie2017 search spaces adopted in our approach. LEFT: Overall network architecture. MIDDLE and RIGHT: Design of layers in the micro and macro search spaces.

Our proposed RWE is conceptually flexible and can be applied to any search space. To validate the effectiveness of our algorithm on real-world applications, we experiment with two modern search spaces, the micro Zoph2018 and macro Xie2017 search spaces. As shown in Fig. 1 LEFT, both are modular search spaces, in which two kinds of layers, normal and reduction layers, are repeatedly stacked to form a complete CNN. The former keeps the resolution and the number of channels of its input, while the latter halves the resolution and doubles the number of channels. The main difference between the micro and macro spaces lies in the design of each layer and the way the layers are stacked into a complete CNN.

Micro Search Space: In the micro search space Zoph2018, we search for both the normal and reduction layers, named the normal and reduction cells. All normal cells share the same architecture within a CNN, although their weights differ, and the same holds for the reduction cells. Typically, we scale networks by using different repeat numbers in the searching and validation stages. The normal and reduction cells share the same template, except for the stride of the operators. In each kind of cell, we search for both the connections between nodes and the operation applied on each connection, as shown in Fig. 1 MIDDLE.

Macro Search Space: In the macro search space Xie2017, we search only for the normal layers and keep the predefined reduction layers identical. Each normal layer in the macro search space is searched independently, i.e., the repeat number within a phase is equal to one. In the normal layers, only the connection patterns are searched, and the operation in each node is a predefined sequential operator consisting of convolution operators, batch normalization layers, and activation functions. Fig. 1 RIGHT shows an example of candidate connection patterns.

4 Experimental Results

In this section, we first present the search results of our proposed NAS algorithm on the micro and macro search spaces for the modern classification dataset CIFAR-10 krizhevsky2009learning. Then, ablation studies on NAS-Bench-301 siems2020bench demonstrate the effectiveness of our evaluation method and the rationality of several design choices. Finally, the experiment on ImageNet imagenet_cvpr09, one of the most challenging classification benchmarks, shows the transferability of our architectures and illustrates the practicality of the approach for real-world applications.

4.1 Searching on CIFAR-10

In our approach, we search on the modern classification dataset CIFAR-10 krizhevsky2009learning, which contains ten categories and 60K images. Conventionally, the dataset is split into a training set with 50K images and a test set with 10K images. Following common settings in NAS algorithms Zoph2018; Real2019; Lu2019, we further split the training set (80%/20%) in the searching stage to create the training and validation sets.

Here we introduce the detailed implementation and parameter settings of our NAS algorithm. In the searching stage, the population size is set to 20 and the maximum number of generations to 30. For RWE, the architectures in the micro search space have 10 initial channels and 5 layers, and the architectures in the macro search space have 32 initial channels. Also, due to the randomness introduced in RWE, we adopt an ensemble learning technique Hansen1990 in the training of the linear classifier to stabilize the results. Specifically, five classifiers are trained, each of which is exposed to only 4/5 of the features. We apply only normalization in the preprocessing of the input images, without data augmentation techniques that would introduce randomness. An SGD optimizer with an initial learning rate of 0.25 and a momentum of 0.9 is adopted, and a cosine annealing schedule loshchilov2016sgdr gradually decays the learning rate to zero. The batch size is set to 512 and the training runs for 30 epochs. The average time for a single evaluation is approximately 10 seconds on a single Nvidia 2080Ti.
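The sketch below illustrates one way this stabilization step could look, assuming the backbone features for the training and validation splits have already been extracted and cached as tensors. The exact aggregation (averaging the five validation errors) is an interpretation of the description above, not a confirmed detail of the original implementation.

```python
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F


def rwe_with_ensemble(train_feats, train_labels, valid_feats, valid_labels,
                      num_classes=10, n_models=5, epochs=30, lr=0.25, batch_size=512):
    """Train five linear classifiers, each on 4/5 of the cached features,
    and average their validation error rates (interpretation of the paper's setup)."""
    n = train_feats.size(0)
    folds = np.array_split(np.random.permutation(n), n_models)
    errors = []
    for i in range(n_models):
        # Each classifier drops one of the five folds of training features.
        keep = np.concatenate([folds[j] for j in range(n_models) if j != i])
        clsfr = nn.Linear(train_feats.size(1), num_classes)
        opt = torch.optim.SGD(clsfr.parameters(), lr=lr, momentum=0.9)
        sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=epochs)
        for _ in range(epochs):
            perm = np.random.permutation(keep)
            for start in range(0, len(perm), batch_size):
                idx = torch.as_tensor(perm[start:start + batch_size])
                loss = F.cross_entropy(clsfr(train_feats[idx]), train_labels[idx])
                opt.zero_grad()
                loss.backward()
                opt.step()
            sched.step()
        with torch.no_grad():
            preds = clsfr(valid_feats).argmax(dim=1)
        errors.append(1.0 - (preds == valid_labels).float().mean().item())
    return float(np.mean(errors))
```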

| Architecture | Test Error (%) | Params (M) | FLOPs (M) | Search Cost (GPU days) | Search Method |
|---|---|---|---|---|---|
| Wide ResNet Zagoruyko2016 | 4.17 | 36.5 | - | - | manual |
| DenseNet-BC Huang2017 | 3.47 | 25.6 | - | - | manual |
| BlockQNN Zhong2020 | 3.54 | 39.8 | - | 96 | RL |
| SNAS Xie2019 | 3.10 | 2.3 | - | 1.5 | gradient |
| NASNet-A Zoph2018 | 2.91 | 3.2 | 532 | 2,000 | RL |
| DARTS Liu2019 | 2.76 | 3.3 | 547 | 4 | gradient |
| NSGA-Net + macro space Lu2019 | 3.85 | 3.3 | 1290 | 8 | evolution |
| Macro-L (ours) | - | - | - | - | evolution |
| AE-CNN + E2EPP 8744404 | 5.30 | 4.3 | - | 7 | evolution |
| Hier. Evolution Liu2018a | 3.75 | 15.7 | - | 300 | evolution |
| AmoebaNet-A Real2019 | 2.77 | 3.3 | 533 | 3,150 | evolution |
| NSGA-Net Lu2019 | 2.75 | 3.3 | 535 | 4 | evolution |
| Micro-S (ours) | 4.05 | 0.9 | - | - | evolution |
| Micro-M (ours) | - | - | - | - | evolution |
| Micro-L (ours) | 2.98 | 1.5 | 340 | - | evolution |
Table 1: The results of the proposed algorithm and other state-of-the-art methods on CIFAR-10. Marked entries denote results achieved with the same training setting as ours and reported in Lu2019, or works that adopt the cutout regularization technique devries2017improved.
Figure 2: The visualization of the Micro-L and Macro-L architectures.

For the validation stage, we scale the architectures to the deployment settings, where the numbers of training epochs, layers, and channels are increased. The architectures from the final Pareto front are selected and trained from scratch, where the number of layers and the number of initial channels are set to 20 and 34 for the micro search space, and the number of channels is set to 128 in all layers for the macro search space. We use the same SGD optimizer as in the searching stage, except that the initial learning rate is set to 0.025. The selected architectures are trained for 600 epochs with a batch size of 96. The regularization techniques cutout devries2017improved and scheduled path dropout Zoph2018 are also introduced, with the cutout length and the drop rate set to 16 and 0.2, respectively. These settings are the same as those of state-of-the-art algorithms for a fair comparison Lu2019.

The validation results and the comparison with other state-of-the-art architectures are shown in Table 1. The representative architectures from the final Pareto front are compared with both hand-crafted and search-based architectures. In the experiments on the micro search space, the architecture with the lowest error rate in our approach (Micro-L) achieves a 2.98% Top-1 error rate with 340M FLOPs. With fewer FLOPs, it is competitive with state-of-the-art architectures, while Micro-M and Micro-S offer different tradeoffs between performance and complexity. Similarly, the chosen architecture in the macro search space (Macro-L) shows competitive performance with fewer FLOPs compared to the state of the art. The detailed structures of Micro-L and Macro-L are visualized in Fig. 2.

4.2 Effectiveness of Random-Weight Evaluation

Figure 3: The Spearman correlation coefficient for different performance metrics during the search on NAS-Bench-301.
Figure 4: The Spearman correlation coefficient for different initialization methods during the search on NAS-Bench-301.

To demonstrate the effectiveness of RWE, we conduct experiments on the NAS-Bench-301 dataset siems2020bench. The dataset is constructed by a surrogate model trained on sampled architectures, such that it covers the whole search space and helps researchers analyze their NAS algorithms. While other NAS-Bench datasets construct toy search spaces for the convenience of studies dong2020nasbench201, NAS-Bench-301 covers a real-world search space, namely the micro search space adopted in our work. Thus, the ablation studies based on NAS-Bench-301 examine the behavior of our algorithm during the searching stage.

We evaluate the effectiveness of estimation strategies by calculating the Spearman correlation coefficient between the estimated performance and the performance queried from NAS-Bench-301. The target individuals are taken from the union of the population of each generation and their offspring. The Spearman correlation coefficient, which ranges from -1 to 1, is a nonparametric measure of rank correlation; the higher the coefficient, the more similar the rankings of the two variables. The rationale behind this setting is that, during the evolutionary algorithm, the only phases depending on the estimation strategy are the mating and survival selections, which operate on the union mentioned above. The higher the correlation coefficient, the more reliable the estimation strategy, and thus the better the chances that the algorithm chooses good candidates from a set of architectures.
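As a small illustration of this measure, the snippet below computes the Spearman coefficient with SciPy for a handful of toy values standing in for RWE estimates and the accuracies queried from NAS-Bench-301; the numbers are invented for illustration only.

```python
import numpy as np
from scipy.stats import spearmanr

# Toy estimates for the union of parents and offspring in one generation,
# and the corresponding accuracies queried from the benchmark (invented values).
rwe_errors = np.array([0.62, 0.55, 0.70, 0.48, 0.66, 0.51])        # lower is better
bench_accuracies = np.array([93.1, 93.8, 92.4, 94.2, 92.9, 94.0])  # higher is better

# Negate the error so that both rankings are "higher is better" before correlating.
rho, _ = spearmanr(-rwe_errors, bench_accuracies)
print(f"Spearman rank correlation: {rho:.3f}")
```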

In the following experiments, we use the same search strategy as introduced in Section 3.2 and run it on NAS-Bench-301 for 20 generations. The search space in NAS-Bench-301 is a subset of the micro search space, where duplicate connections to a single node are not allowed. We therefore add a fix-up operation to the search strategy, which randomly chooses another connection to avoid duplication whenever an invalid architecture is produced. The results report the mean and standard deviation of five independent trials with different random seeds.

We first compare our estimation strategy RWE with the zero-cost proxies abdelfattah2021zerocost; mellor2020neural and a training-based evaluation method Zoph2018; Zhou_2020_CVPR. For the zero-cost proxies, we choose the representative performance metrics synflow, grasp, and fisher from abdelfattah2021zerocost, and jacob_conv from mellor2020neural. For the training-based method, we train the network for 10 epochs with 16 initial channels and 8 layers. As shown in Fig. 3, the proposed RWE outperforms all zero-cost proxies after the initial stage of searching and ends up with a correlation similar to that of the training-based method. The experiment shows that the effectiveness of RWE is competitive with that of the training-based method while incurring far lower computational overheads. Together with the searches on the micro and macro spaces, this further shows that RWE performs well in real-world search spaces.

We then investigate the effects of different initialization methods in our approach. The method adopted in this paper is the default one in PyTorch, and we examine four other representative initialization methods, namely Kaiming normal (uniform) initialization He and Xavier normal (uniform) initialization glorot2010understanding. As shown in Fig. 4, the initialization method has only a minor impact on the effectiveness of RWE, as we observe no significantly different behavior. The experiment shows that our approach is robust to different initialization methods.
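For reference, the helper below shows how the four compared schemes map onto PyTorch's built-in initializers; it is a minimal sketch, and the function name and the choice of re-initializing only Conv2d layers are illustrative assumptions. PyTorch's own Conv2d default (a Kaiming-uniform variant) corresponds to the setting RWE uses elsewhere in this paper.

```python
import torch.nn as nn


def reinitialize(model, method="kaiming_normal"):
    """Re-initialize all convolution weights with one of the schemes compared in Fig. 4."""
    init_fns = {
        "kaiming_normal": nn.init.kaiming_normal_,
        "kaiming_uniform": nn.init.kaiming_uniform_,
        "xavier_normal": nn.init.xavier_normal_,
        "xavier_uniform": nn.init.xavier_uniform_,
    }
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            init_fns[method](m.weight)       # re-draw the convolution weights
            if m.bias is not None:
                nn.init.zeros_(m.bias)
    return model
```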

4.3 Transferring to ImageNet

| Architecture | Top-1 Test Error (%) | Top-5 Test Error (%) | Params (M) | FLOPs (M) |
|---|---|---|---|---|
| MobileNetV1 Howard2017 | 31.6 | - | 2.6 | 325 |
| InceptionV1 Szegedy2015 | 30.2 | 10.1 | 6.6 | 1448 |
| ShuffleNetV1 Zhang2018 | 28.5 | - | 3.4 | 292 |
| VGG simonyan2014very | 28.5 | 9.9 | 138 | - |
| MobileNetV2 Sandler2018 | 28.0 | 9.0 | 3.4 | 300 |
| ShuffleNetV2 1.5 ma2018shufflenet | 27.4 | - | - | 299 |
| NASNet-C Zoph2018 | 27.5 | 9.0 | 4.9 | 558 |
| SNAS Xie2019 | 27.3 | 9.2 | 4.3 | 533 |
| EffPNet 9349967 | 27.0 | 9.25 | 2.5 | - |
| DARTS Liu2019 | 26.7 | 8.7 | 4.7 | 574 |
| AmoebaNet-B Real2019 | 26.0 | 8.5 | 5.3 | 555 |
| PNAS liu2018progressive | 26.0 | 8.5 | 5.3 | 555 |
| Micro-L (ours) | 27.6 | - | - | - |
Table 2: The results of the proposed algorithm and other state-of-the-art methods on ImageNet. denotes the methods that first get searched on CIFAR-10 and then get transferred to ImageNet.

To validate the practicality of our output architectures, we examine the transferability of the architecture Micro-L from CIFAR-10 to ImageNet imagenet_cvpr09. The ImageNet dataset, which has proved its importance in real-world applications, contains more than one million images of various resolutions, unevenly distributed over 1K categories. The general idea of transferring is to scale the architecture with a greater number of channels but a smaller number of layers, as introduced in classic NAS works Zoph2018; Xie2019. More specifically, the architecture starts with three stem convolutional layers with stride 2, which downsample the resolution by a factor of eight. These are followed by 14 layers with 48 initial channels, where the reduction cells appear at the fifth and ninth layers. Common data augmentation techniques are adopted, including random resize, random crop, random horizontal flip, and color jitter. We train our model with the SGD optimizer for 250 epochs with a batch size of 1024 on 4 Nvidia Tesla V100 GPUs. The initial learning rate is set to 0.5 and decays linearly. In addition, a warmup strategy is applied in the first five epochs, increasing the learning rate from 0 to 0.5 linearly. The label smoothing technique with a smoothing rate of 0.1 is also adopted. Table 2 shows the experimental results and comparisons with state-of-the-art methods. Our approach shows superior performance compared to hand-crafted architectures and competitive performance with state-of-the-art NAS algorithms.
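The learning-rate schedule described above can be reconstructed with a standard LambdaLR, as sketched below. The decay target (approximately zero) and the use of a plain LambdaLR are assumptions based on the description rather than confirmed implementation details, and the model here is a stand-in for the transferred Micro-L network.

```python
import torch
import torch.nn as nn

EPOCHS, WARMUP, BASE_LR = 250, 5, 0.5

model = nn.Linear(10, 10)  # stand-in for the transferred Micro-L network
optimizer = torch.optim.SGD(model.parameters(), lr=BASE_LR, momentum=0.9)
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)  # label smoothing; requires PyTorch >= 1.10


def lr_lambda(epoch):
    # Linear warmup from 0 to BASE_LR over the first WARMUP epochs...
    if epoch < WARMUP:
        return epoch / WARMUP
    # ...then a linear decay toward zero over the remaining epochs (assumed endpoint).
    return max(0.0, (EPOCHS - epoch) / (EPOCHS - WARMUP))


scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

for epoch in range(EPOCHS):
    # ... one training epoch over ImageNet would go here ...
    scheduler.step()
```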

5 Conclusion

This paper proposed a flexible performance metric, Random-Weight Evaluation (RWE), to rapidly estimate the performance of CNNs. Inspired by the expressive power of randomly initialized convolution filters, RWE only trains the last classification layer and leaves the backbone with randomly initialized weights. As a result, RWE achieves a reliable estimation of an architecture in seconds. We further integrated RWE with a multi-objective evolutionary algorithm, adopting a complexity metric as the second objective. The experimental results showed that our algorithm obtains a set of efficient networks with state-of-the-art performance on both the micro and macro search spaces. The resulting architecture with 350M FLOPs achieved 2.98% Top-1 error on CIFAR-10 and 27.6% Top-1 error on ImageNet after transferring. The ablation studies on different performance metrics and initialization methods further indicated the effectiveness of the proposed algorithm.

Acknowledgements.
This work was supported by the National Natural Science Foundation of China (No. 61903178, 61906081, and U20A20306), the Shenzhen Science and Technology Program (No. RCBS20200714114817264), the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (No. 2017ZT07X386), the Shenzhen Peacock Plan (No. KQTD2016112514355531), and the Program for University Key Laboratory of Guangdong Province (No. 2017KSYS008).

Conflict of interest

The authors declare that they have no conflict of interest.

References