Discretization-Aware Architecture Search

07/07/2020 · by Yunjie Tian, et al.

The search cost of neural architecture search (NAS) has been largely reduced by weight-sharing methods. These methods optimize a super-network with all possible edges and operations, and determine the optimal sub-network by discretization, i.e., pruning off weak candidates. The discretization process, performed on either operations or edges, incurs significant inaccuracy, so the quality of the final architecture is not guaranteed. This paper presents discretization-aware architecture search (DA2S), whose core idea is to add a loss term that pushes the super-network towards the configuration of the desired topology, so that the accuracy loss brought by discretization is largely alleviated. Experiments on standard image classification benchmarks demonstrate the superiority of our approach, in particular under imbalanced target network configurations that were not studied before.


1 Introduction

Neural architecture search (NAS) is a research topic aiming to explore the design of neural networks in a large space that is not well covered by human expertise. To alleviate the computational burden of the reinforcement learning based [38, 39] and evolutionary [26, 32, 25] algorithms that evaluate sampled architectures individually, researchers proposed one-shot search methods [2], which first optimize a super-network with all possible architectures included, and then sample sub-networks from it for evaluation [24]. By sharing computation, this kind of method accelerates NAS by orders of magnitude.

A representative example of one-shot search is differentiable architecture search (DARTS [20]), which formulates the super-network in a differentiable form with respect to a set of architectural parameters, e.g., operations and connections, so that the entire NAS process can be optimized in an end-to-end manner. DARTS does not require an explicit process for evaluating each sub-network; instead, it performs a standalone discretization process to determine the optimal sub-architecture, on which re-training is performed. Such an efficient search strategy does not require the search cost to increase dramatically with the size of the search space, so the space can be much larger than in other NAS approaches.

Figure 1: Top: the normal cell of DARTS, from which we investigate node3, which sums up its input edges in the search stage. In two discretization configurations (preserving different numbers of input edges), this node preserves the corresponding number of input(s), but pruning off inputs with moderate weights can lead to a dramatic super-network accuracy drop and unsatisfying re-training accuracy. Middle & Bottom: DA2S is aware of the number of inputs to be preserved for each node and pushes weights to get close to either 1 or 0, so that the discretization loss is largely alleviated and the re-training accuracy improved. This figure is best viewed in color.

Despite its superior efficiency, DARTS is believed to suffer from a gap between the optimized super-network and the sampled sub-networks. In particular, as illustrated in [5], the difference in the number of cells can cause a 'depth gap', and the search performance is largely stabilized by alleviating that gap. In this paper, we point out another gap, potentially more important, caused by the process of discretizing the architectural weights of the super-network. To be specific, DARTS combines candidate operations and edges with a weighted sum (the weights are learnable), preserves a fixed number of candidates with strong weights, and discards the others. However, there is no guarantee that the discarded weights are relatively small – if they are not, the discretization process can introduce significant inaccuracy into the neural responses of each cell. Such inaccuracy accumulates and finally means that a well-optimized super-network does not necessarily generate high-quality sub-networks, in particular (i) when the discarded candidates still have moderate weights; and/or (ii) when the number of preserved edges is relatively small compared to that in the super-network. Figure 1 shows a cell optimized by DARTS. One can see that discretization causes the super-network accuracy to drop dramatically, which also harms the performance of the searched architecture in the re-training stage.

To alleviate the above issue, we propose discretization-aware architecture search (DA2S). The main idea is to introduce an additional term to the loss function, so that the architectural parameters of the super-network are gradually pushed towards the desired configuration during the search process. To be specific, we formulate the new loss term as an entropy function, based on the property that minimizing the entropy of a system maximizes the sparsity and discretization of the elements (weights) in the system. The objective of the entropy term is to enforce each weight to get close to either 1 or 0, with the number of 1's determined by the desired configuration, so that the discretization process, which removes candidates with weights close to 0, does not incur significant accuracy loss. Being differentiable with respect to the architectural parameters, the entropy function can be freely plugged into the system for SGD optimization. We perform experiments on two standard image classification benchmarks, namely CIFAR10 and ImageNet, based on PC-DARTS [34], an efficient variant of DARTS. Note that two sets of architectural parameters exist in PC-DARTS, taking control of the operations in an edge and the edges that sum into a node, respectively, and they are potentially equipped with different loss terms. We evaluate different configurations (i.e., varying from each other in the number of preserved edges for each node), most of which have not been studied before. When each search process reaches the end, the super-network converges into a discretization-friendly form, and the discretization process causes a much smaller accuracy drop than that observed without the entropy loss. Consequently, the searched architecture, under any configuration, enjoys higher and more stable performance, and the advantage is more significant as the configuration becomes more imbalanced, where the original search method suffers a larger 'discretization gap'.

2 Related Work

The rapid development of deep learning [16], in particular convolutional neural networks, has largely changed the way computational models are designed in computer vision. Recent years have witnessed a trend of stacking more and more convolutional layers into a deep network [1, 27, 10, 14], so that more trainable parameters are included and higher recognition accuracy is achieved.

Going one step further, researchers started to consider the possibility of designing deep networks automatically, thereby creating a new research area termed neural architecture search (NAS) [38]. NAS defines a sub-field of automated machine learning (AutoML) [13] and has attracted increasing attention in both academia and industry. The idea is to construct a sufficiently large space that enables the architecture to adjust according to the training data, simulating the process of evolutionary computation. With carefully monitored search strategies, NAS has claimed better performance than hand-designed networks in a wide range of applications including image classification [38], object detection [8], and semantic segmentation [17].

The early efforts of NAS mainly involved heuristic search in a very large space, and the sampled architectures were often evaluated individually. Representative examples include using reinforcement learning (RL) to formulate network or block designs [38, 39, 18], applying evolutionary algorithms (EA) to make the network evolve throughout iterations [26, 32], or simply performing guided random search to find competitive solutions [25]. These methods often require a vast amount of computation, e.g., thousands of GPU-days. To accelerate the search process, one-shot architecture search was proposed to share computation among architectures with similar building blocks [2].

One-shot architecture search was later developed into weight-reusing [3] and weight-sharing [24] methods, which reduce the search cost by orders of magnitude. Beyond this point, researchers proposed to improve search stability using better sampling methods [28, 7], explored the importance of the search space [12], and tried to integrate hardware consumption, such as latency, as an additional evaluation metric [31]. These efforts eventually lead to powerful architectures that achieve state-of-the-art performance on ImageNet [3] with moderate computational overhead.

A special family of one-shot architecture search formulates the search space as a super-network which can adjust itself in a continuous space [21]. Based on this, the network and architectural parameters can be jointly optimized, which leads to a differentiable approach to architecture search. DARTS [20], a representative differentiable framework, designed an over-parameterized super-network which contains exponentially many sub-networks with shared weights. It performed bi-level optimization to update network weights and architectural weights alternately and, at the end of the search stage, used a greedy algorithm to prune off the operations and edges with lower weights. Partially-Connected DARTS [34] pursued a more efficient search by sampling a small part of the super-network to reduce the redundancy in exploring the network space.

Recent DARTS methods [5, 34] have achieved success in both architecture quality and search efficiency. Nevertheless, few researchers have noticed that the discretization process incurs a significant accuracy loss, which makes it difficult to obtain a high-quality sub-network from the well-optimized super-network [35]. This paper investigates this problem, inherent to DARTS-based methods, in a systematic way, with the goal of searching discretization-aware architectures from the perspective of model regularization.

3 Discretization-Aware Architecture Search

3.1 Preliminaries: DARTS

DARTS [20] designs a cell-based search space to facilitate efficient differentiable architecture search. Each cell is represented as a directed acyclic graph with $N$ nodes, where each node defines a network layer. There is a pre-defined space of operations denoted by $\mathcal{O}$, where each element, $o(\cdot)$, denotes a fixed operation. Commonly used operations include identity connection and convolution performed at a network layer.

Within a cell, the search goal is to choose one operation from $\mathcal{O}$ for each pair of nodes. Let $(i,j)$ denote a pair of nodes, where $0 \leq i < j \leq N-1$. The primary idea of DARTS is to formulate the information and gradient propagated from $i$ to $j$ as a weighted sum over operations, as $f_{i,j}(\mathbf{x}_i) = \sum_{o \in \mathcal{O}} \sigma_o^{(i,j)} \cdot o(\mathbf{x}_i)$, where $\sigma_o^{(i,j)} = \mathrm{softmax}_o(\alpha^{(i,j)})$, $\mathbf{x}_i$ denotes the output of the $i$-th node, and $\alpha^{(i,j)}$ is a set of architectural parameters to weight operations within each edge, with $\alpha_o^{(i,j)}$ determining the weight of $o$ in edge $(i,j)$. Following PC-DARTS [34], we introduce an extra set of architectural parameters, $\beta^{(i,j)}$, in our DA2S to determine the weight of each edge. Thus, the output of a node is the weighted sum of all input flows, i.e., $\mathbf{x}_j = \sum_{i<j} \tau^{(i,j)} \cdot f_{i,j}(\mathbf{x}_i)$, where $\tau^{(i,j)} = \mathrm{softmax}_i(\beta^{(i,j)})$. The output of the entire cell is formed by concatenating the outputs of all internal nodes. Note that the first two nodes, $\mathbf{x}_0$ and $\mathbf{x}_1$, are input nodes to a cell, which are fixed during the search procedure.
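As a concrete illustration of the weighted-sum relaxation, the following numpy sketch mimics a mixed operation with three toy candidate operations. The names `OPS` and `mixed_op`, and the stand-in operations themselves, are ours (hypothetical), not the paper's code; real DARTS candidates are parametric convolutions and poolings.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Toy stand-ins for candidate operations on an edge (assumed for illustration).
OPS = [
    lambda x: x,                 # identity / skip-connect
    lambda x: 0.5 * x,           # stand-in for a parametric op
    lambda x: np.zeros_like(x),  # stand-in for a weak op
]

def mixed_op(x, alpha):
    """DARTS-style relaxation: the edge output is a softmax-weighted
    sum over all candidate operations applied to the same input."""
    w = softmax(alpha)
    return sum(wi * op(x) for wi, op in zip(w, OPS))

x = np.ones(4)
alpha = np.array([2.0, 0.0, -2.0])  # identity dominates but is not exclusive
y = mixed_op(x, alpha)
```

Because the weights are produced by a softmax, the edge output is differentiable with respect to `alpha`, which is what makes end-to-end search possible.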

This design makes the entire framework differentiable to both the layer weights $\omega$ and the architectural hyper-parameters $\{\alpha, \beta\}$, so that it is possible to perform architecture search in an end-to-end fashion. After the search process is completed, on each edge $(i,j)$, the operation with the largest $\alpha_o^{(i,j)}$ value is preserved, and each node is connected to the two precedents with the largest $\beta^{(i,j)}$ preserved. Denote the architectural parameters as $\mathcal{A} = \{\alpha, \beta\}$, and the overall super-network as $f(\mathbf{x}; \omega, \mathcal{A})$, which is parameterized by both $\omega$ and $\mathcal{A}$. The learning procedure of DARTS optimizes the image classification loss to determine $\omega$ and $\mathcal{A}$, as

$\min_{\omega, \mathcal{A}} \; \mathcal{L}_{\mathrm{cls}}(\omega, \mathcal{A}) = \mathbb{E}_{(\mathbf{x}, y^\star) \in \mathcal{B}} \left[ \mathrm{CE}\!\left(f(\mathbf{x}; \omega, \mathcal{A}), y^\star\right) \right]$ (1)

where $\mathcal{B}$ denotes a batch of training samples with corresponding class labels $y^\star$.

3.2 The Devil is in the Discretization Loss

It is well acknowledged that DARTS-based approaches suffer from limited stability, i.e., when the same search procedure runs several times individually, the searched architectures can report varying performance in the re-training stage. For this reason, the original DARTS [20] evaluated the architectures found in four individual search phases on the validation set and picked the best one, which multiplies the search cost by four. More importantly, as the search space gets enlarged, the number of trials required to find a high-quality architecture may also increase, and eventually DARTS-based approaches may lose their advantage in efficiency.

An important insight that our work delivers is that the instability is partly caused by the discretization loss. Here, by discretization we mean the process that picks the best operation and/or edge and discards the others according to the architectural weights of the super-network, i.e., the continuous parameters, $\alpha$ and $\beta$, are discretized so that a pre-defined number of elements are pushed towards 1 and the others towards 0. This obviously introduces inaccuracy to the well-trained super-network. To show this, we follow DARTS to train a super-network on CIFAR10, which reports a high accuracy on the validation set. Then, we investigate the impact of discretization by replacing the corresponding part with the discretized weights, e.g., on each edge, keeping only the dominating operation (assigning it a weight of 1) with its parameters (e.g., convolutional weights) unchanged. Results are shown in Figure 1. The accuracy drop is dramatic: under the standard setting of DARTS (each node has 2 edges preserved), the validation accuracy drops substantially, and under a more imbalanced discretization (preserving different numbers of edges at different nodes), the validation accuracy drops even further, close to a random guess. This is unexpected and violates the design nature of one-shot NAS, as it suggests that dramatically bad sub-networks can be sampled from a well-trained super-network. Consequently, there is no guarantee that architectures found in this way eventually report good performance, even after a complete re-training process has been performed.

We argue that such a gap arises because the training process is not aware that a discretization process will be performed afterwards. For example, when operations are competing on an edge, they 'assume' that the input to the edge is a weighted sum of the outputs of all preceding nodes. When discretization is performed, this input is replaced by the output of the dominating node(s), but the operation weights on the edge may not match the new input. Such inaccuracy accumulates throughout the entire network and eventually leads to a catastrophic accuracy drop. Therefore, the key to alleviating the gap is to make the search process aware of discretization, as well as of the topology of the final architecture. We elaborate our solution in the next part.
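The mismatch described above can be reproduced with a toy computation (made-up weights for illustration, not the paper's numbers): when a discarded candidate still carries moderate weight, replacing the softmax by a one-hot vector visibly changes the node output.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

x = np.ones(4)
outs = np.stack([x, 0.5 * x, -x])   # outputs of three candidate operations

# Moderate architectural weights: the runner-up keeps ~30% of the mass.
alpha = np.array([0.8, 0.0, -2.0])
w = softmax(alpha)
cont = (w[:, None] * outs).sum(axis=0)        # continuous (super-network) output

one_hot = np.zeros_like(w)
one_hot[np.argmax(w)] = 1.0
disc = (one_hot[:, None] * outs).sum(axis=0)  # output after discretization

gap = float(np.abs(cont - disc).mean())       # per-element mismatch
```

Such per-node mismatches compound through stacked cells, which is consistent with the accuracy collapse observed above.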

3.3 Entropy-based Discretization-Aware Search

Figure 2 shows the overall framework of discretization-aware search. The main idea is to use the topology constraint to guide the optimization process, so that the super-network eventually gets close to a sub-network that is allowed to appear as the final architecture. This is achieved by adding a loss function that measures the minimal distance between the current super-network and any acceptable sub-network. Specifically, we introduce an entropy-based loss function for each set of architectural parameters to fulfil this goal.

Below we elaborate the details of applying this methodology to the two sets of parameters, $\alpha$ (operations) and $\beta$ (edges), followed by discussions on the priority of discretization and the relationship between prior works and our DA2S.

Figure 2: Illustration of DA2S, which forces the softmax of the architectural parameters $\alpha$ and $\beta$ to move towards extreme points. Here, $\alpha$ denotes the weights over candidate operations and $\beta$ the weights between pairs of nodes. Each color indicates a candidate, and the area of each region corresponds to the weight of the corresponding candidate. This figure is best viewed in color.

Discretization of $\alpha$ and $\beta$

We start with discretizing $\alpha$. To guarantee that only one operation dominates on each edge when the search process ends, we compute the following loss for each edge $(i,j)$:

$\mathcal{L}_{\mathrm{O}}^{(i,j)} = -\sum_{o \in \mathcal{O}} \sigma_o^{(i,j)} \log \sigma_o^{(i,j)}, \quad \sigma_o^{(i,j)} = \frac{\exp(\alpha_o^{(i,j)})}{\sum_{o' \in \mathcal{O}} \exp(\alpha_{o'}^{(i,j)})}$ (2)

Summarizing the loss term on all edges obtains the operation loss:

$\mathcal{L}_{\mathrm{O}} = \sum_{(i,j)} \mathcal{L}_{\mathrm{O}}^{(i,j)}$ (3)

Note that Eq. (2) is an entropy-based loss function on $\sigma_o^{(i,j)}$, the probability of choosing $o$ as the operation of edge $(i,j)$. Minimizing $\mathcal{L}_{\mathrm{O}}$ pushes the weights of all operations to a one-hot distribution, i.e., the probability of one operation is close to 1 while those of the others are close to 0. Note that $\mathcal{L}_{\mathrm{O}}$ is jointly optimized with $\mathcal{L}_{\mathrm{cls}}$, implying that when the search process is complete, the network parameters, $\omega$, have been adjusted according to the near-one-hot $\alpha$; consequently, the inaccuracy introduced by discretization is much smaller.

Things become a bit different when we try to discretize $\beta$, because the configuration often requires preserving more than one candidate, e.g., according to the standard DARTS formulation, each node receives input from two previous nodes. To handle this, we add an extra term to the previous entropy loss and constrain the maximum value of any rescaled edge weight to 1; the overall loss for node $j$ is:

$\mathcal{L}_{\mathrm{E}}^{(j)} = -\sum_{i<j} \tilde{\tau}^{(i,j)} \log \tilde{\tau}^{(i,j)} + \Big( \sum_{i<j} \tilde{\tau}^{(i,j)} - K_j \Big)^2$ (4)

where $\tilde{\tau}^{(i,j)} = \min\big(K_j \cdot \mathrm{softmax}_i(\beta^{(i,j)}),\, 1\big)$ and $K_j$ denotes the number of edges to be preserved at node $j$. Note that the sum of the $\tilde{\tau}^{(i,j)}$'s can be changed according to the search configuration. In the experimental part, we will show how this formulation generalizes to other types of desired topology, e.g., preserving different numbers of edges at different nodes.
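Since the edge loss is described above only at a high level, the following is a hypothetical sketch capturing its two stated ingredients: an entropy-like term pushing each edge weight towards 0 or 1 (each weight capped at 1, here via a per-edge sigmoid, which is an assumption of this sketch), and a budget term tying the total preserved mass to the desired number of edges `k`. The exact form in the paper may differ.

```python
import numpy as np

def edge_loss(beta, k):
    """Illustrative edge loss (assumed form): binary entropy drives each
    weight to 0 or 1, and a budget term ties the total to k kept edges."""
    p = 1.0 / (1.0 + np.exp(-beta))   # per-edge weight, capped at 1 by sigmoid
    ent = -(p * np.log(p + 1e-12) + (1 - p) * np.log(1 - p + 1e-12)).sum()
    budget = (p.sum() - k) ** 2       # preserved mass should total k
    return float(ent + budget)

# Two edges clearly kept, two clearly dropped: nearly zero loss.
ideal = edge_loss(np.array([10.0, 10.0, -10.0, -10.0]), k=2)
# All weights undecided at 0.5: high entropy, high loss.
mixed = edge_loss(np.zeros(4), k=2)
```

The budget term is what lets the same loss family express imbalanced configurations: each node simply gets its own `k`.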

Similarly, summarizing this term on all nodes obtains the edge loss:

$\mathcal{L}_{\mathrm{E}} = \sum_{j} \mathcal{L}_{\mathrm{E}}^{(j)}$ (5)

and the discretization-aware objective function for architecture search is:

$\mathcal{L} = \mathcal{L}_{\mathrm{cls}} + \mathcal{L}_{\mathrm{O}} + \mathcal{L}_{\mathrm{E}}$ (6)

Discretization Priority

Edge discretization and operation discretization depend on performance estimates of each other. This chicken-and-egg dependency has long perplexed the community; it can be eased by independently enforcing additional regularization on $\alpha$ and $\beta$, while exploring the discretization priority of operation versus edge discretization further narrows the discretization gap. By introducing regularization control functions, the discretization-aware objective function for architecture search can be improved as:

$\mathcal{L} = \lambda_0(t)\,\mathcal{L}_{\mathrm{cls}} + \lambda_1(t)\,\mathcal{L}_{\mathrm{O}} + \lambda_2(t)\,\mathcal{L}_{\mathrm{E}}$ (7)

where $\lambda_0(t)$, $\lambda_1(t)$ and $\lambda_2(t)$ are regularization control factors related to classification accuracy, operation discretization and edge discretization, respectively.

Considering the dynamic change of node connections, operation weights, and network parameters during the search process, the regularization factors are defined as functions of the training epoch, and are chosen from five representative increasing functions, as shown in Figure LABEL:fig:dimension, to reveal the regular pattern of optimization priority. At early training epochs, the network is not yet well trained, so the regularization factors are small and training focuses on the network parameters. As optimization continues, the network becomes better trained and more attention is paid to selecting operations and edges.

3.4 Relationship to Prior Work

There exist prior works that push the architectural parameters towards either 1 or 0 so as to align with the requirement of discretization.

For example, FairDARTS [6] introduced the zero-one loss to quantize the architectural parameters by using an individual sigmoid $\sigma(\cdot)$ rather than a softmax. In addition, by considering NAS as an annealing process in which the system converges to a less chaotic status, XNAS [23] proposed to reduce the temperature term of the cross-entropy loss so that weaker candidates get eliminated. However, FairDARTS cannot control the exact number of preserved candidates – sometimes multiple weights are pushed towards 1 but only one of them is allowed to be kept; on the other hand, XNAS cannot support more than one candidate to be preserved, which limits its flexibility in multi-choice scenarios. In comparison, our approach can adjust the loss function according to the desired topology – we show a variety of examples in Table 3. If needed, it freely generalizes to choosing multiple operations for each edge.
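For reference, the FairDARTS zero-one loss mentioned above can be written compactly. This follows the published FairDARTS formulation with sigmoid-activated weights (the squared distance from 0.5 is negated, so minimizing the loss pushes each weight independently away from 0.5, towards 0 or 1); variable names are ours.

```python
import numpy as np

def zero_one_loss(alpha):
    """FairDARTS-style zero-one loss: L01 = -mean((sigmoid(alpha) - 0.5)^2).
    Each weight is driven towards 0 or 1 independently, with no control
    over how many weights end up near 1."""
    s = 1.0 / (1.0 + np.exp(-alpha))
    return float(-np.mean((s - 0.5) ** 2))
```

The lack of a count constraint is exactly the limitation discussed above: three weights saturating at 1 minimize this loss just as well as one, whereas the entropy formulation of DA2S ties the loss to the desired topology.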

4 Experiments

In this section, we first describe the experimental settings. We then validate the effect of our discretization-aware search approach. We also report the performance of our approach under balanced and imbalanced configurations, and compare it with the state of the art.

4.1 Experimental Settings

Datasets. The commonly used CIFAR10 and ImageNet datasets are used to evaluate our architecture search approach. CIFAR10 consists of 60K images at a low spatial resolution of 32×32. The images are equally distributed over 10 classes, with 50K training and 10K testing images. ImageNet contains 1,000 object categories and consists of 1.3M high-resolution training images and 50K validation images, almost equally distributed over all classes. Following common practice, we apply the mobile setting, where the input image size is fixed to 224×224 and the number of multi-add operations does not exceed 600M in the testing stage [34].

Implementation Details. Following DARTS as well as conventional architecture search approaches, we use an individual stage for architecture search, and after the optimal architecture is obtained, we use an additional process to train the classification model from scratch. During the search stage, the goal is to find the optimal $\alpha$ and $\beta$ under the entropy-based discretization regularization in an end-to-end manner. We search architectures on CIFAR10 and then transfer them to ImageNet.

During the search procedure, we split the training data into two parts, one for each stage of the search process. As for the search space, we follow DARTS but exclude the zero operation, as it would require choosing a low-weight operation whenever zero has an advantage in forming a standard cell. There are in total 7 options, including 3×3 and 5×5 separable convolution, 3×3 and 5×5 dilated separable convolution, 3×3 max-pooling, 3×3 average-pooling, and skip-connect.

When searching, the over-parameterized super-network is constructed by stacking normal and reduction cells with a small initial channel number, and each cell consists of N nodes. The 50K training set of CIFAR10 is split into two subsets of equal size, with one subset used for training the network weights and the other for the architecture hyper-parameters.

We train the super-network for a fixed number of epochs; the super-network weights are optimized by the momentum SGD algorithm with weight decay, and the learning rate is reduced progressively from its initial value to zero following a cosine schedule without restart. We use an Adam optimizer [15] for both $\alpha$ and $\beta$, with a fixed learning rate, momentum, and weight decay [34]. The memory cost of our implementation is small enough that it can be trained on most modern GPUs.
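The cosine schedule without restart used for the super-network weights is a standard annealing rule; a minimal sketch (epoch count and initial rate are hypothetical, since the exact values are elided above):

```python
import math

def cosine_lr(epoch, total_epochs, lr_init):
    """Cosine annealing without restart: decays from lr_init at epoch 0
    down to 0 at the final epoch."""
    return 0.5 * lr_init * (1 + math.cos(math.pi * epoch / total_epochs))
```

At the midpoint of training the rate is exactly half of its initial value, and it reaches zero only at the final epoch, so the super-network keeps adapting (including to the growing entropy regularization) until the end of the search.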

4.2 Results on CIFAR10

In Table 1, we compare the proposed approach with state-of-the-art approaches. It can be seen that our approach outperforms the baseline DARTS method by a large margin (2.42% vs. 2.76%), and outperforms recent gradient-based methods including P-DARTS [5], PC-DARTS [34], and BayesNAS [37]. Note that these performance gains are achieved with a moderate parameter size (3.4M) and computational cost (0.3 GPU-days), validating the effectiveness of our entropy-based regularization method and the importance of discretization-aware search itself. In this part, we first present results for searching standard cells (the balanced edge-selection configuration), and then, to further illustrate the effectiveness of our approach, we search non-standard cells (imbalanced configurations).

Architecture | Test Err. (%) | Params (M) | Search Cost (GPU-days) | Search Method
DenseNet-BC [14] | 3.46 | 25.6 | - | manual
NASNet-A + cutout [39] | 2.65 | 3.3 | 1800 | RL
AmoebaNet-B + cutout [25] | 2.55±0.05 | 2.8 | 3150 | evolution
Hierarchical Evolution [19] | 3.75±0.12 | 15.7 | 300 | evolution
PNAS [18] | 3.41±0.09 | 3.2 | 225 | SMBO
ENAS + cutout [24] | 2.89 | 4.6 | 0.5 | RL
NAONet-WS [21] | 3.53 | 3.1 | 0.4 | NAO
DARTS (1st order) + cutout [20] | 3.00±0.14 | 3.3 | 0.4 | gradient-based
DARTS (2nd order) + cutout [20] | 2.76±0.09 | 3.3 | 1 | gradient-based
SNAS (moderate) + cutout [33] | 2.85±0.02 | 2.8 | 1.5 | gradient-based
ProxylessNAS + cutout [4] | 2.08 | - | 4.0 | gradient-based
P-DARTS + cutout [5] | 2.50 | 3.4 | 0.3 | gradient-based
PC-DARTS + cutout [34] | 2.57±0.07 | 3.6 | 0.1 | gradient-based
BayesNAS + cutout [37] | 2.81±0.04 | 3.4 | 0.2 | gradient-based
DA2S (ours) + cutout | 2.42 / 2.51±0.09 | 3.4 | 0.3 | gradient-based

Table 1: Comparison of classification error (%) with the state of the art on CIFAR10. For DA2S, the two numbers are the best and average errors, respectively, and the search cost (GPU-days) is reported on a single NVIDIA GTX-1080Ti GPU; on a Tesla-V100 GPU, the time is expected to be smaller.

Operation and Edge Discretization. There are 7 operations in total for all cells (the 'none' operation is not used). Each cell has 14 edges, and the network consists of two kinds of cells, normal cells and reduction cells, so the network architecture depends on the search over 28 edges. That is to say, $\mathcal{L}_{\mathrm{O}}$ is the sum of 28 operation entropy losses, and $\mathcal{L}_{\mathrm{E}}$ is the sum of the edge entropy losses (Eq. (4)) over all nodes. In Eq. (7), we experimentally define the regularization control factors, and then evaluate the results with $\lambda_1$ and $\lambda_2$ under the different settings of the control functions shown in Figure LABEL:fig:dimension.

In Figure 3, we present the evolution of the softmax of operation weights on CIFAR10 for the edges in a normal cell. It can be seen that after a number of training epochs, the softmax of the operation weights begins to significantly differentiate. At the final epoch, a single largest weight (towards 1) is obtained while the rest stay at small values (towards 0), which clearly demonstrates the effect of operation discretization. In Figure 4, we present the softmax evolution of the edge weights, which validates the effect of edge discretization. Note that two edges are selected at the same time for each node, which shows the effectiveness of the connection constraint in Eq. (4).

Figure 3: Evolution of softmax of operation weights during the searching procedure in a normal cell on CIFAR10. The horizontal axis denotes training epoch and vertical axis softmax weight value. (Best viewed in color with zooming in).
Figure 4: Evolution of softmax of edge weights of node3/4/5 during the searching procedure in a normal cell searched on CIFAR10. The horizontal axis denotes training epoch and vertical axis softmax weight value.

Discretization Priority. The entropy loss function inevitably interferes with the search procedure of DARTS, particularly at the early epochs. Therefore, we propose to progressively increase the regularization factors using the monotonically increasing functions shown in Figure LABEL:fig:dimension. In Table 2, we fix $\lambda_1$ and $\lambda_2$ as 'const'. It can be seen that fast-increasing functions, such as 'linear' and 'log', outperform slow ones for the regularization factor $\lambda_0$, with 'linear' achieving the best performance. A possible explanation is that a moderately quick (linear) enhancement of the regularization relative to the classification loss causes the smallest interference with the search procedure.

In Table 4.2, we test the priority of operation and edge discretization using different regularization control functions, with $\lambda_0$ set to 'linear' as the default. We first fix $\lambda_1$ as 'const' and evaluate $\lambda_2$ using different control functions, keeping $\lambda_2$ fixed at 0 during the early epochs; this means that the priority of operation discretization is higher than that of edge discretization. Under this setting, the best performance (2.49%) is achieved by the 'step' function. On the other hand, we fix $\lambda_2$ and change $\lambda_1$ under the same conditions. The best performance (2.42%) is achieved by the 'log' function.

The higher performance obtained by fixing $\lambda_2$ shows that when edge discretization dominates the search procedure, quick convergence of the cell topology leads the operation discretization-aware search to converge better, provided a fast-increasing regularization control function ('log') is used.

baseline | const | log | exp | step | linear
2.76±0.09 | 2.64±0.14 | 2.56±0.06 | 2.78±0.11 | 2.60±0.07 | 2.54±0.02

Table 2: Classification errors (%) under different control functions for the regularization factor $\lambda_0$ on CIFAR10.
Function | λ1 = 1.0, best | λ1 = 1.0, average | λ2 = 1.0, best | λ2 = 1.0, average
const | 2.71 | 2.74±0.03 | 2.53 | 2.56±0.03
linear | 2.55 | 2.57±0.02 | 2.66 | 2.68±0.02
exp | 2.51 | 2.53±0.02 | 2.57 | 2.60±0.03
step | 2.49 | 2.54±0.05 | 2.51 | 2.57±0.06
log | 2.61 | 2.64±0.03 | 2.42 | 2.51±0.09
Table: Classification errors (%) when fixing either of λ1 or λ2 (at 1.0) while changing the other using the regularization control functions on CIFAR10.

Imbalanced Configurations. In the above settings, each node in a cell has two inputs, and the optimization objective is to select 8 out of 14 edges. This constraint largely reduces the difficulty of search; even random search can find architectures of moderate accuracy. To further validate the effectiveness and generalization of our approach, we search architectures with imbalanced configurations. Specifically, we break the 8-out-of-14 setting and choose fewer edges to magnify the gap between the architectures before and after discretization.

Four configurations, namely, preserving 3, 4, 5, or 6 out of 14 edges, are used to validate our approach and compare it with DARTS. For DARTS, we use the default searched architecture and select 3, 4, 5, or 6 edges according to the weights of operations. For our approach, to select 3 edges, we pose the edge entropy loss on node2 and node3 and select the largest one, and pose the edge entropy loss on node4 and node5 to select one on each. To select 4 edges, we pose the edge entropy loss on the four inner nodes so that each of them has a single edge. For 5 edges, we select two on node5 and one on each of the other 3 nodes. For 6 edges, we select two on node4/node5 and one on node2/node3.
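The per-node edge selection described above can be sketched as a small helper that keeps the top-k input edges at each node. The per-node budgets (e.g., one edge each at node2/node3 and two each at node4/node5 for the 6/14 configuration) follow the text, while the function itself is our illustration, not the paper's code.

```python
import numpy as np

def select_edges(beta_per_node, keep_per_node):
    """For each node, keep the indices of its keep_per_node[node]
    highest-weighted input edges (given the learned edge weights)."""
    kept = {}
    for node, beta in beta_per_node.items():
        k = keep_per_node[node]
        kept[node] = sorted(np.argsort(beta)[-k:].tolist())
    return kept

# Toy example: node2 has 2 candidate inputs, node3 has 3; keep one each.
kept = select_edges(
    {2: np.array([0.9, 0.1]), 3: np.array([0.2, 0.8, 0.1])},
    {2: 1, 3: 1},
)
```

Changing the budget dictionary is all that distinguishes the 3/14, 4/14, 5/14, and 6/14 configurations, which is why the entropy loss only needs to know each node's target count.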

In Table 3, the performance of DARTS and our approach under imbalanced configurations is compared. Under imbalanced configurations, the super-network accuracy of DARTS drops dramatically, by a large margin of around 77.75–78.00, which demonstrates that the discretization process does bring a significant gap between the accuracies before and after pruning. Such a gap has an unpredictable impact upon the searched architecture. In contrast, with the discretization-aware constraint, our approach achieves relatively stable performance: the accuracy drops are significantly reduced, ranging over [0.21, 21.29]. For each configuration, it outperforms DARTS by significant margins (2.16%, 1.75%, 0.51%, 0.27%) after re-training.

config | DARTS [20] error | DARTS para (M) | DARTS acc. (before → after discretization) | DA2S error | DA2S para (M) | DA2S acc. (before → after discretization)
3/14 | 5.83±1.21 | 1.5±0.2 | 87.87 → 10.03 | 3.67±0.24 | 1.9±0.2 | 85.52 → 64.23
4/14 | 4.79±1.17 | 1.9±0.3 | 87.87 → 9.87 | 2.94±0.09 | 2.5±0.1 | 85.63 → 85.42
5/14 | 3.23±0.08 | 2.2±0.2 | 87.87 → 10.12 | 2.72±0.06 | 2.9±0.1 | 84.76 → 71.85
6/14 | 2.91±0.05 | 2.7±0.1 | 87.87 → 9.96 | 2.64±0.02 | 3.0±0.1 | 84.29 → 64.24

Table 3: Comparison (%) of re-training error and super-network accuracy between DARTS and DA2S under imbalanced configurations on CIFAR10. In the first column, m/14 indicates preserving m out of 14 edges.
(a) the normal cell found on CIFAR10
(b) the reduction cell found on CIFAR10
Figure 5: Normal cell and reduction cell searched on CIFAR10.

4.3 Results on ImageNet

In this part, we use the large-scale ImageNet dataset to test the transferability of the cells searched on CIFAR10, shown in Figure 5. The same configuration as DARTS is adopted, i.e., the entire network is constructed by stacking the searched cells with a larger initial channel number. We train the network from scratch with a large batch size on Tesla V100 GPUs. An SGD optimizer is used for the network parameters, with the learning rate decayed linearly after each epoch, a momentum of 0.9, and weight decay. Other enhancements, including label smoothing [30] and an auxiliary loss, are used during training, and learning rate warmup [9] is applied for the first few epochs.

In Table 4, we evaluate the proposed approach and compare the results with state-of-the-art approaches under the mobile setting (i.e., the FLOPs are constrained to a mobile-friendly budget). DA2S outperforms its direct baseline, DARTS, by a significant margin of 2.3% (a top-1 error rate of 24.4% vs. 26.7%). DA2S also produces competitive performance among recently published work, including P-DARTS, PC-DARTS, and BayesNAS, when the network architecture is searched on CIFAR10 and transferred to ImageNet. This further verifies the effectiveness of DA2S in mitigating the discretization gap in the differentiable architecture search framework.

Architecture              Top-1 Err.(%)  Top-5 Err.(%)  Params(M)  FLOPs(M)  Search Cost (GPU-days)  Search Method
Inception-v1 [29]         30.2           10.1           6.6        1448      -                       manual
MobileNet [11]            29.4           10.5           4.2        569       -                       manual
ShuffleNet 2× (v1) [36]   26.4           10.2           5          524       -                       manual
ShuffleNet 2× (v2) [22]   25.1           -              5          591       -                       manual
NASNet-A [39]             26.0           8.4            5.3        564       1800                    RL
AmoebaNet-C [25]          24.3           7.6            6.4        570       3150                    evolution
PNAS [18]                 25.8           8.1            5.1        588       225                     SMBO
MnasNet-92 [31]           25.2           8.0            4.4        388       -                       RL
DARTS (2nd order) [20]    26.7           8.7            4.7        574       4.0                     gradient-based
SNAS (mild) [33]          27.3           9.2            4.3        522       1.5                     gradient-based
ProxylessNAS (GPU) [4]    24.9           7.5            7.1        465       8.3                     gradient-based
P-DARTS (CIFAR10) [5]     24.4           7.4            4.9        557       0.3                     gradient-based
PC-DARTS (CIFAR10) [34]   25.1           7.8            5.3        586       0.1                     gradient-based
BayesNAS [37]             26.5           8.9            3.9        -         0.2                     gradient-based
DA2S (CIFAR10)            24.4           7.3            5.0        565       0.3                     gradient-based

Table 4: Comparison of classification error (%) on ImageNet under the mobile setting (constrained FLOPs).

5 Conclusions

In this paper, we propose a discretization-aware NAS method, which works by introducing an entropy-based loss term that pushes the super-network towards a discretization-friendly status according to the pre-defined target. This strategy can be applied either to selecting an operator for each edge or to selecting a fixed number of edges for each node. Experiments on standard image classification benchmarks demonstrate the superiority of our approach, in particular under imbalanced configurations that were not studied before.
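As a rough illustration of the idea (not the paper's exact loss), the Shannon entropy of the softmax over architecture parameters is minimal exactly when the distribution is one-hot; adding such a term to the objective therefore drives each edge towards a near-binary state that loses little when pruned. All values below are hypothetical.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    """Shannon entropy; zero iff the distribution is one-hot."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# A nearly uniform edge has high entropy (far from a discrete choice) ...
diffuse = softmax([0.1, 0.0, -0.1, 0.05])
# ... while a dominant operation has entropy close to zero, so pruning
# the other candidates changes the super-network's output very little.
peaked = softmax([8.0, 0.0, -1.0, 0.5])
assert entropy(diffuse) > entropy(peaked)
```

Minimizing such a term alongside the task loss is one way to shrink the gap between the super-network and the discretized sub-network before pruning ever happens.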

This work provides further evidence that one-shot neural architecture search can benefit from shrinking the gap between the super-network and its sub-networks. As search spaces become more complicated in the future, we expect our approach to serve as a standard tool for alleviating the discretization gap. We also look forward to investigating some open problems, e.g., whether discretization can be done in a gradual manner so as to further reduce the error.

References

  • [1] A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) ImageNet classification with deep convolutional neural networks. In NeurIPS, pp. 1106–1114.
  • [2] A. Brock, T. Lim, J. M. Ritchie, and N. Weston (2018) SMASH: one-shot model architecture search through hypernetworks. In ICLR.
  • [3] H. Cai, T. Chen, W. Zhang, Y. Yu, and J. Wang (2018) Efficient architecture search by network transformation. In AAAI, pp. 2787–2794.
  • [4] H. Cai, L. Zhu, and S. Han (2019) ProxylessNAS: direct neural architecture search on target task and hardware. In ICLR.
  • [5] X. Chen, L. Xie, J. Wu, and Q. Tian (2019) Progressive differentiable architecture search: bridging the depth gap between search and evaluation. In IEEE ICCV.
  • [6] X. Chu, T. Zhou, B. Zhang, and J. Li (2019) Eliminating unfair advantages in differentiable architecture search. arXiv preprint.
  • [7] X. Chu, B. Zhang, R. Xu, and J. Li (2019) FairNAS: rethinking evaluation fairness of weight sharing neural architecture search. arXiv preprint arXiv:1907.01845.
  • [8] G. Ghiasi, T. Lin, and Q. V. Le (2019) NAS-FPN: learning scalable feature pyramid architecture for object detection. In IEEE CVPR, pp. 7036–7045.
  • [9] P. Goyal, P. Dollár, R. Girshick, P. Noordhuis, L. Wesolowski, A. Kyrola, A. Tulloch, Y. Jia, and K. He (2017) Accurate, large minibatch SGD: training ImageNet in 1 hour. arXiv preprint arXiv:1706.02677.
  • [10] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In IEEE CVPR, pp. 770–778.
  • [11] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam (2017) MobileNets: efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861.
  • [12] A. Howard, M. Sandler, G. Chu, L. Chen, B. Chen, M. Tan, W. Wang, Y. Zhu, R. Pang, V. Vasudevan, Q. V. Le, and H. Adam (2019) Searching for MobileNetV3. In IEEE ICCV, pp. 1314–1324.
  • [13] Auto-WEKA project: http://www.cs.ubc.ca/labs/beta/projects/autoweka.
  • [14] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger (2017) Densely connected convolutional networks. In IEEE CVPR, pp. 2261–2269.
  • [15] D. P. Kingma and J. Ba (2015) Adam: a method for stochastic optimization. In ICLR.
  • [16] Y. LeCun, Y. Bengio, and G. E. Hinton (2015) Deep learning. Nature 521 (7553), pp. 436–444.
  • [17] C. Liu, L. Chen, F. Schroff, H. Adam, W. Hua, A. Yuille, and L. Fei-Fei (2019) Auto-DeepLab: hierarchical neural architecture search for semantic image segmentation. In IEEE CVPR, pp. 82–92.
  • [18] C. Liu, B. Zoph, M. Neumann, J. Shlens, W. Hua, L. Li, L. Fei-Fei, A. Yuille, J. Huang, and K. Murphy (2018) Progressive neural architecture search. In ECCV, pp. 540–555.
  • [19] H. Liu, K. Simonyan, O. Vinyals, C. Fernando, and K. Kavukcuoglu (2018) Hierarchical representations for efficient architecture search. In ICLR.
  • [20] H. Liu, K. Simonyan, and Y. Yang (2019) DARTS: differentiable architecture search. In ICLR.
  • [21] R. Luo, F. Tian, T. Qin, E. Chen, and T. Liu (2018) Neural architecture optimization. In NeurIPS, pp. 7827–7838.
  • [22] N. Ma, X. Zhang, H. Zheng, and J. Sun (2018) ShuffleNet V2: practical guidelines for efficient CNN architecture design. In ECCV, pp. 122–138.
  • [23] N. Nayman, A. Noy, T. Ridnik, I. Friedman, R. Jin, and L. Zelnik (2019) XNAS: neural architecture search with expert advice. In NeurIPS.
  • [24] H. Pham, M. Y. Guan, B. Zoph, Q. V. Le, and J. Dean (2018) Efficient neural architecture search via parameter sharing. In ICML, pp. 4092–4101.
  • [25] E. Real, A. Aggarwal, Y. Huang, and Q. V. Le (2019) Regularized evolution for image classifier architecture search. In AAAI, pp. 4780–4789.
  • [26] E. Real, S. Moore, A. Selle, S. Saxena, Y. L. Suematsu, J. Tan, Q. V. Le, and A. Kurakin (2017) Large-scale evolution of image classifiers. In ICML, pp. 2902–2911.
  • [27] K. Simonyan and A. Zisserman (2015) Very deep convolutional networks for large-scale image recognition. In ICLR.
  • [28] D. Stamoulis, R. Ding, D. Wang, D. Lymberopoulos, B. Priyantha, J. Liu, and D. Marculescu (2019) Single-Path NAS: designing hardware-efficient ConvNets in less than 4 hours. arXiv preprint arXiv:1904.02877.
  • [29] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich (2015) Going deeper with convolutions. In IEEE CVPR, pp. 1–9.
  • [30] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna (2016) Rethinking the inception architecture for computer vision. In IEEE CVPR, pp. 2818–2826.
  • [31] M. Tan, B. Chen, R. Pang, V. Vasudevan, and Q. V. Le (2019) MnasNet: platform-aware neural architecture search for mobile. In IEEE CVPR, pp. 2820–2828.
  • [32] L. Xie and A. Yuille (2017) Genetic CNN. In IEEE ICCV, pp. 1388–1397.
  • [33] S. Xie, H. Zheng, C. Liu, and L. Lin (2019) SNAS: stochastic neural architecture search. In ICLR.
  • [34] Y. Xu, L. Xie, X. Zhang, X. Chen, G. J. Qi, Q. Tian, and H. Xiong (2019) PC-DARTS: partial channel connections for memory-efficient differentiable architecture search. arXiv preprint arXiv:1907.05737.
  • [35] A. Zela, T. Elsken, T. Saikia, Y. Marrakchi, T. Brox, and F. Hutter (2019) Understanding and robustifying differentiable architecture search. arXiv preprint.
  • [36] X. Zhang, X. Zhou, M. Lin, and J. Sun (2018) ShuffleNet: an extremely efficient convolutional neural network for mobile devices. In IEEE CVPR, pp. 6848–6856.
  • [37] H. Zhou, M. Yang, J. Wang, and W. Pan (2019) BayesNAS: a Bayesian approach for neural architecture search. In ICML, pp. 7603–7613.
  • [38] B. Zoph and Q. V. Le (2017) Neural architecture search with reinforcement learning. In ICLR.
  • [39] B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le (2018) Learning transferable architectures for scalable image recognition. In IEEE CVPR, pp. 8697–8710.