Stabilizing Differentiable Architecture Search via Perturbation-based Regularization

02/12/2020 · by Xiangning Chen, et al.

Differentiable architecture search (DARTS) is a prevailing NAS solution for identifying architectures. Based on a continuous relaxation of the architecture space, DARTS learns a differentiable architecture weight and largely reduces the search cost. However, its stability and generalizability have been challenged, as it yields deteriorating architectures as the search proceeds. We find that the precipitous validation loss landscape, which leads to a dramatic performance drop when distilling the final architecture, is an essential factor causing this instability. Based on this observation, we propose a perturbation-based regularization, named SmoothDARTS (SDARTS), to smooth the loss landscape and improve the generalizability of DARTS. In particular, our new formulations stabilize DARTS by either random smoothing or adversarial attack. The search trajectory on NAS-Bench-1Shot1 demonstrates the effectiveness of our approach, and due to the improved stability, we achieve performance gains across various search spaces on 4 datasets. Furthermore, we mathematically show that SDARTS implicitly regularizes the Hessian norm of the validation loss, which accounts for a smoother loss landscape and improved performance. The code is available at







1 Introduction

Figure 1: The landscape of validation accuracy with respect to the architecture weight α on CIFAR-10. The x-axis is the gradient direction of α, while the y-axis is another random orthogonal direction (best viewed in color).

Neural architecture search (NAS) has emerged as a rational next step to automate the trial-and-error paradigm of architecture design. It is straightforward to search by reinforcement learning (Zoph and Le, 2017; Zoph et al., 2018; Zhong et al., 2018) and evolutionary algorithms (Stanley and Miikkulainen, 2002; Miikkulainen et al., 2019; Real et al., 2017; Liu et al., 2017) due to the discrete nature of the architecture space. However, these methods usually require massive computation resources. A variety of approaches have been proposed to reduce the search cost, including one-shot architecture search (Pham et al., 2018; Bender et al., 2018; Brock et al., 2018), performance estimation (Klein et al., 2017; Baker, 2018) and network morphisms (Elsken et al., 2019; Cai et al., 2018, 2018). For example, one-shot architecture search methods construct a super-network covering all candidate architectures, where sub-networks with shared components also share the corresponding weights. The super-network is then trained only once, which is much more efficient. In particular, DARTS (Liu et al., 2019) builds a continuous mixture architecture and relaxes the categorical architecture search problem to learning a differentiable architecture weight α.

Despite being computationally efficient, the stability and generalizability of DARTS have been challenged recently. Many works (Zela et al., 2020a; Yu et al., 2020) have observed that although the validation accuracy of the mixture architecture keeps growing, the performance of the derived architecture collapses at evaluation time. Such instability makes DARTS converge to distorted architectures. For instance, Chu et al. (2019) and Liang et al. (2019) find that parameter-free operations such as skip connections dominate the generated architecture, and DARTS has a preference for wide and shallow structures (Shu et al., 2020). To alleviate this issue, some works (Zela et al., 2020a; Liang et al., 2019) propose to stop the search early based on handcrafted criteria. However, the inherent instability starts from the very beginning, and early stopping is a compromise that does not actually improve the search algorithm.

An important source of such instability is the final projection step that derives the actual discrete architecture from the continuous mixture architecture. There is often a huge performance drop in this projection step, so the validation accuracy of the mixture architecture, which is what DARTS optimizes, may not be correlated with the final validation accuracy. As shown in Figure 1(a), DARTS often converges to a sharp region, so small perturbations dramatically decrease the validation accuracy, let alone the projection step. Moreover, the sharp cone in the landscape illustrates that the network weight w is almost only applicable to the current architecture weight α. Bender et al. (2018) also discover a similar phenomenon: the shared weight of the one-shot network is sensitive and only works for a few sub-networks. This empirically prevents DARTS from fully exploring the architecture space.

To address these problems, we propose two novel formulations. Intuitively, the optimization of α is based on a w that performs well on nearby configurations rather than exactly the current one. This leads to smoother landscapes, as shown in Figure 1(b, c). Our contributions are as follows:

  • We present SmoothDARTS (SDARTS) to overcome the instability and lack of generalizability of DARTS. Instead of assuming the shared weight w is the minimizer with respect to the current architecture weight α, we formulate w as the minimizer of the Randomly Smoothed loss, defined as the expected loss within a neighborhood of the current α. The resulting approach, called SDARTS-RS, requires scarcely any additional computational cost but is surprisingly effective. We also propose a stronger formulation that forces w to minimize the worst-case loss within a neighborhood of α, which can be solved by ADVersarial training. The resulting algorithm, called SDARTS-ADV, leads to even better stability and improved performance.

  • Mathematically, we show that the performance drop caused by discretization is highly related to the norm of the Hessian of the validation loss with respect to the architecture weight α, which was also noted empirically in (Zela et al., 2020a). Furthermore, we show that both our regularization techniques implicitly minimize this term, which explains why our methods can significantly improve DARTS across various settings.

  • The proposed methods consistently improve DARTS and can match or improve over state-of-the-art results on various search spaces of CIFAR-10 and Penn Treebank. Besides, extensive experiments show that our methods outperform other regularization approaches on three datasets across four search spaces.

Figure 2: Normal cells discovered by SDARTS-RS and SDARTS-ADV on CIFAR-10.

2 Background and Related Work

2.1 Differentiable Architecture Search

Similar to prior work (Zoph et al., 2018), DARTS only searches for the architecture of cells, which are stacked to compose the full network. Within a cell, nodes are organized as a DAG (Figure 2), where every node x^(i) is a latent representation and every edge (i, j) is associated with an operation o^(i,j). It is inherently difficult to perform an efficient search since the choice of operation on every edge is discrete. As a solution, DARTS constructs a mixed operation ō^(i,j) on every edge:

ō^(i,j)(x) = Σ_{o∈O} [ exp(α_o^(i,j)) / Σ_{o'∈O} exp(α_{o'}^(i,j)) ] · o(x),

where O is the candidate operation corpus and α_o^(i,j) denotes the corresponding architecture weight for operation o on edge (i, j). Therefore, the original categorical choice per edge is parameterized by a vector α^(i,j) with dimension |O|, and the architecture search is relaxed to learning a continuous architecture weight α. With such a relaxation, DARTS formulates a bi-level optimization objective:

min_α L_val(w*(α), α)   s.t.   w*(α) = argmin_w L_train(w, α).   (1)

Then α and w are updated via gradient descent alternately, where w*(α) is approximated by the current w or a one-step-forward version of it. DARTS set off a wave in the NAS scenario, and many approaches are springing up to make further improvements (Xie et al., 2019; Dong and Yang, 2019; Cai et al., 2019; Yao et al., 2020; Xu et al., 2020).
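As a toy illustration of this relaxation, the softmax-weighted mixed operation and the final argmax discretization can be sketched in a few lines of NumPy. The three candidate operations here are stand-ins of our choosing, not DARTS' actual search space:

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Toy candidate operations on one edge (stand-ins for conv / identity / zero).
OPS = [lambda x: np.tanh(x), lambda x: x, lambda x: np.zeros_like(x)]

def mixed_op(x, alpha):
    """DARTS continuous relaxation: softmax(alpha)-weighted sum of candidates."""
    weights = softmax(alpha)
    return sum(w * op(x) for w, op in zip(weights, OPS))

x = np.ones(4)
alpha = np.array([2.0, 0.5, -1.0])  # architecture weights for this edge
out = mixed_op(x, alpha)

# The final discrete architecture keeps only the operation with the largest alpha.
best = int(np.argmax(alpha))
```

The gap between the softmax mixture `out` and the single kept operation is exactly the projection step that the paper argues causes the performance drop.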

Stabilize DARTS.

After the search, DARTS simply prunes out all operations on every edge except the one with the largest α at evaluation time. Under such perturbation, its stability and generalizability have been widely challenged (Zela et al., 2020a; Liang et al., 2019; Chu et al., 2019). DARTS+ (Liang et al., 2019) proposes to stop the search early based on the number of skip connections. Zela et al. (2020a) empirically point out that the dominant eigenvalue λ_max of the Hessian ∇²_α L_val is highly correlated with the stability. They also present another early stopping criterion (DARTS-ES) to prevent λ_max from exploding. Besides, partial channel connection (Xu et al., 2020), ScheduledDropPath (Zoph et al., 2018) and L2 regularization on w are also shown to improve the stability of DARTS.


NAS-Bench-1Shot1 (Zela et al., 2020b) is a benchmark architecture dataset covering three search spaces on CIFAR-10. It provides a mapping between the continuous space of differentiable NAS and the discrete space of NAS-Bench-101 (Ying et al., 2019), the first architecture dataset proposed to lower the entry barrier of NAS. By querying NAS-Bench-1Shot1, researchers can obtain the necessary quantities for a specific architecture (e.g. test accuracy) in milliseconds. Using this benchmark, we track the anytime test error of various NAS algorithms, which allows us to compare their stability.

2.2 Adversarial Robustness

In this paper, we claim that DARTS should be robust against perturbations on the architecture weight α. Similarly, the field of adversarial robustness aims to overcome the vulnerability of neural networks against contrived input perturbations (Szegedy et al., 2014). Random smoothing (Lecuyer et al., 2019; Cohen et al., 2019) is a popular method to improve model robustness. Another effective approach is adversarial training (Goodfellow et al., 2015; Madry et al., 2018b), which intuitively optimizes the worst-case training loss. To the best of our knowledge, we are the first to apply this idea to stabilize the search in NAS.

3 Proposed Method

3.1 Motivation

During the DARTS search procedure, a continuous architecture weight α is used, but it eventually has to be projected to derive the discrete architecture. There is often a huge performance drop in this projection stage, and thus a good mixture architecture does not imply a good final architecture. Therefore, although DARTS can consistently reduce the validation error of the mixture architecture, the validation error after projection is very unstable and can even blow up, as shown in Figures 3 and 4.

This phenomenon has been discussed in several recent papers (Zela et al., 2020a; Liang et al., 2019), and Zela et al. (2020a) empirically find that the instability is related to the norm of the Hessian ∇²_α L_val. To verify this phenomenon, we plot the validation accuracy landscape of DARTS in Figure 1(a), which is extremely sharp: a small perturbation on α can hugely reduce the validation accuracy from over 90% to less than 10%. This also undermines DARTS' ability to explore the architecture space: α can only change slightly at each iteration because the current w only works within a small local region.

3.2 Proposed Formulation

To address this issue, intuitively we want to force the loss to be smooth with respect to perturbations on α. This leads to the following two versions of SDARTS, which redefine the network weight w̄ used in the bi-level objective:

w̄ = argmin_w E_{δ∼U[−ε,ε]} [L_train(w, α + δ)]   (SDARTS-RS)
w̄ = argmin_w max_{‖δ‖≤ε} L_train(w, α + δ)   (SDARTS-ADV)   (2)

Here U[−ε, ε] represents the uniform distribution between −ε and ε. The main idea is that instead of using a w that only performs well on the current α, we replace it by the w̄ defined in (2), which performs well within a neighborhood of α. This forces our algorithms to focus on (w, α) pairs with smooth loss landscapes. For SDARTS-RS, we set w̄ as the minimizer of the expected loss under small random perturbations bounded by ε. This is based on the idea of random smoothing, which randomly averages a function over a neighborhood to obtain a smoother version (Cohen et al., 2019; Lecuyer et al., 2019). On the other hand, for SDARTS-ADV we set w̄ to minimize the worst-case training loss under small perturbations of α. This is based on the idea of adversarial training, a widely used technique in adversarial defense (Madry et al., 2018a).

  Generate a mixed operation ō^(i,j) for every edge (i, j)
  while not converged do
     Update the architecture α by descending ∇_α L_val(w, α)
     Compute the perturbation δ based on equation (3) or (4)
     Update the weight w by descending ∇_w L_train(w, α + δ)
  end while
Algorithm 1 Training of SDARTS

3.3 Search Algorithms

The optimization algorithm for solving the proposed formulations is described in Algorithm 1. Similar to DARTS, our algorithm is based on alternating minimization between w and α. For SDARTS-RS, w is the minimizer of the expected loss altered by a randomly chosen δ, which can be optimized by SGD directly. We sample the following δ and add it to α before running a single step of SGD on w (we use a uniform random perturbation for simplicity, while in practice the approach also works with other random perturbations, such as Gaussian):

δ ∼ U[−ε, ε]   (3)

This approach is very simple (adding only one line of code) and efficient (it introduces almost no overhead), and we find that it is quite effective at improving stability. As shown in Figure 1(b), the sharp cone disappears and the landscape becomes much smoother, which maintains high validation accuracy under perturbations on α.
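The random-smoothing step can be sketched as follows; the quadratic loss and the function name `sdarts_rs_step` are illustrative stand-ins of ours, not the actual DARTS training code:

```python
import numpy as np

rng = np.random.default_rng(0)

def sdarts_rs_step(w, alpha, grad_w_fn, lr=0.01, eps=0.3):
    """One SDARTS-RS weight update: perturb alpha with uniform noise
    delta ~ U[-eps, eps] before taking a single SGD step on w."""
    delta = rng.uniform(-eps, eps, size=alpha.shape)
    return w - lr * grad_w_fn(w, alpha + delta)

# Toy loss L(w, alpha) = ||w - alpha||^2 with gradient 2(w - alpha) w.r.t. w.
grad_w = lambda w, a: 2.0 * (w - a)
w = np.zeros(3)
alpha = np.ones(3)
w_new = sdarts_rs_step(w, alpha, grad_w)
```

The only change relative to plain DARTS is the one extra line sampling `delta`, matching the "adding only one line of code" claim above.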

For SDARTS-ADV, we consider the worst-case loss under a certain perturbation level, which is a stronger requirement than the expected loss in SDARTS-RS. The resulting landscape is even smoother, as illustrated in Figure 1(c). In this case, updating w requires solving a min-max optimization problem beforehand. We employ the widely used multi-step projected gradient descent (PGD) on the negative training loss to iteratively compute δ:

δ ← Proj_{‖δ‖≤ε} ( δ + η ∇_δ L_train(w, α + δ) )   (4)

where Proj denotes the projection onto the chosen norm ball (e.g. clipping in the case of the ℓ∞ norm) and η denotes the learning rate.
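A minimal sketch of the PGD inner loop under an ℓ∞ ball, again with a toy loss in place of the real training loss:

```python
import numpy as np

def pgd_delta(w, alpha, grad_alpha_fn, eps=0.3, lr=0.1, steps=7):
    """Approximate the worst-case perturbation by multi-step PGD:
    gradient ascent on the training loss w.r.t. delta, then projection
    back onto the l_inf ball by clipping."""
    delta = np.zeros_like(alpha)
    for _ in range(steps):
        delta = delta + lr * grad_alpha_fn(w, alpha + delta)  # ascent step
        delta = np.clip(delta, -eps, eps)                     # l_inf projection
    return delta

# Toy loss L(w, alpha) = ||w - alpha||^2; its gradient w.r.t. alpha is -2(w - alpha).
grad_a = lambda w, a: -2.0 * (w - a)
w = np.zeros(3)
alpha = np.ones(3)
delta = pgd_delta(w, alpha, grad_a)  # saturates at +eps for this toy loss
```

For this toy loss the ascent direction is constant, so `delta` hits the ball boundary, which is the typical behavior when the loss keeps increasing in one direction.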

In the next section, we will mathematically explain why SDARTS-RS and SDARTS-ADV improve the stability and generalizability of DARTS.

4 Implicit Regularization on Hessian Matrix

It has been empirically pointed out in (Zela et al., 2020a) that the dominant eigenvalue λ_max of ∇²_α L_val (the spectral norm of the Hessian) is highly correlated with the generalization quality of DARTS solutions. In standard DARTS training, the Hessian norm usually blows up, which leads to deteriorating (test) performance of the solutions. In Figure 5, we plot this Hessian norm during the training procedure and find that the proposed methods, both SDARTS-RS and SDARTS-ADV, consistently reduce it. In the following, we first explain why the spectral norm of the Hessian is correlated with solution quality, and then formally show that our algorithms implicitly control the Hessian norm.

Why is Hessian norm correlated with solution quality?

Assume (w*, α*) is the optimal solution of (1) in the continuous space, while ᾱ is the discrete solution obtained by projecting α* onto the simplex. Based on a Taylor expansion, and assuming ∇_α L_val(w*, α*) = 0 due to the optimality condition, we have

L_val(w*, ᾱ) − L_val(w*, α*) ≈ ½ (ᾱ − α*)ᵀ H̄ (ᾱ − α*),

where H̄ is the average Hessian along the segment between α* and ᾱ. If we assume that the Hessian is stable in a local region, then the quantity ½ ‖∇²_α L_val(w*, α*)‖ · ‖ᾱ − α*‖² can approximately bound the performance drop when projecting α* to ᾱ with a fixed w*. After fine-tuning, L_val(w̄, ᾱ), where w̄ is the optimal weight corresponding to ᾱ, is expected to be even smaller than L_val(w*, ᾱ) if the training and validation losses are highly correlated. Therefore, the performance of (w̄, ᾱ), which is the quantity we care about, will also be bounded by the Hessian norm. Note that the bound could be quite loose since it assumes the network weight remains unchanged when switching from α* to ᾱ. A more precise bound can be computed by viewing L_val as a function parameterized only by α, and then calculating its derivative/Hessian.

Controlling spectral norm of Hessian is non-trivial.
Figure 3: Anytime test error (mean ± std) of DARTS, explicit Hessian regularization, SDARTS-RS and SDARTS-ADV on NAS-Bench-1Shot1 (best viewed in color).

With the observation that the solution quality of DARTS is related to ‖∇²_α L_val‖, an immediate thought is to explicitly control this quantity during the optimization procedure. To implement this idea, we add an auxiliary term, a finite-difference estimation of the Hessian norm, to the loss function when updating α. However, this requires much additional memory to build a computational graph of the gradient, and Figure 3 suggests that although it has some effect compared with DARTS, it is worse than both SDARTS-RS and SDARTS-ADV. One potential reason is the high dimensionality: there are too many directions of α to choose from, and we can only randomly sample a subset of them at each iteration.
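As an illustration, one way to build such a penalty is a central finite difference of the gradient along random directions, which estimates ‖Hv‖ without forming the Hessian. The sampled-direction scheme below is our own sketch under that idea, not necessarily the exact estimator used in the experiments:

```python
import numpy as np

rng = np.random.default_rng(1)

def hessian_penalty_fd(loss_grad_fn, alpha, h=1e-3, n_dirs=4):
    """Average of ||(g(alpha + h v) - g(alpha - h v)) / (2h)|| over random
    unit directions v, i.e. an estimate of ||H v|| along sampled directions."""
    total = 0.0
    for _ in range(n_dirs):
        v = rng.standard_normal(alpha.shape)
        v /= np.linalg.norm(v)
        hv = (loss_grad_fn(alpha + h * v) - loss_grad_fn(alpha - h * v)) / (2 * h)
        total += np.linalg.norm(hv)
    return total / n_dirs

# Toy loss L(alpha) = alpha^T A alpha, so the gradient is 2 A alpha and H = 2A.
A = np.diag([1.0, 2.0, 3.0])
grad = lambda a: 2.0 * A @ a
penalty = hessian_penalty_fd(grad, np.ones(3))  # lies between 2 and 6 here
```

For this quadratic, ‖Hv‖ for unit v ranges between the smallest (2) and largest (6) eigenvalue of H, which is why sampling only a few directions gives a noisy estimate, mirroring the high-dimensionality caveat above.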

Why can SDARTS-RS implicitly control Hessian?

In SDARTS-RS, the objective function for updating w becomes

E_{δ∼U[−ε,ε]} [L_train(w, α + δ)] ≈ L_train(w, α) + E[δ]ᵀ ∇_α L_train(w, α) + ½ E[δᵀ ∇²_α L_train(w, α) δ],   (7)

where the second term in (7) is canceled out since E[δ] = 0, and the off-diagonal elements of the third term become 0 after taking the expectation over δ, leaving (ε²/6) tr(∇²_α L_train(w, α)). The update of w in SDARTS-RS thus implicitly controls the trace norm of the Hessian. If the matrix is close to PSD, this approximately regularizes its (positive) eigenvalues. Therefore, we observe that SDARTS-RS empirically reduces the Hessian norm through its training procedure.
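The cancellation argument above can be checked numerically on a toy symmetric matrix: a Monte-Carlo estimate of E[δᵀHδ] for uniform δ matches the closed form (ε²/3)·tr(H), since the off-diagonal contributions vanish in expectation and E[δ_i²] = ε²/3:

```python
import numpy as np

rng = np.random.default_rng(2)
eps = 0.3
H = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 3.0]])  # a toy symmetric "Hessian"

# Monte-Carlo estimate of E[delta^T H delta] for delta ~ U[-eps, eps]^3.
deltas = rng.uniform(-eps, eps, size=(200_000, 3))
mc = np.einsum('ni,ij,nj->n', deltas, H, deltas).mean()

# Closed form: off-diagonal terms vanish and E[delta_i^2] = eps^2 / 3.
closed = eps**2 / 3 * np.trace(H)
```

The ½ prefactor in (7) then turns this into the (ε²/6)·tr(H) penalty quoted in the text.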

Why can SDARTS-ADV implicitly control Hessian?

SDARTS-ADV ensures that the loss is small under the worst-case perturbation of α. If we assume that the Hessian matrix is roughly constant within the ε-ball, then adversarial training implicitly minimizes

max_{‖δ‖≤ε} L_train(w, α + δ) ≈ L_train(w, α) + max_{‖δ‖≤ε} [ δᵀ ∇_α L_train(w, α) + ½ δᵀ ∇²_α L_train(w, α) δ ]   (9)
≈ L_train(w, α) + max_{‖δ‖≤ε} ½ δᵀ ∇²_α L_train(w, α) δ.   (10)

When the perturbation is measured in the ℓ2 norm, the second term in (10) becomes (ε²/2) ‖∇²_α L_train‖₂ (the spectral norm), and when it is measured in the ℓ∞ norm, the second term is bounded by (ε²/2) Σ_{i,j} |(∇²_α L_train)_{i,j}|. Thus SDARTS-ADV also approximately minimizes the norm of the Hessian. In addition, notice that from (9) to (10) we assume the gradient is 0, a property that holds only at the optimum α*. In the intermediate steps, for a general α, stability under perturbation is related not only to the Hessian but also to the gradient, and in SDARTS-ADV we can still implicitly keep the landscape smooth by minimizing the first-order term in the Taylor expansion of (9).
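The ℓ2 case can likewise be checked numerically: for a PSD Hessian, the worst case of ½ δᵀHδ over the ball ‖δ‖₂ ≤ ε equals (ε²/2) times the largest eigenvalue, attained at ε times the top eigenvector:

```python
import numpy as np

eps = 0.3
H = np.array([[2.0, 0.5],
              [0.5, 1.0]])  # toy PSD Hessian (gradient assumed zero)

# Closed form for max over the l2 ball ||delta|| <= eps of 0.5 * delta^T H delta.
lam_max = np.linalg.eigvalsh(H).max()
worst_closed = 0.5 * eps**2 * lam_max

# Brute-force check over many random directions scaled to the sphere boundary.
rng = np.random.default_rng(3)
d = rng.standard_normal((100_000, 2))
d = eps * d / np.linalg.norm(d, axis=1, keepdims=True)
worst_sampled = 0.5 * np.einsum('ni,ij,nj->n', d, H, d).max()
```

The sampled maximum never exceeds the closed form and approaches it as the direction sampling gets denser, which is the sense in which the PGD inner loop of SDARTS-ADV targets the spectral norm.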

5 Experiments

In this section, we first track the anytime performance of our methods on NAS-Bench-1Shot1 in Section 5.1, which demonstrates their superior stability and generalizability. Then we perform experiments on the widely used CNN cell space on CIFAR-10 (Section 5.2) and the RNN cell space on PTB (Section 5.3). In Section 5.4, we present a detailed comparison between our methods and other popular regularization techniques. Finally, we examine the generated architectures and illustrate that our methods mitigate DARTS' bias toward certain operations and connection patterns in Section 5.5.

5.1 Architecture Search on NAS-Bench-1Shot1

Figure 4: Anytime test error on NAS-Bench-1Shot1, (a) Space 1, (b) Space 2, (c) Space 3 (best viewed in color).
Figure 5: Trajectory (mean ± std) of the Hessian norm on NAS-Bench-1Shot1, (a) Space 1, (b) Space 2, (c) Space 3 (best viewed in color).

NAS-Bench-1Shot1 consists of 3 search spaces based on CIFAR-10, which contain 6,240, 29,160 and 363,648 architectures respectively. The macro architecture of the models in all spaces is constructed from 3 stacked blocks, with a max-pooling operation in between as the downsampler. Each block contains 3 stacked cells, and the micro architecture of each cell is represented as a DAG. Besides the operation on every edge, the search algorithm also needs to determine the topology of the edges connecting the input, output nodes and the choice blocks. We refer to the original paper (Zela et al., 2020b) for details about the search spaces.

We make a comparison between our methods and state-of-the-art NAS algorithms on all 3 search spaces. Descriptions of the compared baselines can be found in Appendix 7.1. We run every NAS algorithm for 100 epochs (twice the default DARTS setting) to allow a thorough and comprehensive analysis of search stability and generalizability. Hyperparameters for the 5 baselines are set to their defaults. For both SDARTS-RS and SDARTS-ADV, the perturbation on α is performed after the softmax layer. We initialize the norm-ball radius ε as 0.03 and linearly increase it to 0.3 in all our experiments. The random perturbation in SDARTS-RS is sampled uniformly between −ε and ε, and we use the 7-step PGD attack under the ℓ∞ norm ball to obtain the δ in SDARTS-ADV. Other settings are the same as DARTS.

To search for 100 epochs on a single NVIDIA GTX 1080 Ti GPU, ENAS, DARTS, GDAS, NASP and PC-DARTS require 10.5h, 8h, 4.5h, 5h, and 6h respectively. The only extra computation in SDARTS-RS is the random sampling, so its search time is approximately the same as DARTS, i.e. 8h. SDARTS-ADV needs extra forward and backward propagation steps to perform the adversarial attack, so it spends 16h. Notice that this can be largely reduced by setting the number of PGD attack steps to 1 (FGSM (Goodfellow et al., 2015)), which only brings a small performance decrease according to our experiments.


We plot the anytime test error averaged over 6 independent runs in Figure 4. The trajectory (mean ± std) of the spectral norm of ∇²_α L_val is shown in Figure 5. Note that ENAS is not included in Figure 5 since it does not have an architecture weight α. We provide our detailed analysis below.

  • DARTS generates architectures with deteriorating performance as the search epoch grows, which is in accordance with the observations in (Zela et al., 2020a; Liang et al., 2019). The single-path modifications (GDAS, NASP) take effect to some extent, e.g. GDAS avoids finding worse architectures and remains stable. However, GDAS suffers from premature convergence to sub-optimal architectures, and NASP is effective for the first few search epochs before its performance starts to fluctuate like ENAS. A potential reason is that the architecture weight is clipped to the nearest boundary when it cannot satisfy a range constraint. This makes NASP confused when choosing among operations whose corresponding weights are similar on certain edges. The partial channel connection introduced by PC-DARTS makes it the best baseline on Spaces 1 and 3, but PC-DARTS also suffers severely degenerate performance on Space 2.

  • SDARTS-RS outperforms all 5 baselines on all 3 search spaces. It better explores the architecture space and meanwhile overcomes the instability issue in DARTS. SDARTS-ADV achieves even better performance by forcing w to minimize the worst-case loss within a neighborhood of α. Its anytime test error continues to decrease when the search epoch is larger than 80, which does not occur for any other method.

  • As explained in Section 4, the spectral norm λ_max of the Hessian ∇²_α L_val has a strong correlation with stability and solution quality: a large λ_max leads to poor generalizability and stability. In agreement with the theoretical analysis that our methods keep minimizing λ_max (Section 4), both SDARTS-RS and SDARTS-ADV keep λ_max at a low level throughout the search procedure. In comparison, λ_max in all baselines continues to increase, even growing by more than 10 times after 100 search epochs. Though GDAS has the lowest λ_max at the beginning, it suffers the largest growth rate. The partial channel connection in PC-DARTS cannot regularize the Hessian norm; it has a trajectory similar to DARTS and NASP, which is consistent with their comparably unstable performance.

Architecture | Test Error (%) | Params (M) | Search Cost (GPU days) | Search Method
DenseNet-BC (Huang et al., 2017) | 3.46 | 25.6 | – | manual
NASNet-A (Zoph et al., 2018) | 2.65 | 3.3 | 2000 | RL
AmoebaNet-A (Real et al., 2019) | – | 3.2 | 3150 | evolution
AmoebaNet-B (Real et al., 2019) | – | 2.8 | 3150 | evolution
PNAS (Liu et al., 2018) | – | 3.2 | 225 | SMBO
ENAS (Pham et al., 2018) | 2.89 | 4.6 | 0.5 | RL
NAONet (Luo et al., 2018) | 3.53 | 3.1 | 0.4 | NAO
DARTS (1st) (Liu et al., 2019) | – | 3.3 | 0.4 | gradient
DARTS (2nd) (Liu et al., 2019) | – | 3.3 | 1 | gradient
SNAS (moderate) (Xie et al., 2019) | – | 2.8 | 1.5 | gradient
GDAS (Dong and Yang, 2019) | 2.93 | 3.4 | 0.3 | gradient
BayesNAS (Zhou et al., 2019) | – | 3.4 | 0.2 | gradient
ProxylessNAS (Cai et al., 2019) | 2.08 | – | 4.0 | gradient
NASP (Yao et al., 2020) | – | 3.3 | 0.1 | gradient
PC-DARTS (Xu et al., 2020) | – | 3.6 | 0.1 | gradient
R-DARTS(L2) (Zela et al., 2020a) | – | – | 1.6 | gradient
SDARTS-RS | – | 3.4 | 0.4 | gradient
SDARTS-ADV | – | 3.3 | 1.3 | gradient

  • Obtained without cutout augmentation.

  • Obtained on a different space with PyramidNet (Han et al., 2017) as the backbone.

  • Recorded on a single GTX 1080Ti GPU.

Table 1: Comparison with state-of-the-art image classifiers on CIFAR-10.

5.2 Architecture Search on CNN Standard Space


We employ SDARTS-RS and SDARTS-ADV to search for CNN cells on CIFAR-10 following the search space (with 7 operations) in DARTS (Liu et al., 2019). The macro architecture is obtained by stacking 8 convolution cells, and every cell contains 7 nodes (2 input nodes, 4 intermediate nodes, and 1 output node). Other detailed settings for searching and evaluation can be found in Appendix 7.2; they are the same as DARTS.


Table 1 summarizes the comparison of our methods with state-of-the-art algorithms, and the searched normal cells are visualized in Figure 2. We achieve performance gains compared with DARTS and most of its variants. Moreover, the variance of SDARTS-RS is considerably lower than that of the baselines, and SDARTS-ADV achieves even better stability. PC-DARTS slightly outperforms our methods but has a higher variance. It warm-starts w for the first 15 epochs, and its number of search epochs is comparably smaller, which may alleviate the instability issue discussed in Section 5.1. Nevertheless, when searching on various simplified search spaces across 3 datasets, our methods achieve superior stability and test accuracy compared with PC-DARTS, as indicated in Section 5.4.

5.3 Architecture Search on RNN Standard Space


Besides searching for CNN cells, our methods are applicable to various scenarios such as identifying RNN cells. Following DARTS (Liu et al., 2019), the RNN search space based on PTB contains 5 candidate functions, i.e. tanh, relu, sigmoid, identity and zero. The macro architecture of the RNN network is comprised of only a single cell. The first intermediate node is manually fixed, and the remaining nodes are determined by the search algorithm. When searching, we train the RNN network for 50 epochs with sequence length 35. During evaluation, the final architecture is trained by an SGD optimizer, where the batch size is set to 64 and the learning rate is fixed at 20. These settings are the same as DARTS.


The results are shown in Table 2. SDARTS-RS achieves a validation perplexity of 58.7 and a test perplexity of 56.4, while SDARTS-ADV achieves a validation perplexity of 58.3 and a test perplexity of 56.1. We outperform other NAS methods with similar model sizes, which demonstrates the effectiveness of our methods in the RNN space. LSTM + SE obtains better results than ours, but it benefits from a handcrafted ensemble structure.

Architecture | Valid Perplexity | Test Perplexity | Params (M)
LSTM + SE (Yang et al., 2018) | 58.1 | 56.0 | 22
NAS (Zoph and Le, 2017) | – | 64.0 | 25
ENAS (Pham et al., 2018) | 60.8 | 58.6 | 24
DARTS (1st) (Liu et al., 2019) | 60.2 | 57.6 | 23
DARTS (2nd) (Liu et al., 2019) | 58.1 | 55.7 | 23
GDAS (Dong and Yang, 2019) | 59.8 | 57.5 | 23
NASP (Yao et al., 2020) | 59.9 | 57.3 | 23
SDARTS-RS | 58.7 | 56.4 | 23
SDARTS-ADV | 58.3 | 56.1 | 23

  • LSTM + SE represents LSTM with 15 softmax experts.

  • We achieve 58.5 for validation and 56.2 for test when training the architecture found by DARTS (2nd) ourselves.

Table 2: Comparison with state-of-the-art language models on PTB (lower perplexity is better).
Dataset | Space | DARTS | PC-DARTS | DARTS-ES | R-DARTS(DP) | R-DARTS(L2) | SDARTS-RS | SDARTS-ADV
C10 | S1 | 3.84 | 3.11 | 3.01 | 3.11 | 2.78 | 2.78 | 2.73
C10 | S2 | 4.85 | 3.02 | 3.26 | 3.48 | 3.31 | 2.75 | 2.65
C10 | S3 | 3.34 | 2.51 | 2.74 | 2.93 | 2.51 | 2.53 | 2.49
C10 | S4 | 7.20 | 3.02 | 3.71 | 3.58 | 3.56 | 2.93 | 2.87
C100 | S1 | 29.46 | 18.87 | 28.37 | 25.93 | 24.25 | 17.02 | 16.88
C100 | S2 | 26.05 | 18.23 | 23.25 | 22.30 | 22.44 | 17.56 | 17.24
C100 | S3 | 28.90 | 18.05 | 23.73 | 22.36 | 23.99 | 17.73 | 17.12
C100 | S4 | 22.85 | 17.16 | 21.26 | 22.18 | 21.94 | 17.17 | 15.46
SVHN | S1 | 4.58 | 2.28 | 2.72 | 2.55 | 4.79 | 2.26 | 2.16
SVHN | S2 | 3.53 | 2.39 | 2.60 | 2.52 | 2.51 | 2.37 | 2.07
SVHN | S3 | 3.41 | 2.27 | 2.50 | 2.49 | 2.48 | 2.21 | 2.05
SVHN | S4 | 3.05 | 2.37 | 2.51 | 2.61 | 2.50 | 2.35 | 1.98
Table 3: Comparison with popular regularization techniques (test error (%)).
The best method is boldface and underlined while the second best is boldface.

5.4 Comparison with Other Regularization

Our methods can be viewed as a way to regularize DARTS (they implicitly regularize the Hessian norm of the validation loss). In this section, we compare SDARTS-RS and SDARTS-ADV with other popular regularization techniques. The compared baselines are 1) partial channel connection (PC-DARTS (Xu et al., 2020)); 2) ScheduledDropPath (Zoph et al., 2018) (R-DARTS(DP)); 3) L2 regularization on w (R-DARTS(L2)); 4) early stopping (DARTS-ES (Zela et al., 2020a)). Descriptions of the compared regularization baselines are shown in Appendix 7.1.


We perform a thorough comparison on the 4 simplified search spaces proposed in (Zela et al., 2020a) across 3 datasets (CIFAR-10, CIFAR-100, and SVHN). All search spaces utilize the same macro architecture as in Section 5.2; the difference is that they only contain a portion of the candidate operations (details are shown in Appendix 7.3). Results in Table 3 are obtained by running every method 4 independent times and picking the final architecture based on validation accuracy (retrained from scratch for a few epochs). Other settings are the same as in Section 5.2.


The discovered cells are shown in the Appendix (Figures 7, 8, 9 and 10). Our methods achieve substantial performance gains compared with the baselines. SDARTS-ADV is the best method on all 12 benchmarks, and SDARTS-RS takes second place on 10 benchmarks. The cell discovered on S3 for CIFAR-10 even achieves higher test accuracy than all the methods in Table 1 (except for ProxylessNAS, which searches based on PyramidNet).

5.5 Examine the Searched Architectures

As pointed out in (Zela et al., 2020a; Liang et al., 2019; Shu et al., 2020), DARTS tends to fall into distorted architectures that converge faster, which is another manifestation of its instability. Here we examine the generated architectures to see whether our methods can overcome such bias.

Space | DARTS | PC-DARTS | DARTS-ES | SDARTS-RS | SDARTS-ADV
S1 | 1.0 | 0.5 | 0.375 | 0.125 | 0.125
S2 | 0.875 | 0.75 | 0.25 | 0.375 | 0.125
S3 | 1.0 | 0.125 | 1.0 | 0.125 | 0.125
S4 | 0.625 | 0.125 | 0.0 | 0.0 | 0.0
Table 4: Proportion of parameter-free operations in normal cells found on CIFAR-10.

5.5.1 Proportion of Parameter-Free Operations

Many works (Zela et al., 2020a; Liang et al., 2019) have found that parameter-free operations such as skip connections dominate the generated architecture. Though they make architectures converge faster, excessive parameter-free operations largely reduce the model's representation capability and result in low test accuracy. As illustrated in Table 4, we find a similar phenomenon when searching with DARTS on the 4 simplified search spaces of Section 5.4. The proportion of parameter-free operations even reaches 100% on S1 and S3, and DARTS cannot distinguish the harmful noise operation on S4. PC-DARTS achieves some improvement, but it is not enough since noise still appears. DARTS-ES reveals its effectiveness on S2 and S4 but fails on S3, where all operations found are skip connections. We do not show R-DARTS(DP) and R-DARTS(L2) here because their discovered cells are not released. In comparison, both SDARTS-RS and SDARTS-ADV succeed in controlling the proportion of parameter-free operations on all search spaces.

5.5.2 Connection Pattern

Shu et al. (2020) demonstrate, from both empirical and theoretical aspects, that DARTS tends to favor wide and shallow cells, since they often have smoother loss landscapes and faster convergence. However, these cells may not generalize better than their narrower and deeper variants (Shu et al., 2020). Following their definitions of cell width and depth (detailed definitions are shown in Appendix 7.4), the best cell generated by our methods on the CNN standard space (Section 5.2) has width 3 and depth 4. In contrast, ENAS has width 5 and depth 2, DARTS has width 3.5 and depth 3, and PC-DARTS has width 4 and depth 2. Consequently, we succeed in mitigating the bias in connection patterns.

6 Conclusion

We introduce SmoothDARTS (SDARTS), a perturbation-based regularization to improve the stability and generalizability of differentiable architecture search. Specifically, the regularization is carried out with random smoothing or adversarial attack. SDARTS possesses a much smoother landscape and has the theoretical guarantee to regularize the Hessian norm of the validation loss. Extensive experiments illustrate the effectiveness of SDARTS and we outperform various regularization techniques.


  • B. Baker (2018) Accelerating neural architecture search using performance prediction. External Links: Link Cited by: §1.
  • G. Bender, P. Kindermans, B. Zoph, V. Vasudevan, and Q. Le (2018) Understanding and simplifying one-shot architecture search. In Proceedings of the 35th International Conference on Machine Learning, J. Dy and A. Krause (Eds.), Proceedings of Machine Learning Research, Vol. 80, Stockholmsmässan, Stockholm Sweden, pp. 550–559. External Links: Link Cited by: §1, §1.
  • A. Brock, T. Lim, J.M. Ritchie, and N. Weston (2018) SMASH: one-shot model architecture search through hypernetworks. In International Conference on Learning Representations, External Links: Link Cited by: §1.
  • H. Cai, T. Chen, W. Zhang, Y. Yu, and J. Wang (2018) Efficient architecture search by network transformation. In AAAI, Cited by: §1.
  • H. Cai, J. Yang, W. Zhang, S. Han, and Y. Yu (2018) Path-level network transformation for efficient architecture search. In Proceedings of the 35th International Conference on Machine Learning, J. Dy and A. Krause (Eds.), Proceedings of Machine Learning Research, Vol. 80, Stockholmsmässan, Stockholm Sweden, pp. 678–687. External Links: Link Cited by: §1.
  • H. Cai, L. Zhu, and S. Han (2019) ProxylessNAS: direct neural architecture search on target task and hardware. In International Conference on Learning Representations, External Links: Link Cited by: §2.1, Table 1.
  • X. Chu, T. Zhou, B. Zhang, and J. Li (2019) Fair darts: eliminating unfair advantages in differentiable architecture search. External Links: 1911.12126 Cited by: §1, §2.1.
  • J. M. Cohen, E. Rosenfeld, and J. Z. Kolter (2019) Certified adversarial robustness via randomized smoothing. In ICML, Cited by: §2.2, §3.2.
  • T. DeVries and G. W. Taylor (2017) Improved regularization of convolutional neural networks with cutout. External Links: 1708.04552 Cited by: §7.2.
  • X. Dong and Y. Yang (2019) Searching for a robust neural architecture in four gpu hours. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1761–1770. Cited by: §2.1, Table 1, Table 2, 3rd item.
  • T. Elsken, J. H. Metzen, and F. Hutter (2019) Efficient multi-objective neural architecture search via lamarckian evolution. In International Conference on Learning Representations, External Links: Link Cited by: §1.
  • I. Goodfellow, J. Shlens, and C. Szegedy (2015) Explaining and harnessing adversarial examples. In International Conference on Learning Representations, External Links: Link Cited by: §2.2, §5.1.
  • D. Han, J. Kim, and J. Kim (2017) Deep pyramidal residual networks. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). External Links: ISBN 9781538604571, Link, Document Cited by: item .
  • G. Huang, Z. Liu, L. v. d. Maaten, and K. Q. Weinberger (2017) Densely connected convolutional networks. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). External Links: ISBN 9781538604571, Link, Document Cited by: Table 1.
  • A. Klein, S. Falkner, J. T. Springenberg, and F. Hutter (2017) Learning curve prediction with bayesian neural networks. In ICLR, Cited by: §1.
  • M. Lecuyer, V. Atlidakis, R. Geambasu, D. Hsu, and S. Jana (2019) Certified robustness to adversarial examples with differential privacy. In 2019 IEEE Symposium on Security and Privacy (SP), pp. 656–672. Cited by: §2.2, §3.2.
  • H. Liang, S. Zhang, J. Sun, X. He, W. Huang, K. Zhuang, and Z. Li (2019) DARTS+: improved differentiable architecture search with early stopping. External Links: 1909.06035 Cited by: §1, §2.1, §3.1, 1st item, §5.5.1, §5.5.
  • C. Liu, B. Zoph, M. Neumann, J. Shlens, W. Hua, L. Li, L. Fei-Fei, A. Yuille, J. Huang, and K. Murphy (2018) Progressive neural architecture search. Lecture Notes in Computer Science, pp. 19–35. External Links: ISBN 9783030012465, ISSN 1611-3349, Link, Document Cited by: Table 1.
  • H. Liu, K. Simonyan, O. Vinyals, C. Fernando, and K. Kavukcuoglu (2017) Hierarchical representations for efficient architecture search. External Links: 1711.00436 Cited by: §1.
  • H. Liu, K. Simonyan, and Y. Yang (2019) DARTS: differentiable architecture search. In International Conference on Learning Representations, External Links: Link Cited by: §1, §5.2, §5.3, Table 1, Table 2, 2nd item, §7.2.
  • R. Luo, F. Tian, T. Qin, E. Chen, and T. Liu (2018) Neural architecture optimization. In Advances in Neural Information Processing Systems 31, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (Eds.), pp. 7816–7827. External Links: Link Cited by: Table 1.
  • A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu (2018a) Towards deep learning models resistant to adversarial attacks. In ICLR, Cited by: §3.2.
  • A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu (2018b) Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations, External Links: Link Cited by: §2.2.
  • R. Miikkulainen, J. Liang, E. Meyerson, A. Rawal, D. Fink, O. Francon, B. Raju, H. Shahrzad, A. Navruzyan, N. Duffy, and et al. (2019) Evolving deep neural networks. Artificial Intelligence in the Age of Neural Networks and Brain Computing, pp. 293–312. External Links: ISBN 9780128154809, Link, Document Cited by: §1.
  • H. Pham, M. Guan, B. Zoph, Q. Le, and J. Dean (2018) Efficient neural architecture search via parameters sharing. In Proceedings of the 35th International Conference on Machine Learning, J. Dy and A. Krause (Eds.), Proceedings of Machine Learning Research, Vol. 80, Stockholmsmässan, Stockholm Sweden, pp. 4095–4104. External Links: Link Cited by: §1, Table 1, Table 2, 1st item.
  • E. Real, A. Aggarwal, Y. Huang, and Q. V. Le (2019) Regularized evolution for image classifier architecture search. Proceedings of the AAAI Conference on Artificial Intelligence 33, pp. 4780–4789. External Links: ISSN 2159-5399, Link, Document Cited by: Table 1.
  • E. Real, S. Moore, A. Selle, S. Saxena, Y. L. Suematsu, J. Tan, Q. V. Le, and A. Kurakin (2017) Large-scale evolution of image classifiers. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML’17, pp. 2902–2911. Cited by: §1.
  • Y. Shu, W. Wang, and S. Cai (2020) Understanding architectures learnt by cell-based neural architecture search. In International Conference on Learning Representations, External Links: Link Cited by: §1, §5.5.2, §5.5, §7.4.
  • K. O. Stanley and R. Miikkulainen (2002) Evolving neural networks through augmenting topologies. Evolutionary Computation 10 (2), pp. 99–127. External Links: Document, Link, Cited by: §1.
  • C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus (2014) Intriguing properties of neural networks. In International Conference on Learning Representations, External Links: Link Cited by: §2.2.
  • R. J. Williams (1992) Simple statistical gradient-following algorithms for connectionist reinforcement learning. Mach. Learn. 8 (3–4), pp. 229–256. External Links: ISSN 0885-6125, Link, Document Cited by: 1st item.
  • S. Xie, H. Zheng, C. Liu, and L. Lin (2019) SNAS: stochastic neural architecture search. In International Conference on Learning Representations, External Links: Link Cited by: §2.1, Table 1, 3rd item.
  • Y. Xu, L. Xie, X. Zhang, X. Chen, G. Qi, Q. Tian, and H. Xiong (2020) PC-DARTS: partial channel connections for memory-efficient architecture search. In International Conference on Learning Representations, External Links: Link Cited by: §2.1, §2.1, §5.4, Table 1, 5th item.
  • Z. Yang, Z. Dai, R. Salakhutdinov, and W. W. Cohen (2018) Breaking the softmax bottleneck: a high-rank RNN language model. In International Conference on Learning Representations, External Links: Link Cited by: Table 2.
  • Q. Yao, J. Xu, W. Tu, and Z. Zhu (2020) Efficient neural architecture search via proximal iterations. In AAAI, Cited by: §2.1, Table 1, Table 2, 4th item.
  • C. Ying, A. Klein, E. Christiansen, E. Real, K. Murphy, and F. Hutter (2019) NAS-bench-101: towards reproducible neural architecture search. In Proceedings of the 36th International Conference on Machine Learning, K. Chaudhuri and R. Salakhutdinov (Eds.), Proceedings of Machine Learning Research, Vol. 97, Long Beach, California, USA, pp. 7105–7114. External Links: Link Cited by: §2.1.
  • K. Yu, C. Sciuto, M. Jaggi, C. Musat, and M. Salzmann (2020) Evaluating the search phase of neural architecture search. In International Conference on Learning Representations, External Links: Link Cited by: §1.
  • A. Zela, T. Elsken, T. Saikia, Y. Marrakchi, T. Brox, and F. Hutter (2020a) Understanding and robustifying differentiable architecture search. In International Conference on Learning Representations, External Links: Link Cited by: 2nd item, §1, §2.1, §3.1, §4, 1st item, §5.4, §5.4, §5.5.1, §5.5, Table 1, 6th item, 7th item, 8th item.
  • A. Zela, J. Siems, and F. Hutter (2020b) NAS-BENCH-1SHOT1: benchmarking and dissecting one-shot neural architecture search. In International Conference on Learning Representations, External Links: Link Cited by: §2.1, §5.1.
  • Z. Zhong, J. Yan, W. Wu, J. Shao, and C. Liu (2018) Practical block-wise neural network architecture generation. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Cited by: §1.
  • H. Zhou, M. Yang, J. Wang, and W. Pan (2019) BayesNAS: a bayesian approach for neural architecture search. In ICML, pp. 7603–7613. External Links: Link Cited by: Table 1.
  • B. Zoph and Q. V. Le (2017) Neural architecture search with reinforcement learning. External Links: Link Cited by: §1, Table 2, §7.2.
  • B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le (2018) Learning transferable architectures for scalable image recognition. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. External Links: ISBN 9781538664209, Link, Document Cited by: §1, §2.1, §2.1, §5.4, Table 1, 6th item.

7 Appendix

7.1 Descriptions of compared baselines

  • ENAS (Pham et al., 2018) first trains the shared parameters of a one-shot network. During the search phase, it samples sub-networks and uses the validation error as the reward signal to update an RNN controller following the REINFORCE (Williams, 1992) rule. Finally, several architectures are sampled under the guidance of the trained controller, and the one with the highest validation accuracy is derived.

  • DARTS (Liu et al., 2019) builds a mixture architecture similar to ENAS. The difference is that it relaxes the discrete architecture space into a continuous and differentiable representation by assigning a weight to every operation. The network weight and the architecture weight are then updated alternately via gradient descent, on the training set and the validation set respectively. For evaluation, DARTS prunes out all operations on every edge except the one with the largest architecture weight, which leaves the final architecture.

  • GDAS (Dong and Yang, 2019) uses the Gumbel-Softmax trick to activate only one operation on every edge during search; a similar technique is also applied in SNAS (Xie et al., 2019). This trick reduces the memory cost during search while keeping the property of differentiability.

  • NASP (Yao et al., 2020) is another modification of DARTS based on the proximal algorithm. A discrete version of the architecture weight is computed every search epoch by applying a proximal operation to the continuous one. The gradient of the discrete weight is then utilized to update its continuous counterpart after backpropagation.

  • PC-DARTS (Xu et al., 2020) evaluates only a random proportion of the channels. This partial channel connection not only accelerates search but also, as the authors explain, serves as a regularization that controls the bias towards parameter-free operations.

  • R-DARTS(DP) (Zela et al., 2020a) runs DARTS with different intensities of ScheduledDropPath regularization (Zoph et al., 2018) and picks the final architecture according to its performance on the validation set. In ScheduledDropPath, each path in the cell is dropped out with a probability that increases linearly over the training procedure.

  • R-DARTS(L2) (Zela et al., 2020a) runs DARTS with different amounts of L2 regularization and selects the final architecture in the same way as R-DARTS(DP). Specifically, the L2 regularization is applied to the inner loop (i.e., the network weight) of the bi-level optimization problem.

  • DARTS-ES (Zela et al., 2020a) early-stops the DARTS search procedure if the increase of the dominant eigenvalue of the Hessian of the validation loss exceeds a threshold. This prevents that eigenvalue, which is highly correlated with the stability and generalizability of DARTS, from exploding.
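
The dominant-eigenvalue quantity tracked by DARTS-ES is usually estimated by power iteration. A self-contained sketch on an explicit toy Hessian (in practice one would use Hessian-vector products via automatic differentiation rather than forming the matrix; the dense form here is only for illustration):

```python
import numpy as np

def dominant_eigenvalue(H, iters=100, seed=0):
    """Power iteration to estimate the dominant eigenvalue of a symmetric
    matrix H. DARTS-ES monitors this quantity for the Hessian of the
    validation loss w.r.t. the architecture weights."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(H.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = H @ v                 # repeated multiplication aligns v with
        v /= np.linalg.norm(v)    # the dominant eigenvector
    return v @ H @ v              # Rayleigh quotient at convergence

H = np.diag([0.1, 0.5, 3.0])      # toy Hessian with known spectrum
print(round(dominant_eigenvalue(H), 3))  # 3.0
```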

7.2 Training details on CNN standard space

For the search phase, we train the mixture architecture for 50 epochs, with the 50K CIFAR-10 training images equally split into a training and a validation set. Following (Liu et al., 2019), the network weight is optimized on the training set by an SGD optimizer with a momentum of 0.9 and a weight decay of 3e-4, where the learning rate is annealed from 0.025 to 1e-3 following a cosine schedule. Meanwhile, we use an Adam optimizer with a learning rate of 3e-4 and a weight decay of 1e-3 to learn the architecture weight on the validation set. For the evaluation phase, the macro structure consists of 20 cells and the initial number of channels is set to 36. We train the final architecture for 600 epochs using an SGD optimizer with a learning rate cosine-scheduled from 0.025 to 0, a momentum of 0.9, and a weight decay of 3e-4. The drop probability of ScheduledDropPath increases linearly from 0 to 0.2, and the auxiliary tower (Zoph and Le, 2017) is employed with a weight of 0.4. We also utilize CutOut (DeVries and Taylor, 2017) for data augmentation and report the result (mean ± std) of 4 independent runs with different random seeds.
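The cosine annealing used for both phases follows the standard closed form; a minimal sketch (the function name and step granularity are assumptions for illustration):

```python
import math

def cosine_lr(step, total_steps, lr_max=0.025, lr_min=1e-3):
    """Cosine annealing schedule: the learning rate decays from lr_max at
    step 0 to lr_min at total_steps, matching the search-phase settings
    above (0.025 -> 1e-3); the evaluation phase uses lr_min = 0."""
    cos_term = 0.5 * (1 + math.cos(math.pi * step / total_steps))
    return lr_min + (lr_max - lr_min) * cos_term

print(cosine_lr(0, 50))   # starts at lr_max
print(cosine_lr(50, 50))  # ends at lr_min
```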

7.3 Micro architecture of 4 simplified search spaces

The first space S1 contains 2 popular operators per edge as shown in Figure 6; S2 restricts the set of candidate operations on every edge to {separable convolution, skip connection}; the operation set in S3 is {separable convolution, skip connection, zero}; and S4 simplifies the set to {separable convolution, noise}.

Figure 6: Micro cell architecture of S1 ((a) normal cell, (b) reduction cell).

7.4 Definitions of cell width and depth

Specifically, the depth of a cell is the number of connections on the longest path from the input nodes to the output node, while the width of a cell is computed by summing the widths of all intermediate nodes that are directly connected to the input nodes. The width of a node is defined as the number of channels for convolutions and the feature dimension for linear operations (in (Shu et al., 2020), every intermediate node is assumed to have the same width for simplicity). In particular, if an intermediate node is only partially connected to the input nodes (i.e., it also has connections to other intermediate nodes), its width is deducted by the percentage of intermediate nodes it is connected to when computing the cell width.
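Under the simplifying assumption above (unit width per intermediate node, and ignoring the partial-connection deduction), the two quantities can be sketched as follows. The node numbering is an assumption: 0 and 1 are input nodes, 2–5 are intermediate nodes, and the output node concatenates all intermediates:

```python
# Hedged sketch of the depth/width definitions, simplified: each
# intermediate node contributes unit width, and the partial-connection
# deduction described above is omitted. A cell is a list of directed
# edges (src, dst) with src < dst, so iterating 2..5 is topological.

def cell_depth(edges, inputs={0, 1}, intermediates=(2, 3, 4, 5)):
    """Longest path (in connections) from an input node to the output
    node, where the output is connected to every intermediate node."""
    depth = {n: 0 for n in inputs}
    for node in intermediates:
        preds = [s for s, d in edges if d == node]
        depth[node] = 1 + max(depth[p] for p in preds)
    return 1 + max(depth[n] for n in intermediates)  # +1 for edge to output

def cell_width(edges, inputs={0, 1}, intermediates=(2, 3, 4, 5)):
    """Number of intermediate nodes directly connected to an input node
    (each contributing unit width in this simplified version)."""
    return sum(any(s in inputs and d == n for s, d in edges)
               for n in intermediates)

# A maximally wide, shallow cell: every intermediate connects only to inputs.
wide = [(0, 2), (1, 2), (0, 3), (1, 3), (0, 4), (1, 4), (0, 5), (1, 5)]
print(cell_width(wide), cell_depth(wide))  # 4 2
```

This illustrates the bias discussed in Section 5.5.2: the `wide` cell above has the maximal width 4 and minimal depth 2, the pattern DARTS tends to favor.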

Figure 7: Normal cells discovered by SDARTS-RS on spaces S1–S4 across CIFAR-10, CIFAR-100 and SVHN (panels (a)–(l): S1–S4 on each dataset).
Figure 8: Reduction cells discovered by SDARTS-RS on spaces S1–S4 across CIFAR-10, CIFAR-100 and SVHN (panels (a)–(l): S1–S4 on each dataset).
Figure 9: Normal cells discovered by SDARTS-ADV on spaces S1–S4 across CIFAR-10, CIFAR-100 and SVHN (panels (a)–(l): S1–S4 on each dataset).
Figure 10: Reduction cells discovered by SDARTS-ADV on spaces S1–S4 across CIFAR-10, CIFAR-100 and SVHN (panels (a)–(l): S1–S4 on each dataset).