Does Unsupervised Architecture Representation Learning Help Neural Architecture Search?

06/12/2020
by Shen Yan, et al., Michigan State University

Existing Neural Architecture Search (NAS) methods either encode neural architectures using discrete encodings that do not scale well, or adopt supervised learning-based methods to jointly learn architecture representations and optimize architecture search on such representations, which incurs search bias. Despite their widespread use, architecture representations learned in NAS are still poorly understood. We observe that the structural properties of neural architectures are hard to preserve in the latent space if architecture representation learning and search are coupled, resulting in less effective search performance. In this work, we find empirically that pre-training architecture representations using only neural architectures without their accuracies as labels considerably improves the downstream architecture search efficiency. To explain these observations, we visualize how unsupervised architecture representation learning better encourages neural architectures with similar connections and operators to cluster together. This helps to map neural architectures with similar performance to the same regions in the latent space and makes the transition of architectures in the latent space relatively smooth, which considerably benefits diverse downstream search strategies.


Code Repositories

arch2vec

[NeurIPS 2020] "Does Unsupervised Architecture Representation Learning Help Neural Architecture Search?" by Shen Yan, Yu Zheng, Wei Ao, Xiao Zeng, Mi Zhang



1 Introduction

Unsupervised representation learning has been successfully used in a wide range of domains including natural language processing Mikolov et al. (2013); Devlin et al. (2019); Radford et al. (2018), computer vision Oord et al. (2016); He et al. (2019), robotic learning Finn et al. (2016); Jang et al. (2018), and network analysis Perozzi et al. (2014); Grover and Leskovec (2016). Although these domains differ in data type, the root of their shared success is learning good data representations that are independent of the specific downstream task. In this work, we investigate unsupervised representation learning in the domain of neural architecture search (NAS), and demonstrate how NAS search spaces encoded through unsupervised representation learning can benefit downstream search strategies.

Standard NAS methods encode the search space with the adjacency matrix and focus on designing different downstream search strategies based on reinforcement learning Williams (1992), evolutionary algorithms Real et al. (2017), and Bayesian optimization Falkner et al. (2018) to perform architecture search in discrete search spaces. This encoding is simple and a natural choice since neural architectures are discrete by nature. However, the size of the adjacency matrix grows quadratically as the search space scales up, making downstream architecture search less efficient in large search spaces Elsken et al. (2019). To reduce the computational overhead, recent NAS methods employ dedicated networks to learn continuous representations of neural architectures and perform architecture search in the continuous search space Luo et al. (2018); Liu et al. (2019); Xie et al. (2019); Shi et al. (2019). In these methods, architecture representations and search strategies are jointly optimized in a supervised manner, guided by the accuracies of the architectures selected by the search strategies. However, these methods are biased towards weight-free operations (e.g. identity, max-pooling), which are often preferred early on in the search, resulting in lower final accuracies Guo et al. (2019); Shu et al. (2020); Zela et al. (2020b, a).

In this work, we propose arch2vec, a simple yet effective unsupervised architecture representation learning method for neural architecture search. arch2vec uses a variational graph isomorphism autoencoder to injectively capture the local structural information of neural architectures in the latent space and map distinct architectures into unique embeddings. It circumvents the bias caused by joint optimization by decoupling architecture representation learning and architecture search into two separate processes. By learning architecture representations using only neural architectures without their accuracies, it helps to model a more smoothly-changing architecture performance surface in the latent space. Such smoothness greatly helps the downstream search since architectures with similar performance tend to locate near each other in the latent space instead of being scattered randomly. We visualize the learned architecture representations in §4.1, which shows that architecture representations learned by arch2vec better preserve the structural similarity of local neighborhoods than their supervised architecture representation learning counterparts. In particular, they capture topology (e.g. skip connections or straight networks) and operation similarity, which helps to cluster architectures with similar accuracy.

We follow the NAS best practices checklist Lindauer and Hutter (2019) to conduct our experiments. We validate the performance of arch2vec on three commonly used NAS search spaces, NAS-Bench-101 Ying et al. (2019), NAS-Bench-201 Dong and Yang (2020), and DARTS Liu et al. (2019), and two search strategies based on reinforcement learning (RL) and Bayesian optimization (BO). Our results show that, with the same downstream search strategy, arch2vec consistently outperforms its discrete encoding and supervised architecture representation learning counterparts across all three search spaces.

Our contributions are summarized as follows:

  • Existing NAS methods typically use discrete encodings that do not scale well, or joint optimization that leverages accuracy as the supervision signal for architecture representation learning. We demonstrate that pre-training architecture representations without using their accuracies helps to build a smoother latent space w.r.t. architecture performance.

  • We propose arch2vec, a simple yet effective unsupervised architecture representation learning method that injectively maps distinct architectures to unique representations in the latent space. By decoupling architecture representation learning and architecture search into two separate processes, arch2vec is able to construct less biased architecture representations, and thus benefits the architecture sampling process in terms of efficiency and robustness.

  • The pre-trained architecture representations considerably benefit the downstream architecture search. This finding is consistent across three search spaces, two search strategies and two datasets, demonstrating the importance of unsupervised architecture representation learning for neural architecture search.

2 Related Work

Unsupervised Representation Learning of Graphs. Our work is closely related to unsupervised representation learning of graphs. In this domain, some methods learn representations using local random walk statistics and matrix factorization-based learning objectives Perozzi et al. (2014); Grover and Leskovec (2016); Tang et al. (2015); Wang et al. (2016); others either reconstruct a graph's adjacency matrix by predicting edge existence Kipf and Welling (2016); Hamilton et al. (2017) or maximize the mutual information between local node representations and a pooled graph representation Veličković et al. (2019). The expressiveness of Graph Neural Networks (GNNs) is studied in Xu et al. (2019) in terms of their ability to distinguish any two graphs. That work also introduces Graph Isomorphism Networks (GINs), which are proved to be as powerful as the Weisfeiler-Lehman test Weisfeiler and Lehman (1968) for graph isomorphism. Zhang et al. (2019) proposes an asynchronous message passing scheme to encode DAG computations using RNNs. In contrast, we injectively encode architecture structures using GINs, and we show strong pre-training performance based on their highly expressive aggregation scheme.

Regularized Autoencoders. Autoencoders can be seen as energy-based models trained with reconstruction energy LeCun et al. (2006). Our goal is to encode neural architectures with similar performance into the same regions of the latent space, and to make the transition of architectures in the latent space relatively smooth. To prevent degenerate mappings where the latent space is free of any structure, there is a rich literature on restricting the low-energy area to data points on the manifold Kavukcuoglu et al. (2010); Vincent et al. (2008); Kingma and Welling (2014); Makhzani et al. (2016); Ghosh et al. (2020). Here we adopt the popular variational autoencoder framework Kingma and Welling (2014); Kipf and Welling (2016) to optimize the variational lower bound w.r.t. the variational parameters, which, as we show in our experiments, is a simple yet effective regularization.

Neural Architecture Search (NAS). As mentioned in the introduction, typical NAS methods are built upon discrete encodings Zoph and Le (2017); Baker et al. (2017); Falkner et al. (2018); Real et al. (2019); Kandasamy et al. (2018), which face a scalability challenge in large search spaces. To address this challenge, recent NAS methods shift from conducting architecture search in discrete spaces to continuous spaces Luo et al. (2018); Shi et al. (2019); White et al. (2019); Wen et al. (2019), using different architecture encoders such as MLPs, LSTMs Hochreiter and Schmidhuber (1997) or GCNs Kipf and Welling (2017). What these methods have in common is that the architecture representation and the search direction are jointly optimized using the supervision signal (e.g. accuracies of the selected architectures), which can bias both the architecture representation learning and the search direction. There is concurrent work Liu et al. (2020) showing that architectures searched without using labels are competitive with their counterparts searched with labels. Different from their approach, which performs pretext tasks using image statistics, we use an architecture reconstruction objective to preserve local structural relationships in the latent space.

3 arch2vec

In this section, we describe the details of arch2vec, followed by two downstream architecture search strategies we use in this work.

3.1 Variational Graph Isomorphism Autoencoder

3.1.1 Preliminaries

We restrict our search space to cell-based architectures. Following the configuration in NAS-Bench-101 Ying et al. (2019), each cell is a labeled DAG G = (V, E), with V as the set of nodes and E as the set of edges. Each node is associated with a label chosen from a set of predefined operations. A natural encoding scheme of cell-based neural architectures is an upper-triangular adjacency matrix A and a one-hot operation matrix X. This discrete encoding is not unique, since permuting the adjacency matrix together with the operation matrix leads to the same graph, which is known as graph isomorphism Weisfeiler and Lehman (1968).
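
As a concrete illustration, the sketch below builds this discrete encoding for a small hypothetical cell; the operation set follows the NAS-Bench-101 convention, and the node labels are chosen arbitrarily.

```python
import numpy as np

# Hypothetical 4-node cell: input -> 1x1 conv -> 3x3 conv -> output, plus a skip
# edge from the input directly to the 3x3 conv. Nodes are topologically ordered,
# so the adjacency matrix is upper triangular.
OPS = ["input", "conv1x1", "conv3x3", "maxpool3x3", "output"]  # predefined label set

node_labels = ["input", "conv1x1", "conv3x3", "output"]
num_nodes = len(node_labels)

adjacency = np.zeros((num_nodes, num_nodes), dtype=np.int8)
adjacency[0, 1] = 1  # input -> 1x1 conv
adjacency[1, 2] = 1  # 1x1 conv -> 3x3 conv
adjacency[0, 2] = 1  # skip: input -> 3x3 conv
adjacency[2, 3] = 1  # 3x3 conv -> output

# One-hot operation matrix: one row per node, one column per predefined operation.
operations = np.zeros((num_nodes, len(OPS)), dtype=np.int8)
for i, label in enumerate(node_labels):
    operations[i, OPS.index(label)] = 1

print(adjacency)
print(operations)
```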

3.1.2 Encoder

In order to learn a continuous representation that is invariant to isomorphic graphs, we leverage Graph Isomorphism Networks (GINs) Xu et al. (2019) to encode the graph-structured architectures, given their better expressiveness. We augment the adjacency matrix as Ã = A + Aᵀ to transform the original directed graphs into undirected graphs, allowing bi-directional information flow. Similar to Kipf and Welling (2016), the inference model, i.e. the encoding part of the model, is defined as:

$q(Z \mid X, \tilde{A}) = \prod_{i=1}^{N} q(z_i \mid X, \tilde{A}), \quad q(z_i \mid X, \tilde{A}) = \mathcal{N}(z_i \mid \mu_i, \mathrm{diag}(\sigma_i^2))$    (1)

We use an L-layer GIN to get the node embedding matrix H:

$H^{(k)} = \mathrm{MLP}^{(k)}\big((1 + \epsilon^{(k)}) \cdot H^{(k-1)} + \tilde{A} H^{(k-1)}\big), \quad k = 1, \dots, L$    (2)

where H^(0) = X and ε^(k) is a trainable bias. The MLP here is a multi-layer perceptron in which each layer is a linear-batchnorm-ReLU triplet. The node embedding matrix H is then fed into two fully-connected layers to get the mean μ and the variance σ² of the posterior approximation in Eq. (1), respectively. During inference, the architecture representation is derived by summing the representation vectors of all nodes.
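
For concreteness, a minimal PyTorch sketch of this encoder is given below. The layer sizes follow the description above (five GIN layers, hidden dimension 128, embedding dimension 16), but class and variable names are illustrative rather than taken from the released arch2vec code.

```python
import torch
import torch.nn as nn

class GINLayer(nn.Module):
    """One GIN aggregation step: MLP((1 + eps) * H + A_tilde @ H), as in Eq. (2)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))   # trainable bias epsilon^(k)
        self.mlp = nn.Sequential(                 # linear-batchnorm-ReLU layers
            nn.Linear(in_dim, out_dim), nn.BatchNorm1d(out_dim), nn.ReLU(),
            nn.Linear(out_dim, out_dim), nn.BatchNorm1d(out_dim), nn.ReLU())

    def forward(self, h, adj):
        return self.mlp((1.0 + self.eps) * h + adj @ h)

class VariationalGINEncoder(nn.Module):
    """Stacked GIN layers followed by two heads producing mu and log(sigma^2) per node."""
    def __init__(self, num_ops, hidden_dim=128, latent_dim=16, num_layers=5):
        super().__init__()
        dims = [num_ops] + [hidden_dim] * num_layers
        self.layers = nn.ModuleList(GINLayer(i, o) for i, o in zip(dims, dims[1:]))
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)      # posterior mean
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)  # posterior log-variance

    def forward(self, ops, adj):
        # ops: (N, num_ops) one-hot operation matrix; adj: (N, N) augmented undirected adjacency
        h = ops
        for layer in self.layers:
            h = layer(h, adj)
        return self.fc_mu(h), self.fc_logvar(h)
```

During inference, the graph-level architecture embedding can be obtained by summing the per-node mean vectors, matching the description above.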

3.1.3 Decoder

Our decoder is a generative model aiming at reconstructing A and X from the latent variables Z:

$p(\hat{A} \mid Z) = \prod_{i=1}^{N} \prod_{j=1}^{N} p(\hat{A}_{ij} \mid z_i, z_j), \quad p(\hat{A}_{ij} = 1 \mid z_i, z_j) = \sigma(z_i^{\top} z_j)$    (3)
$p(\hat{X}_n \mid z_n) = \mathrm{softmax}(W_o z_n + b_o)$    (4)

where σ(·) is the sigmoid activation, softmax(·) is the softmax activation applied row-wise, X̂_n indicates the operation selected from the predefined set of operations at the n-th node, and W_o and b_o are the learnable weights and biases of the decoder.

3.2 Training Objective

In practice, our variational graph isomorphism autoencoder consists of a five-layer GIN and two-layer MLPs with hidden dimension 128 for each layer. We set the dimensionality of the embedding to 16. During training, model weights are learned by iteratively maximizing a tractable variational lower bound:

$\mathcal{L} = \mathbb{E}_{q(Z \mid X, \tilde{A})}\big[\log p(X, \tilde{A} \mid Z)\big] - D_{\mathrm{KL}}\big(q(Z \mid X, \tilde{A}) \,\|\, p(Z)\big)$    (5)

where p(X, Ã | Z) = p(X | Z) p(Ã | Z), as we assume that the adjacency matrix and the operation matrix are conditionally independent given the latent variable Z. The second term on the right-hand side of Eq. (5) denotes the Kullback-Leibler divergence Kullback and Leibler (1951), which measures the difference between the posterior distribution q(Z | X, Ã) and the prior distribution p(Z). Here we choose a Gaussian prior due to its simplicity. We use the reparameterization trick Kingma and Welling (2014) for training, since it can be thought of as injecting noise into the code layer. Such random noise injection has been proven to be effective for regularizing neural networks Sietsma and Dow (1991); An (1996); Kingma and Welling (2014). The loss is optimized using mini-batch gradient descent over neural architectures.
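
Putting the pieces together, a hedged sketch of one training step under Eq. (5) is shown below. It assumes the VariationalGINEncoder sketch from §3.1.2 and treats the decoder parameters w_op and b_op (the W_o and b_o of Eq. (4)) as tensors registered with the optimizer; the exact loss weighting in the released implementation may differ.

```python
import torch
import torch.nn.functional as F

def vae_training_step(encoder, w_op, b_op, ops, adj, optimizer):
    """One optimization step of the variational lower bound in Eq. (5)."""
    mu, logvar = encoder(ops, adj)
    std = torch.exp(0.5 * logvar)
    z = mu + std * torch.randn_like(std)                     # reparameterization trick
    adj_logits = z @ z.t()                                   # Eq. (3): inner-product edge decoder
    op_logits = z @ w_op + b_op                              # Eq. (4): per-node operation decoder
    recon = (F.binary_cross_entropy_with_logits(adj_logits, adj) +
             F.cross_entropy(op_logits, ops.argmax(dim=1)))  # -E_q[log p(X, A | Z)] up to constants
    kl = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1))
    loss = recon + kl
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```

Here adj is the (float) augmented adjacency matrix and ops the one-hot operation matrix of a single architecture; in practice the step is applied over mini-batches of architectures.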

3.3 Architecture Search Strategies

We use reinforcement learning (RL) and Bayesian optimization (BO) as two representative search algorithms to evaluate arch2vec on the downstream architecture search.

3.3.1 Reinforcement Learning (RL)

We use REINFORCE Williams (1992) as our RL-based search strategy, as it has been shown to converge better than more advanced RL methods such as PPO Schulman et al. (2017) for neural architecture search. We use a single-layer LSTM as the controller, whose 16-dimensional output serves as the mean vector of a Gaussian policy with a fixed identity covariance matrix. We use the validation accuracy of the sampled architecture as the reward, and decode a sampled architecture representation into a valid architecture by finding its nearest neighbor in the pre-trained latent space under the L2 distance.
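
The following sketch illustrates one REINFORCE update in the latent space; LatentPolicy, get_reward, and the embedding lookup are illustrative stand-ins for the controller, the accuracy query, and the pre-trained arch2vec embedding table.

```python
import torch
import torch.nn as nn

class LatentPolicy(nn.Module):
    """LSTM controller emitting the mean of a Gaussian policy over the 16-d latent space."""
    def __init__(self, latent_dim=16, hidden_dim=128):
        super().__init__()
        self.lstm = nn.LSTM(latent_dim, hidden_dim, num_layers=1, batch_first=True)
        self.mean_head = nn.Linear(hidden_dim, latent_dim)

    def forward(self, prev_z):
        out, _ = self.lstm(prev_z.view(1, 1, -1))
        return self.mean_head(out.squeeze())

def reinforce_step(policy, optimizer, prev_z, embeddings, get_reward, baseline=0.95):
    mean = policy(prev_z)
    dist = torch.distributions.Normal(mean, torch.ones_like(mean))  # fixed identity covariance
    z = dist.sample()
    # Decode to a valid architecture: nearest pre-trained embedding under L2 distance.
    idx = torch.cdist(z.unsqueeze(0), embeddings).argmin().item()
    reward = get_reward(idx)                        # validation accuracy of the decoded architecture
    loss = -(reward - baseline) * dist.log_prob(z).sum()
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return idx, reward
```

In the configuration of Appendix A.1, 16 architectures are sampled per episode and the baseline value is fixed at 0.95.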

3.3.2 Bayesian Optimization (BO)

We use DNGO Snoek et al. (2015) as our BO-based search strategy. We use a one-layer adaptive basis regression network with hidden dimension 128 to model distributions over functions. It serves as an alternative to Gaussian processes in order to avoid their cubic scaling Garnett et al. (2014). We use expected improvement (EI) Mockus (1977) as the acquisition function, which is widely used in NAS Kandasamy et al. (2018); White et al. (2019); Shi et al. (2019). During the search process, the best-performing architectures are selected and added to the pool. The network is retrained in the next iteration using the samples in the updated pool. This process is repeated until the maximum estimated wall-clock time is reached.
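
A high-level sketch of this BO loop is shown below. The DNGO surrogate is abstracted behind a fit_and_predict callback that returns a predictive mean and standard deviation for every candidate embedding, and evaluate(i) stands for querying the validation accuracy of the i-th architecture; this illustrates the loop structure, not the DNGO implementation itself.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_so_far):
    """EI for maximization under a Gaussian predictive distribution."""
    sigma = np.maximum(sigma, 1e-9)
    gamma = (mu - best_so_far) / sigma
    return sigma * (gamma * norm.cdf(gamma) + norm.pdf(gamma))

def bo_search(embeddings, evaluate, fit_and_predict, init=16, top_k=5, budget_queries=100):
    # embeddings: (M, d) pre-trained arch2vec representations; evaluate(i) -> validation accuracy.
    pool = list(np.random.choice(len(embeddings), init, replace=False))
    scores = {int(i): evaluate(int(i)) for i in pool}
    while len(scores) < budget_queries:
        # Retrain the surrogate on the current pool and predict mean/std for all candidates.
        mu, sigma = fit_and_predict(embeddings[pool],
                                    np.array([scores[i] for i in pool]),
                                    embeddings)
        ei = expected_improvement(mu, sigma, max(scores.values()))
        ei[pool] = -np.inf                          # exclude architectures already queried
        for i in np.argsort(-ei)[:top_k]:           # add the top-k EI candidates to the pool
            scores[int(i)] = evaluate(int(i)); pool.append(int(i))
    return max(scores, key=scores.get)
```

In our setup the pool is seeded with 16 random architectures and grown by the top-5 EI candidates per iteration (Appendix A.1).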

4 Experimental Results

We validate arch2vec on three commonly used NAS search spaces. The details of the hyperparameters we used for pre-training and search on each search space are included in Appendix A.

NAS-Bench-101. NAS-Bench-101 Ying et al. (2019) is the first rigorous NAS dataset designed for benchmarking NAS methods. It targets the cell-based search space used in many popular NAS methods Zoph et al. (2018); Liu et al. (2018, 2019) and contains 423,624 unique neural architectures. Each architecture comes with pre-computed validation and test accuracies on CIFAR-10. Each cell consists of 7 nodes and can take on any DAG structure from the input to the output with at most 9 edges, with the first node as input and the last node as output. The intermediate nodes can be either 1×1 convolution, 3×3 convolution or 3×3 max pooling. We split the dataset into 90% training and 10% held-out test sets for arch2vec pre-training.
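
For reference, querying the pre-computed accuracy of a NAS-Bench-101 cell looks roughly as follows, assuming the official nasbench package and its downloaded tfrecord file; the example cell is arbitrary.

```python
import numpy as np
from nasbench import api  # official NAS-Bench-101 API

nasbench = api.NASBench('nasbench_only108.tfrecord')  # path to the downloaded dataset

matrix = np.array([[0, 1, 1, 0, 0, 0, 0],   # input feeds two branches
                   [0, 0, 0, 1, 0, 0, 0],
                   [0, 0, 0, 0, 1, 0, 0],
                   [0, 0, 0, 0, 0, 1, 0],
                   [0, 0, 0, 0, 0, 1, 0],
                   [0, 0, 0, 0, 0, 0, 1],
                   [0, 0, 0, 0, 0, 0, 0]])
ops = ['input', 'conv1x1-bn-relu', 'conv3x3-bn-relu',
       'conv3x3-bn-relu', 'maxpool3x3', 'conv3x3-bn-relu', 'output']

spec = api.ModelSpec(matrix=matrix, ops=ops)
data = nasbench.query(spec)   # pre-computed validation/test accuracy and training time
print(data['validation_accuracy'], data['test_accuracy'])
```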

NAS-Bench-201. Different from NAS-Bench-101, the cell-based search space in NAS-Bench-201 Dong and Yang (2020) is represented as a DAG with nodes representing sums of feature maps and edges associated with operation transforms. Each DAG is generated by 4 nodes and 5 associated operations: 1×1 convolution, 3×3 convolution, 3×3 average pooling, skip connection and zero, resulting in a total of 15,625 unique neural architectures. The training details for each architecture candidate are provided for three datasets: CIFAR-10, CIFAR-100 and ImageNet-16-120 Chrabaszcz et al. (2017). We use the same data split as used in NAS-Bench-101.

DARTS search space. The DARTS search space Liu et al. (2019) is a popular search space for large-scale NAS experiments. The search space consists of two cells: a convolutional cell and a reduction cell, each with six nodes. For each cell, the first two nodes are the outputs of the previous two cells. The next four nodes each take two edges as input, creating a DAG. The network is then constructed by stacking the cells. Following Liu et al. (2018), we use the same cell for both the normal and the reduction cell, allowing roughly 10^9 DAGs without considering graph isomorphism. We randomly sample 600,000 unique architectures in this search space following the mobile setting Liu et al. (2019). We use the same data split as used in NAS-Bench-101.

In the following, we first evaluate the pre-training performance of arch2vec (§4.1) and then the neural architecture search performance based on its pre-trained representations (§4.2).

4.1 Pre-training Performance

Observation (1): We compare arch2vec with two popular baselines, GAE Kipf and Welling (2016) and VGAE Kipf and Welling (2016), using three metrics suggested by Zhang et al. (2019): 1) Reconstruction Accuracy (reconstruction accuracy on the held-out test set), 2) Validity (how often a random sample from the prior distribution generates a valid architecture), and 3) Uniqueness (unique architectures out of valid generations). As shown in Table 1, arch2vec achieves the highest reconstruction accuracy, validity, and uniqueness in all three search spaces. Encoding with GINs outperforms GCNs in reconstruction accuracy due to their better neighbor aggregation scheme. The variational formulation acts as an effective regularizer that leads to better generative performance, including validity and uniqueness. Given its superior performance, we stick to arch2vec for the remainder of our experiments.

Method                         NAS-Bench-101                     NAS-Bench-201                     DARTS
                               Accuracy  Validity  Uniqueness    Accuracy  Validity  Uniqueness    Accuracy  Validity  Uniqueness
GAE Kipf and Welling (2016)    98.75     29.88     99.25         99.52     79.28     78.42         97.80     15.25     99.65
VGAE Kipf and Welling (2016)   97.45     41.18     99.34         98.32     79.30     88.42         96.80     25.25     99.27
arch2vec                       100       51.33     99.36         100       79.41     98.72         99.79     33.36     100

Table 1: Reconstruction accuracy, validity, and uniqueness of different GNNs.
Figure 1: Predictive performance comparison between arch2vec (left) and supervised architecture representation learning (right) on NAS-Bench-101.

Observation (2): We compare arch2vec with its supervised architecture representation learning counterpart on the predictive performance of the latent representations. This metric measures how well the latent representations can predict the corresponding architectures' performance. Being able to accurately predict the performance of architectures from the latent representations makes it easier to search for high-performance points in the latent space. We train a Gaussian Process model with 250 sampled data points to predict all data and report the results across 10 different seeds. We use RMSE and the Pearson correlation coefficient (Pearson's r) to evaluate points with test accuracy larger than 0.8. Figure 1 compares the predictive performance between arch2vec and its supervised architecture representation learning counterpart on NAS-Bench-101. As shown, arch2vec outperforms its supervised learning counterpart (RMSE and Pearson's r are 0.038±0.025 / 0.53±0.09 for supervised architecture representation learning, and 0.018±0.001 / 0.67±0.02 for arch2vec; a smaller RMSE and a larger Pearson's r indicate better predictive performance), indicating that arch2vec better captures the local structure of the input space and is thus more informative for guiding the search optimization.
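
A sketch of this predictive check, using a scikit-learn Gaussian Process as a stand-in surrogate, is shown below; the 250-sample training set and the 0.8 accuracy threshold follow the text, while the remaining choices are illustrative.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def predictive_check(embeddings, accuracies, n_train=250, seed=0, threshold=0.8):
    """Fit a GP on a small sample of (embedding, accuracy) pairs and score its predictions."""
    rng = np.random.default_rng(seed)
    train_idx = rng.choice(len(embeddings), n_train, replace=False)
    gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
    gp.fit(embeddings[train_idx], accuracies[train_idx])
    pred = gp.predict(embeddings)
    mask = accuracies > threshold                  # evaluate only reasonably good architectures
    rmse = float(np.sqrt(np.mean((pred[mask] - accuracies[mask]) ** 2)))
    r, _ = pearsonr(pred[mask], accuracies[mask])
    return rmse, r
```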

Figure 2: Visualization of a sequence of architecture cells decoded from the learned latent space of arch2vec (upper) and supervised architecture representation learning (lower) on NAS-Bench-101. The two sequences start from the same point. For both sequences, each architecture is the closest point of the previous one in the latent space excluding previously visited ones. Edit distances between adjacent architectures of the upper sequence are 4, 6, 1, 5, 1, 1, 1, 5, 2, 3, 2, 4, 2, 5, 2; the average is 2.9. Edit distances between adjacent architectures of the lower sequence are 8, 6, 7, 7, 9, 8, 11, 11, 6, 10, 10, 11, 10, 11, 9; the average is 8.9.
Figure 3: Comparing the distribution of L2 distances between architecture pairs by edit distance on NAS-Bench-101, measured on 1,000 architectures sampled in a long random walk in which consecutive samples are one edit distance apart. left: arch2vec. right: supervised architecture representation learning.
Figure 4: Latent space 2D visualization van der Maaten and Hinton (2008) comparison between arch2vec (left) and supervised architecture representation learning (right) on NAS-Bench-101. Color encodes test accuracy. We randomly sample points and average the accuracy in each small area.

Observation (3): In Figure 3, we plot the relationship between the L2 distance in the latent space and the edit distance of the corresponding DAGs for pairs of architectures. For arch2vec, the L2 distance grows monotonically with increasing edit distance, indicating that arch2vec preserves the closeness between two architectures as measured by edit distance, which potentially benefits the effectiveness of the downstream search. In contrast, such closeness is not well captured by supervised architecture representation learning.
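
The quantities behind this plot can be computed directly from the discrete encodings and the learned embeddings; the sketch below uses a simplified Hamming-style edit distance over aligned adjacency and operation matrices, which may differ from the exact definition used in the paper.

```python
import numpy as np

def edit_distance(adj_a, ops_a, adj_b, ops_b):
    """Number of differing edges plus number of nodes whose operation label differs."""
    edge_changes = int(np.sum(adj_a != adj_b))
    op_changes = int(np.sum(np.argmax(ops_a, axis=1) != np.argmax(ops_b, axis=1)))
    return edge_changes + op_changes

def latent_l2(z_a, z_b):
    """L2 distance between two architecture embeddings in the latent space."""
    return float(np.linalg.norm(z_a - z_b))
```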

Observation (4): In Figure 4, we visualize the latent spaces of NAS-Bench-101 learned by arch2vec (left) and its supervised architecture representation learning counterpart (right) in 2-dimensional space using t-SNE. As shown, for arch2vec, the embeddings of architectures span the whole latent space, and architectures with similar accuracies are clustered together. Conducting architecture search on such a smoothly performance-changing latent space is much easier. In contrast, for the supervised counterpart, the embeddings are discontinuous in the latent space, and the transition of accuracy is non-smooth. This indicates that joint optimization guided by accuracy cannot injectively encode architecture structures. As a result, an architecture does not have a unique embedding in the latent space, making the task of architecture search more challenging.

Observation (5): To provide a closer look at the learned latent space, Figure 2 visualizes the architecture cells decoded from the latent space of arch2vec (upper) and supervised architecture representation learning (lower). For arch2vec, adjacent architectures change smoothly and embrace similar connections and operations. This indicates that unsupervised architecture representation learning helps to model a smoothly-changing structure surface. As we show in the next section, such smoothness greatly helps the downstream search since architectures with similar performance tend to locate near each other in the latent space instead of being scattered randomly. In contrast, the supervised counterpart does not group similar connections and operations well and has much higher edit distances between adjacent architectures. This biases the search direction since dependencies between architecture structures cannot be captured.

4.2 Neural Architecture Search (NAS) Performance

NAS results on NAS-Bench-101. For fair comparison, we reproduced the NAS methods which use the adjacency matrix-based encoding in Ying et al. (2019), including Random Search (RS) Bergstra and Bengio (2012), Regularized Evolution (RE) Real et al. (2019), REINFORCE Williams (1992) and BOHB Falkner et al. (2018). For the supervised learning-based search methods, the hyperparameters are the same as for arch2vec, except that the architecture representation learning and search are jointly optimized. Figure 5 and Table 2 summarize our results.

Figure 5: Comparison of NAS performance between discrete encoding, supervised architecture representation learning, and arch2vec on NAS-Bench-101. The plot shows the mean test regret (left) and the empirical cumulative distribution of the final test regret (right) of 500 independent runs given a fixed wall-clock time budget.

NAS Methods                        #Queries   Test Accuracy (%)   Encoding      Search Method
Random Search Ying et al. (2019)   1000       93.54               Discrete      Random
RL Ying et al. (2019)              1000       93.58               Discrete      REINFORCE
BO Ying et al. (2019)              1000       93.72               Discrete      Bayesian Optimization
RE Ying et al. (2019)              1000       93.72               Discrete      Evolution
NAO Luo et al. (2018)              1000       93.74               Supervised    Gradient Descent
BANANAS White et al. (2019)        500        94.08               Supervised    Bayesian Optimization
RL (ours)                          400        93.74               Supervised    REINFORCE
BO (ours)                          400        93.79               Supervised    Bayesian Optimization
arch2vec-RL                        400        94.10               Unsupervised  REINFORCE
arch2vec-BO                        400        94.05               Unsupervised  Bayesian Optimization

Table 2: Comparison of NAS performance between arch2vec and SOTA methods on NAS-Bench-101. It reports the mean performance of 500 independent runs given the number of queried architectures.

Observation (1): BOHB and RE are the two best-performing search methods using the adjacency matrix-based encoding. However, as shown in Figure 5, they perform slightly worse than supervised architecture representation learning because the relatively high-dimensional discrete input tends to require more observations for the optimization. In contrast, supervised architecture representation learning focuses on low-dimensional continuous optimization and thus makes the search more efficient.

Observation (2): As shown in Figure 5 (left), arch2vec considerably outperforms its supervised counterpart and the adjacency matrix-based encoding as the wall-clock time budget grows. Figure 5 (right) further shows that arch2vec robustly achieves the lowest final test regret within the given budget across 500 independent runs.

Observation (3): Table 2 shows the search performance comparison in terms of the number of architecture queries. Notably, while RL-based search using the discrete encoding suffers from a scalability issue, arch2vec encodes architectures into a lower-dimensional continuous space and achieves competitive RL-based search performance with only a simple single-layer LSTM controller.

NAS results on NAS-Bench-201. For CIFAR-10, we follow the same protocol established in NAS-Bench-201 by searching based on the validation accuracy obtained after 12 training epochs with converged learning rate scheduling, under a fixed wall-clock search budget. The NAS experiments on CIFAR-100 and ImageNet-16-120 are conducted with budgets that correspond to the same number of queries as used for CIFAR-10. As shown in Table 3, searching with arch2vec leads to better validation and test accuracy as well as reduced variability among different runs on all datasets.

NAS Methods                      CIFAR-10                   CIFAR-100                  ImageNet-16-120
                                 validation   test          validation   test          validation   test
RE Real et al. (2019)            91.08±0.43   93.84±0.43    73.02±0.46   72.86±0.55    45.78±0.56   45.63±0.64
RS Bergstra and Bengio (2012)    90.94±0.38   93.75±0.37    72.17±0.64   72.05±0.77    45.47±0.65   45.33±0.79
REINFORCE Williams (1992)        91.03±0.33   93.82±0.31    72.35±0.63   72.13±0.79    45.58±0.62   45.30±0.86
BOHB Falkner et al. (2018)       90.82±0.53   93.61±0.52    72.59±0.82   72.37±0.90    45.44±0.70   45.26±0.83
arch2vec-RL                      91.32±0.42   94.12±0.42    73.13±0.72   73.15±0.78    46.22±0.30   46.16±0.38
arch2vec-BO                      91.41±0.22   94.18±0.24    73.35±0.32   73.37±0.30    46.34±0.18   46.27±0.37

Table 3: The mean and standard deviation of the validation and test accuracy (%) for different algorithms on the three datasets in NAS-Bench-201. The results are calculated over 500 independent runs.

NAS results on the DARTS search space. Similar to White et al. (2019), we set the computational budget to 100 queries in this search space. In each query, a sampled architecture is trained for 50 epochs and the average validation error of the last 5 epochs is computed. To ensure a fair comparison under the same hyperparameter setup, we re-trained the final architectures reported by papers that use exactly the DARTS search space. As shown in Table 4, arch2vec generally leads to competitive search performance among different cell-based NAS methods with comparable model parameters. The best-performing cells and transfer learning results on ImageNet Deng et al. (2009) are included in Appendix C.

NAS Methods                       Test Error (%)         Params (M)   Search Cost (GPU days)           Encoding      Search Method
                                  Avg         Best                    Stage 1         Stage 2   Total
Random Search Liu et al. (2019)   3.29±0.15   -           3.2          -               -         4       -             Random
ENAS Pham et al. (2018)           -           2.89        4.6          0.5             -         -       Supervised    REINFORCE
ASHA Li and Talwalkar (2019)      3.03±0.13   2.85        2.2          -               -         9       -             Random
RS WS Li and Talwalkar (2019)     2.85±0.08   2.71        4.3          2.7             6         8.7     -             Random
SNAS Xie et al. (2019)            2.85±0.02   -           2.8          1.5             -         -       Supervised    GD
DARTS Liu et al. (2019)           2.76±0.09   -           3.3          4               1         5       Supervised    GD
BANANAS White et al. (2019)       2.64        2.57        3.6          100 (queries)   -         11.8    Supervised    BO
Random Search (ours)              3.1±0.18    2.71        3.2          -               -         4       -             Random
DARTS (ours)                      2.71±0.08   2.63        3.3          4               1.2       5.2     Supervised    GD
BANANAS (ours)                    2.67±0.07   2.61        3.6          100 (queries)   1.3       11.5    Supervised    BO
arch2vec-RL                       2.65±0.05   2.60        3.3          100 (queries)   1.2       9.5     Unsupervised  REINFORCE
arch2vec-BO                       2.56±0.05   2.48        3.6          100 (queries)   1.3       10.5    Unsupervised  BO

Table 4: Comparison with state-of-the-art cell-based NAS methods on DARTS search space using CIFAR-10. The test error is averaged over 5 seeds. Stage 1 shows the GPU days (or number of queries) for model search and Stage 2 shows the GPU days for model evaluation.

5 Conclusion

arch2vec is a simple yet effective unsupervised architecture representation learning method for neural architecture search. By learning architecture representations without using their accuracies, it helps to model a more smoothly-changing architecture performance surface in the latent space compared to its supervised architecture representation learning counterpart, which further benefits different downstream search strategies. We have demonstrated its effectiveness on three NAS search spaces. We suggest that it is desirable to take a closer look at architecture representation learning for neural architecture search. It is possible that combining arch2vec with a better search strategy in the continuous space will give even better results.

References

  • [1] G. An (1996) The effects of adding noise during backpropagation training on a generalization performance. In Neural Computation, Cited by: §3.2.
  • [2] B. Baker, O. Gupta, N. Naik, and R. Raskar (2017) Designing neural network architectures using reinforcement learning. In ICLR, Cited by: §2.
  • [3] J. Bergstra and Y. Bengio (2012) Random search for hyper-parameter optimization. In JMLR, Cited by: §4.2, Table 3.
  • [4] P. Chrabaszcz, I. Loshchilov, and F. Hutter (2017) A downsampled variant of imagenet as an alternative to the cifar datasets. In arXiv:1707.08819, Cited by: §4.
  • [5] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei (2009) ImageNet: A Large-Scale Hierarchical Image Database. In CVPR, Cited by: Appendix C, §4.2.
  • [6] J. Devlin, M. Chang, K. Lee, and K. Toutanova (2019) Bert: pre-training of deep bidirectional transformers for language understanding. In ACL, Cited by: §1.
  • [7] X. Dong and Y. Yang (2020) NAS-Bench-201: extending the scope of reproducible neural architecture search. In ICLR, Cited by: §A.2, Appendix A, §1, §4.
  • [8] T. Elsken, J. H. Metzen, and F. Hutter (2019) Neural architecture search: a survey. In JMLR, Cited by: §1.
  • [9] S. Falkner, A. Klein, and F. Hutter (2018) BOHB: robust and efficient hyperparameter optimization at scale. In ICML, Cited by: §1, §2, §4.2, Table 3.
  • [10] C. Finn, I. Goodfellow, and S. Levine (2016) Unsupervised learning for physical interaction through video prediction. In NeurIPS, Cited by: §1.
  • [11] R. Garnett, M. A. Osborne, and P. Hennig (2014) Active learning of linear embeddings for gaussian processes. In UAI, Cited by: §3.3.2.
  • [12] P. Ghosh, M. S. M. Sajjadi, A. Vergari, M. Black, and B. Scholkopf (2020) From variational to deterministic autoencoders. In ICLR, Cited by: §2.
  • [13] A. Grover and J. Leskovec (2016) Node2vec: scalable feature learning for networks. In ACM SIGKDD, Cited by: §1, §2.
  • [14] Z. Guo, X. Zhang, H. Mu, W. Heng, Z. Liu, Y. Wei, and J. Sun (2019) Single path one-shot neural architecture search with uniform sampling. In arXiv:1904.00420, Cited by: §1.
  • [15] W. Hamilton, Z. Ying, and J. Leskovec (2017) Inductive representation learning on large graphs. In NeurIPS, Cited by: §2.
  • [16] K. He, H. Fan, Y. Wu, S. Xie, and R. Girshick (2019) Momentum contrast for unsupervised visual representation learning. In arXiv:1911.05722, Cited by: §1.
  • [17] S. Hochreiter and J. Schmidhuber (1997) Long short-term memory. In Neural Computation, Cited by: §2.
  • [18] E. Jang, C. Devin, V. Vanhoucke, and S. Levine (2018) Grasp2vec: learning object representations from self-supervised grasping. In arXiv:1811.06964, Cited by: §1.
  • [19] K. Kandasamy, W. Neiswanger, J. Schneider, B. Poczos, and E. Xing (2018) Neural architecture search with bayesian optimisation and optimal transport. In NeurIPS, Cited by: §2, §3.3.2.
  • [20] K. Kavukcuoglu, P. Sermanet, Y. Boureau, K. Gregor, M. Mathieu, and Y. L. Cun (2010) Learning convolutional feature hierarchies for visual recognition. In NeurIPS, Cited by: §2.
  • [21] D. P. Kingma and J. Ba (2015) Adam: a method for stochastic optimization. In ICLR, Cited by: §A.1, §A.1.
  • [22] D. P. Kingma and M. Welling (2014) Auto-encoding variational bayes. In ICLR, Cited by: §2, §3.2.
  • [23] T. N. Kipf and M. Welling (2016) Variational graph auto-encoders. In NeurIPS Workshop, Cited by: §2, §2, §3.1.2, §4.1, Table 1.
  • [24] T. N. Kipf and M. Welling (2017) Semi-supervised classification with graph convolutional networks. In ICLR, Cited by: §2.
  • [25] S. Kullback and R. A. Leibler (1951) On information and sufficiency. In Annals of Mathematical Statistics, Cited by: §3.2.
  • [26] Y. LeCun, S. Chopra, R. Hadsell, and F. J. Huang (2006) A tutorial on energy-based learning. In Predicting Structured Data, Cited by: §2.
  • [27] L. Li and A. Talwalkar (2019) Random search and reproducibility for neural architecture search. In UAI, Cited by: Appendix C, Table 4.
  • [28] M. Lindauer and F. Hutter (2019) Best practices for scientific research on neural architecture search. In arXiv:1909.02453, Cited by: §1.
  • [29] C. Liu, P. Dollár, K. He, R. Girshick, A. Yuille, and S. Xie (2020) Are labels necessary for neural architecture search?. In arXiv:2003.12056, Cited by: §2.
  • [30] C. Liu, B. Zoph, M. Neumann, J. Shlens, W. Hua, L. Li, L. Fei-Fei, A. Yuille, J. Huang, and K. Murphy (2018) Progressive neural architecture search. In ECCV, Cited by: §A.3, §4, §4.
  • [31] H. Liu, K. Simonyan, and Y. Yang (2019) DARTS: differentiable architecture search. In ICLR, Cited by: §A.3, §A.3, Appendix A, Table 5, Appendix C, §1, §1, Table 4, §4, §4.
  • [32] R. Luo, F. Tian, T. Qin, E. Chen, and T. Liu (2018) Neural architecture optimization. In NeurIPS, Cited by: §1, §2, Table 2.
  • [33] A. Makhzani, J. Shlens, N. Jaitly, and I. Goodfellow (2016) Adversarial autoencoders. In ICLR, Cited by: §2.
  • [34] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean (2013) Distributed representations of words and phrases and their compositionality. In NeurIPS, Cited by: §1.
  • [35] J. Mockus (1977) On bayesian methods for seeking the extremum and their application.. In IFIP Congress, Cited by: §3.3.2.
  • [36] A. v. d. Oord, N. Kalchbrenner, O. Vinyals, L. Espeholt, A. Graves, and K. Kavukcuoglu (2016) Conditional image generation with pixelcnn decoders. In NeurIPS, Cited by: §1.
  • [37] B. Perozzi, R. Al-Rfou, and S. Skiena (2014) DeepWalk: online learning of social representations. In ACM SIGKDD, Cited by: §1, §2.
  • [38] H. Pham, M. Y. Guan, B. Zoph, Q. V. Le, and J. Dean (2018) Efficient neural architecture search via parameter sharing. In ICML, Cited by: Table 4.
  • [39] A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever (2018) Improving language understanding by generative pre-training. In OpenAI Blog, Cited by: §1.
  • [40] I. Radosavovic, J. Johnson, S. Xie, W. Lo, and P. Dollár (2019) On network design spaces for visual recognition. In ICCV, Cited by: Appendix C.
  • [41] E. Real, A. Aggarwal, Y. Huang, and Q. V. Le (2019) Regularized evolution for image classifier architecture search. In AAAI, Cited by: Table 5, Appendix C, §2, §4.2, Table 3.
  • [42] E. Real, S. Moore, A. Selle, S. Saxena, Y. L. Suematsu, J. Tan, Q. V. Le, and A. Kurakin (2017) Large-scale evolution of image classifiers. In ICML, Cited by: §1.
  • [43] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov (2017) Proximal policy optimization algorithms. In arXiv:1707.06347, Cited by: §3.3.1.
  • [44] H. Shi, R. Pi, H. Xu, Z. Li, J. T. Kwok, and T. Zhang (2019) Efficient sample-based neural architecture search with learnable predictor. In arXiv:1911.09336, Cited by: §1, §2, §3.3.2.
  • [45] Y. Shu, W. Wang, and S. Cai (2020) Understanding architectures learnt by cell-based neural architecture search. In ICLR, Cited by: §1.
  • [46] J. Sietsma and R. J. Dow (1991) Creating artificial neural networks that generalize. In Neural Networks, Cited by: §3.2.
  • [47] J. Snoek, O. Rippel, K. Swersky, R. Kiros, N. Satish, N. Sundaram, M. Patwary, M. Prabhat, and R. Adams (2015) Scalable bayesian optimization using deep neural networks. In ICML, Cited by: §A.1, §3.3.2.
  • [48] J. Tang, M. Qu, M. Wang, M. Zhang, J. Yan, and Q. Mei (2015) LINE: large-scale information network embedding.. In WWW, Cited by: §2.
  • [49] L. van der Maaten and G. Hinton (2008) Visualizing data using t-SNE. In JMLR, Cited by: Figure 4.
  • [50] P. Veličković, W. Fedus, W. L. Hamilton, P. Liò, Y. Bengio, and R. D. Hjelm (2019) Deep Graph Infomax. In ICLR, Cited by: §2.
  • [51] P. Vincent, H. Larochelle, Y. Bengio, and P. Manzagol (2008) Extracting and composing robust features with denoising autoencoders. In ICML, Cited by: §2.
  • [52] D. Wang, P. Cui, and W. Zhu (2016) Structural deep network embedding. In KDD, Cited by: §2.
  • [53] B. Weisfeiler and A. Lehman (1968) A reduction of a graph to a canonical form and an algebra arising during this reduction.. In Nauchno-Technicheskaya Informatsia, Cited by: §2, §3.1.1.
  • [54] W. Wen, H. Liu, H. Li, Y. Chen, G. Bender, and P. Kindermans (2019) Neural predictor for neural architecture search. In arXiv:1912.00848, Cited by: §2.
  • [55] C. White, W. Neiswanger, and Y. Savani (2019) BANANAS: bayesian optimization with neural architectures for neural architecture search. In arXiv:1910.11858, Cited by: §2, §3.3.2, §4.2, Table 2, Table 4.
  • [56] R. J. Williams (1992) Simple statistical gradient-following algorithms for connectionist reinforcement learning. In Machine Learning, Cited by: §A.1, §1, §3.3.1, §4.2, Table 3.
  • [57] S. Xie, H. Zheng, C. Liu, and L. Lin (2019) SNAS: stochastic neural architecture search. In ICLR, Cited by: Table 5, Appendix C, §1, Table 4.
  • [58] K. Xu, W. Hu, J. Leskovec, and S. Jegelka (2019) How powerful are graph neural networks?. In ICLR, Cited by: §2, §3.1.2.
  • [59] C. Ying, A. Klein, E. Christiansen, E. Real, K. Murphy, and F. Hutter (2019) NAS-Bench-101: towards reproducible neural architecture search. In ICML, Cited by: §A.1, Appendix A, §1, §3.1.1, §4.2, Table 2, §4.
  • [60] A. Zela, T. Elsken, T. Saikia, Y. Marrakchi, T. Brox, and F. Hutter (2020) Understanding and robustifying differentiable architecture search. In ICLR, Cited by: §1.
  • [61] A. Zela, J. Siems, and F. Hutter (2020) NAS-bench-1shot1: benchmarking and dissecting one-shot neural architecture search. In ICLR, Cited by: §1.
  • [62] M. Zhang, S. Jiang, Z. Cui, R. Garnett, and Y. Chen (2019) D-vae: a variational autoencoder for directed acyclic graphs. In NeurIPS, Cited by: Appendix B, §2, §4.1.
  • [63] B. Zoph and Q. V. Le (2017) Neural architecture search with reinforcement learning. In ICLR, Cited by: §2.
  • [64] B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le (2018) Learning transferable architectures for scalable image recognition. In CVPR, Cited by: Table 5, Appendix C, §4.

Appendix A Pre-training and search details on each search space

As described in §3, we use the adjacency matrix A and the operation matrix X as inputs to our neural architecture encoder (§3.1). In this section, we present pre-training and search details for the NAS-Bench-101 [59], NAS-Bench-201 [7] and DARTS [31] search spaces.

a.1 NAS-Bench-101

We followed the encoding scheme in NAS-Bench-101 [59]. Specifically, a cell in NAS-Bench-101 is represented as a directed acyclic graph (DAG) where nodes represent operations and edges represent data flow. A 7×7 upper-triangular binary matrix is used to encode edges. A 7×5 one-hot operation matrix is used to encode operations, input, and output, ordered as {input, 1×1 conv, 3×3 conv, 3×3 max-pool (MP), output}. For cells with fewer than 7 nodes, their adjacency and operation matrices are padded with trailing zeros. Figure 6 shows an example of a 7-node cell in the NAS-Bench-101 search space and its corresponding adjacency and operation matrices.

We use a five-layer Graph Isomorphism Network (GIN) with hidden sizes {128, 128, 128, 128, 16} as the encoder and two-layer MLPs with hidden dimension 128 for each layer as the decoder. The adjacency matrix is preprocessed into an undirected graph to allow bi-directional information flow. After forwarding the inputs through the model, the reconstruction error is minimized using the Adam optimizer [21]. We train the model with batch size 32, and the training loss converges well after 20 epochs. After training, we extract the architecture embeddings from the encoder for downstream architecture search.

For RL-based search, we use REINFORCE [56] as the search strategy. We use a single-layer LSTM with hidden dimension 128 as the controller, whose 16-dimensional output serves as the mean vector of the Gaussian policy with a fixed identity covariance matrix. The controller is optimized using the Adam optimizer [21]. The number of sampled architectures in each episode is set to 16, the discount factor is set to 0.8, and the baseline value is set to 0.95. The maximum estimated wall-clock time for each run is fixed in advance.

For BO-based search, we use DNGO [47] as the search strategy. We use a one-layer fully connected network with hidden dimension 128 to perform adaptive basis function regression. We randomly sample 16 architectures at the beginning, and add the top 5 best-performing architectures to the architecture pool in each architecture sampling iteration. The network is retrained on the samples in the updated pool using the Adam optimizer for 100 epochs in each architecture sampling iteration. The best function value used in expected improvement (EI) is set to 0.95. We use the same time budget as in RL-based search.

Figure 6: An example of the cell encoding in NAS-Bench-101 search space. The left panel shows the DAG of a 7-node cell. The top-right and bottom-right panels show its corresponding adjacency matrix and operation matrix respectively.

a.2 NAS-Bench-201

Different from NAS-Bench-101, NAS-Bench-201 [7] employs a fixed cell-based DAG representation of neural architectures, where nodes represent the sum of feature maps and edges are associated with operations that transform the feature maps from the source node to the destination node. To represent the architectures in NAS-Bench-201 with a discrete encoding that is compatible with our neural architecture encoder, we first transform the original DAG in NAS-Bench-201 into a DAG whose nodes represent operations and whose edges represent data flow, as in NAS-Bench-101. We then use the same discrete encoding scheme as in NAS-Bench-101 to encode each cell into an adjacency matrix and an operation matrix. An example is shown in Figure 7. The hyperparameters we used for pre-training on NAS-Bench-201 are the same as described in §A.1.

For RL-based search, the search is stopped when it reaches the predefined time budget for CIFAR-10, CIFAR-100, and ImageNet-16-120, respectively. For CIFAR-10, we follow the same protocol established in NAS-Bench-201 by searching based on the validation accuracy obtained after 12 training epochs with converged learning rate scheduling. The discount factor and the baseline value are set to 0.4. All the other hyperparameters are the same as described in §A.1.

For BO-based search, we initially sample 16 architectures and add the best-performing architecture to the pool in each iteration. The best function value used in EI is set to 1.0 for all datasets. We use the same search budget as in RL-based search. All the other hyperparameters are the same as described in §A.1.

Figure 7: An example of the cell encoding in NAS-Bench-201 search space. The top-left and top-right panels show the original and transformed representations of a cell. The bottom-left and bottom-right panels show its corresponding adjacency matrix and operation matrix respectively.

a.3 DARTS Search Space

The cell in the DARTS search space has the following property: two input nodes are from the output of two previous cells; each intermediate node is connected by two predecessors, with each connection associated with one operation; the output node is the concatenation of all of the intermediate nodes within the cell [31].

Based on these properties, an upper-triangular binary matrix is used to encode edges and a one-hot operation matrix is used to encode operations, ordered as {input (c_{k-2}), input (c_{k-1}), zero, 3×3 max-pool, 3×3 average-pool, identity, 3×3 separable conv, 5×5 separable conv, 3×3 dilated conv, 5×5 dilated conv, output (c_k)}. An example is shown in Figure 8. Following [30], we use the same cell for both the normal and the reduction cell, allowing roughly 10^9 DAGs without considering graph isomorphism. We randomly sample 600,000 unique architectures in this search space following the mobile setting [31]. The hyperparameters we used for pre-training on the DARTS search space are the same as described in §A.1.

We set the computational budget to 100 architecture queries in this search space. In each query, a sampled architecture is trained for 50 epochs and the average validation accuracy of the last 5 epochs is computed. All the other hyperparameters used for RL-based and BO-based search are the same as described in §A.1.

Figure 8: An example of the cell encoding in DARTS search space. The top panel shows the cell. The bottom-left and bottom-right panels show its corresponding adjacency matrix and operation matrix respectively.

Appendix B More details on pre-training evaluation metrics

We split the dataset into 90% training and 10% held-out test sets for arch2vec pre-training on each search space. In §4.1, we evaluate the pre-training performance of arch2vec using three metrics suggested by [62]: 1) Reconstruction Accuracy (reconstruction accuracy on the held-out test set), which measures how well the embeddings can be mapped back to the original structures without error; 2) Validity (how often a random sample from the prior distribution generates a valid architecture), which measures the generative ability of the model; and 3) Uniqueness (unique architectures out of valid generations), which measures the smoothness and diversity of the generated samples.

To compute Reconstruction Accuracy, we report the proportion of decoded neural architectures in the held-out test set that are identical to the inputs. To compute Validity, we randomly pick 10,000 points z generated by the Gaussian prior and then apply z · std(Z_train) + mean(Z_train), where Z_train are the encoded means of the training data. This scales the sampled points and shifts them to the center of the embeddings of the training set. We report the proportion of the decoded architectures that are valid in the search space. To compute Uniqueness, we report the proportion of unique architectures out of the valid decoded architectures.
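
The Validity computation can be sketched as follows; decode_to_architecture and is_valid are placeholders for the trained decoder and the per-search-space validity check described below.

```python
import numpy as np

def validity(train_means, decode_to_architecture, is_valid, n_samples=10000, seed=0):
    """Fraction of prior samples that decode to a valid architecture."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_samples, train_means.shape[1]))   # samples from the Gaussian prior
    z = z * train_means.std(axis=0) + train_means.mean(axis=0)   # scale/shift to the training embedding region
    decoded = [decode_to_architecture(zi) for zi in z]
    return float(np.mean([is_valid(arch) for arch in decoded]))
```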

The validity check criteria vary across search spaces. For NAS-Bench-101 and NAS-Bench-201, we use the NAS-Bench-101 (https://github.com/google-research/nasbench/blob/master/nasbench/api.py) and NAS-Bench-201 (https://github.com/D-X-Y/NAS-Bench-201/blob/master/nas_201_api/api.py) official APIs to verify whether a decoded architecture is valid in the search space. For the DARTS search space, a decoded architecture has to pass the following validity checks: 1) the first two nodes must be the input nodes c_{k-2} and c_{k-1}; 2) the last node must be the output node c_k; 3) except for the two input nodes, there are no nodes without any predecessor; 4) except for the output node, there are no nodes without any successor; 5) each intermediate node must receive exactly two edges from previous nodes; and 6) the adjacency matrix has to be upper-triangular and binary (representing a DAG).
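
These checks translate into a simple matrix test, sketched below for a decoded (adjacency, operation) pair; input_op_ids and output_op_id denote the columns of the one-hot operation matrix that correspond to the two input labels and the output label, and the helper name is illustrative.

```python
import numpy as np

def is_valid_darts_cell(adj, ops, input_op_ids, output_op_id):
    """Applies checks (1)-(6) from the text to a decoded (adjacency, operation) pair."""
    n = adj.shape[0]
    labels = np.argmax(ops, axis=1)
    if not (np.isin(adj, [0, 1]).all() and np.array_equal(adj, np.triu(adj, k=1))):
        return False                                   # (6) upper-triangular binary matrix (a DAG)
    if tuple(labels[:2]) != tuple(input_op_ids) or labels[-1] != output_op_id:
        return False                                   # (1)-(2) inputs first, output last
    if np.any(adj[:, 2:].sum(axis=0) == 0):
        return False                                   # (3) non-input nodes need a predecessor
    if np.any(adj[:-1, :].sum(axis=1) == 0):
        return False                                   # (4) non-output nodes need a successor
    if np.any(adj[:, 2:n - 1].sum(axis=0) != 2):
        return False                                   # (5) intermediates have exactly two incoming edges
    return True
```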

(a) arch2vec-RL
(b) arch2vec-BO
Figure 9: Best cell found by arch2vec using (a) RL-based and (b) BO-based search strategy.

Appendix C Best found cells and transfer learning results on ImageNet

Figure 9 shows the best cell found by arch2vec using the RL-based and BO-based search strategies. As observed in [40], the shapes of normalized empirical distribution functions (EDFs) for NAS design spaces on ImageNet [5] match their CIFAR-10 counterparts. This suggests that NAS design spaces developed on CIFAR-10 are transferable to ImageNet [40]. Therefore, we evaluate the performance on ImageNet of the best cell found on CIFAR-10 using arch2vec. To compare in a fair manner, we consider the mobile setting [64, 41, 31], where the number of multiply-add operations of the model is restricted to be less than 600M. We follow [27] and use exactly the same training hyperparameters as in the DARTS paper [31]. Table 5 shows the transfer learning results on ImageNet. With comparable computational complexity, arch2vec-RL and arch2vec-BO outperform the DARTS [31] and SNAS [57] methods in the DARTS search space, and are competitive among all cell-based NAS methods under this setting.

NAS Methods        Params (M)   Mult-Adds (M)   Top-1 Test Error (%)   Comparable Search Space
NASNet-A [64]      5.3          564             26.0                   Y
AmoebaNet-A [41]   5.1          555             25.5                   Y
PNAS [30]          5.1          588             25.8                   Y
SNAS [57]          4.3          522             27.3                   Y
DARTS [31]         4.7          574             26.7                   Y
arch2vec-RL        4.8          533             25.8                   Y
arch2vec-BO        5.2          580             25.5                   Y

Table 5: Transfer learning results on ImageNet.

Appendix D More visualization results of each search space

NAS-Bench-101. In Figure 10, we visualize three randomly selected pairs of sequences of architecture cells decoded from the learned latent space of arch2vec (upper) and supervised architecture representation learning (lower) on NAS-Bench-101. Each pair starts from the same point, and each architecture is the closest point to the previous one in the latent space, excluding previously visited ones. As shown, architecture representations learned by arch2vec better capture topology and operation similarity than those learned by supervised architecture representation learning. In particular, Figure 10 (a) and (b) show that arch2vec is able to better cluster straight networks, while supervised learning encodes straight networks and networks with skip connections together in the latent space.

NAS-Bench-201. Similarly, Figure 11 shows the visualization of five randomly selected pairs of sequences of decoded architecture cells using arch2vec (upper) and supervised architecture representation learning (lower) on NAS-Bench-201. The red mark denotes the change of operations between consecutive samples. Note that the edge flow in NAS-Bench-201 is fixed; only the operator associated with each edge can be changed. As shown, arch2vec leads to a smoother local change of operations than its supervised architecture representation learning counterpart.

DARTS Search Space. For the DARTS search space, we can only visualize the decoded architecture cells using arch2vec since there is no architecture accuracy recorded in this large-scale search space. Figure 12 shows an example of the sequence of decoded neural architecture cells using arch2vec. As shown, the edge connections of each cell remain unchanged in the decoded sequence, and the operation associated with each edge is gradually changed. This indicates that arch2vec preserves the local structural similarity of neighborhoods in the latent space.

(a) arch2vec (upper) and supervised architecture representation learning (lower).
(b) arch2vec (upper) and supervised architecture representation learning (lower).
(c) arch2vec (upper) and supervised architecture representation learning (lower).
Figure 10: Visualization of decoded neural architecture cells on NAS-Bench-101.
(a) arch2vec (upper) and supervised architecture representation learning (lower).
(b) arch2vec (upper) and supervised architecture representation learning (lower).
(c) arch2vec (upper) and supervised architecture representation learning (lower).
(d) arch2vec (upper) and supervised architecture representation learning (lower).
(e) arch2vec (upper) and supervised architecture representation learning (lower).
Figure 11: Visualization of decoded neural architecture cells on NAS-Bench-201.
Figure 12: Visualization of decoded neural architecture cells using arch2vec on DARTS search space. It starts from a randomly sampled point. Each architecture in the sequence is the closest point of the previous one in the latent space excluding previously visited ones.