A Semi-Supervised Assessor of Neural Architectures

05/14/2020 · Yehui Tang et al. · Huawei Technologies Co., Ltd., The University of Sydney, Peking University

Neural architecture search (NAS) aims to automatically design deep neural networks of satisfactory performance. In NAS, an architecture performance predictor is critical for efficiently evaluating intermediate neural architectures, but training such a predictor usually requires collecting a number of neural architectures together with their real performance. In contrast with classical performance predictors optimized in a fully supervised way, this paper suggests a semi-supervised assessor of neural architectures. We employ an auto-encoder to discover meaningful representations of neural architectures. Taking each neural architecture as an individual instance in the search space, we construct a graph to capture their intrinsic similarities, where both labeled and unlabeled architectures are involved. A graph convolutional neural network is introduced to predict the performance of architectures based on the learned representations and the relations modeled by the graph. Extensive experimental results on the NAS-Bench-101 dataset demonstrate that our method significantly reduces the number of fully trained architectures required for finding efficient architectures.


1 Introduction

The impressive successes in computer vision tasks, such as image classification [12, 10, 11], detection [4] and segmentation [44], heavily depend on an effective design of the backbone deep neural networks, which are usually over-parameterized for the sake of effectiveness. Instead of resorting to human expert experience, the Neural Architecture Search (NAS) framework focuses on automatically selecting hyper-parameters and designing appropriate network architectures.

There has been a large body of work on NAS, which can be roughly divided into two categories. Combinatorial optimization methods search architectures in a discrete space by generating, evaluating and selecting different architectures, e.g., Evolutionary Algorithm (EA) based methods [30] and Reinforcement Learning (RL) based methods [45]. The other kind of NAS methods is based on continuous optimization: the original search space is relaxed to a continuous one so that gradient-based optimization can be applied [25, 21, 3, 36, 37].

In NAS, obtaining the exact performance of an architecture often takes hours or even days of training. Reducing the number of training epochs or introducing a weight-sharing mechanism can alleviate the prohibitive computational cost, but results in inaccurate performance estimates for the architectures. Recently, some studies collect many network architectures with known real performance on specific tasks and train a performance predictor [5, 33]. Once trained, the predictor can be applied to evaluate intermediate searched architectures in NAS, reducing the overall evaluation cost of an individual architecture from hours to milliseconds.

A major bottleneck in obtaining a satisfactory architecture performance predictor is the collection of a large annotated training set. Given the expensive cost of annotating a neural architecture with its real performance, the training set for the performance predictor is often small, which easily leads to over-fitting. Existing methods insist on training the performance predictor in a fully supervised way and neglect the significance of the neural architectures without annotations. In the search space of NAS, a large number of valid neural architectures can be sampled with ease. Though their real performance is unknown, their architectural similarity with the annotated architectures conveys invaluable information for optimizing the performance predictor.

In this paper, we propose to assess neural architectures in a semi-supervised way, training the architecture predictor with as few fully trained networks as possible. Specifically, a very small proportion of architectures are randomly selected and trained on the target dataset to obtain their ground-truth labels. With the help of massive unlabeled architectures, an auto-encoder is used to discover meaningful representations. Then we construct a relation graph involving both labeled and unlabeled architectures to capture the intrinsic similarities between architectures. The GCN assessor takes the learned representations of all these architectures and the relation graph as input to predict the performance of the unlabeled architectures. The entire system containing the auto-encoder and the GCN assessor can be trained in an end-to-end manner. Extensive experimental results on the NAS-Bench-101 dataset [41] demonstrate the superiority of the proposed semi-supervised assessor for searching efficient neural architectures.

This paper is organized as follows: Sec. 2 briefly reviews existing performance predictors, analyzes their pros and cons, and introduces NAS, GCN and the auto-encoder. Sec. 3 details the proposed method. Experiments conducted on the NAS-Bench-101 dataset are reported in Sec. 4. Finally, Sec. 5 concludes the paper.

2 Related Works

In this section, we first review current NAS methods and performance predictors, and then introduce the classical GCN and auto-encoder.

2.1 Neural Architecture Search (NAS)

The current NAS framework for obtaining desired DNNs involves two sub-problems, i.e., the design of the search space and the search method.

A well-defined search space is extremely important for NAS, and there are mainly three kinds of search spaces in state-of-the-art NAS methods. The first is the cell-based search space [29, 45, 46, 23]: once a cell structure is searched, it is used in all layers across the network by stacking multiple cells. Each cell contains several blocks, and each block contains two branches, with each branch applying an operation to the output of one of the former blocks; the outputs of the two branches are added to form the final output of the block. The second is the Directed Acyclic Graph (DAG) based search space [41]. The difference between the cell-based and DAG-based search spaces is that the latter does not restrict the number of branches, so the numbers of inputs and outputs of a node in the cell are not limited. The third is the factorized hierarchical search space [35, 36, 9], which allows different layer architectures in different blocks.

Besides the search space, most NAS research focuses on developing efficient search methods, which can be divided into combinatorial optimization methods and continuous optimization methods [24, 39, 38, 25]. Combinatorial optimization methods include Evolutionary Algorithm (EA) based methods [24, 27, 30, 31, 40] and Reinforcement Learning (RL) based methods [45, 46, 1]. Continuous optimization methods include DARTS [25], which makes the search space continuous by relaxing the categorical choice of a particular operation to a softmax over all possible operations, and several one-shot methods that solve the problem in a one-shot procedure [29]. Recently, architecture datasets with substantial fully trained neural architectures have also been proposed to compare different NAS methods conveniently and fairly [41, 7, 42].

2.2 NAS Predictor

There are only limited works focusing on predicting network performance. Some previous works address hyper-parameter optimization with Gaussian Processes [34] and focus on developing optimization functions to better evaluate hyper-parameters. Other methods directly predict the performance of a given network architecture. The first way is to predict the final accuracy from part of the learning curve with a mixture of parametric functions [6], a Bayesian Neural Network [17] or ν-SVR [2]. The second way is to predict the performance of a network with a dedicated predictor. Deng et al. [5] extract the features of a given network architecture layer by layer and send the variable-length features to an LSTM to predict the final accuracy. Istrate et al. [13] predict the accuracy in a similar manner with a random forest, arguing that a random forest requires less training data. Luo et al. [26] propose an end-to-end manner by using an encoder to extract features of the networks; the learned features are optimized with gradient descent and then decoded into new architectures with a decoder, and the architecture derived in this way is regarded as an optimal architecture with high performance.

Figure 1: Performance prediction pipeline of the proposed semi-supervised assessor. Both labeled and unlabeled architectures are sent to the auto-encoder to get meaningful representations. Then a relation graph is constructed to capture architecture similarities based on the learned representations. Both the representations and the relation graph are sent to the GCN assessor, which outputs the estimated performance of the architectures. The entire system can be trained end-to-end.

2.3 Graph Convolutional Network (GCN)

GCN is a prevalent technique for tackling data generated from non-Euclidean domains and represented as graphs with complex relations. Sperduti et al. [32] first tackled DAGs with neural networks, and recently GCNs have achieved state-of-the-art performance in multiple tasks, such as citation networks [16], social networks [20] and point cloud analysis [43]. Both graph-level and node-level tasks can be tackled with GCNs. For a graph-level task, each graph is treated as an individual example, and the GCN predicts the labels of graphs. For node-level tasks, the examples are seen as vertices of a graph that reflects the relations between them, and the labels of the examples are predicted by the GCN with the help of the graph. Beyond the features of the examples, the graph provides extra valuable information and improves prediction accuracy.

3 Approach

Consider a search space $\mathcal{X}$ with $N$ architectures, where $\mathcal{X}_l = \{x_1, x_2, \cdots, x_L\}$ are annotated architectures with corresponding ground-truth performance $\mathcal{Y}_l = \{y_1, y_2, \cdots, y_L\}$, and $\mathcal{X}_u = \{x_{L+1}, \cdots, x_N\}$ are the remaining massive unlabeled architectures. The assessor $f$ takes an architecture $x_i$ as input and outputs the estimated performance $\hat{y}_i = f(x_i; \theta)$, where $\theta$ denotes the trainable parameters of the assessor $f$. Given a sufficiently large labeled architecture set as training data, the assessor could be trained in a supervised manner to fit the ground-truth performance [5, 33], i.e.,

$$\min_{\theta} \; \frac{1}{L} \sum_{i=1}^{L} \big\| f(x_i; \theta) - y_i \big\|_2^2, \qquad (1)$$

where $\|\cdot\|_2$ denotes the $\ell_2$ norm. However, due to the limitation of time and computational resources, only very few architectures can be trained from scratch to get their ground-truth performance, which is not enough to support the training of an accurate predictor. Meanwhile, there are massive architectures without annotations that can participate in the prediction process. The similarity between architectures provides extra information that makes up for the insufficiency of labeled architectures and helps train a more accurate performance predictor.

3.1 Architecture Embedding

Before sending neural architectures to the performance predictor, we need an encoder $E$ to get an appropriate embedding of the architectures. There are already some common hand-crafted representations of architectures for specific search spaces. For example, Ying et al. [41] represent the architectures in a Directed Acyclic Graph (DAG) based search space with adjacency matrices, where 0 represents no connection between two nodes and non-zero integers denote the operation types. Though these hand-crafted representations can describe different architectures, they are usually too redundant and noisy to express the intrinsic properties of architectures. In contrast with this manual approach, we aim to discover more effective representations of neural architectures with an auto-encoder.
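To make the hand-crafted representation concrete, the following sketch encodes a small DAG cell in the spirit described above; the 5-node cell, the operation ids and the flattening are illustrative assumptions rather than the exact NAS-Bench-101 encoding.

```python
# Illustrative hand-crafted encoding: entry (i, j) is 0 when nodes i and j
# are not connected, and an integer operation id otherwise. The cell size
# and operation vocabulary below are invented for illustration.
import numpy as np

OPS = {1: "conv1x1", 2: "conv3x3", 3: "maxpool3x3"}

# Upper-triangular matrix: node i feeds node j with operation OPS[m[i, j]].
m = np.array([
    [0, 2, 1, 0, 0],
    [0, 0, 0, 2, 0],
    [0, 0, 0, 3, 0],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0],
])
x = m.flatten().astype(np.float32)  # flattened vector fed to the encoder E
```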

A classical auto-encoder [14] contains two modules: the encoder $E$ and the decoder $D$. $E$ takes the hand-crafted representations of both labeled architectures $\mathcal{X}_l$ and unlabeled architectures $\mathcal{X}_u$ as input and maps them to a low-dimensional space. The learned compact representations are then sent to the decoder $D$ to reconstruct the original inputs. The auto-encoder is trained as:

$$\min_{W_e, W_d} \; \mathcal{L}_{rec} = \frac{1}{N} \sum_{i=1}^{N} \big\| D(E(x_i; W_e); W_d) - x_i \big\|_2^2, \qquad (2)$$

where $W_e$ and $W_d$ are the trainable parameters of the encoder $E$ and the decoder $D$, respectively (with slight abuse of notation, $x_i$ in Eq. (2) also denotes the hand-crafted representation of the architecture). The features $\hat{x}_i = E(x_i; W_e)$ learned by the auto-encoder can be more compact representations of architectures. Most importantly, the auto-encoder can be optimized together with the predictor in an end-to-end manner, which enables the features $\hat{x}_i$ to be more compatible with the assessor $f$ for predicting the performance of architectures.
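As a minimal sketch of how the encoder $E$ and decoder $D$ of Eq. (2) could be realized: the snippet below uses a small multi-layer perceptron over flattened matrix representations, whereas the paper stacks two convolutional layers and a fully-connected layer; all layer sizes here are assumptions.

```python
# Minimal auto-encoder sketch for Eq. (2). All dimensions (input_dim=49 for
# a flattened 7x7 matrix, hidden=64, embedding=32) are illustrative.
import torch
import torch.nn as nn

class ArchAutoEncoder(nn.Module):
    def __init__(self, input_dim=49, hidden_dim=64, embed_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, embed_dim))
        self.decoder = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim))

    def forward(self, x):
        z = self.encoder(x)      # learned compact representation x_hat
        x_rec = self.decoder(z)  # reconstruction of the hand-crafted input
        return z, x_rec

# Reconstruction loss L_rec of Eq. (2) over a batch of flattened matrices.
x = torch.rand(8, 49)            # 8 hypothetical architecture encodings
model = ArchAutoEncoder()
z, x_rec = model(x)
loss_rec = ((x_rec - x) ** 2).mean()
```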

3.2 Semi-supervised Architecture Assessor

The architectures in a search space are not independent, and there are intrinsic relations between them. For example, an architecture can always be obtained by slightly modifying a very ‘similar’ architecture, e.g., replacing an operation type, adding/removing an edge, or changing the width/depth. Most importantly, beyond the limited labeled architectures, the massive unlabeled architectures in the search space are also helpful for training the assessor $f$ because of their underlying connections with the labeled architectures. Though obtaining the real performance of all architectures is impossible, exploiting the large volume of unlabeled architectures and exploring the intrinsic constraints underlying different architectures makes up for the insufficiency of labeled architectures.

Based on the learned representations of architectures, we adopt the common Radial Basis Function (RBF) [8] to define the similarity between architectures $x_i$ and $x_j$, i.e.,

$$s(x_i, x_j) = \exp\left( -\frac{d(\hat{x}_i, \hat{x}_j)^2}{2\sigma^2} \right), \qquad (3)$$

where $d(\cdot, \cdot)$ denotes a distance measure (e.g., the Euclidean distance) and $\sigma$ is a scale factor. $s(x_i, x_j)$ ranges in $[0, 1]$ and $s(x_i, x_i) = 1$. When the distance between representations $\hat{x}_i$ and $\hat{x}_j$ becomes larger, the similarity decreases rapidly.

Given this similarity measurement, the relation between architectures can be easily modeled by a graph $G$, where each vertex denotes an architecture and the edges reflect the similarities between architectures. Both labeled and unlabeled architectures are involved in the graph $G$. Denote the adjacency matrix of graph $G$ as $A$, where $A_{ij} = s(x_i, x_j)$ if $s(x_i, x_j)$ exceeds a threshold $\tau$ and $A_{ij} = 0$ otherwise. Note that $A_{ii} = 1$, i.e., there are self-connections in graph $G$. Two similar architectures thus tend to locate close to each other in the graph and are connected by edges with large weights. Architectures connected by edges have direct relations, while disconnected architectures interact with each other implicitly via other vertices. This accords with the intuition that two very different architectures can be connected by some intermediate architectures.
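A possible implementation of Eq. (3) and the thresholded adjacency matrix $A$ is sketched below; $\sigma = 0.01$ follows the implementation details in Sec. 4, while the threshold value is a placeholder.

```python
# Sketch of the graph construction: RBF similarity over learned
# representations, thresholded into an adjacency matrix with self-connections.
import torch

def build_relation_graph(z, sigma=0.01, tau=0.5):
    # z: (n, d) learned representations of labeled + unlabeled architectures
    dist = torch.cdist(z, z, p=2)                      # pairwise Euclidean distances
    sim = torch.exp(-dist.pow(2) / (2 * sigma ** 2))   # s(x_i, x_j) in [0, 1]
    adj = torch.where(sim > tau, sim, torch.zeros_like(sim))
    return adj                                         # A_ii = 1 since d(x_i, x_i) = 0
```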

To utilize both the limited labeled architectures and the massive unlabeled architectures, with their similarities modeled by the graph $G$, we construct the assessor by stacking multiple graph convolutional layers [16], which take the learned representations of both labeled and unlabeled architectures as inputs. The graph is also embedded into each layer and guides the information propagation between the features of different architectures. Taking all these architectures as a whole and utilizing the relations between them, the assessor outputs their estimated performance. An assessor composed of two graph convolutional layers is:

$$[\hat{Y}_l, \hat{Y}_u] = \hat{A} \, \mathrm{ReLU}\big( \hat{A} \hat{X} W_0 \big) W_1, \qquad (4)$$

where $\hat{X}$ denotes the learned representations of both labeled and unlabeled architectures, and $\hat{Y}_l$ and $\hat{Y}_u$ are their estimated performance, respectively. $\hat{A} = \tilde{D}^{-\frac{1}{2}} A \tilde{D}^{-\frac{1}{2}}$, where $\tilde{D}$ is a diagonal matrix with $\tilde{D}_{ii} = \sum_j A_{ij}$, and $W_0$, $W_1$ are the weight matrices.

As shown in Eq. (4), the output of the assessor depends not only on the input representations but also on the neighboring architectures in the graph through the adjacency matrix $A$, and thus the performance prediction processes of labeled and unlabeled architectures interact with each other. In fact, a GCN can be considered a Laplacian smoothing operator [22]: intuitively, two connected nodes on the graph tend to have similar features and produce similar outputs. As both labeled and unlabeled architectures are sent to the predictor simultaneously, their intermediate features interrelate with each other.
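The following sketch implements the two-layer GCN of Eq. (4) with the symmetric normalization $\hat{A} = \tilde{D}^{-1/2} A \tilde{D}^{-1/2}$; the hidden width and the scalar output head are assumptions.

```python
# A two-layer GCN assessor matching Eq. (4): Y = A_hat ReLU(A_hat X W0) W1.
import torch
import torch.nn as nn

def normalize_adj(adj):
    # A_hat = D^(-1/2) A D^(-1/2), where D_ii = sum_j A_ij
    d_inv_sqrt = adj.sum(dim=1).clamp(min=1e-12).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)

class GCNAssessor(nn.Module):
    def __init__(self, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.w0 = nn.Linear(embed_dim, hidden_dim, bias=False)
        self.w1 = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, x, adj):
        a_hat = normalize_adj(adj)
        h = torch.relu(a_hat @ self.w0(x))  # first graph convolution
        y = a_hat @ self.w1(h)              # second graph convolution
        return y.squeeze(-1)                # estimated performance per node
```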

The assessor is trained to fit the ground-truth performance of the labeled architectures based on both the architectures themselves and the relations between them, i.e.,

$$\min_{\theta} \; \mathcal{L}_{reg} = \frac{1}{L} \sum_{i=1}^{L} \big\| \hat{y}_i - y_i \big\|_2^2, \qquad (5)$$

where $\hat{y}_i$ is the performance of labeled architecture $x_i$ estimated via Eq. (4) and $\theta$ is the trainable parameters of the assessor $f$. Though the supervised loss is only applied to the labeled architectures, the unlabeled architectures also participate in the performance prediction of the labeled ones via the relation graph $G$, and thus the supervision information from the limited performance labels can guide the feature generation process of the unlabeled architectures. Intuitively, the labels propagate along the edges of the relation graph $G$, modulated by the lengths of paths and the weights of edges. Moreover, the training process teaches the predictor to estimate the performance of a given architecture with the assistance of its neighbors in the graph $G$, which makes the prediction more robust and improves the prediction accuracy.

3.3 Optimization

The auto-encoder and the assessor constitute an end-to-end system, which learns the representations of architectures and predicts their performance simultaneously. As shown in Figure 1, the hand-crafted representations of both labeled architectures $\mathcal{X}_l$ and unlabeled architectures $\mathcal{X}_u$ are first delivered to the encoder $E$ to produce the learned representations $\hat{X}$, and then the relation graph $G$ is constructed based on these representations via Eq. (3). Both the representations and the relation graph are sent to the GCN assessor to get the estimated performance $[\hat{Y}_l, \hat{Y}_u]$. In the training phase, the learned representations are also sent to the decoder $D$ to reconstruct the original inputs. Combining the regression loss $\mathcal{L}_{reg}$ that fits the ground-truth performance and the reconstruction loss $\mathcal{L}_{rec}$, the entire system is trained as:

$$\min_{\theta, W_e, W_d} \; \mathcal{L}_{reg} + \lambda \, \mathcal{L}_{rec}, \qquad (6)$$

where $\lambda$ is a hyper-parameter that balances the two types of loss functions. In the end-to-end system, the learning of architecture representations and the performance prediction promote each other. The regression loss $\mathcal{L}_{reg}$ focuses on fitting the ground-truth performance of the labeled architectures and propagating labels to the unlabeled architectures, which also makes the learned representations more strongly related to the ground-truth performance. The reconstruction loss $\mathcal{L}_{rec}$ refines information from the massive unlabeled architectures to supplement the limited labeled examples and makes the training process more robust. Note that the unlabeled architectures participate in the optimization of both the regression loss $\mathcal{L}_{reg}$ and the reconstruction loss $\mathcal{L}_{rec}$, and play an important role.

When applying the proposed semi-supervised assessor to a large search space containing massive architectures, it is inefficient to construct a single graph over all the architectures: building the graph requires calculating the similarity of every pair of architectures, which is time-consuming, and storing such a graph also needs a large amount of memory. Mini-batching is a common strategy for tackling big data in deep learning [19], and we propose to construct the graph and train the entire system with mini-batches. For each mini-batch, labeled and unlabeled architectures are randomly sampled from $\mathcal{X}_l$ and $\mathcal{X}_u$, and the graph is constructed on those examples. Thus the entire system can be trained efficiently with stochastic gradient descent on memory-limited GPUs. The mini-batch training procedure is presented in Algorithm 1.

Input: Search space $\mathcal{X} = \mathcal{X}_l \cup \mathcal{X}_u$, and the ground-truth performance $\mathcal{Y}_l$ of the labeled architectures.
1:  repeat
2:     Randomly select labeled and unlabeled architectures from $\mathcal{X}_l$ and $\mathcal{X}_u$ respectively to form a mini-batch;
3:     Send the architectures to the encoder $E$ and get the learned representations $\hat{x}_i$;
4:     Calculate the similarities between architectures via Eq. (3) and construct the relation graph $G$;
5:     Send the learned representations and the relation graph $G$ to the GCN assessor and output the estimated performance $\hat{y}_i$;
6:     Calculate the regression loss $\mathcal{L}_{reg}$ via Eq. (5);
7:     Send the learned representations to the decoder $D$ and calculate the reconstruction loss $\mathcal{L}_{rec}$ via Eq. (2);
8:     Calculate the final loss $\mathcal{L} = \mathcal{L}_{reg} + \lambda \mathcal{L}_{rec}$;
9:     Backward and update the parameters of the encoder $E$, the assessor $f$ and the decoder $D$;
10:  until convergence
Output: The trained encoder $E$ and assessor $f$.
Algorithm 1 Training of the semi-supervised assessor.
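A compact PyTorch sketch of one iteration of Algorithm 1, reusing the auto-encoder, graph construction and GCN sketches above; the mini-batch composition and optimizer settings are illustrative, not the authors' exact configuration.

```python
# One training step of Algorithm 1, assuming ArchAutoEncoder, GCNAssessor
# and build_relation_graph from the sketches above are in scope.
import torch

autoenc, assessor = ArchAutoEncoder(), GCNAssessor()
opt = torch.optim.Adam(list(autoenc.parameters()) + list(assessor.parameters()),
                       lr=1e-3)
lam = 0.5  # weight lambda of Eq. (6), set to 0.5 in the paper

def train_step(x_labeled, y_labeled, x_unlabeled):
    x = torch.cat([x_labeled, x_unlabeled], dim=0)      # step 2: mini-batch
    z, x_rec = autoenc(x)                               # step 3: representations
    adj = build_relation_graph(z)                       # step 4: relation graph G
    y_hat = assessor(z, adj)                            # step 5: estimated performance
    n_l = x_labeled.size(0)
    loss_reg = ((y_hat[:n_l] - y_labeled) ** 2).mean()  # step 6: Eq. (5)
    loss_rec = ((x_rec - x) ** 2).mean()                # step 7: Eq. (2)
    loss = loss_reg + lam * loss_rec                    # step 8: Eq. (6)
    opt.zero_grad()
    loss.backward()                                     # step 9: update E, f, D
    opt.step()
    return loss.item()
```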
L      Criteria   Peephole [5]   E2EPP [33]   Ours
1k     KTau       0.4373         0.5705       0.6541
       MSE        –              –            –
       r          –              –            –
10k    KTau       –              –            –
       MSE        –              –            –
       r          –              –            –
100k   KTau       –              –            –
       MSE        –              –            –
       r          –              –            –
Table 1: Comparison of performance prediction results on NAS-Bench-101.

4 Experiments

In this section, we conduct extensive experiments to validate the effectiveness of the proposed semi-supervised assessor. First, the performance prediction accuracy of our method is compared with that of several state-of-the-art methods. Then we embed the proposed assessor and its peer competitors into a combinatorial search algorithm (an evolutionary algorithm) to identify architectures with good performance. Ablation studies are also conducted to further analyze the proposed method.

Dataset. NAS-Bench-101 [41] is the largest public architecture dataset for NAS research proposed recently, containing 423k unique CNN architectures trained on CIFAR-10 [18] for image classification; the best architecture achieves a test accuracy of 94.23%. The search space of NAS-Bench-101 is a feed-forward structure stacked by blocks, and each block is constructed by stacking the same cell 3 times. As all network architectures in the search space have been trained completely to get their ground-truth performance, different performance prediction methods can be compared fairly and comprehensively on NAS-Bench-101. A more detailed description of the dataset can be found in [41]. Besides NAS-Bench-101, we also construct a small architecture dataset on CIFAR-100 to verify the effectiveness of the methods on different datasets.

Implementation details. The encoder $E$ is constructed by stacking two convolutional layers followed by a fully-connected layer, and the decoder $D$ is the reverse. The inputs of $E$ are the matrix representations of architectures following [41, 38]. The assessor consists of two graph convolutional layers and outputs the predicted performance. The scale factor $\sigma$ and the threshold $\tau$ for constructing the graph $G$ are fixed ($\sigma = 0.01$), and $\lambda$ in Eq. (6) is set to 0.5 empirically. The entire system is trained end-to-end with the Adam optimizer [15] without weight decay for 200 epochs (the auto-encoder is first pre-trained as initialization to stabilize optimization). The batch size and initial learning rate are set to 1024 and 0.001, respectively. All experiments are conducted with the PyTorch library [28] on NVIDIA V100 GPUs.

4.1 Comparison of Prediction Accuracies

We compare the proposed method with the state-of-the-art predictor-based methods Peephole [5] and E2EPP [33]. Since the main function of a performance predictor is to identify better architectures in the search space, an accurate performance ranking of architectures is more important than the absolute predicted values. The Kendall's Tau coefficient (KTau), ranging in $[-1, 1]$, is a common indicator measuring the correlation between the ranking of predicted values and that of the actual labels; higher values mean more accurate prediction. Two other common criteria, the mean square error (MSE) and the correlation coefficient (r), are also compared for completeness. MSE measures the deviation of predictions from the ground truth directly, and r measures the degree of correlation between predicted values and true labels.
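For reference, the three criteria could be computed as below; we read r as the Pearson correlation coefficient, which is an assumption about the exact definition used.

```python
# Evaluation criteria for a performance predictor: rank correlation (KTau),
# mean square error (MSE), and linear correlation (r).
import numpy as np
from scipy.stats import kendalltau, pearsonr

def evaluate_predictor(y_pred, y_true):
    ktau, _ = kendalltau(y_pred, y_true)  # rank correlation in [-1, 1]
    mse = np.mean((np.asarray(y_pred) - np.asarray(y_true)) ** 2)
    r, _ = pearsonr(y_pred, y_true)       # linear correlation in [-1, 1]
    return ktau, mse, r
```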

The experimental results are shown in Table 1. We randomly sample $L$ architectures from the search space (containing 423k architectures) as labeled examples, with $L$ varying over {1k, 10k, 100k}. All possible architectures are available once the search space is given, and thus the remaining architectures are used as unlabeled ones, i.e., $\mathcal{X}_u = \mathcal{X} \setminus \mathcal{X}_l$. As shown in Table 1, the proposed semi-supervised assessor surpasses the state-of-the-art methods on all three criteria for different numbers of labeled examples. For example, with 1k labeled architectures, the KTau of our method reaches 0.6541, which is 0.2168 higher than Peephole (0.4373) and 0.0836 higher than E2EPP (0.5705), meaning a more accurate predicted ranking. The correlation coefficient is also improved by 0.1227 and 0.0773, respectively, indicating higher correlation between predicted values and ground-truth labels when using our method. The improved performance comes from a more thorough exploitation of the information in the search space, which makes up for the insufficiency of labeled data. Note that increasing $L$ improves the performance of all these methods, but the computational cost of training these architectures also increases. Thus, the balance between the performance of the predictor and the computational cost of obtaining labeled examples needs to be considered in practice.

The qualitative results are shown in Figure 2. For clarity, 5k architectures are randomly sampled and shown in the scatter diagrams. The $x$-coordinate of each point (architecture) is its ground-truth ranking and the $y$-coordinate is its predicted ranking. For our method, the points are much closer to the diagonal line, implying stronger consistency between the predicted ranking and the ground-truth ranking. Both the numerical criteria and the intuitive diagrams show that our method surpasses the state-of-the-art methods.

(a) Peephole (b) E2EPP (c) Ours
Figure 2: Predicted rankings of architectures versus the corresponding true rankings on the NAS-Bench-101 dataset. The $x$-axis denotes the true ranking and the $y$-axis the predicted ranking.

4.2 Searching Results on NAS-Bench-101

The performance predictors can be embedded into various architecture search algorithms [33], such as random search, Reinforcement Learning (RL) based methods [45] and Evolutionary Algorithm (EA) based methods [30]. Taking EA-based methods as an example, the performance predicted by the predictor serves as the fitness, while the other steps, including population generation, crossover and mutation, remain unchanged, as sketched below. Since we focus on the design of performance predictors, we embed the different prediction methods into an EA to find architectures with high performance. Concretely, we compare the best performance among the top-10 architectures selected by each method, and all methods are repeated 20 times with different random seeds.
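A bare-bones version of such an EA loop, with the predictor supplying the fitness, might look as follows; the mutate operator and the population sizes are hypothetical, and any standard EA over the NAS-Bench-101 encoding could be substituted.

```python
# Evolutionary search using predicted performance as the fitness function.
import random

def evolutionary_search(init_population, predict_fitness, mutate,
                        generations=100, population_size=50, k=10):
    population = list(init_population)
    for _ in range(generations):
        scored = sorted(population, key=predict_fitness, reverse=True)
        parents = scored[:population_size // 2]        # selection by fitness
        children = [mutate(random.choice(parents))     # mutation only, for brevity
                    for _ in range(population_size - len(parents))]
        population = parents + children
    # return the top-k architectures ranked by predicted performance
    return sorted(population, key=predict_fitness, reverse=True)[:k]
```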

The performance of the best architecture selected by each method is shown in Table 2. The second column gives the accuracy of the architecture on the CIFAR-10 dataset and the third column its real performance ranking among all architectures of NAS-Bench-101. The best network identified by the proposed semi-supervised assessor achieves 94.01% accuracy, outperforming the compared methods (93.41% for Peephole and 93.77% for E2EPP) by a large margin, since the proposed method makes a more accurate estimation of performance and thereby identifies architectures with better performance. Though only 1k architectures are sampled to train the predictor, it can still find architectures whose real performance is in the top 0.01% of the search space. Compared with the global best architecture (94.23%), obtained by exhaustively enumerating all possible architectures in the search space, the 94.01% obtained by our method with only 1k labeled architectures is comparable.

We further show the best architectures identified by the different methods in Figure 3. These well-performing architectures share some common characteristics, e.g., they contain both a very short path (e.g., of length 1) and a long path from the first node to the last. The long path, consisting of multiple operations, ensures the representation ability of the network, and the short path lets gradients propagate easily to the shallow layers. The architecture identified by our method (Figure 3(c)) also contains a max-pooling layer in the longest path to enlarge the receptive field, which may be a reason for its better performance.

Method         Top-1 Accuracy (%)   Ranking (%)
Peephole [5]   93.41±0.34           1.64
E2EPP [33]     93.77±0.13           0.15
Ours           94.01±0.12           0.01
Table 2: Classification accuracies on CIFAR-10 and the performance ranking among all architectures of NAS-Bench-101. 1k architectures randomly selected from NAS-Bench-101 are used as annotated examples.
Figure 3: Visualization of the best network architectures selected by different methods. 1k architectures randomly selected from NAS-Bench-101 are used as annotated examples.

4.3 Experiments on CIFAR-100 Dataset

To verify the effectiveness of the proposed semi-supervised assessor on different datasets, we further conduct experiments on the common object classification dataset CIFAR-100. Since there is no architecture dataset with ground-truth performance on CIFAR-100, we randomly sample 1k architectures from the search space of NAS-Bench-101 and train them completely from scratch using the same training hyper-parameters as in [41]. With these 1k labeled architectures, the different performance prediction methods are embedded into the EA to find the best architecture. As CIFAR-100 contains 100 categories, we compare both top-1 and top-5 accuracies. The best performance among the top-10 selected architectures is compared, and all methods are repeated 20 times with different random seeds.

Method         Top-1 Accuracy (%)   Top-5 Accuracy (%)
Peephole [5]   74.21±0.32           92.04±0.15
E2EPP [33]     75.86±0.19           93.11±0.10
Ours           78.64±0.16           94.23±0.08
Table 3: Classification accuracies of the best network architectures on CIFAR-100 selected by different methods. 1k network architectures trained on CIFAR-100 are used as annotated examples.
Figure 4: Visualization of the best network architectures selected by different methods. 1k network architectures trained on CIFAR-100 are used as annotated examples.

The accuracies and the diagrams of the selected architectures are shown in Table 3 and Figure 4, respectively. The best architecture identified by our method achieves much higher performance (78.64% top-1 and 94.23% top-5) than the state-of-the-art methods (e.g., 75.86% top-1 and 93.11% top-5 for E2EPP), implying that exploring the relations between architectures and utilizing the massive unlabeled examples in the proposed method works well on different datasets.

Figure 5: Performance prediction results of the proposed semi-supervised assessor w.r.t. (a) the scale factor $\sigma$, (b) the weight $\lambda$, and (c) the number of unlabeled architectures.

4.4 Ablation study

The impact of the scale factor $\sigma$. The hyper-parameter $\sigma$ impacts the similarity measurement in Eq. (3) and thereby the construction of the graph. With a fixed threshold $\tau$, a denser graph is constructed with a larger $\sigma$, and more interaction between different architectures takes place when predicting performance with the GCN assessor. The prediction results with different scale factors $\sigma$ are shown in Figure 5(a), which verify the effectiveness of utilizing unlabeled architectures with a relation graph to train a more accurate performance predictor. However, an excessive $\sigma$ also incurs a drop of accuracy in Figure 5(a), as putting too much attention on other architectures disturbs the supervised training process.

The impact of the weight $\lambda$. The weight $\lambda$ balances the regression loss $\mathcal{L}_{reg}$ and the reconstruction loss $\mathcal{L}_{rec}$. When the reconstruction loss does not participate in the training process ($\lambda = 0$), the prediction accuracies (KTau and r) are lower than those obtained with the reconstruction loss, as shown in Figure 5(b), since the information in the massive unlabeled architectures is not well preserved when constructing the learned architecture representations.

The number of unlabeled architectures. The unlabeled architectures provide extra information to assist the training of the architecture assessor. As shown in Figure 5(c), with an increasing number of unlabeled architectures, both criteria KTau and r increase correspondingly, indicating more accurate performance prediction. The improvement comes from the additional information provided by the unlabeled architectures. When the number of unlabeled architectures is already enough to reflect the properties of the search space, adding extra unlabeled architectures brings only limited accuracy improvement.

L      W/o auto-encoder   Ours
1k     0.5302             0.6541
10k    –                  –
100k   –                  –
Table 4: Comparison of prediction accuracies (KTau) with or without the auto-encoder on NAS-Bench-101.

The effect of the auto-encoder. To show the superiority of the learned representations over the hand-crafted representations, prediction results with and without the auto-encoder are shown in Table 4. The prediction accuracies (KTau) are improved markedly by the deep auto-encoder (e.g., 0.6541 vs. 0.5302 with 1k labeled architectures), which indicates that the learned representations reflect the intrinsic characteristics of architectures, are more compatible with measuring architecture similarity, and serve better as inputs to the performance predictor.

5 Conclusion

This paper proposes a semi-supervised assessor to evaluate network architectures by predicting their performance directly. Different from conventional performance predictors trained in a fully supervised way, the proposed semi-supervised assessor takes advantage of the massive unlabeled architectures in the search space by exploring the intrinsic similarity between architectures. Meaningful representations of architectures are discovered by an auto-encoder, and a relation graph involving both labeled and unlabeled architectures is constructed based on the learned representations. The GCN assessor takes both the representations and the relation graph to predict the performance. With only 1k architectures randomly sampled from the large NAS-Bench-101 dataset [41], an architecture with 94.01% accuracy (in the top 0.01% of the entire search space) can be found with the proposed method. In the future, we plan to investigate sampling strategies to construct more representative training sets for the assessor and identify better architectures with even fewer labeled architectures.

Acknowledgment

This work is supported by National Natural Science Foundation of China under Grant No. 61876007, 61872012, National Key R&D Program of China (2019YFF0302902), Australian Research Council under Project DE-180101438, and Beijing Academy of Artificial Intelligence (BAAI).

References

  • [1] B. Baker, O. Gupta, N. Naik, and R. Raskar (2016) Designing neural network architectures using reinforcement learning. arXiv preprint arXiv:1611.02167. Cited by: §2.1.
  • [2] B. Baker, O. Gupta, R. Raskar, and N. Naik (2017) Practical neural network performance prediction for early stopping. arXiv preprint arXiv:1705.10823 2 (3), pp. 6. Cited by: §2.2.
  • [3] G. Bender, P. Kindermans, B. Zoph, V. Vasudevan, and Q. Le (2018) Understanding and simplifying one-shot architecture search. In International Conference on Machine Learning, pp. 550–559. Cited by: §1.
  • [4] Z. Cai and N. Vasconcelos (2018) Cascade r-cnn: delving into high quality object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 6154–6162. Cited by: §1.
  • [5] B. Deng, J. Yan, and D. Lin (2017) Peephole: predicting network performance before training. arXiv preprint arXiv:1712.03351. Cited by: §1, §2.2, Table 1, §3, §4.1, Table 2, Table 3.
  • [6] T. Domhan, J. T. Springenberg, and F. Hutter (2015) Speeding up automatic hyperparameter optimization of deep neural networks by extrapolation of learning curves. In Twenty-Fourth International Joint Conference on Artificial Intelligence. Cited by: §2.2.
  • [7] X. Dong and Y. Yang (2020) NAS-bench-102: extending the scope of reproducible neural architecture search. arXiv preprint arXiv:2001.00326. Cited by: §2.1.
  • [8] A. C. Good and W. G. Richards (1993) Rapid evaluation of shape similarity using gaussian functions. Journal of chemical information and computer sciences 33 (1), pp. 112–116. Cited by: §3.2.
  • [9] Z. Guo, X. Zhang, H. Mu, W. Heng, Z. Liu, Y. Wei, and J. Sun (2019) Single path one-shot neural architecture search with uniform sampling. arXiv preprint arXiv:1904.00420. Cited by: §2.1.
  • [10] B. Han, Q. Yao, X. Yu, G. Niu, M. Xu, W. Hu, I. Tsang, and M. Sugiyama (2018) Co-teaching: robust training of deep neural networks with extremely noisy labels. In Advances in neural information processing systems, pp. 8527–8537. Cited by: §1.
  • [11] K. Han, Y. Wang, Q. Tian, J. Guo, C. Xu, and C. Xu (2019) GhostNet: more features from cheap operations. arXiv preprint arXiv:1911.11907. Cited by: §1.
  • [12] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §1.
  • [13] R. Istrate, F. Scheidegger, G. Mariani, D. Nikolopoulos, C. Bekas, and A. C. I. Malossi (2019) Tapas: train-less accuracy predictor for architecture search. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, pp. 3927–3934. Cited by: §2.2.
  • [14] M. Khanum, T. Mahboob, W. Imtiaz, H. A. Ghafoor, and R. Sehar (2015) A survey on unsupervised machine learning algorithms for automation, classification and maintenance. International Journal of Computer Applications 119 (13). Cited by: §3.1.
  • [15] D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §4.
  • [16] T. N. Kipf and M. Welling (2016) Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907. Cited by: §2.3, §3.2.
  • [17] A. Klein, S. Falkner, J. T. Springenberg, and F. Hutter (2016) Learning curve prediction with bayesian neural networks. Cited by: §2.2.
  • [18] A. Krizhevsky, G. Hinton, et al. (2009) Learning multiple layers of features from tiny images. Technical report Citeseer. Cited by: §4.
  • [19] A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105. Cited by: §3.3.
  • [20] J. Li, Y. Rong, H. Cheng, H. Meng, W. Huang, and J. Huang (2019) Semi-supervised graph classification: a hierarchical graph perspective. In The World Wide Web Conference, pp. 972–982. Cited by: §2.3.
  • [21] L. Li and A. Talwalkar (2019) Random search and reproducibility for neural architecture search. arXiv preprint arXiv:1902.07638. Cited by: §1.
  • [22] Q. Li, Z. Han, and X. Wu (2018) Deeper insights into graph convolutional networks for semi-supervised learning. In Thirty-Second AAAI Conference on Artificial Intelligence. Cited by: §3.2.
  • [23] C. Liu, B. Zoph, M. Neumann, J. Shlens, W. Hua, L. Li, L. Fei-Fei, A. Yuille, J. Huang, and K. Murphy (2018) Progressive neural architecture search. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 19–34. Cited by: §2.1.
  • [24] H. Liu, K. Simonyan, O. Vinyals, C. Fernando, and K. Kavukcuoglu (2017) Hierarchical representations for efficient architecture search. arXiv preprint arXiv:1711.00436. Cited by: §2.1.
  • [25] H. Liu, K. Simonyan, and Y. Yang (2018) Darts: differentiable architecture search. arXiv preprint arXiv:1806.09055. Cited by: §1, §2.1.
  • [26] R. Luo, F. Tian, T. Qin, E. Chen, and T. Liu (2018) Neural architecture optimization. In Advances in neural information processing systems, pp. 7816–7827. Cited by: §2.2.
  • [27] R. Miikkulainen, J. Liang, E. Meyerson, A. Rawal, D. Fink, O. Francon, B. Raju, H. Shahrzad, A. Navruzyan, N. Duffy, et al. (2019) Evolving deep neural networks. In Artificial Intelligence in the Age of Neural Networks and Brain Computing, pp. 293–312. Cited by: §2.1.
  • [28] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer (2017) Automatic differentiation in pytorch. Cited by: §4.
  • [29] H. Pham, M. Y. Guan, B. Zoph, Q. V. Le, and J. Dean (2018) Efficient neural architecture search via parameter sharing. arXiv preprint arXiv:1802.03268. Cited by: §2.1, §2.1.
  • [30] E. Real, A. Aggarwal, Y. Huang, and Q. V. Le (2019) Regularized evolution for image classifier architecture search. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, pp. 4780–4789. Cited by: §1, §2.1, §4.2.
  • [31] E. Real, S. Moore, A. Selle, S. Saxena, Y. L. Suematsu, J. Tan, Q. V. Le, and A. Kurakin (2017) Large-scale evolution of image classifiers. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 2902–2911. Cited by: §2.1.
  • [32] A. Sperduti and A. Starita (1997) Supervised neural networks for the classification of structures. IEEE Transactions on Neural Networks 8 (3), pp. 714–735. Cited by: §2.3.
  • [33] Y. Sun, H. Wang, B. Xue, Y. Jin, G. G. Yen, and M. Zhang (2019) Surrogate-assisted evolutionary deep learning using an end-to-end random forest-based performance predictor. IEEE Transactions on Evolutionary Computation. Cited by: §1, Table 1, §3, §4.1, §4.2, Table 2, Table 3.
  • [34] K. Swersky, J. Snoek, and R. P. Adams (2014) Freeze-thaw bayesian optimization. arXiv preprint arXiv:1406.3896. Cited by: §2.2.
  • [35] M. Tan, B. Chen, R. Pang, V. Vasudevan, M. Sandler, A. Howard, and Q. V. Le (2019) Mnasnet: platform-aware neural architecture search for mobile. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2820–2828. Cited by: §2.1.
  • [36] B. Wu, X. Dai, P. Zhang, Y. Wang, F. Sun, Y. Wu, Y. Tian, P. Vajda, Y. Jia, and K. Keutzer (2019) Fbnet: hardware-aware efficient convnet design via differentiable neural architecture search. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 10734–10742. Cited by: §1, §2.1.
  • [37] S. Xie, H. Zheng, C. Liu, and L. Lin (2018) SNAS: stochastic neural architecture search. arXiv preprint arXiv:1812.09926. Cited by: §1.
  • [38] Y. Xu, Y. Wang, K. Han, H. Chen, Y. Tang, S. Jui, C. Xu, Q. Tian, and C. Xu (2019) RNAS: architecture ranking for powerful networks. arXiv preprint arXiv:1910.01523. Cited by: §2.1, §4.
  • [39] C. Xue, J. Yan, R. Yan, S. M. Chu, Y. Hu, and Y. Lin (2019) Transferable automl by model sharing over grouped datasets. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9002–9011. Cited by: §2.1.
  • [40] Z. Yang, Y. Wang, X. Chen, B. Shi, C. Xu, C. Xu, Q. Tian, and C. Xu (2019) Cars: continuous evolution for efficient neural architecture search. arXiv preprint arXiv:1909.04977. Cited by: §2.1.
  • [41] C. Ying, A. Klein, E. Real, E. Christiansen, K. Murphy, and F. Hutter (2019) Nas-bench-101: towards reproducible neural architecture search. arXiv preprint arXiv:1902.09635. Cited by: §1, §2.1, §2.1, §3.1, §4.3, §4, §4, §5.
  • [42] A. Zela, J. Siems, and F. Hutter (2020) NAS-bench-1shot1: benchmarking and dissecting one-shot neural architecture search. arXiv preprint arXiv:2001.10422. Cited by: §2.1.
  • [43] Y. Zhang and M. Rabbat (2018) A graph-cnn for 3d point cloud classification. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6279–6283. Cited by: §2.3.
  • [44] Y. Zhou, X. Sun, Z. Zha, and W. Zeng (2019) Context-reinforced semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4046–4055. Cited by: §1.
  • [45] B. Zoph and Q. V. Le (2016) Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578. Cited by: §1, §2.1, §2.1, §4.2.
  • [46] B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le (2018) Learning transferable architectures for scalable image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 8697–8710. Cited by: §2.1, §2.1.