1 Introduction
In recent years, tremendous effort has been put into the manual design of high-performance neural networks
larsson2016fractalnet ; hu2018squeeze ; szegedy2016rethinking_label_smooth ; szegedy2015going . An emerging alternative is replacing the manual design with automated Neural Architecture Search (NAS). NAS excels at finding architectures that yield state-of-the-art results. Earlier NAS works were based on reinforcement learning
zoph2016neural ; NASNET , sequential optimization PNAS , and evolutionary algorithms
Real18Regularized , and required immense computational resources, sometimes demanding years of GPU compute time to output an architecture. More recent NAS methods reduce the search time significantly, e.g. via weight-sharing ENAS or a continuous relaxation of the search space liu2018darts , making the search affordable and applicable to real problems.

While current NAS methods provide encouraging results, they still suffer from several shortcomings: a large number of hyperparameters that are not easy to tune, hard pruning decisions that are performed suboptimally all at once at the end of the search, and a weak theoretical understanding. This cultivates skepticism and criticism of the utility of NAS in general. Some recent works even suggest that current search methods are only slightly better than random search, and further imply that some selection methods are not well principled and are essentially random li2019random ; sciuto2019evaluating .
To provide a more principled method, we view NAS as an online selection task and rely on Prediction with Expert Advice (PEA) theory cesa2006prediction for the selection. Our key contribution is the introduction of XNAS (eXperts Neural Architecture Search), an optimization method (section 2.2) that is well suited for optimizing inner architecture weights over a differentiable architecture search space (section 2.1). We propose a setup in which the experts represent inner neural operations and connections, whose dominance is specified by architecture weights.
Our proposed method addresses the aforementioned shortcomings of current NAS methods. To mitigate the hard pruning, we leverage the Exponentiated-Gradient (EG) algorithm kivinen1997exponentiated , which favors sparse weight vectors to begin with, enhanced by a wipeout mechanism that dynamically prunes inferior experts during the search process. Additionally, the algorithm requires fewer hyperparameters to be tuned (section 3.2.2), and the theory behind it further provides guidance for the choice of learning rates. Specifically, the algorithm avoids the decay of architecture weights goodfellow2016deep , which is shown to promote the selection of arbitrary architectures.

XNAS features several additional desirable properties, such as achieving an optimal worst-case regret bound (section 3.1) and suggesting different learning rates for different groups of experts. With an appropriate reward term, the algorithm is more robust to the initialization of the architecture weights and inherently enables the recovery of 'late bloomers', i.e., experts which may become effective only after a warm-up period (section 3.2.1). The wipeout mechanism spares any expert that still has a chance of being selected at the end of the process.
We compare XNAS to previous methods and demonstrate its properties and effectiveness over statistical and deterministic setups, as well as over public datasets (section 4). It achieves state-of-the-art performance on some of the datasets and top-NAS results on the rest, with significant improvements; for example, on CIFAR-10, XNAS improves upon the error rates of existing NAS methods.
2 Proposed Approach
To lay out our approach we first reformulate the differentiable architecture search space of DARTS liu2018darts in a way that enables direct optimization over the architecture weights. We then propose a novel optimizer that views NAS as an online selection task, and relies on PEA theory for the selection.
2.1 Neural Architecture Space
We start with a brief review of the PEA settings and then describe our view of the search space as separable PEA subspaces. This enables us to leverage PEA theory for NAS.
PEA Settings. PEA cesa2006prediction refers to a sequential decision making framework, dealing with a decision maker, i.e. a forecaster, whose goal is to predict an unknown outcome sequence $y_1, y_2, \ldots$ while having access to a set of $N$ experts' advice, i.e. predictions. Denote the experts' predictions at time $t$ by $f_{1,t},\ldots,f_{N,t} \in \mathcal{D}$, where $\mathcal{D}$ is the decision space, which we assume to be a convex subset of a vector space. Denote the forecaster's prediction $\hat{p}_t \in \mathcal{D}$, and a non-negative loss function $\ell$. At each time step $t$, the forecaster observes the experts' predictions and predicts $\hat{p}_t$. The forecaster and the experts suffer losses of $\ell(\hat{p}_t, y_t)$ and $\ell(f_{i,t}, y_t)$ respectively.

The Search Space Viewed as Separable PEA Subspaces. We view the search space suggested by DARTS liu2018darts as multiple separable subspaces of experts, as illustrated in Figure 1 and described next. An architecture is built from replications of normal and reduction cells, each represented as a directed acyclic graph. Every node $x^{(i)}$ in the graph represents a feature map, and each directed edge $(i,j)$ is associated with a forecaster $\bar{o}^{(i,j)}$ that predicts a feature map given the input $x^{(i)}$. Intermediate nodes are computed based on all of their predecessors: $x^{(j)} = \sum_{i<j} \bar{o}^{(i,j)}(x^{(i)})$. The output of the cell is obtained by applying a reduction operation (e.g. concatenation) to the intermediate nodes. During the search stage, every forecaster combines its experts' feature map predictions to form its own prediction,
$\bar{o}^{(i,j)}(x^{(i)}) \;=\; \sum_{k=1}^{N} \frac{\alpha^{(i,j)}_{k}}{\sum_{k'=1}^{N} \alpha^{(i,j)}_{k'}}\; o^{(i,j)}_{k}(x^{(i)})$   (1)
From now on, we will ignore the superscript indices $(i,j)$ for brevity. Each expert represents a neural operation, e.g. convolution or max pooling, associated with network weights $w_k$, that receives an input at time $t$ and outputs a feature map prediction $f_{k,t}$. A time index is attached since each prediction is associated with the updated weights $w_{k,t}$.

Our architecture search approach is composed of two stages. In the search stage, the weights $w$ and $\alpha$ are alternately optimized as described in section 2.2; then, in the discretization stage, a discrete child architecture is obtained as explained next.
The Discretization Stage. Once the architecture weights are optimized, the final discrete neural architecture is obtained by the following discretization stage, adopted from liu2018darts : first, the strongest two predecessor edges are retained for each intermediate node, where the strength of an edge is defined as the weight of its strongest expert, $\max_k \alpha_k$; then, every forecaster is replaced by its corresponding strongest expert.
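Under the notation above, the discretization stage can be sketched as follows. This is a minimal sketch: the dictionary layout and names are ours for illustration, not the paper's code.

```python
import numpy as np

def discretize(alpha, num_nodes):
    """Sketch of the discretization stage: for each intermediate node,
    keep the two strongest incoming edges, then replace each surviving
    forecaster by its strongest expert.

    alpha: dict mapping an edge (i, j) to a 1-D array of that
           forecaster's non-negative architecture weights.
    Returns: dict mapping node j -> list of (predecessor i, expert index).
    """
    child = {}
    for j in range(num_nodes):
        preds = [e for e in alpha if e[1] == j]
        # Edge strength: the weight of the edge's strongest expert.
        strength = {e: alpha[e].max() for e in preds}
        top2 = sorted(preds, key=lambda e: -strength[e])[:2]
        # Replace each retained forecaster by its strongest expert.
        child[j] = [(e[0], int(alpha[e].argmax())) for e in top2]
    return child
```

For instance, a node with three candidate predecessor edges keeps only the two whose maximal expert weight is largest, and each kept edge collapses to its argmax expert.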
2.2 XNAS: eXperts Neural Architecture Search
The differentiable space, described in section 2.1, enables direct optimization over the architecture weights via gradient-descent based techniques. Previous methods adopted generic optimizers commonly used for training the network weights; for example, liu2018darts ; xie2018snas ; chen2019progressive ; casale2019probabilistic used Adam kingma2014adam , and noy2019asap used SGD with momentum. While those optimizers excel in the joint minimization of neural network losses when applied to network weights, NAS is essentially a selection task, aiming to select a subset of experts out of a superset. The experts' weights form a convex combination, as they compete over a forecaster's attention.
We argue that a generic alternating optimization of network weights and architecture weights, as suggested in previous works, e.g. hundt2019sharpdarts ; liu2018darts , is not suitable for the unique structure of the architecture space. Hence, we design a tailor-made optimizer for this task, inspired by PEA theory. In order to evaluate the experts' performance, a loss should be associated with each expert; however, no explicit per-expert loss is available, only a backpropagated loss gradient. Therefore, we base our algorithm on a version of the Exponentiated-Gradient (EG) algorithm adapted to the NAS space. EG algorithms favor sparse weight vectors kivinen1997exponentiated , and thus fit online selection problems well.
We introduce the XNAS (eXperts Neural Architecture Search) algorithm, outlined in Algorithm 1 for a single forecaster. XNAS alternates between optimizing the network weights and the architecture weights in a designated manner. After updating $w$ (line 4) by descending the train loss, the forecaster makes a prediction based on the mixture of experts (line 5); then $\alpha$ is updated with respect to the validation loss according to the EG rule (lines 8-9). Next, the optimizer wipes out weak experts (lines 11-12), effectively assigning their weights to the remaining ones. The exponential update terms, i.e. rewards, are determined by the projection of the loss gradient on the experts' predictions: $r_{t,k} = -\nabla_{\hat{p}_t}\mathcal{L} \cdot f_{k,t}$. In section 3.1 we discuss the equivalence of this reward to the one associated with policy gradient search williams1992simple applied to NAS.
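A minimal sketch of a single XNAS architecture-weight step for one forecaster, assuming vector-valued expert predictions and a fixed wipeout threshold passed in by the caller; names and the threshold handling are illustrative (the paper's Algorithm 1 derives its thresholds so that wipeouts are provably safe, and interleaves this with the network-weight updates of lines 4-5):

```python
import numpy as np

def xnas_step(alpha, expert_preds, grad_loss, eta, threshold):
    """One EG-style architecture-weight update for a single forecaster.

    alpha:        (K,) non-negative architecture weights on the simplex.
    expert_preds: (K, D) feature-map predictions of the K experts.
    grad_loss:    (D,) gradient of the validation loss w.r.t. the
                  forecaster's prediction.
    Returns the updated, re-normalized weights after wiping out experts
    whose weight fell below `threshold`; the wiped mass is re-assigned
    to the survivors by the final normalization.
    """
    # Reward: negative projection of the loss gradient on each
    # expert's prediction.
    rewards = -expert_preds @ grad_loss
    # Exponentiated-gradient update, then project back to the simplex.
    alpha = alpha * np.exp(eta * rewards)
    alpha = alpha / alpha.sum()
    # Wipeout: remove experts deemed to have no chance of leading.
    alpha[alpha < threshold] = 0.0
    return alpha / alpha.sum()
```

Experts whose predictions align with the negative loss gradient are rewarded multiplicatively, which drives the weight vector toward sparsity faster than additive updates.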
The purpose of the wipeout step is threefold. First, the removal of weak experts smooths the process towards selecting a final architecture at the discretization stage described in section 2.1; it thus mitigates the harsh final pruning of previous methods, which results in a relaxation bias addressed in snas ; noy2019asap . Second, it dynamically reduces the number of network weights, simplifying the optimization problem, avoiding overfitting and allowing convergence to a better solution. Last, it speeds up the architecture search, as the number of graph computations decreases with the removal of experts.
3 Analysis and Discussion
3.1 Theoretical Analysis
In this section we analyse the performance of the proposed algorithm. For this purpose, we introduce the regret as a performance measure for NAS algorithms. Showing that the wipeout mechanism cannot eliminate the best expert (Lemma 1), we provide theoretical guarantees for XNAS with respect to that measure (Theorem 1). Proofs appear in Section 7 of the supplementary material for brevity. Relying on the theoretical analysis, we extract practical instructions with regard to the choice of multiple learning rates. Finally, we briefly discuss the equivalence of our reward to the one considered by the policy gradient approach applied to NAS.
The Regret as a Performance Measure. Denote the regret and the cumulative losses of the forecaster and of the $i$th expert at time $T$ by,
$R_T \;=\; \hat{L}_T - \min_{1\leq i\leq N} L_{i,T}\,, \qquad \hat{L}_T = \sum_{t=1}^{T} \ell(\hat{p}_t, y_t)\,, \qquad L_{i,T} = \sum_{t=1}^{T} \ell(f_{i,t}, y_t)$   (2)
respectively. The regret measures how much the forecaster regrets not following the advice of the best expert in hindsight. This criterion suits our setup as we optimize a mixture of experts and select the best one by the end of the process.
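The regret in (2) can be computed directly from the loss sequences; a toy illustration (the loss values are synthetic, not from the paper):

```python
import numpy as np

def regret(forecaster_losses, expert_losses):
    """Cumulative regret: the forecaster's total loss minus the total
    loss of the best expert in hindsight.

    forecaster_losses: (T,) losses suffered by the forecaster.
    expert_losses:     (T, N) losses suffered by each of the N experts.
    """
    best_expert_loss = expert_losses.sum(axis=0).min()
    return forecaster_losses.sum() - best_expert_loss
```

For example, a forecaster losing 1 at each of three rounds against two experts whose cumulative losses are 2 and 4 regrets exactly 1 relative to the better expert.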
In classical learning theory, statistical properties of the underlying process may be estimated on the basis of stationarity assumptions over the sequence of past observations, and effective prediction rules can be derived from these estimates cesa2006prediction . However, NAS methods that alternately learn the architecture weights and train the network weights are highly non-stationary. In PEA theory, no statistical assumptions are made, as "simplicity is a merit" hazan2016introduction , and worst-case bounds are derived for the forecaster's performance. We obtain such bounds for the wipeout mechanism and the regret.

A Safe Wipeout. In XNAS (line 11), by the choice of the wipeout thresholds, experts with no chance of taking the lead by the end of the search are wiped out along the process. In a worst-case setup, a single incorrect wipeout might result in a large regret, i.e., linear in the number of steps $T$, due to a loss gap at each consecutive step. The following lemma assures that this cannot happen,
Lemma 1.
In XNAS, the optimal expert in hindsight cannot be wiped out.
The wipeout effectively transfers the attention to leading experts. Define the wipeout factor at time $t$ and the aggregated wipeout factor $\Phi_T$ accordingly.
Lemma 2.
The aggregated wipeout factor satisfies $\Phi_T \leq 1$.
Equipped with both lemmas, we show that the wipeout may improve the EG regret bound for certain reward sequences.
Regret Bounds. Our main theorem guarantees an upper bound for the regret,
Theorem 1 (XNAS Regret Bound).
The regret of the XNAS algorithm (Algorithm 1), with $N$ experts and learning rate $\eta$, incurring a sequence of non-negative convex losses with bounded rewards, satisfies,
$R_T \;\leq\; \frac{\ln N}{\eta} + \frac{\eta\, G^{2} T}{2} + \frac{\ln \Phi_T}{\eta}\,,$   (3)
where $G$ bounds the rewards and $\Phi_T$ is the aggregated wipeout factor.
As an input parameter of XNAS, the learning rate cannot be determined based on the value of $\Phi_T$, since the latter depends on the data sequence. Choosing $\eta$ as the minimizer of the first two terms of (3), which are fully known in advance, yields the following tight upper bound,
$R_T \;\leq\; G\sqrt{2T\ln N} + \frac{\ln \Phi_T}{\eta^{*}}\,, \qquad \eta^{*} = \frac{1}{G}\sqrt{\frac{2\ln N}{T}}$   (4)
The regret upper bound of XNAS is tight, as the lower bound can be shown to be of the same order haussler1995tight . In addition, the wipeout-related term reduces the regret by an amount which depends on the data sequence through $\Phi_T$, as the wipeout effectively contributes the attention of weak experts to the leading ones. For comparison, under the same assumptions, the worst-case regret bound of gradient-descent has a worse dependence on the number of experts hazan2016introduction , while the regret of Adam can be linear in $T$ reddi2019convergence .
An illustration of the relationship between the regret and the rate of correct expert selection appears in section 8.3 of the supplementary material, where XNAS is shown to achieve a better regret compared to a generic optimizer.
Multiple Learning Rates. Equation (4) connects the optimal theoretical learning rate with the number of steps $T$, which is also the number of gradient feedbacks received by the experts. Since forecasters' weights are replicated among different cells, the number of feedbacks differs between normal and reduction cells (section 2.1). Explicitly, $T_z = S \cdot E \cdot n_z$, where $T_z$, $S$, $E$ and $n_z$ are the effective horizon, the validation set size, the number of epochs and the number of replications for cell type $z$, respectively. We adopt the usage of multiple learning rates in our experiments, as upper bounds on the learning rates that minimize the regret upper bound.

The Connection to Policy Gradient. We conclude this section by pointing out an interesting connection between policy gradient in NAS zoph2016neural and PEA theory. We refer to the PEA-based reward term in line 8 of Algorithm 1. This reward has been shown by snas to be the same effective reward of a policy gradient method applied to the common NAS optimization criterion zoph2016neural ; NASNET ; ENAS .
More precisely, consider the case where, instead of mixing the experts' predictions, the forecaster samples a single expert at each step with probability given by its normalized architecture weight, as specified in (1). Then the effective reward associated with the policy gradient is exactly the derived PEA reward, i.e., the projection of the loss gradient on the expert's prediction. XNAS optimizes with respect to the same reward, while avoiding the sampling inefficiency associated with policy gradient methods.

3.2 Key Properties and Discussion
In this section we discuss some of the key properties of XNAS. For each of these properties we provide supporting derivations, illustrations and demonstrations in section 8 of the supplementary material.
3.2.1 The Recovery of Late Bloomers and Robustness to Initialization
In this section we point out a key difference between our proposed update rule and the one used in previous works. We refer to Gradient Descent (GD) with softmax as parameterizing the architecture weights $\alpha_k$ through a softmax over logits $\beta_k$ and updating the logits by descending the validation loss. Variants of GD with softmax, as used in liu2018darts to optimize the architecture weights, suppress operations that are weak at the initial iterations, making it more difficult for them to "bloom" and increase their weights. This can be problematic, e.g. in the two following cases. First, consider the best expert starting with a poor weight which gradually rises. This could be the case when an expert representing a parameterized operation (e.g. a convolutional layer) competes with an unparameterized one (e.g. a pooling layer), as the former requires some period of training, as stated by noy2019asap ; hundt2019sharpdarts . Second, consider a noisy setup, where the best expert in hindsight receives some hard penalties before other, inferior experts do. In NAS we indeed deal with stochastic settings associated with the training data.
We inspect the update term of GD with softmax,
$\frac{\partial \mathcal{L}}{\partial \beta_k} \;=\; \alpha_k \left( \nabla_{\hat{p}} \mathcal{L} \cdot f_k \;-\; \nabla_{\hat{p}} \mathcal{L} \cdot \hat{p} \right)$   (5)
Hence, the effective reward in this case is,
$\tilde{r}_k \;=\; \alpha_k \Big( r_k - \sum_{i} \alpha_i r_i \Big)\,, \qquad r_k = -\nabla_{\hat{p}} \mathcal{L} \cdot f_k$   (6)
See derivations in section 8.4. The linear dependence on the expert’s weight in (6) implies that GD with softmax makes it harder for an expert whose weight is weak at some point to recover and become dominant later on, as the associated rewards are attenuated by the weak expert’s weight.
XNAS mitigates this undesirable behavior. Since the XNAS update term (line 8 of Algorithm 1) depends on the architecture weights only indirectly, i.e. through the prediction, the recovery of late bloomers is not discouraged, as demonstrated in section 8.1 of the supplementary material. For the very same reasons, XNAS is more robust to the initialization scheme than GD with softmax and its variants, as demonstrated in section 8.2 of the supplementary material. These advantages make XNAS more suitable for the NAS setup.
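A toy comparison of the two update rules on a 'late bloomer': one expert receives poor rewards for an initial period and the best rewards afterwards, making it best in hindsight. The reward sequence, learning rate and horizon below are synthetic choices of ours, purely for illustration; the GD-with-softmax update uses the effective reward of (6), where the increment is attenuated by the expert's own weight.

```python
import numpy as np

T, penalty_steps, eta = 100, 20, 0.5
# Expert 0: steady reward 0.5. Expert 1 ("late bloomer"): reward -1
# during a warm-up period, then reward 1, so its total reward (60)
# exceeds expert 0's (50) and it is best in hindsight.
rewards = np.tile([0.5, -1.0], (T, 1))
rewards[penalty_steps:, 1] = 1.0

# Exponentiated-gradient update (XNAS-style, no wipeout for clarity).
alpha_eg = np.array([0.5, 0.5])
for r in rewards:
    alpha_eg = alpha_eg * np.exp(eta * r)
    alpha_eg /= alpha_eg.sum()

# GD with softmax: the effective reward is attenuated by alpha itself,
# so a suppressed expert climbs back only slowly.
beta = np.zeros(2)
for r in rewards:
    a = np.exp(beta) / np.exp(beta).sum()
    beta += eta * a * (r - a @ r)   # effective reward of (6)
a_gd = np.exp(beta) / np.exp(beta).sum()

print(alpha_eg[1], a_gd[1])  # EG assigns the late bloomer far more weight
```

Because the EG update depends only on the raw rewards, its final log-odds equal the scaled cumulative reward gap and the late bloomer dominates; the softmax-GD weight recovers much more slowly once it has been driven small.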
Note that while XNAS enables the recovery of experts with badly initialized weights or with delayed rewards, the wipeout mechanism prevents inferior operations that start blooming too late from interfering, by eliminating experts with no chance of leading at the end.
Wipeout Factor. As mentioned in section 2.2, the wipeout mechanism contributes to both the optimization process and the search duration. A further reduction in duration can be achieved when the wipeout threshold in line 11 of Algorithm 1 is relaxed by a slack parameter. This leads to a faster convergence to a single architecture, at the price of violating the theoretical regret bound. As worst-case bounds tend to be over-pessimistic, optimizing over this parameter could lead to improved results; we leave that for future work.
3.2.2 Fewer Hyperparameters
Viewing the differentiable NAS problem as an optimization problem solved by variants of GD, e.g. Adam, introduces some common techniques for such schemes, along with their corresponding hyperparameters. Tuning these complicates the search process; the fewer hyperparameters the better. We next discuss how XNAS simplifies and reduces the number of hyperparameters.
Theoretically Derived Learning Rates. The determination of the learning rate has a significant impact on the convergence of optimization algorithms. Various scheduling schemes have been proposed, e.g. loshchilov2016sgdr ; smith2017cyclical , the latter additionally suggesting a way to obtain an empirical upper bound on the learning rate. In section 3.1, multiple learning rates are suggested for minimizing the regret bound (4), as $z$ indexes normal and reduction cells respectively. For example, for CIFAR-10 with a 50%:50% train-validation split, 50 search epochs, gradient clipping of $G=1$, 6 normal cells and 2 reduction cells with $N$ experts for each forecaster, (4) yields learning rates of roughly 7.5e-4 and 1.3e-3 respectively.

Remark 1. Note that the proposed learning rates minimize an upper bound of the regret (4) in the case of no wipeout, i.e. the worst case, as the extent of the wipeout cannot be known in advance. Hence the proposed learning rates are upper bounds on the optimal learning rates and can be further fine-tuned.
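The learning-rate computation can be sketched directly from (4), using $\eta_z = \frac{1}{G}\sqrt{2\ln N / T_z}$ with $T_z = S \cdot E \cdot n_z$. The concrete constants below are our assumptions for reproducing the quoted values: $N = 8$ experts per forecaster, $G = 1$, a 25,000-image validation split, 50 epochs, and 6 normal versus 2 reduction cells.

```python
import math

def xnas_lr(num_experts, grad_bound, val_size, epochs, replications):
    """Learning rate minimizing the regret bound (4):
    eta_z = (1/G) * sqrt(2 ln N / T_z), with horizon T_z = S * E * n_z."""
    horizon = val_size * epochs * replications
    return math.sqrt(2 * math.log(num_experts) / horizon) / grad_bound

# CIFAR-10 search (assumed constants): 25k validation images
# (50%:50% split), 50 epochs, gradient clip G = 1, N = 8 experts,
# 6 normal / 2 reduction cell replications.
eta_normal = xnas_lr(8, 1.0, 25_000, 50, 6)
eta_reduce = xnas_lr(8, 1.0, 25_000, 50, 2)
```

Note that the ratio between the two rates is exactly $\sqrt{n_{normal}/n_{reduce}} = \sqrt{3}$, since only the number of replications differs.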
No Weight Decay. Another common technique involving hyperparameters is weight decay, which has no place in the theory behind XNAS. We claim that the obviation of weight decay by XNAS makes sense. Regularization techniques such as weight decay reduce overfitting of over-parameterized models when applied to the trained parameters goodfellow2016deep . No such effect is incurred when applying weight decay to the architecture parameters $\alpha$, as they do not play the same role as the trained network parameters $w$. Instead, weight decay encourages uniform, dense solutions, as demonstrated in Figure 3.2.2, where the mean normalized entropy increases with the weight decay coefficient. The calculation of the mean normalized entropy is detailed in section 8.5 of the supplementary material. This observation could be associated with the suggestion of recent works li2019random ; sciuto2019evaluating that current search methods are only slightly better than random search. Dense $\alpha$ suffers a harder degradation in performance once the discretization stage occurs (section 2.1), hence sparse solutions are much preferred over dense ones.

No Momentum. The theory behind XNAS obviates momentum qian1999momentum and Adam's exponential decay rates kingma2014adam . Since momentum requires more state variables and more computations, the resulting XNAS optimizer turns out to be simpler, faster and with a smaller memory footprint compared to optimizers commonly used for NAS, e.g. Adam liu2018darts ; xie2018snas ; chen2019progressive ; casale2019probabilistic and SGD with momentum noy2019asap .
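The mean normalized entropy used above as a density measure can be sketched as follows. This is our reading of the metric, not the paper's exact code: the Shannon entropy of each forecaster's normalized weight vector, divided by its maximum $\ln N$, averaged over forecasters, so uniform weights give 1 and a one-hot selection gives 0.

```python
import numpy as np

def mean_normalized_entropy(alphas):
    """alphas: iterable of 1-D non-negative weight vectors, one per
    forecaster. Returns the per-forecaster entropy divided by ln(N),
    averaged over forecasters; 1 means fully dense (uniform), 0 means
    fully sparse (one-hot)."""
    vals = []
    for a in alphas:
        p = np.asarray(a, dtype=float)
        p = p / p.sum()
        # 0 * log(0) is treated as 0 via the masked log.
        logp = np.log(p, where=p > 0, out=np.zeros_like(p))
        vals.append(-np.sum(p * logp) / np.log(len(p)))
    return float(np.mean(vals))
```

A search whose architecture weights stay near uniform (entropy near 1) is the regime in which discretization discards most of the learned mixture, which is exactly the failure mode weight decay encourages.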
4 Experiments and Results
In this section we test XNAS on common image classification benchmarks and show its effectiveness compared to other state-of-the-art models.
We used the CIFAR-10 dataset for the main search and evaluation phase. In addition, using the cell found on CIFAR-10, we performed transferability experiments on the well-known benchmarks ImageNet, CIFAR-100, SVHN, Fashion-MNIST, Freiburg and CINIC-10.
4.1 Architecture Search on CIFAR-10
Using XNAS, we searched for convolutional cells on CIFAR-10 within a small parent network. We then built a larger network by stacking the learned cells, trained it on CIFAR-10 and compared the results against other NAS methods.
We created the parent network by stacking cells with four ordered nodes, each of which is connected via forecasters to all previous nodes in the cell and also to the outputs of the two previous cells. Each forecaster contains seven operations: 3x3 and 5x5 separable and dilated separable convolutions, 3x3 max-pooling, 3x3 average-pooling and an identity. A cell's output is the concatenation of the outputs of its four nodes.
The search phase lasts 50 epochs. We use the first-order approximation liu2018darts , treating $w$ and $\alpha$ as independent parameters which can be optimized separately. The train set is divided into two parts of equal size: one is used for training the operations' weights $w$ and the other for training the architecture weights $\alpha$. A single search runs on a single GPU^{1} and completes within hours. Figure 9 shows our learned normal and reduction cells, respectively.
^{1}Experiments were performed using an NVIDIA GTX 1080Ti GPU.
4.2 CIFAR-10 Evaluation Results
4.3 Transferability Evaluation
Additional Results. We further tested the transferability of XNAS on smaller datasets: CIFAR-100 cifar100 , Fashion-MNIST fashionMnist , SVHN SVHN , Freiburg Freiburg and CINIC-10 darlow2018cinic . We chose the XNAS-Small architecture, with a training scheme similar to the one described in section 4.2. Table 3 shows the performance of our model compared to other NAS methods. The XNAS cell excels on the datasets tested. On CIFAR-100 it surpasses the next best cell, achieving the second highest reported score on CIFAR-100 (without additional pre-training data), second only to cubuk2018autoaugment . On Fashion-MNIST, Freiburg and CINIC-10, to the best of our knowledge, XNAS achieves a new state-of-the-art accuracy.
Table 3: Transferability evaluation.

Architecture | CIFAR-100 Error | FMNIST Error | SVHN Error | Freiburg Error | CINIC-10 Error | Params (M) | Search cost
Known SotA | cubuk2018autoaugment | zhong2017random | cubuk2018autoaugment | noy2019asap | noy2019asap | 26 cubuk2018autoaugment | -
P-DARTS chen2019progressive | | | | | | |
NAONet-1 NAO | | | | | | |
NAONet-2 NAO | | | | | | |
P-DARTS-L chen2019progressive | | | | | | |
SNAS snas | | | | | | |
PNAS PNAS | | | | | | |
Amoeba-A Real18Regularized | | | | | | |
NASNet NASNET | | | | | | |
DARTS liu2018darts | | | | | | |
ASAP noy2019asap | | | | | | |
XNAS-Small | | | | | | | 0.3
5 Related Work
Mentions of experts in the deep learning literature rasmussen2002infinite ; yao2009hierarchical ; garmash2016ensemble ; aljundi2017expert go decades back Jacobs:1991 ; chen1999improved , typically combining models as separate expert sub-models. A different concept, using multiple mixtures of experts as inner parts of a deep model, where each mixture has its own gating network, is presented in eigen2013learning . Following works build upon this idea and include a gating mechanism per mixture CondConv2019 ; teja2018hydranets , and some further suggest sparsity regularization over experts via the gating mechanism shazeer2017outrageously ; wang2018deep . These gating mechanisms can be seen as dynamic routing, which activates a single expert or a group of experts in the network on a per-example basis. Inspired by these works, our method leverages PEA-principled methods for automatically designing neural network inner components.

Furthermore, optimizers based on PEA theory may be useful for the neural architecture search phase. Common stochastic gradient-descent (SGD) and a set of PEA approaches, such as follow-the-regularized-leader (FTRL), were shown by shalev2012online ; hazan2016introduction ; van2014follow to be equivalent. Current NAS methods NASNET ; zoph2016neural ; ENAS ; liu2018darts ; cai2018proxylessnas ; wu2018fbnet ; li2019random ; hundt2019sharpdarts ; noy2019asap use Adam, SGD with momentum or other common optimizers. One notion common to PEA-principled methods is the regret cesa2006prediction ; PEA strategies aim to guarantee a small regret under various conditions. We use the regret as a NAS objective, in order to establish a better-principled optimizer than existing methods li2019random ; sciuto2019evaluating . Several gradient-descent based optimizers, such as Adam, present a regret bound analysis; however, the worst-case scenario for Adam has non-zero average regret reddi2019convergence , i.e., it is not robust. Our optimizer is designated for selecting architecture weights while achieving an optimal regret bound.

6 Conclusion
In this paper we presented XNAS, a PEA-principled optimization method for differentiable neural architecture search. Inner network architecture weights that govern operations and connections, i.e. experts, are learned via an exponentiated-gradient backpropagation update rule. The XNAS optimization criterion is well suited for architecture selection, since it minimizes the regret implied by a suboptimal selection of operations with a tendency for sparsity, while enabling late-bloomer experts to warm up and take over during the search phase. Regret analysis suggests the use of multiple learning rates based on the amount of information carried by the backward gradient. A dynamic mechanism for wiping out weak experts is used, reducing the size of the computational graph along the search phase, hence reducing the search time and increasing the final accuracy. XNAS shows strong performance on several image classification datasets, while being among the fastest existing NAS methods.
Acknowledgements
We would like to thank the members of the Alibaba Israel Machine Vision Lab (AIMVL), in particular Avi Mitrani, Avi Ben-Cohen, Yonathan Aflalo and Matan Protter, for their feedback and productive discussions.
References

(1)
R. Aljundi, P. Chakravarty, and T. Tuytelaars.
Expert gate: Lifelong learning with a network of experts.
In
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
, pages 3366–3375, 2017.  (2) H. Cai, L. Zhu, and S. Han. Proxylessnas: Direct neural architecture search on target task and hardware. arXiv preprint arXiv:1812.00332, 2018.
 (3) F. P. Casale, J. Gordon, and N. Fusi. Probabilistic neural architecture search. arXiv preprint arXiv:1902.05116, 2019.
 (4) N. CesaBianchi and G. Lugosi. Prediction, learning, and games. Cambridge university press, 2006.
 (5) K. Chen, L. Xu, and H. Chi. Improved learning algorithms for mixture of experts in multiclass classification. Neural networks, 12(9):1229–1252, 1999.
 (6) X. Chen, L. Xie, J. Wu, and Q. Tian. Progressive differentiable architecture search: Bridging the depth gap between search and evaluation. arXiv preprint arXiv:1904.12760, 2019.
 (7) E. D. Cubuk, B. Zoph, D. Mane, V. Vasudevan, and Q. V. Le. Autoaugment: Learning augmentation policies from data. arXiv preprint arXiv:1805.09501, 2018.
 (8) L. N. Darlow, E. J. Crowley, A. Antoniou, and A. J. Storkey. Cinic10 is not imagenet or cifar10. arXiv preprint arXiv:1810.03505, 2018.
 (9) T. DeVries and G. W. Taylor. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552, 2017.
 (10) D. Eigen, M. Ranzato, and I. Sutskever. Learning factored representations in a deep mixture of experts. arXiv preprint arXiv:1312.4314, 2013.

(11)
E. Garmash and C. Monz.
Ensemble learning for multisource neural machine translation.
In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1409–1418, 2016.  (12) I. Goodfellow, Y. Bengio, and A. Courville. Deep learning. MIT press, 2016.

(13)
D. Haussler, J. Kivinen, and M. K. Warmuth.
Tight worstcase loss bounds for predicting with expert advice.
In
European Conference on Computational Learning Theory
, pages 69–83. Springer, 1995.  (14) E. Hazan et al. Introduction to online convex optimization. Foundations and Trends® in Optimization, 2(34):157–325, 2016.
 (15) W. Hoeffding. A lower bound for the average sample number of a sequential test. The Annals of Mathematical Statistics, pages 127–130, 1953.
 (16) J. Hu, L. Shen, and G. Sun. Squeezeandexcitation networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7132–7141, 2018.
 (17) A. Hundt, V. Jain, and G. D. Hager. sharpdarts: Faster and more accurate differentiable architecture search. arXiv preprint arXiv:1903.09900, 2019.
 (18) R. A. Jacobs, M. I. Jordan, S. J. Nowlan, and G. E. Hinton. Adaptive mixtures of local experts. Neural Comput., 3(1):79–87, Mar. 1991.
 (19) P. Jund, N. Abdo, A. Eitel, and W. Burgard. The freiburg groceries dataset. arXiv preprint arXiv:1611.05799, 2016.
 (20) D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
 (21) J. Kivinen and M. K. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. information and computation, 132(1):1–63, 1997.
 (22) G. Larsson, M. Maire, and G. Shakhnarovich. Fractalnet: Ultradeep neural networks without residuals. arXiv preprint arXiv:1605.07648, 2016.
 (23) L. Li and A. Talwalkar. Random search and reproducibility for neural architecture search. arXiv preprint arXiv:1902.07638, 2019.
 (24) C. Liu, B. Zoph, M. Neumann, J. Shlens, W. Hua, L.J. Li, L. FeiFei, A. Yuille, J. Huang, and K. Murphy. Progressive neural architecture search. In Proceedings of the European Conference on Computer Vision (ECCV), pages 19–34, 2018.
 (25) H. Liu, K. Simonyan, and Y. Yang. Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055, 2018.
 (26) I. Loshchilov and F. Hutter. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016.
 (27) R. Luo, F. Tian, T. Qin, E. Chen, and T.Y. Liu. Neural architecture optimization. In Advances in Neural Information Processing Systems, pages 7827–7838, 2018.
 (28) Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning 2011, 2011.
 (29) A. Noy, N. Nayman, T. Ridnik, N. Zamir, S. Doveh, I. Friedman, R. Giryes, and L. ZelnikManor. Asap: Architecture search, anneal and prune. arXiv preprint arXiv:1904.04123, 2019.

(30)
H. Pham, M. Y. Guan, B. Zoph, Q. V. Le, , and J. Dean.
Efficient neural architecture search via parameter sharing.
In
International Conference on Machine Learning (ICML)
, 2018.  (31) N. Qian. On the momentum term in gradient descent learning algorithms. Neural networks, 12(1):145–151, 1999.
 (32) C. E. Rasmussen and Z. Ghahramani. Infinite mixtures of gaussian process experts. In Advances in neural information processing systems, pages 881–888, 2002.

 (33) E. Real, A. Aggarwal, Y. Huang, and Q. V. Le. Regularized evolution for image classifier architecture search. In International Conference on Machine Learning (ICML) AutoML Workshop, 2018.
 (34) S. J. Reddi, S. Kale, and S. Kumar. On the convergence of Adam and beyond. arXiv preprint arXiv:1904.09237, 2019.
 (35) C. Sciuto, K. Yu, M. Jaggi, C. Musat, and M. Salzmann. Evaluating the search phase of neural architecture search. arXiv preprint arXiv:1902.08142, 2019.
 (36) S. Shalev-Shwartz et al. Online learning and online convex optimization. Foundations and Trends® in Machine Learning, 4(2):107–194, 2012.
 (37) N. Shazeer, A. Mirhoseini, K. Maziarz, A. Davis, Q. Le, G. Hinton, and J. Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538, 2017.
 (38) L. N. Smith. Cyclical learning rates for training neural networks. In 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 464–472. IEEE, 2017.
 (39) C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1–9, 2015.
 (40) C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2818–2826, 2016.
 (41) R. Teja Mullapudi, W. R. Mark, N. Shazeer, and K. Fatahalian. Hydranets: Specialized dynamic architectures for efficient inference. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8080–8089, 2018.

 (42) A. Torralba, R. Fergus, and W. T. Freeman. 80 million tiny images: A large data set for nonparametric object and scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(11):1958–1970, 2008.
 (43) T. Van Erven, W. Kotłowski, and M. K. Warmuth. Follow the leader with dropout perturbations. In Conference on Learning Theory, pages 949–974, 2014.
 (44) X. Wang, F. Yu, R. Wang, Y.-A. Ma, A. Mirhoseini, T. Darrell, and J. E. Gonzalez. Deep mixture of experts via shallow embedding. arXiv preprint arXiv:1806.01531, 2018.
 (45) R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.
 (46) B. Wu, X. Dai, P. Zhang, Y. Wang, F. Sun, Y. Wu, Y. Tian, P. Vajda, Y. Jia, and K. Keutzer. FBNet: Hardware-aware efficient convnet design via differentiable neural architecture search, 2018.
 (47) H. Xiao, K. Rasul, and R. Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017.
 (48) S. Xie, A. Kirillov, R. Girshick, and K. He. Exploring randomly wired neural networks for image recognition. arXiv preprint arXiv:1904.01569, 2019.
 (49) S. Xie, H. Zheng, C. Liu, and L. Lin. SNAS: Stochastic neural architecture search. arXiv preprint arXiv:1812.09926, 2018.
 (50) S. Xie, H. Zheng, C. Liu, and L. Lin. SNAS: Stochastic neural architecture search. In International Conference on Learning Representations (ICLR), 2019.
 (51) B. Yang, G. Bender, Q. V. Le, and J. Ngiam. Soft conditional computation. CoRR, abs/1904.04971, 2019.
 (52) B. Yao, D. Walther, D. Beck, and L. Fei-Fei. Hierarchical mixture of classification experts uncovers interactions between brain regions. In Advances in Neural Information Processing Systems, pages 2178–2186, 2009.
 (53) X. Zhang, Z. Huang, and N. Wang. You only search once: Single shot neural architecture search via direct sparse optimization. arXiv preprint arXiv:1811.01567, 2018.
 (54) Z. Zhong, L. Zheng, G. Kang, S. Li, and Y. Yang. Random erasing data augmentation. arXiv preprint arXiv:1708.04896, 2017.
 (55) B. Zoph and Q. V. Le. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578, 2016.
 (56) B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le. Learning transferable architectures for scalable image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 8697–8710, 2018.
7 Proofs
7.1 Proof of Lemma 1
Proof: By contradiction, assume that the optimal expert is wiped out at some iteration based on line 11 of Algorithm 1 and that, without loss of generality, another expert is the leading expert at that time,
(7) 
Since this expert is the optimal one in hindsight, in particular,
(8) 
However, since the loss is bounded, the ratios between the weights at that time are bounded as well,
(9)  
(10) 
In contradiction to (7). ∎
7.2 Proof of Lemma 2
Proof: Left-hand side. Due to the non-negativity of the summands for all indices, we have,
(11) 
Hence,
(12) 
7.3 Proof of XNAS Regret Bound
Proof: First, let us state an auxiliary lemma.
Lemma 3 (Hoeffding). Let X be a random variable with a ≤ X ≤ b. Then for every s ∈ R, log E[e^{sX}] ≤ s E[X] + s²(b − a)²/8.
The proof can be found in hoeffding1953lower .
We start by defining the experts' auxiliary and accumulated-auxiliary losses, following cesa2006prediction ,
(18)  
(19) 
Notice that the auxiliary losses are bounded by an input parameter in line 1 of Algorithm 1,
(20) 
We also define the set of remaining experts at time ,
(21) 
Such that,
(22) 
We now bound the ratio of weight sums from both sides.
Let us derive a lower bound:
(23)  
where we used Lemma 1 in (23), ensuring that the loss minimizer is among the remaining experts.
Let us derive an upper bound:
(24)  
(25)  
(26)  
(27) 
where (24) is due to (22) and (25) follows by setting,
(28) 
Inequality (26) results from Lemma 3, and (27) is a result of linearity.
Summing the logs as a telescoping sum:
(29) 
Setting the parameter according to the bounds specified by Lemma 2, we have,
(30) 
Combining the lower and upper bounds and dividing by ,
(31) 
We now bound the accumulated regret, using the convexity of the loss,
(32)  
8 Supporting Materials for the Key Properties Discussion
8.1 A Deterministic 3D Axes Toy Problem
In an attempt to demonstrate the possible recovery of late bloomers, we view an optimization problem in a three-dimensional space as a prediction-with-experts problem, where each axis represents an expert with a constant prediction along that axis. The forecaster then makes a prediction according to the following,
with in the simplex, i.e. . Setting for , for a given loss function , the update terms for GD with softmax (section 3.2.1) and XNAS (section 2.2), associated with the axis, are as follows,
as the corresponding terms for the and axes are similar by symmetry, see a full derivation in section 8.4.2.
Now let us present the following toy problem: a three-dimensional linear loss function is optimized for a number of gradient steps over the simplex. Then, to illustrate a region shift in terms of the best expert, the loss function shifts into another linear loss function, which is then optimized for additional steps. The trajectories are presented in Figure 3.
At the first stage the axis suffers many penalties as its weight shrinks; then it starts receiving rewards. For GD with softmax, those rewards are attenuated, as explained in section 3.2.1 and as can be seen in Figure 3 (right). Despite the linear loss, with constant gradients, the update term decays. Note that this effect is even more severe for the more common losses of higher curvature, where the gradients decay near the local minimum and are then further attenuated, as illustrated in section 8.2.4. Once the gradient shifts, the axis is already depressed due to past penalties, hence it struggles to recover. XNAS, however, is agnostic to the order of the loss values in time: once the rewards balance out the penalties, the path leads towards the axis. In the meantime, a different axis takes the lead.
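The dynamics above can be sketched numerically. The following is a minimal sketch, not the paper's implementation: the linear losses c1 and c2 (standing in for the two stages), the learning rate, and the imbalanced step counts are all illustrative assumptions. It contrasts gradient descent on softmax logits with an exponentiated-gradient (XNAS-style) multiplicative update.

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

# Hypothetical linear losses for the two stages (illustrative values):
# stage 1 penalizes the third expert most; stage 2 rewards it most.
c1 = np.array([0.0, 0.5, 1.0])
c2 = np.array([1.0, 0.5, 0.0])
eta, steps1, steps2 = 0.1, 150, 250

alpha = np.zeros(3)      # GD-with-softmax logits
w = np.ones(3) / 3       # exponentiated-gradient (XNAS-style) weights

for t in range(steps1 + steps2):
    c = c1 if t < steps1 else c2
    theta = softmax(alpha)
    # GD on the logits of the linear loss c.theta:
    # the update for coordinate j is theta_j * (c_j - c.theta), attenuated by theta_j
    alpha -= eta * theta * (c - c @ theta)
    # multiplicative update: depends only on the accumulated loss
    w = w * np.exp(-eta * c)
    w /= w.sum()

print("GD-with-softmax weights:", softmax(alpha))
print("exponentiated-gradient weights:", w)
```

Because the multiplicative update depends only on the accumulated loss, the order of penalties and rewards is irrelevant: once the third expert's total loss becomes smallest, its weight takes the lead, which is exactly the order-agnostic property discussed above.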
8.2 A Deterministic 2D Axes Toy Problem
In section 8.1 we show the built-in attenuation of weak operations by GD with softmax. This is illustrated by a three-dimensional toy example where the axes represent experts with constant predictions. Here we elaborate on this effect using a similar two-dimensional toy problem, where the gradients with respect to the axes are the negative values of one another. See section 8.4.3 for the setup and full derivations. All the experiments in this section are conducted using the same learning rate and number of steps for both optimizers.
8.2.1 Balanced Initialization
Figure 4 illustrates the attenuation of gradients for GD with softmax: although the gradients of the loss are constant, the gradients' magnitude decreases as we move away from the balanced initialization. XNAS, in contrast, receives constant gradients and thus reaches the minimum faster.
8.2.2 Imbalanced Initialization
The attenuated gradients also make GD with softmax more sensitive to the initialization, as demonstrated in Figure 5, where GD with softmax, whose gradients are attenuated, makes no progress, while XNAS reaches the minimum.
8.2.3 The Preference of Dense Solutions
Presenting the attenuation factor over the simplex in Figure 6 demonstrates how gradients are attenuated more strongly the farther the variables move from a dense solution.
Hence, it is harder for GD with softmax to strengthen a single expert over the others. This effect encourages dense solutions over sparse ones, i.e. over the choice of a single expert. Due to the discretization stage described in section 2.1, the denser the solution, the larger the degradation in performance incurred. Hence dense solutions should be discouraged rather than encouraged.
8.2.4 Loss Functions of a Higher Curvature
Sections 8.2.1 and 8.2.2 deal with a linear loss function, which has no curvature and constant gradients. Once convex loss functions of higher curvature are considered, the gradients decrease towards the local minimum. Figure 7 illustrates the resulting further attenuation of GD with softmax for a quadratic loss function, which makes it even harder for GD with softmax to reach the local minimum.
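The double attenuation can be illustrated directly. The snippet below is a sketch under illustrative assumptions: a two-expert softmax parameterization, whose Jacobian factor is theta1 * (1 - theta1), and a hypothetical quadratic loss (theta1 - 1)^2 with its minimum at theta1 = 1.

```python
import numpy as np

# Two-expert softmax: theta1 = sigmoid(alpha1 - alpha2), so
# d(theta1)/d(alpha1) = theta1 * (1 - theta1) -- the attenuation factor.
theta1 = np.linspace(0.01, 0.99, 99)
atten = theta1 * (1 - theta1)
print("max attenuation factor:", atten.max(), "at theta1 =", theta1[atten.argmax()])

# Hypothetical quadratic loss L(theta1) = (theta1 - 1)^2: its gradient
# 2*(theta1 - 1) also vanishes near the minimum at theta1 = 1, so the
# effective update on the logits is attenuated twice.
update = np.abs(2 * (theta1 - 1) * atten)
print("update magnitude near the minimum (theta1 = 0.99):", update[-1])
```

The attenuation factor peaks at the dense point theta1 = 0.5 and vanishes near the vertices, matching the preference-for-dense-solutions effect; adding curvature to the loss multiplies in a second vanishing factor near the minimum.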
8.3 Regret and Correct Selection in Statistical Setting
We consider a statistical setup for comparing XNAS with the common Gradient Descent (GD) with softmax, described in section 3.2.1. This setup simulates the iterative architecture optimization and final selection of the top expert for a single forecaster.
Two forecasters are compared, XNAS and GD with softmax. Both receive noisy independent and identically distributed (i.i.d.) rewards for the experts. Each expert has an initial i.i.d. bias simulating its inherent value, so its rewards are drawn from a Gaussian distribution whose mean is the expert's bias, with a fixed standard deviation. The first forecaster updates its weights using the GD with softmax update rule from Equation 6 (full derivation in section 8.4.1), common to previous NAS methods, while the second uses Algorithm 1.
The forecasters use their update rules to update the weights along the run. At the end of the run, each selects the expert with the largest weight. A correct selection is one in which the chosen expert is the one with the highest bias. The average regret of the runs is also calculated based on equation 2.
Figure 8 shows the mean over Monte-Carlo runs, plotting the regret and the fraction of correct selections (classification). In Figure 8 (left), both terms are plotted versus a varying number of experts N. It can be seen that the regret of XNAS is significantly smaller, scaling with the number of experts like √(log N), as implied by its regret upper bound in equation 4, while the GD regret scales like √N hazan2016introduction .
In Figure 8 (right), the noise standard deviation varies, making it harder to correctly identify the expert with the highest bias. Again, XNAS dominates GD with softmax, which is more sensitive to the noisy rewards due to the 'late bloomers' effect described in section 3.2.1; e.g. the best experts might suffer some large penalties right at the beginning due to the noise, and thus might never recover under GD with softmax.
In both graphs it can be seen that the correct selection fraction is monotonically decreasing as the regret is increasing. This gives an additional motivation for the use of the regret minimization approach as a criterion for neural architecture search.
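A minimal Monte-Carlo sketch of this setup is given below. It is not the paper's exact protocol: the number of experts, horizon, noise level, learning rate, and reward model (Gaussian biases, Gaussian per-step noise) are illustrative assumptions, and the XNAS forecaster is approximated by a plain exponentiated-gradient (multiplicative) update without the wipeout step.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_trial(n_experts=10, horizon=100, sigma=3.0, eta=0.1):
    mu = rng.normal(0.0, 1.0, n_experts)       # inherent expert biases
    best = int(np.argmax(mu))
    alpha = np.zeros(n_experts)                # GD-with-softmax logits
    w = np.ones(n_experts) / n_experts         # exponentiated-gradient weights
    for _ in range(horizon):
        r = rng.normal(mu, sigma)              # noisy i.i.d. rewards
        theta = np.exp(alpha - alpha.max())
        theta /= theta.sum()
        # gradient ascent on the logits of the expected reward r.theta
        alpha += eta * theta * (r - r @ theta)
        # multiplicative (exponentiated-gradient) update on the weights
        w *= np.exp(eta * r)
        w /= w.sum()
    theta = np.exp(alpha - alpha.max())
    theta /= theta.sum()
    return int(np.argmax(theta)) == best, int(np.argmax(w)) == best

results = [run_trial() for _ in range(200)]
gd_acc = float(np.mean([gd for gd, _ in results]))
eg_acc = float(np.mean([eg for _, eg in results]))
print("correct-selection fraction, GD with softmax:", gd_acc)
print("correct-selection fraction, exponentiated gradient:", eg_acc)
```

Note that the multiplicative update selects the expert with the highest accumulated reward, regardless of the order in which rewards arrive, whereas the softmax-attenuated GD update is sensitive to early penalties.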
8.4 Gradients Derivations
For the comparison to previous work liu2018darts , we consider the decision variables as defined on the right-hand side of (1).
8.4.1 The Derivation of Derivatives in the General Case
(34) 
where δ_ij is the Kronecker delta.
Observe that:
(35) 
Finally,
(36)  
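The softmax Jacobian used above, d(theta_i)/d(alpha_j) = theta_i (δ_ij − theta_j), can be verified numerically with finite differences; the sketch below uses an arbitrarily chosen test point.

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def softmax_jacobian(a):
    # Analytic Jacobian: d(theta_i)/d(alpha_j) = theta_i * (delta_ij - theta_j)
    t = softmax(a)
    return np.diag(t) - np.outer(t, t)

a = np.array([0.3, -1.2, 0.7, 0.0])   # arbitrary test point
J = softmax_jacobian(a)

# Central finite-difference check of each column
eps = 1e-6
J_num = np.zeros_like(J)
for j in range(len(a)):
    d = np.zeros_like(a)
    d[j] = eps
    J_num[:, j] = (softmax(a + d) - softmax(a - d)) / (2 * eps)

print("max |analytic - numeric|:", np.abs(J - J_num).max())
```

Each column of the Jacobian sums to zero, reflecting the fact that the softmax outputs always sum to one; contracting this Jacobian with a loss gradient yields the attenuated update term theta_j (c_j − c·theta) discussed in the toy problems.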