1 Introduction
The behavior of deep neural networks often becomes analytically tractable when the network width is very large (Neal, 1994; Williams, 1997; Hazan and Jaakkola, 2015; Schoenholz et al., 2017; Lee et al., 2018; Matthews et al., 2018a,b; Borovykh, 2018; Garriga-Alonso et al., 2019; Novak et al., 2019a; Yang and Schoenholz, 2017; Yang et al., 2019; Yang, 2019a,b; Pretorius et al., 2019; Novak et al., 2020; Hron et al., 2020; Jacot et al., 2018; Li and Liang, 2018; Allen-Zhu et al., 2018; Du et al., 2019a, 2018; Zou et al., 2019a,b; Chizat et al., 2019; Lee et al., 2019; Arora et al., 2019a; Du et al., 2019b; Sohl-Dickstein et al., 2020; Huang et al., 2020; Pennington et al., 2017; Xiao et al., 2018, 2019; Hu et al., 2020; Li et al., 2019; Arora et al., 2020; Shankar et al., 2020a; Cho and Saul, 2009; Daniely et al., 2016; Poole et al., 2016; Chen et al., 2018; Li and Nguyen, 2019; Daniely, 2017; Pretorius et al., 2018; Hayou et al., 2018; Karakida et al., 2018; Blumenfeld et al., 2019; Hayou et al., 2019). One such example is that Bayesian inference in the parameter space of a deep neural network becomes equivalent to a Gaussian process prediction in function space
(Neal, 1994; Lee et al., 2018; Matthews et al., 2018a). An intriguing consequence of this correspondence is that once the neural network Gaussian process (NNGP) kernel is computed, exact Bayesian inference is possible in wide networks. This further allows the computation of properties such as the expected validation accuracy of a wide Bayesian neural network with a given architecture and initialization scale. We expect performance metrics of the NNGP associated with a given network to correlate well with the network's actual performance after training, since we expect the predictions of Bayesian and gradient descent trained networks to be correlated. An important topic in neural architecture search (NAS) (Zoph and Le, 2016; Baker et al., 2016) is the discovery of computationally cheap methods to predict the fully-trained performance of a given network. This suggests NNGP performance should provide a useful signal for NAS. This use case has been previously suggested (Novak et al., 2019b; Arora et al., 2019b; Shankar et al., 2020b), but never explored.
There have been extensive studies on computing the exact kernel of NNGPs, but the actual computation can be very expensive (Novak et al., 2019b; Arora et al., 2019b; Shankar et al., 2020b), requiring hundreds of accelerator hours. Furthermore, networks in neural architecture search spaces typically use operations for which the NNGP correspondence has no known closed form. However, Monte Carlo estimates of NNGP kernels are often far cheaper, and can be computed for any architecture by repeated random initialization of the network (Lee et al., 2018).
We have released a Colab notebook (https://github.com/google-research/google-research/tree/master/nngp_nas) demonstrating our algorithm to compute NNGP performance, as well as metrics used in this paper to measure the quality of NNGP performance as a predictor of the ground-truth network performance.
1.1 Summary of contributions
We examine the utility of NNGP validation accuracy for predicting final network performance, and compare it against that of shortened training, a common method for approximating network performance (Zoph and Le, 2016). We do so in two different settings: on the NAS-Bench-101 dataset (Ying et al., 2019) with 423k network architectures evaluated on CIFAR-10 (Krizhevsky et al.), and on 10k randomly sampled networks from the MNAS search space (Tan et al., 2019) evaluated on ImageNet (Russakovsky et al., 2015). In both cases we find that NNGP performance is indicative of final network performance, while being at least an order of magnitude cheaper to compute than shortened training. We further find that for the large NAS-Bench-101 search space, thresholding by NNGP accuracy dramatically shrinks the search space which needs to be examined by more expensive methods. We also demonstrate that NNGP and shortened-training performance can potentially be combined to produce a metric with improved predictive quality on NAS-Bench-101.
1.2 Further related work
Proxy tasks, i.e., computationally manageable tasks for approximating the performance of a neural network, are commonly used in neural architecture search (Zoph and Le, 2016; Tan et al., 2019; Real et al., 2019; Ghiasi et al., 2019). Early stopping by leveraging features and training curves from finished trials has also been used (Swersky et al., 2014; Domhan et al., 2015; Klein et al., 2017; Baker et al., 2017). A measure for predicting network performance at initialization has recently been proposed in Mellor et al. (2020). Architecture search can also be framed as finding a subnetwork within a supernetwork that is trained once. In architecture search with weight sharing (Pham et al., 2018; Liu et al., 2019; Cai et al., 2019), a controller samples subnetworks from the supernetwork and uses a per-minibatch reward as an approximation for the subnetwork performance. In differentiable architecture search (Liu et al., 2019) the subnetwork selection is part of the gradient-based training. Learning-based methods (Wen et al., 2019) have also recently been applied to predict the performance of the network, where a neural network that takes the architecture as input and produces the ground-truth performance as output is trained on a subset of the search space.
Our approach is orthogonal to and could be combined with these prior works; the NNGP performance we study is obtained without any gradient-based training and, furthermore, without any pre-existing gradient-based training data.
2 Background
2.1 NNGP Inference
Consider a deep neural network with parameters $\theta$, whose architecture maps an input $x$ into a feature vector $z(x)$ of dimension $d$, followed by a linear readout layer with weight variance $\sigma_w^2$ producing predicted labels $f(x)$. Consider data points $x_1, \ldots, x_n$. In the NNGP approximation, the distribution over output vectors for any label at initialization is jointly Gaussian,

  $(f_a(x_1), \ldots, f_a(x_n)) \sim \mathcal{N}(0, \sigma_w^2 \mathcal{K}), \qquad \mathcal{K}_{ij} = \frac{1}{d} \mathbb{E}_\theta\left[ z(x_i) \cdot z(x_j) \right],$  (1)

where $\mathcal{K}$ is the sample-sample second moment of $z$ averaged over random network initializations and units. This approximation has been shown to be exact in the wide limit for many architectures (Neal, 1994; Schoenholz et al., 2017; Lee et al., 2018; Matthews et al., 2018a,b; Garriga-Alonso et al., 2019; Novak et al., 2019a; Yang, 2019a,b; Hron et al., 2020). For the rest of the paper, we proceed under the assumption that this is an adequate approximation for the architectures being studied. While exact convergence of Bayesian inference on the neural network parameter space to the NNGP has not been proven for architectures with some of the components we use in this work (e.g., max-pooling), very general classes of architectures are expected to exhibit GP-like behaviour as they become wide.
Since the label vectors are jointly Gaussian, we can compute the exact conditional expectation values of labels. If the parameter initialization distribution is interpreted as a prior distribution, this corresponds to the predictions that would be made by a Bayesian neural network. In other words, if the training inputs $X = (x_1, \ldots, x_n)$ produce the label vectors $Y$, and introducing a regularization constant $\varepsilon$, the expected label vector for an input $x$ is

  $\mu(x) = \mathcal{K}(x, X) \left( \mathcal{K}(X, X) + \varepsilon I \right)^{-1} Y,$  (2)

  $\text{Label of } x = \operatorname{argmax}_a \, \mu_a(x).$  (3)
The central challenge in carrying out an NNGP calculation is computing the kernel $\mathcal{K}$. In this paper, we estimate the kernel by a Monte Carlo method first studied in Novak et al. (2019a). That is, $\mathcal{K}$ is computed by stochastically evaluating the expectation in equation (1) over repeated random initializations of the network. We denote the number of random initializations the ensemble number.
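A minimal numpy sketch of this Monte Carlo estimate. The toy "network" below (a single random ReLU layer) and all names are illustrative stand-ins, not the architectures studied in this paper:

```python
import numpy as np

def monte_carlo_nngp_kernel(init_fn, feature_fn, xs, ensemble_size, seed=0):
    """Estimate K_ij = E[z(x_i) . z(x_j)] / d by averaging the feature
    second moment over `ensemble_size` random initializations."""
    rng = np.random.default_rng(seed)
    kernel = np.zeros((len(xs), len(xs)))
    for _ in range(ensemble_size):
        params = init_fn(rng)            # fresh random initialization
        z = feature_fn(params, xs)       # penultimate features, shape (n, d)
        kernel += z @ z.T / z.shape[1]   # sample-sample second moment
    return kernel / ensemble_size

# Toy stand-in "network": one random ReLU layer with d hidden units.
d_in, d = 8, 512
init_fn = lambda rng: rng.standard_normal((d_in, d)) / np.sqrt(d_in)
feature_fn = lambda W, xs: np.maximum(xs @ W, 0.0)

xs = np.random.default_rng(1).standard_normal((16, d_in))
K = monte_carlo_nngp_kernel(init_fn, feature_fn, xs, ensemble_size=10)
```

The same loop applies to any architecture, since it only requires forward passes under repeated random initialization.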
We search over a range of regularization constants $\varepsilon$, as the result of NNGP inference can vary significantly with $\varepsilon$. We normalize the search range of this constant with respect to the average eigenvalue $\bar{\lambda}$ of the kernel matrix, i.e., $\varepsilon = \bar{\varepsilon} \bar{\lambda}$. Since $\bar{\varepsilon}$ has been made dimensionless in this way, the same search range of $\bar{\varepsilon}$ can be used for all NNGP experiments. We take this range to be numpy.logspace(-7, 2, 20). The full procedure for computing NNGP validation accuracy is shown in Algorithm 1. The target label vectors are derived from the target labels as one-hot vectors shifted to have zero mean.
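The inference step can be sketched as follows; this is a simplified rendering of the procedure described above under our reading of the text, with function and variable names of our own choosing:

```python
import numpy as np

def nngp_validation_accuracy(k_train, k_cross, train_labels, val_labels, n_classes):
    """Sweep a dimensionless regularizer over numpy.logspace(-7, 2, 20),
    normalized by the mean eigenvalue of the training kernel, and return
    the best validation accuracy found."""
    # One-hot target vectors shifted to have zero mean.
    targets = np.eye(n_classes)[train_labels] - 1.0 / n_classes
    lam_bar = np.trace(k_train) / len(k_train)  # average eigenvalue
    best_acc = 0.0
    for eps_bar in np.logspace(-7, 2, 20):
        reg = eps_bar * lam_bar * np.eye(len(k_train))
        mu = k_cross @ np.linalg.solve(k_train + reg, targets)  # posterior mean
        acc = np.mean(np.argmax(mu, axis=1) == val_labels)      # predicted labels
        best_acc = max(best_acc, acc)
    return best_acc

# Linearly separable toy problem with a linear kernel.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(3, 1, (20, 2)), rng.normal(-3, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
k = x @ x.T
acc = nngp_validation_accuracy(k, k, y, y, n_classes=2)
```

In practice `k_train` and `k_cross` would come from the Monte Carlo kernel estimate over the subsampled training and validation sets.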
Computational Cost
Let $C$ be the architecture-dependent number of FLOPs for inference on a single sample, $d$ the dimension of the feature space of the penultimate layer of the network, $M$ the ensemble number, $L$ the number of labels, and $n_T$ and $n_V$ the sizes of the training and validation sets for the NNGP, respectively. The computational FLOPs required for computing the NNGP validation accuracy is

  $F_{\text{NNGP}} = M (n_T + n_V) C + M d (n_T + n_V)^2 + \mathcal{O}\left( n_T^3 + L \, n_T n_V \right).$  (4)
The details of this computation can be found in section B of the supplementary material (SM). This cost does not scale well with $n_T$, which we cap at 8k samples when computing NNGP accuracy to keep the inference cost reasonable.
Let us denote the FLOPs required for training a model per step per sample by $C_T$. We have computed $C$ and $C_T$ for all the networks in the NAS-Bench-101 dataset for multiple batch sizes and found that the relation $C_T \approx 3C$ holds robustly (see section B of the SM). Thus, the cost of training a model for $E$ epochs and carrying out inference for validation can be written as

  $F_{\text{train}} = 3 C E \, n_T^{\text{all}} + C \, n_V^{\text{all}}.$  (5)
We add the superscript "all" to emphasize that gradient-based training of the networks is always performed on the entire dataset, while NNGP inference is performed on subsampled datasets.
For various plots regarding NAS-Bench-101, we will plot the computational cost of obtaining NNGP or shortened-training performance based on average FLOPs computed over all networks in the dataset. We note that GFLOPs while for all networks in NAS-Bench-101.
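As a back-of-the-envelope comparison of the two cost scales: all numbers below are illustrative placeholders (not values from the paper), and training is assumed to cost roughly three inference passes per sample per step:

```python
# Illustrative FLOPs comparison; the exact accounting is in section B of the SM.
C = 1e9                            # inference FLOPs per sample (GFLOPs scale)
M = 4                              # ensemble number
n_T, n_V = 8_000, 10_000           # NNGP subsampled train/validation sizes
n_all_T, n_all_V = 40_000, 10_000  # full train/validation sizes
E = 4                              # epochs of shortened training

# Dominant NNGP term: M forward passes over the subsampled data.
nngp_flops = M * (n_T + n_V) * C
# Training cost, assuming ~3x inference FLOPs per training sample per step.
train_flops = 3 * C * E * n_all_T + C * n_all_V

print(f"NNGP: {nngp_flops:.1e} FLOPs, {E}-epoch training: {train_flops:.1e} FLOPs")
```

Even with these rough placeholder numbers, the ensemble of forward passes on a subsampled dataset is cheaper than shortened training on the full dataset.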
2.2 Metrics
We use the following metrics to evaluate the quality of NNGP and shortened-training proxy tasks:
Kendall Rank Correlation Coefficient (Kendall's Tau): The Kendall rank correlation coefficient measures how well two orderings agree. Assume two orderings of a set. Over every pair of elements of the set, denote the number of concordant pairs (pairs that are ordered the same way in both orderings) by $n_c$, the number of discordant pairs (pairs that are ordered the opposite way in the two orderings) by $n_d$, and the number of ties in each of the orderings by $n_1$ and $n_2$. There are multiple versions of Kendall's Tau which treat ties differently. We use the following definition:

  $\tau = \dfrac{n_c - n_d}{\sqrt{(n_c + n_d + n_1)(n_c + n_d + n_2)}}.$  (6)
We compute how well a validation accuracy ranks the networks by computing its Kendall rank correlation coefficient against the ground-truth accuracy of the networks.
Correlation Coefficient: We compute the Pearson correlation coefficient between the performance metrics and the ground-truth accuracy.
Prediction Quality for Exceedance of Threshold Performance (PQETP): To judge the utility of a performance metric of a network, we can measure how well it predicts whether the true network performance is above some threshold $t$. We do so by computing the area under the receiver operating characteristic curve (AUROC). The ROC curve for a binary classifier with a continuous output is obtained by plotting the true positive rate against the false positive rate as the true/false boundary of the classifier is varied. In our case, the classifier score is the performance metric we would like to evaluate (e.g., validation accuracy after shortened training, NNGP validation accuracy), and the binary class is whether the ground-truth validation accuracy of the network exceeds the threshold $t$. Thus a metric with a better PQETP for threshold $t$ is better at determining whether the ground-truth performance of the network is above $t$.

Discovered Performance: The "discovered performance" of a set of networks is obtained by first choosing the top $k$ performers in the set according to the reference performance metric, and taking the best ground-truth performance among those of the selected networks, i.e., it is the performance of the top performer "discovered" using the metric. We choose $k$ to be 10 throughout the paper, and will be computing the average discovered performance of subsets of NAS-Bench-101 of fixed size.
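Each of these metrics is a few lines of code. A sketch using scipy and sklearn on synthetic placeholder accuracies (the noise model is invented purely for illustration):

```python
import numpy as np
from scipy.stats import kendalltau
from sklearn.metrics import roc_auc_score

def pqetp(metric, ground_truth, threshold):
    """AUROC for predicting whether ground-truth accuracy exceeds `threshold`,
    using the metric values as classifier scores."""
    return roc_auc_score(ground_truth > threshold, metric)

def discovered_performance(metric, ground_truth, k=10):
    """Best ground-truth accuracy among the top-k networks ranked by `metric`."""
    top_k = np.argsort(metric)[-k:]
    return ground_truth[top_k].max()

rng = np.random.default_rng(0)
gt = rng.uniform(0.80, 0.95, size=1_000)        # ground-truth accuracies
proxy = gt + rng.normal(0.0, 0.02, size=1_000)  # noisy cheap metric

tau, _ = kendalltau(proxy, gt)  # scipy's default is the tie-corrected tau-b
auc = pqetp(proxy, gt, threshold=np.median(gt))
disc = discovered_performance(proxy, gt)
```

Note that scipy's default `kendalltau` is the tau-b variant, which handles ties as in equation (6).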
3 Experiment Design
3.1 The NAS-Bench-101 Dataset
The NAS-Bench-101 dataset (Ying et al., 2019) consists of 423k image classification networks (details of which are provided in section C.1 of the supplementary materials) evaluated on CIFAR-10, with the standard train/validation/test split of 40k/10k/10k samples. The dataset contains the validation and training accuracy of each network after training for 4, 12, 36 and 108 epochs, for 3 different trials. We take the ground-truth performance of a network to be the average validation accuracy over the three 108-epoch training trials. Our goal is to evaluate how well NNGP performance predicts this ground-truth accuracy.
We compute the NNGP validation accuracy for a range of ensemble numbers $M$ and for fixed subsampled training and validation sets of different sizes $(n_T, n_V)$ with balanced labels, for all networks. We compare the utility of the NNGP validation accuracy obtained using different values of $(M, n_T, n_V)$ with those obtained by shortened training. To do so, we use the validation accuracy computed at the end of a single trial of 4-, 12- and 36-epoch training. We can compute the cost for obtaining each measure using equations (4) and (5). The computational costs will be presented in petaFLOPs (PFLOPs). All metrics introduced in section 2.2 are computed for the validation accuracies obtained from NNGP inference and shortened training.
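Constructing a balanced-label subsample of the kind used here is straightforward; a sketch, with illustrative sizes:

```python
import numpy as np

def balanced_subsample(labels, per_class, rng):
    """Return indices of a subset containing `per_class` examples of each class."""
    picks = []
    for c in np.unique(labels):
        candidates = np.flatnonzero(labels == c)
        picks.append(rng.choice(candidates, size=per_class, replace=False))
    return np.concatenate(picks)

rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=40_000)         # CIFAR-10-like label array
train_idx = balanced_subsample(labels, 100, rng)  # 1k-sample balanced subset
```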
3.2 The MNAS Search Space
The MNAS search space (Tan et al., 2019) is intended for mobile neural networks on ImageNet (Russakovsky et al., 2015), with the aim of balancing performance and computational cost. Throughout this paper, we refer to the provided 50k "validation set" of ImageNet as the "test set", and split a separate 50k subset from the 1.3M-sample training set, designating it as the validation set for evaluation. We select a randomly sampled set of 10k networks from the MNAS search space to study. The 5-epoch validation accuracy, and the MNAS reward function, which is a function of this accuracy and latency, are computed for all 10k networks. For more details, see section C.2 of the supplementary material.
We carry out NNGP inference on a random 8k-sample subset of the training set with balanced labels and the 50k validation set. We also construct training and validation subsets using 100 randomly subsampled labels with sizes (1k, 5k) and (8k, 5k). We use these three pairs of training/validation sets for NNGP inference with fixed ensemble number 4, and compare its utility as a performance measure against the MNAS reward obtained by 5-epoch training. We do so by selecting the top-10 networks according to the NNGP validation accuracies and shortened-training measures, and evaluating the ground-truth performance by training the networks for 400 epochs and evaluating the test set accuracy. The hyperparameters used for 5- and 400-epoch training are identical to those used in Tan et al. (2019).

4 Results
4.1 CIFAR-10 on NAS-Bench-101
We compute Kendall's Tau and the correlation of the NNGP validation accuracies for different combinations of ensemble number and dataset sizes, and of the validation accuracies measured after shortened training, against the ground-truth performance of the 423k networks in NAS-Bench-101. (The raw inference results are presented in SM section D.) These measures have been plotted against the relative computational FLOPs in figure 1. We find that the Kendall's Tau performance of 4-epoch or 12-epoch training can be matched or bested by NNGP inference with an order of magnitude fewer FLOPs.
In figure 2 we plot PQETP for a range of threshold accuracies, computed for validation accuracy via 4- and 12-epoch training and via NNGP inference. We see that while the 12-epoch training validation accuracy is better suited than NNGP measures for discerning whether a network's performance is within the top 1, 5 or 20 percentile, many of the NNGP measures are better at predicting whether a network's performance is above the median. Meanwhile, we find that most NNGP measures have better PQETP than 4-epoch training beyond the 92nd-percentile threshold value, despite costing less to compute.
We also compute the average discovered performance over 10 randomly sampled 10k-subsets of networks for each metric. These values have been plotted in figure 3 against the computational cost. The standard error for the results of 4- and 12-epoch training is plotted as bands. We see that the NNGP performance is at most as good as 4-epoch training, and is worse than 12-epoch training.
Comments on Biases of NNGP performance
We found that competitive architectures with poor NNGP performance were mostly "linear" networks, having no or only a small number of residual connections. In figure 4 we demonstrate this bias. In the directed adjacency matrix specifying the architecture, elements above the first superdiagonal correspond to residual connections. We observe that the relative performance of architectures with a small number of residual connections is poor early in training but improves significantly with extensive training. For such architectures, NNGP predicts poor performance.

Comments on usage of NTK
NAS aims to find architectures that train well by gradient descent. Since the neural tangent kernel (NTK) (Jacot et al., 2018) characterizes gradient descent training of infinitely wide networks, one might expect signals provided by the NTK to be better than the NNGP for NAS. To compare their utility, we computed the empirical NTK validation accuracy for a size-1k subset of networks from NAS-Bench-101. While the NTK validation accuracy attains a nontrivial peak value of Kendall's Tau against the ground-truth performance, this value is lower than that computed for NNGP performance with equivalent parameters. Moreover, for the same dataset sizes, computing NTK inference is more expensive than NNGP inference, since the Jacobian with respect to the network parameters needs to be computed for all data points, which incurs a cost similar to computing gradients for all samples. In practice, computing the full Jacobian also consumes a large amount of memory, so vector-Jacobian and Jacobian-vector products need to be utilized (see nt.empirical_ntk_fn in Novak et al. (2019b) for a reference).
NNGP Performance and Model Size
The models within NAS-Bench-101 have widely varying sizes, spanning over an order of magnitude: the smallest model has 2M parameters, while the biggest one has 50M. Given this range, model size is a strong indicator of performance for NAS-Bench-101, as we explore further in section F of the SM. One may thus be concerned that NNGP performance is merely capturing the size of the network, which is a trivial aspect of the neural network architecture. (One can carry out an equivalent analysis for the computational budget, rather than the number of parameters, of each model. For NAS-Bench-101, model size and computational budget correlate almost perfectly, so we restrict our discussion to model size with this understanding.)
In figure 6, we present the Kendall’s Tau and correlation between the NNGP validation accuracies and network size. We find very low correlation. We present further analysis on the model size distribution of NASBench101 and the utility of performance predictors under model size constraints in section F of the SM.
4.2 ImageNet on MNAS Search Space
In the leftmost panel of figure 7, we plot the performance of ten randomly selected networks, and of the top-10 networks selected according to the 5-epoch MNAS reward, the 5-epoch accuracy, and the NNGP validation accuracies indexed by the number of subsampled labels, training set size and validation set size. We find that the best networks according to the NNGP validation accuracies perform worse than the best randomly selected network. As we discuss in section 5, this suggests that while the NNGP provides a strong signal for whether a network will perform reasonably, it does not on its own identify the top-performing networks.
In the latter two panels of figure 7, we give Kendall's Tau and the correlation between the MNAS reward function computed after 5-epoch training and the NNGP validation accuracies. We see that there is a nontrivial correlation between the two different types of measures.
5 Discussion and exploration of practical use of NNGP inference in NAS
On NAS-Bench-101, we find that Monte Carlo estimated NNGP inference provides a computationally inexpensive signal with comparable utility to the validation accuracy obtained from shortened training. We further find that NNGP inference provides a strong signal for discerning whether a network exceeds median performance, but lags behind shortened training when predicting the hierarchy of top performers.
This is further exemplified in the experiments conducted in the MNAS search space, where a randomly sampled network already exhibits good performance. To see this point, we note that the worst 5-epoch training validation accuracy we obtain from the 10k networks we sampled from the MNAS search space is 23.69%. This is to be compared with the NAS-Bench-101 networks, for which the average of the worst validation accuracy over 10 sets of 10k random networks after training for 4, 12, 36 and 108 epochs is 4.37%, 9.12%, 9.49% and 9.49%, respectively. The quality of the MNAS search space is evident, even before considering the fact that ImageNet is a much more difficult task than CIFAR-10, with a hundred times more labels. Thus the fact that the max performance over the top 10 networks ranked by NNGP is less than that over 10 random networks, despite the NNGP performance being correlated with the much more predictive shortened-training performance, is consistent with what we have observed for the NAS-Bench-101 dataset.
Based on these results, we suggest two ways that NNGP inference can be utilized in NAS. The first is that it can be used to shrink a large architecture search space in which there is a large variance in performance of networks (e.g., Radosavovic et al. (2020)) at a low computational cost. The second is that it can be used as a complementary signal that can improve trainingbased performance measures.
Example of SearchSpace Reduction
Consider 10 randomly selected subsets of the networks in NAS-Bench-101 of size 10k. We compare the average discovered performance on these sets obtained by shortened training on a reduced search space, obtained by selecting the top p% of networks according to NNGP validation accuracy. As baselines, we consider the average discovered performance of shortened training without NNGP screening, as well as the average discovered performance via shortened training on a random p% subset. We experiment with p = 30 and p = 10.
The results are shown in figure 8. We find that 70% reduction of the search space by screening with NNGP performance, for both 12-epoch and 36-epoch training-based random architecture search, does not sacrifice the performance of the search while significantly reducing the computational cost. This is to be contrasted with reducing the search space by random selection, which leads to marked degradation in performance (orange/black vs. blue in plots). On the other hand, 90% reduction leads to average performance degradation, showing results similar to those obtained from random reduction of the search space. This is consistent with the PQETP results, where we found NNGP validation accuracy to be competitive against 4- and 12-epoch training for discerning performance threshold values in the top 29 to 92 percentile. We thus expect performance degradation when the search space size is reduced to significantly below 29% of the original size.
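The screening procedure itself is a two-stage ranking. A sketch on synthetic placeholder accuracies (the noise levels are invented for illustration, with the NNGP proxy modeled as noisier than shortened training):

```python
import numpy as np

def screened_discovery(nngp_acc, short_acc, ground_truth, p, k=10):
    """Keep the top p% of networks by NNGP accuracy, rank the survivors by
    shortened-training accuracy, and report the best ground-truth accuracy
    among the top-k survivors."""
    keep = max(k, int(len(nngp_acc) * p / 100))
    survivors = np.argsort(nngp_acc)[-keep:]                 # NNGP screening
    top_k = survivors[np.argsort(short_acc[survivors])[-k:]] # expensive ranking
    return ground_truth[top_k].max()

rng = np.random.default_rng(0)
gt = rng.uniform(0.80, 0.95, size=10_000)
nngp_acc = gt + rng.normal(0.0, 0.03, size=10_000)   # weaker, cheaper proxy
short_acc = gt + rng.normal(0.0, 0.01, size=10_000)  # stronger, costlier proxy

baseline = gt[np.argsort(short_acc)[-10:]].max()     # no screening
screened = screened_discovery(nngp_acc, short_acc, gt, p=30)
```

The computational saving comes from running the expensive shortened-training ranking only on the surviving p% of the search space.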
Potential Hybrid Performance Metrics
Here, we show that a simple linear model combining the 4-epoch training validation accuracy and the NNGP validation accuracy produces a better performance metric than 4-epoch training alone, for only a small additional computational cost. We note that we have omitted the computational cost of actually fitting the linear model, as our aim is to demonstrate the existence of a hybrid performance metric with improved predictive ability.
We use a linear model with three parameters (including the bias) to fit the 12-epoch validation accuracy against the 4-epoch validation accuracy and each NNGP validation accuracy. By doing so, we obtain a hybrid performance metric, with which we measure the average discovered performance for 10 randomly selected size-10k sets of networks. The results obtained for the hybrid metric are plotted in figure 9. We see that hybrid metrics built out of NNGPs with larger training sets can exhibit statistically significant performance gains with marginal additional computational cost.
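Fitting such a three-parameter model is a one-liner with sklearn. A sketch on synthetic placeholder accuracies (the noise model below is invented for illustration only):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 5_000
quality = rng.uniform(0.80, 0.95, size=n)          # latent network quality
acc_4ep = quality + rng.normal(0.0, 0.02, size=n)  # 4-epoch proxy
acc_nngp = quality + rng.normal(0.0, 0.03, size=n) # NNGP proxy
acc_12ep = quality + rng.normal(0.0, 0.01, size=n) # regression target

# Two weights plus a bias: three parameters in total.
X = np.stack([acc_4ep, acc_nngp], axis=1)
model = LinearRegression().fit(X, acc_12ep)
hybrid = model.predict(X)

corr = lambda a, b: np.corrcoef(a, b)[0, 1]
print(corr(acc_4ep, acc_12ep), corr(hybrid, acc_12ep))
```

Since least squares maximizes in-sample correlation with the target over linear combinations of its inputs, the hybrid metric can only match or improve on either input alone on the fitting data.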
Broader Impact
Our research aims to reduce the computational cost of evaluating neural network performance and of neural architecture search, which would lead to a reduction of the environmental footprint of deep learning research and applications (Strubell et al., 2019).

Acknowledgments

We thank Yasaman Bahri, Gabriel Bender, Pieter-Jan Kindermans, Quoc V. Le, Esteban Real, Samuel S. Schoenholz and Mingxing Tan for useful discussions.
References
 Neal (1994) Radford M. Neal. Priors for infinite networks (Tech. Rep. No. CRG-TR-94-1). University of Toronto, 1994.
 Williams (1997) Christopher KI Williams. Computing with infinite networks. In Advances in neural information processing systems, pages 295–301, 1997.
 Hazan and Jaakkola (2015) Tamir Hazan and Tommi Jaakkola. Steps toward deep kernel methods from infinite neural networks. arXiv preprint arXiv:1508.05133, 2015.
 Schoenholz et al. (2017) Samuel S Schoenholz, Justin Gilmer, Surya Ganguli, and Jascha Sohl-Dickstein. Deep information propagation. International Conference on Learning Representations, 2017.
 Lee et al. (2018) Jaehoon Lee, Yasaman Bahri, Roman Novak, Sam Schoenholz, Jeffrey Pennington, and Jascha Sohl-Dickstein. Deep neural networks as gaussian processes. In International Conference on Learning Representations, 2018.
 Matthews et al. (2018a) Alexander G. de G. Matthews, Jiri Hron, Mark Rowland, Richard E. Turner, and Zoubin Ghahramani. Gaussian process behaviour in wide deep neural networks. In International Conference on Learning Representations, 2018a. URL https://openreview.net/forum?id=H1-nGgWC.
 Matthews et al. (2018b) Alexander G de G Matthews, Mark Rowland, Jiri Hron, Richard E Turner, and Zoubin Ghahramani. Gaussian process behaviour in wide deep neural networks. arXiv preprint arXiv:1804.11271, 2018b.
 Borovykh (2018) Anastasia Borovykh. A gaussian process perspective on convolutional neural networks. arXiv preprint arXiv:1810.10798, 2018.
 Garriga-Alonso et al. (2019) Adrià Garriga-Alonso, Laurence Aitchison, and Carl Edward Rasmussen. Deep convolutional networks as shallow gaussian processes. In International Conference on Learning Representations, 2019.
 Novak et al. (2019a) Roman Novak, Lechao Xiao, Jaehoon Lee, Yasaman Bahri, Greg Yang, Jiri Hron, Daniel A. Abolafia, Jeffrey Pennington, and Jascha Sohl-Dickstein. Bayesian deep convolutional networks with many channels are gaussian processes. In International Conference on Learning Representations, 2019a.
 Yang and Schoenholz (2017) Ge Yang and Samuel Schoenholz. Mean field residual networks: On the edge of chaos. In Advances in Neural Information Processing Systems. 2017.

 Yang et al. (2019) Greg Yang, Jeffrey Pennington, Vinay Rao, Jascha Sohl-Dickstein, and Samuel S. Schoenholz. A mean field theory of batch normalization. In International Conference on Learning Representations, 2019.
 Yang (2019a) Greg Yang. Scaling limits of wide neural networks with weight sharing: Gaussian process behavior, gradient independence, and neural tangent kernel derivation. arXiv preprint arXiv:1902.04760, 2019a.

 Yang (2019b) Greg Yang. Wide feedforward or recurrent neural networks of any architecture are gaussian processes. In Advances in Neural Information Processing Systems, pages 9947–9960, 2019b.
 Pretorius et al. (2019) Arnu Pretorius, Herman Kamper, and Steve Kroon. On the expected behaviour of noise regularised deep neural networks as gaussian processes. arXiv preprint arXiv:1910.05563, 2019.
 Novak et al. (2020) Roman Novak, Lechao Xiao, Jiri Hron, Jaehoon Lee, Alexander A. Alemi, Jascha Sohl-Dickstein, and Samuel S. Schoenholz. Neural tangents: Fast and easy infinite neural networks in python. In International Conference on Learning Representations, 2020. URL https://github.com/google/neural-tangents.

 Hron et al. (2020) Jiri Hron, Yasaman Bahri, Jascha Sohl-Dickstein, and Roman Novak. Infinite width attention networks. In International Conference on Machine Learning (ICML), 2020. Submission under review.
 Jacot et al. (2018) Arthur Jacot, Franck Gabriel, and Clement Hongler. Neural tangent kernel: Convergence and generalization in neural networks. In Advances in Neural Information Processing Systems, 2018.

 Li and Liang (2018) Yuanzhi Li and Yingyu Liang. Learning overparameterized neural networks via stochastic gradient descent on structured data. In Advances in Neural Information Processing Systems, pages 8157–8166, 2018.
 Allen-Zhu et al. (2018) Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via overparameterization. In International Conference on Machine Learning, 2018.
 Du et al. (2019a) Simon S Du, Jason D Lee, Haochuan Li, Liwei Wang, and Xiyu Zhai. Gradient descent finds global minima of deep neural networks. In International Conference on Machine Learning, 2019a.
 Du et al. (2018) Simon S Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably optimizes overparameterized neural networks. arXiv preprint arXiv:1810.02054, 2018.

 Zou et al. (2019a) Difan Zou, Yuan Cao, Dongruo Zhou, and Quanquan Gu. Stochastic gradient descent optimizes overparameterized deep relu networks. Machine Learning, 2019a.
 Zou et al. (2019b) Difan Zou, Yuan Cao, Dongruo Zhou, and Quanquan Gu. Gradient descent optimizes overparameterized deep relu networks. Machine Learning, Oct 2019b. ISSN 1573-0565. doi: 10.1007/s10994-019-05839-6. URL https://doi.org/10.1007/s10994-019-05839-6.
 Chizat et al. (2019) Lenaic Chizat, Edouard Oyallon, and Francis Bach. On lazy training in differentiable programming. arXiv preprint arXiv:1812.07956, 2019.
 Lee et al. (2019) Jaehoon Lee, Lechao Xiao, Samuel S. Schoenholz, Yasaman Bahri, Roman Novak, Jascha Sohl-Dickstein, and Jeffrey Pennington. Wide neural networks of any depth evolve as linear models under gradient descent. In Advances in neural information processing systems, 2019.
 Arora et al. (2019a) Sanjeev Arora, Simon S Du, Wei Hu, Zhiyuan Li, Russ R Salakhutdinov, and Ruosong Wang. On exact computation with an infinitely wide neural net. In Advances in Neural Information Processing Systems 32, pages 8141–8150. Curran Associates, Inc., 2019a.
 Du et al. (2019b) Simon S Du, Kangcheng Hou, Russ R Salakhutdinov, Barnabas Poczos, Ruosong Wang, and Keyulu Xu. Graph neural tangent kernel: Fusing graph neural networks with graph kernels. In Advances in Neural Information Processing Systems 32, pages 5724–5734. Curran Associates, Inc., 2019b.
 Sohl-Dickstein et al. (2020) Jascha Sohl-Dickstein, Roman Novak, Samuel S Schoenholz, and Jaehoon Lee. On the infinite width limit of neural networks with a standard parameterization. arXiv preprint arXiv:2001.07301, 2020.
 Huang et al. (2020) Wei Huang, Weitao Du, and Richard Yi Da Xu. On the neural tangent kernel of deep networks with orthogonal initialization. ArXiv, abs/2004.05867, 2020.
 Pennington et al. (2017) Jeffrey Pennington, Samuel Schoenholz, and Surya Ganguli. Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice. In Advances in neural information processing systems, pages 4785–4795, 2017.

 Xiao et al. (2018) Lechao Xiao, Yasaman Bahri, Jascha Sohl-Dickstein, Samuel Schoenholz, and Jeffrey Pennington. Dynamical isometry and a mean field theory of CNNs: How to train 10,000-layer vanilla convolutional neural networks. In International Conference on Machine Learning, 2018.
 Xiao et al. (2019) Lechao Xiao, Jeffrey Pennington, and Samuel S Schoenholz. Disentangling trainability and generalization in deep learning. arXiv preprint arXiv:1912.13053, 2019.
 Hu et al. (2020) Wei Hu, Lechao Xiao, and Jeffrey Pennington. Provable benefit of orthogonal initialization in optimizing deep linear networks. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=rkgqN1SYvr.
 Li et al. (2019) Zhiyuan Li, Ruosong Wang, Dingli Yu, Simon S Du, Wei Hu, Ruslan Salakhutdinov, and Sanjeev Arora. Enhanced convolutional neural tangent kernels. arXiv preprint arXiv:1911.00809, 2019.
 Arora et al. (2020) Sanjeev Arora, Simon S. Du, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang, and Dingli Yu. Harnessing the power of infinitely wide deep nets on smalldata tasks. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=rkl8sJBYvH.
 Shankar et al. (2020a) Vaishaal Shankar, Alex Chengyu Fang, Wenshuo Guo, Sara Fridovich-Keil, Ludwig Schmidt, Jonathan Ragan-Kelley, and Benjamin Recht. Neural kernels without tangents. ArXiv, abs/2003.02237, 2020a.
 Cho and Saul (2009) Youngmin Cho and Lawrence K Saul. Kernel methods for deep learning. In Advances In Neural Information Processing Systems, 2009.
 Daniely et al. (2016) Amit Daniely, Roy Frostig, and Yoram Singer. Toward deeper understanding of neural networks: The power of initialization and a dual view on expressivity. In Advances In Neural Information Processing Systems, 2016.
 Poole et al. (2016) Ben Poole, Subhaneil Lahiri, Maithra Raghu, Jascha Sohl-Dickstein, and Surya Ganguli. Exponential expressivity in deep neural networks through transient chaos. In Advances In Neural Information Processing Systems, pages 3360–3368, 2016.
 Chen et al. (2018) Minmin Chen, Jeffrey Pennington, and Samuel Schoenholz. Dynamical isometry and a mean field theory of RNNs: Gating enables signal propagation in recurrent neural networks. In International Conference on Machine Learning, 2018.

 Li and Nguyen (2019) Ping Li and Phan-Minh Nguyen. On random deep weight-tied autoencoders: Exact asymptotic analysis, phase transitions, and implications to training. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=HJx54i05tX.
 Daniely (2017) Amit Daniely. SGD learns the conjugate kernel class of the network. In Advances in Neural Information Processing Systems, 2017.
 Pretorius et al. (2018) Arnu Pretorius, Elan van Biljon, Steve Kroon, and Herman Kamper. Critical initialisation for deep signal propagation in noisy rectifier neural networks. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 5717–5726. Curran Associates, Inc., 2018.
 Hayou et al. (2018) Soufiane Hayou, Arnaud Doucet, and Judith Rousseau. On the selection of initialization and activation function for deep neural networks. arXiv preprint arXiv:1805.08266, 2018.
 Karakida et al. (2018) Ryo Karakida, Shotaro Akaho, and Shun-ichi Amari. Universal statistics of fisher information in deep neural networks: Mean field approach. arXiv preprint arXiv:1806.01316, 2018.
 Blumenfeld et al. (2019) Yaniv Blumenfeld, Dar Gilboa, and Daniel Soudry. A mean field theory of quantized deep networks: The quantization-depth tradeoff. arXiv preprint arXiv:1906.00771, 2019.
 Hayou et al. (2019) Soufiane Hayou, Arnaud Doucet, and Judith Rousseau. Mean-field behaviour of neural tangent kernel for deep neural networks, 2019.
 Zoph and Le (2016) Barret Zoph and Quoc V. Le. Neural architecture search with reinforcement learning. ArXiv, abs/1611.01578, 2016.

 Baker et al. (2016) Bowen Baker, Otkrist Gupta, Nikhil Naik, and Ramesh Raskar. Designing neural network architectures using reinforcement learning. 2016.
 Novak et al. (2019b) Roman Novak, Lechao Xiao, Jiri Hron, Jaehoon Lee, Alexander A. Alemi, Jascha Sohl-Dickstein, and Samuel S. Schoenholz. Neural tangents: Fast and easy infinite neural networks in Python. https://github.com/google/neural-tangents, 2019b. URL https://github.com/google/neural-tangents.
 Arora et al. (2019b) Sanjeev Arora, Simon S Du, Wei Hu, Zhiyuan Li, Ruslan Salakhutdinov, and Ruosong Wang. On exact computation with an infinitely wide neural net. In Advances In Neural Information Processing Systems, 2019b.
 Shankar et al. (2020b) Vaishaal Shankar, Alex Fang, Wenshuo Guo, Sara FridovichKeil, Ludwig Schmidt, Jonathan RaganKelley, and Benjamin Recht. Neural kernels without tangents. 2020b.
 Ying et al. (2019) Chris Ying, Aaron Klein, Esteban Real, Eric Christiansen, Kevin Murphy, and Frank Hutter. NAS-Bench-101: Towards reproducible neural architecture search. In International Conference on Machine Learning, 2019.
 (55) Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. CIFAR-10 (Canadian Institute for Advanced Research). URL http://www.cs.toronto.edu/~kriz/cifar.html.

 Tan et al. (2019) Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard, and Quoc V Le. MnasNet: Platform-aware neural architecture search for mobile. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2820–2828, 2019.
 Russakovsky et al. (2015) Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015. doi: 10.1007/s11263-015-0816-y. URL http://www.image-net.org/download-images.

 Real et al. (2019) Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V Le. Regularized evolution for image classifier architecture search. In AAAI Conference on Artificial Intelligence (AAAI), pages 4780–4789, 2019.
 Ghiasi et al. (2019) Golnaz Ghiasi, Tsung-Yi Lin, and Quoc V Le. NAS-FPN: Learning scalable feature pyramid architecture for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7036–7045, 2019.
 Swersky et al. (2014) Kevin Swersky, Jasper Snoek, and Ryan Prescott Adams. Freeze-thaw Bayesian optimization, 2014.
 Domhan et al. (2015) Tobias Domhan, Jost Tobias Springenberg, and Frank Hutter. Speeding up automatic hyperparameter optimization of deep neural networks by extrapolation of learning curves. In Proceedings of the 24th International Conference on Artificial Intelligence, IJCAI'15, pages 3460–3468. AAAI Press, 2015. ISBN 978-1-57735-738-4.
 Klein et al. (2017) Aaron Klein, Stefan Falkner, Jost Tobias Springenberg, and Frank Hutter. Learning curve prediction with bayesian neural networks. In ICLR, 2017.
 Baker et al. (2017) Bowen Baker, Otkrist Gupta, Ramesh Raskar, and Nikhil Naik. Accelerating neural architecture search using performance prediction. 2017.
 Mellor et al. (2020) Joseph Mellor, Jack Turner, Amos Storkey, and Elliot J Crowley. Neural architecture search without training. arXiv preprint arXiv:2006.04647, 2020.
 Pham et al. (2018) Hieu Pham, Melody Y. Guan, Barret Zoph, Quoc V. Le, and Jeff Dean. Efficient neural architecture search via parameter sharing. In ICML, 2018.
 Liu et al. (2019) Hanxiao Liu, Karen Simonyan, and Yiming Yang. DARTS: Differentiable architecture search. ArXiv, abs/1806.09055, 2019.
 Cai et al. (2019) Han Cai, Ligeng Zhu, and Song Han. ProxylessNAS: Direct neural architecture search on target task and hardware. In International Conference on Learning Representations (ICLR), 2019.
 Wen et al. (2019) Wei Wen, Hanxiao Liu, Hiuming Li, Yiran Chen, Gabriel Bender, and Pieter-Jan Kindermans. Neural predictor for neural architecture search. ArXiv, abs/1912.00848, 2019.
 Radosavovic et al. (2020) Ilija Radosavovic, Raj Prateek Kosaraju, Ross B. Girshick, Kaiming He, and Piotr Dollár. Designing network design spaces. ArXiv, abs/2003.13678, 2020.
 Strubell et al. (2019) Emma Strubell, Ananya Ganesh, and Andrew McCallum. Energy and policy considerations for deep learning in NLP. arXiv preprint arXiv:1906.02243, 2019.
 Hu et al. (2018) Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. 2018.
Appendix A Comments on Batch Normalization
All convolution cells in NAS-Bench-101 and MNAS utilize batch normalization, whose parameters need to be initialized, to ensure that the search space contains ResNet- and Inception-like cells Ying et al. [2019]. We initialize the moving averages in batch-normalization layers using one forward pass on a random batch drawn from the training set. In practice, we use NAS-Bench-101's default batch-normalization momentum value and use inference mode (training=False) to compute NNGP kernels; thus the batch-normalization parameters are not far from their initial values of zero mean and unit standard deviation.
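The effect of the momentum value on this single initialization pass can be illustrated with a minimal NumPy sketch (purely illustrative, not the implementation used in the paper): with a high momentum the moving statistics stay close to their (0, 1) initialization, while with momentum zero they jump straight to the batch statistics.

```python
import numpy as np

def update_bn_stats(moving_mean, moving_var, batch, momentum):
    # One moving-average update, as performed during a single
    # initialization forward pass through a batch-normalization layer.
    batch_mean = batch.mean(axis=0)
    batch_var = batch.var(axis=0)
    new_mean = momentum * moving_mean + (1.0 - momentum) * batch_mean
    new_var = momentum * moving_var + (1.0 - momentum) * batch_var
    return new_mean, new_var

rng = np.random.default_rng(0)
batch = rng.normal(loc=3.0, scale=2.0, size=(256, 8))  # toy pre-BN activations

# High momentum: statistics barely move from the (0, 1) initialization.
m_hi, v_hi = update_bn_stats(np.zeros(8), np.ones(8), batch, momentum=0.99)

# Momentum zero: statistics are replaced entirely by the batch statistics.
m0, v0 = update_bn_stats(np.zeros(8), np.ones(8), batch, momentum=0.0)
```

This is why, with the default high momentum, inference-mode normalization stays close to zero mean and unit standard deviation, whereas momentum zero fully adopts the statistics of the sampled batch.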
In figure S1, we compare the result of NNGP inference with the settings used in the paper against that of updating the batch-normalization parameters with the statistics of a random subset of the training set by setting the momentum parameter to zero. We observe that while the performance of the NNGP increases in expectation, the resulting validation accuracy becomes less indicative of the ground-truth performance of the network. In fact, Kendall's Tau and the correlation with respect to ground-truth performance when momentum is set to zero are close to zero, far below the respective numbers for default momentum or partial training, indicating a nearly random chance of predicting the correct ground-truth ordering.
Appendix B Computational Cost
The computational cost of NNGP inference stems from computing the Monte Carlo NNGP kernels (see algorithm 1) and from performing inference with different choices of the regularizer.
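As a rough illustration of these two stages (not the paper's actual implementation; `feature_fn`, `params_list`, and the toy tanh feature map are hypothetical stand-ins), a Monte Carlo NNGP kernel can be estimated by averaging penultimate-feature Gram matrices over an ensemble of randomly initialized networks, followed by Gaussian process inference via a Cholesky solve:

```python
import numpy as np

def mc_nngp_kernel(feature_fn, params_list, x_all):
    # Monte Carlo NNGP kernel: average the penultimate-feature Gram matrix
    # over an ensemble of randomly initialized networks.
    kernel = 0.0
    for params in params_list:
        feats = feature_fn(params, x_all)              # shape (n_t + n_v, d)
        kernel = kernel + feats @ feats.T / feats.shape[1]
    return kernel / len(params_list)

def nngp_predict(kernel, y_train, n_train, reg):
    # GP posterior mean via a Cholesky solve with a diagonal regularizer.
    k_tt = kernel[:n_train, :n_train] + reg * np.eye(n_train)
    k_vt = kernel[n_train:, :n_train]
    chol = np.linalg.cholesky(k_tt)
    alpha = np.linalg.solve(chol.T, np.linalg.solve(chol, y_train))
    return k_vt @ alpha

# Toy example: random tanh features standing in for a network's penultimate layer.
rng = np.random.default_rng(0)
feature_fn = lambda params, x: np.tanh(x @ params)
params_list = [rng.normal(size=(5, 64)) for _ in range(8)]   # ensemble of 8
x_all = rng.normal(size=(30, 5))                             # 20 train + 10 val
y_train = rng.normal(size=(20, 3))

kernel = mc_nngp_kernel(feature_fn, params_list, x_all)
preds = nngp_predict(kernel, y_train, n_train=20, reg=1e-2)
```

Sweeping `reg` over several values only requires repeating the (cheap, relative to kernel construction) Cholesky solve, which is the cost structure analyzed below.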
Let $C$ be the architecture-dependent number of FLOPs for inference on a single sample, $d$ the dimension of the feature space of the penultimate layer of the network, $M$ the ensemble size, $L$ the number of labels, and $n_t$ and $n_v$ the sizes of the training set and validation set for the NNGP, respectively. The computational FLOPs required for NNGP training are then approximately
Kernel Evaluation FLOPs $\approx M (n_t + n_v) C + M (n_t + n_v)^2 d$,  (S1)
where the first term comes from forward propagation through the network and the second term from the matrix computations used to construct the kernels. Meanwhile, the cost of NNGP inference via Cholesky solves for $R$ distinct values of the regularizer is approximately
Inference FLOPs $\approx R \left( \tfrac{1}{3} n_t^3 + n_t^2 L + n_t n_v L \right)$.  (S2)
Adding the two terms, we arrive at the expression
Total NNGP FLOPs $\approx M (n_t + n_v) C + M (n_t + n_v)^2 d + R \left( \tfrac{1}{3} n_t^3 + n_t^2 L + n_t n_v L \right)$.  (S3)
Denoting the FLOPs required for training a model per step per sample by $C_{\mathrm{train}}$, training a model for $E$ epochs and carrying out inference costs
Training FLOPs $\approx E N_t C_{\mathrm{train}} + N_v C$,  (S4)
where $N_t$ and $N_v$ are the full training- and validation-set sizes, since we always train and validate over the full training and validation sets for gradient-based training.
We have computed $C$ and $C_{\mathrm{train}}$ for all the networks in the NAS-Bench-101 dataset for multiple batch sizes and found that the relation $C_{\mathrm{train}} \approx 3C$ holds to a good approximation (see figure S2). We thus replace $C_{\mathrm{train}}$ with $3C$ when evaluating computational cost.
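This FLOP accounting — ensemble forward passes plus kernel construction and Cholesky inference on the NNGP side, versus per-step training cost on the gradient-descent side — can be sketched numerically. The constant factors below (including a training-to-inference FLOP ratio of 3, i.e., a backward pass costing roughly twice a forward pass) are our own assumptions, not values taken from the paper:

```python
def nngp_flops(c_inf, d, ensemble, labels, n_train, n_val, n_reg):
    # Monte Carlo kernel evaluation (forward passes plus Gram-matrix
    # construction), followed by a Cholesky solve per regularizer value.
    n = n_train + n_val
    kernel = ensemble * n * c_inf + ensemble * n * n * d
    inference = n_reg * (n_train ** 3 // 3
                         + n_train ** 2 * labels
                         + n_train * n_val * labels)
    return kernel + inference

def training_flops(c_inf, epochs, n_train, n_val, backward_factor=3):
    # Gradient-based training for `epochs` epochs over the full training set,
    # assuming C_train ~ backward_factor * C_inference, plus a validation pass.
    return epochs * n_train * backward_factor * c_inf + n_val * c_inf
```

For example, `nngp_flops(1e9, 512, 8, 10, 1000, 500, 20)` versus `training_flops(1e9, 4, 50000, 10000)` gives a back-of-the-envelope comparison of the two pipelines for a hypothetical architecture with a 1 GFLOP forward pass.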
Appendix C More Details on Search Spaces and Experiment Design
C.1 NAS-Bench-101
A network in the NAS-Bench-101 dataset Ying et al. [2019] is defined by a “cell” architecture, which is parameterized by a labelled directed graph. The graph is defined by a set of vertices, whose labels specify operations (1×1 convolution, 3×3 convolution, or 3×3 max pooling), and a directed adjacency matrix that specifies how to compose these operations to yield an output. The network is constructed by composing a stem convolutional layer with 128 output channels with 3 repeated blocks, each of which consists of three concatenations of the defined cell, followed by an average-pooling layer and a fully-connected layer. A downsampling layer, which halves the height and width of the image and doubles the channel count, is applied between blocks.
All the networks in the NAS-Bench-101 dataset produce a penultimate-layer feature of dimension 512. This is because each of the three blocks used to construct the network preserves the channel count, while each of the two downsampling layers situated between the three blocks doubles it. Since the final average-pooling layer does not change the channel count, starting from the 128-channel output of the stem layer, we arrive at 128 × 2 × 2 = 512 channels in the penultimate layer. This has been verified by explicit inspection of all networks.
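The channel-count arithmetic can be checked directly (a trivial sketch):

```python
stem_channels = 128
num_downsampling_layers = 2   # one between each pair of the three blocks

# Blocks and the final average-pooling layer preserve the channel count;
# each downsampling layer doubles it.
penultimate_width = stem_channels * 2 ** num_downsampling_layers
print(penultimate_width)  # 512
```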
All NNGP inference has been carried out on CPUs. All CIFAR-10 images for the NNGP have been processed by standardizing the RGB channels using means 125.3, 123.0, 113.9 and standard deviations 63.0, 62.1, 66.7. No augmentations have been applied.
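This preprocessing amounts to per-channel standardization with the stated statistics; a minimal sketch:

```python
import numpy as np

CIFAR10_MEAN = np.array([125.3, 123.0, 113.9])
CIFAR10_STD = np.array([63.0, 62.1, 66.7])

def standardize_cifar(images):
    # Per-RGB-channel standardization; `images` has shape (..., H, W, 3)
    # with raw pixel values in [0, 255]. No augmentation is applied.
    return (images.astype(np.float64) - CIFAR10_MEAN) / CIFAR10_STD

# An image equal to the channel means standardizes to all zeros.
mean_image = np.broadcast_to(CIFAR10_MEAN, (32, 32, 3))
out = standardize_cifar(mean_image)
```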
C.2 MNAS Search Space
The MNAS search space Tan et al. [2019] is a factorized hierarchical search space, in which the model is assumed to be composed of seven feedforward blocks, each of which acts by repeated application of a convolutional layer. It includes multiple configuration parameters for each block, including the convolution operation, kernel size, squeeze-and-excitation ratio Hu et al. [2018], skip operation, filter size, and repetition number (see section 4.1 of Tan et al. [2019] for details).
The “reward" used in MNAS search is where is the validation accuracy of the model trained after 5epochs, is the latency of the model and is the target latency, which is set to be milliseconds.
All NNGP inference has been carried out on CPUs. All ImageNet images for the NNGP (including the NNGP training-set images) have undergone standard validation processing, i.e., the RGB channels are standardized using means 123.7, 116.3, 103.5 and standard deviations 58.4, 57.1, 57.4, and the images are scaled so that the shortest edge has length 256 and then center-cropped to size 224 × 224. No augmentations have been applied.
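The crop-and-standardize portion of this pipeline can be sketched as follows (the resize step is omitted; the input is assumed to already have shortest edge 256):

```python
import numpy as np

IMAGENET_MEAN = np.array([123.7, 116.3, 103.5])
IMAGENET_STD = np.array([58.4, 57.1, 57.4])

def center_crop(image, size=224):
    # Center-crop an (H, W, 3) image to (size, size, 3); the image is
    # assumed to have already been resized so its shortest edge is 256.
    h, w, _ = image.shape
    top = (h - size) // 2
    left = (w - size) // 2
    return image[top:top + size, left:left + size]

def standardize_imagenet(image):
    # Per-channel RGB standardization used for the ImageNet NNGP inputs.
    return (image.astype(np.float64) - IMAGENET_MEAN) / IMAGENET_STD

img = np.zeros((256, 341, 3), dtype=np.uint8)  # toy image, shortest edge 256
crop = center_crop(img)
std = standardize_imagenet(crop)
```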
Data processing, training, and validation for 5-epoch and 400-epoch training have all been conducted in accordance with Tan et al. [2019]. 5-epoch training of networks has been carried out using 8 Google Cloud TPU chips, while 400-epoch training was done using 32 Google Cloud TPU chips.
Appendix D NAS-Bench-101 Inference Results
In this section, we present raw NNGP inference results and some basic statistics, along with the reference shortened-training results for NAS-Bench-101.
In figure S3 we plot the ground-truth accuracy against NNGP validation accuracy, for each hyperparameter setting considered, for a selected set of 10k networks. The ground-truth accuracy is plotted against shortened-training validation accuracies for the same networks in figure S4.
In figure S5 we plot the mean and median validation accuracy obtained from NNGP inference and shortened training.
Appendix E More Performance Measure Plots
In this section, we present some additional analysis of the utility of NNGP validation accuracy as an indicator of ground-truth performance.
In figure S6, for each NNGP validation accuracy, we plot the range of threshold accuracies for which its PQETP exceeds that of the validation accuracies obtained from 4- and 12-epoch training. To obtain this plot, we scanned the threshold-accuracy range 0.78 to 0.95 with step size 0.003.
Appendix F Model Size Distribution of NAS-Bench-101
NAS-Bench-101 contains models of widely varying sizes, with the largest model having 20 times as many parameters as the smallest. Given this variance, model size turns out to be a good indicator for selecting top-performing models.
Figure S7 presents an overall view of the size distribution of models in NAS-Bench-101. From the left panel, which plots the ground-truth accuracy against model size for a selected set of 10k networks, we see a correlation between model size and performance. It is also evident that there are multiple clusters of models with respect to size. Meanwhile, from the plot of PQETP against the threshold ground-truth performance depicted in the right panel, we see that model size is a very strong discriminator for threshold performances above the top percentile.
As a consequence, we find that while the overall ranking of performance does not align very well with model size, model size is a surprisingly good discriminator for singling out the top-performing networks. This is demonstrated in figure S8, which displays Kendall's Tau of model size against ground-truth accuracy, and the average discovered performance obtained by choosing models based on their size.
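The two measures used throughout this appendix — Kendall's Tau of a surrogate metric against ground-truth accuracy, and the average discovered performance over random 10k-size subsets — can be sketched as follows; the exact subset-selection protocol is our assumption, not a reproduction of the paper's code:

```python
import numpy as np
from scipy.stats import kendalltau

def discovered_performance(metric, ground_truth, subset_size, num_subsets, rng):
    # Within each random subset, pick the model ranked best by `metric`
    # and record its ground-truth accuracy; average over subsets.
    scores = []
    for _ in range(num_subsets):
        idx = rng.choice(len(metric), size=subset_size, replace=False)
        best = idx[np.argmax(metric[idx])]
        scores.append(ground_truth[best])
    return float(np.mean(scores))

# Toy data: a noisy surrogate metric (e.g., model size) vs ground truth.
rng = np.random.default_rng(0)
ground_truth = rng.uniform(0.8, 0.95, size=1000)
metric = ground_truth + rng.normal(scale=0.02, size=1000)

tau, _ = kendalltau(metric, ground_truth)
avg_disc = discovered_performance(metric, ground_truth, subset_size=100,
                                  num_subsets=10, rng=rng)
```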
Given that there are distinct size clusters of models with very different numbers of parameters, the strength of model size as a performance indicator should be taken as a statement about the dataset, rather than about the utility of model size as a performance indicator in general. An ideal performance metric should be able to distinguish between models of comparable size; indeed, the goal of architecture search is often to find high-performance models under constraints on computational budget, e.g., Tan et al. [2019]. In figure S9, we see that shortened-training performance as well as NNGP validation accuracy satisfy this criterion, taking varying values within each cluster of models.
To examine the utility of each performance metric within a size cluster, let us focus on the cluster of models with fewer than 10M parameters; there are 297k models in this cluster. As before, we compute Kendall's Tau between shortened-training performance, NNGP performance, and model size against ground-truth performance for these models, and the average discovered performance across ten 10k-size subsets of such models according to each metric. The results are plotted in figure S10. We find that the discovered performance of models selected based on model size declines significantly more than that of models selected based on NNGP or shortened-training accuracy in this setting. Meanwhile, the relative standing of NNGP and shortened-training performance stays largely the same.