VINNAS: Variational Inference-based Neural Network Architecture Search

07/12/2020 · Martin Ferianc, et al. · UCL, Imperial College London

In recent years, neural architecture search (NAS) has received intensive scientific and industrial interest due to its capability of finding a neural architecture with high accuracy for various artificial intelligence tasks such as image classification or object detection. In particular, gradient-based NAS approaches have become one of the more popular approaches thanks to their computational efficiency during the search. However, these methods often experience a mode collapse, where the quality of the found architectures is poor due to the algorithm resorting to choosing a single operation type for the entire network, or they stagnate at a local minimum for various datasets or search spaces. To address these defects, we present a differentiable variational inference-based NAS method for searching sparse convolutional neural networks. Our approach finds the optimal neural architecture by dropping out candidate operations in an over-parameterised supergraph using variational dropout with the automatic relevance determination prior, which makes the algorithm gradually remove unnecessary operations and connections without risking mode collapse. The evaluation is conducted by searching two types of convolutional cells that shape the neural network for classifying different image datasets. Our method finds diverse network cells, while showing state-of-the-art accuracy with up to 3× fewer parameters.


I Introduction

Neural networks (NNs) have demonstrated their great potential in a wide range of artificial intelligence tasks such as image classification, object detection or speech recognition [zoph2016neural, wang2020fcos, ding2020autospeech]. Nevertheless, designing a NN for a given task or dataset requires significant human expertise, which restricts their application in the real world [elsken2018neural]. Recently, neural architecture search (NAS) has been demonstrated to be a promising solution to this issue [zoph2016neural]: it automatically designs a NN for a given dataset and target objective. Current NAS methods are already able to automatically find neural architectures that outperform hand-made NNs [zoph2016neural, ding2020autospeech, wang2020fcos, real2019regularized].

Fig. 1: (a) The structure of a computational cell which accepts inputs from the two previous cells: each cell takes the output of the immediately preceding cell as well as that of the penultimate cell. The random variables represent the learnable relevance over the candidate operations. The coloured lines represent different candidate operations and their thickness represents their likelihood. The outputs of all processed states are concatenated along the channel dimension to form the cell output, symbolised by the dashed lines. Green rectangles (states) signify data. (b) The network skeleton, comprising normal cells and two reduction cells which share the same structure. The network also contains a stem comprised of a convolution, and at the end of the network average pooling is followed by a linear classifier.

NAS itself is a challenging problem spanning a discrete search space, which can be simplified into reasoning about which operations should be present in the NN architecture and how they should be interconnected. Commonly considered operation types are, for example, different kinds of convolutions or pooling [zoph2016neural]. If the search is not approached with caution, the resultant NN might not be flexible enough to learn useful patterns. Additionally, the ability of the model to generalise is also directly dependent on the NN architecture [zoph2016neural, liu2018darts]. Therefore, there is an omnipresent need for finding architectures that are expressive enough while at the same time achieving good generalisation performance.

Based on the core algorithmic principle operating during the search, NAS can be divided into four categories: (i) reinforcement learning-based methods built on an actor-critic framework [zoph2016neural], (ii) evolutionary methods based on genetic algorithms [real2019regularized], (iii) Bayesian optimisation based on proxy models [cai2018path], or (iv) gradient-based methods [liu2018darts]. In particular, gradient-based NAS [liu2018darts] has recently been popularised for convolutional NN (CNN) architecture search due to its computational efficiency during the search. Nevertheless, gradient-based NAS is likely to collapse into a situation where it selects all operations to be the same [zela2019understanding], treats operations unfairly [chu2019fair] or is hard to adapt across different datasets and search spaces [li2019random].

To solve the issues in existing gradient-based NAS methods, this paper proposes Variational Inference-based Neural Network Architecture Search (VINNAS). Under the same search space as previous NAS methods [liu2018darts, chu2020noisy, zela2019understanding], our approach does not require any computation in addition to the standard backpropagation algorithm. In VINNAS, we tackle NAS through Bayesian inference: by modelling the architecture search with additional random variables that determine the operation types and the connections between operations, our algorithm is able to conduct effective NN architecture search. The importance of using a particular operation is determined through a variational dropout scheme [molchanov2017variational, kingma2015variational] with the automatic relevance determination (ARD) [mackay1995probable] prior. We specifically search for a network structure that is composed of cells containing a variety of operations. The operations are organised into two types of cells, normal and reduction, and, similarly to cell-based NAS [liu2018darts], the cells are replicated and then used to construct the complete CNN. The model is shown in Figure 1. To encourage traversal of the NN architecture search space, we formulate an auto-regularising objective that promotes exploration while ensuring high levels of certainty in the selection phase.

We performed experiments on searching CNNs for classification on image datasets, namely MNIST, FashionMNIST and CIFAR-10. Our results demonstrate state-of-the-art performance, thanks to targeting sparse architectures that focus on learning efficient representations, which is enforced by strict regularisation. For example, on CIFAR-10 we demonstrate finding an architecture that needs up to 3× fewer parameters in comparison to the state-of-the-art, without any human intervention.

In summary, our main contributions are as follows:

  • A differentiable neural architecture search method adopting variational dropout, which is effective in searching neural network architectures with state-of-the-art performance on multiple datasets.

  • An architecture search objective using scheduled regularisation to promote exploration, but at the same time motivate certainty in the operation selection.

  • An updated rule for selecting the most dominant operations based on their inferred uncertainty.

In the sequel, we describe our approach in detail. In Section II we review related work, in Section III we introduce variational learning and gradient-based NAS. In Section IV we introduce our search objective, search space and the proposed overall algorithm. Section V documents the performance of our search method on experiments and lastly, in Section VI we draw our conclusions. Our implementation can be found at: https://github.com/iiml-ucl/vinnas.

Architecture · Architecture search space (supergraph) · Data/state in the architecture · Architecture variables · Dataset / dataset size
Candidate operations · Number of operation candidates · Total number of cells · N: normal cell · R: reduction cell
Prior density · Approximation density · Weights · Other parameters · Reparametrisation parameters
TABLE I: Notation used in this paper.

II Related Work

Differentiable Neural Architecture Search

Since Zoph et al. [zoph2016neural] popularised NAS for CNNs, the field has been growing from intensive scientific [liu2018darts, zhou2019bayesnas] and industrial [zoph2016neural, real2019regularized] interest. NAS techniques automate the design of CNNs, mainly in terms of high-level operations, such as different types of convolutions or pooling, and their connections. The core of these techniques is the search space of potential architectures, their optimisation objective and the search algorithm. For further details on NAS, we refer the reader to the review by Elsken et al. [elsken2018neural]. It is common practice to organise the search space of all potential architectures into finding cells that specify the operations and their connections [liu2018darts], which are then stacked on top of each other to construct the final NN, as previously shown in Figure 1. Modern NAS methods often apply a weight-sharing [pham2018efficient] approach, where the search is optimised over several architectures in parallel by sharing the weights of their operations to save memory resources. Among these approaches, gradient-based NAS has become one of the most popular methods [liu2018darts], mainly due to its computational feasibility. DARTS [liu2018darts] defines the search for an architecture as optimising continuous weights associated with operations in an over-parametrised supergraph, while utilising weight-sharing. After the best combination of operations in the supergraph is identified, it is then used to construct the final architecture for evaluation. However, Zela et al. [zela2019understanding] identified a wide range of search spaces for which DARTS yields degenerate architectures with very poor test performance. Chu et al. [chu2019fair] observed critical problems in two-stage weight-sharing NAS due to inherent unfairness in the supergraph training. Chu et al. [chu2020noisy] attempt to fix this problem by adding noise to the skip-connection operation during the search. Our approach is similar to [chu2020noisy]; however, we do not bias the search towards skip-connections only, but rather infer the properties of the noise distribution with respect to ARD.

Pruning

Gradient-based NAS can be regarded as a subset of pruning in NNs, and there have been many approaches introduced for pruning, such as that of LeCun et al. [lecun1990optimal], who pruned networks by analysing second-order derivatives. Other approaches [scardapane2017group] consider removing groups of filters in convolutions. Kingma et al. [kingma2015variational] prune NNs at a node level by noticing connections between dropout [srivastava2014dropout] and approximate variational inference. Molchanov et al. [molchanov2017variational] show that interpreting Gaussian dropout as performing variational inference in a network with log-uniform priors over weights leads to high sparsity in the weights. Blundell et al. [blundell2015weight] introduce a mixture of Gaussians prior on the weights, with one mixture tightly concentrated around zero, thus approximating a spike-and-slab prior over the weights. Ghosh et al. [ghosh2018structured] and Louizos et al. [louizos2017bayesian] simultaneously consider grouped horseshoe priors [carvalho2009handling] for neural pruning. Zhou et al. [zhou2020posterior-guided] use variational dropout [kingma2015variational] to select filters for convolution. Our method differs from these approaches by not only inferring sparse weights for the operations, but also inferring weights over the operations' search space to search NN architectures.

III Preliminaries

In this Section we introduce variational learning and cell-based differentiable neural architecture search, which serve as the basic building blocks for developing VINNAS. The notation used in this paper is summarised in Table I.

III-A Variational Learning

We specify a CNN as a parametrisable function approximator with some architecture learnt on data samples consisting of inputs and targets that form a dataset. The architecture, composed of operations, has certain parameters, for example weights, which are distributed according to some prior distributions. The architecture and the weights combined define the model and the likelihood. We seek to learn the posterior distribution over the parameters using Bayes' rule. However, this is analytically intractable due to the normalising factor, the model evidence, which cannot be computed exactly because of the high dimensionality of the parameters.

Therefore, we need to formulate an approximate parametrisable posterior distribution whose parameters can be learnt in order to approach the true posterior. Moving the approximation closer to the true posterior naturally raises an objective: to minimise their separation, which is expressed as the Kullback-Leibler (KL) divergence [kullback1951information]. This objective is approximated through the evidence lower bound (ELBO), shown in (1).

\mathcal{L}(\boldsymbol{\phi}) = -\mathbb{E}_{q_{\boldsymbol{\phi}}(\mathbf{w})}\left[\log p(\mathbf{y} \mid \mathbf{x}, \mathbf{w})\right] + \beta \, D_{\mathrm{KL}}\left(q_{\boldsymbol{\phi}}(\mathbf{w}) \,\|\, p(\mathbf{w})\right) \qquad (1)

The first term is the negative log-likelihood of the data, which measures the data-fit, while the second term is a regulariser whose influence can be managed through the scaling coefficient β. Other learnable pointwise parameters are assumed to have uniform priors, so they only contribute a constant to the regularising term that is independent of the parameters.

Kingma et al. introduced the local reparametrisation trick (LRT) [kingma2015variational], which allows us to optimise the objective in (1) with respect to the variational parameters through stochastic gradient descent (SGD) with low variance. We can backpropagate gradients with respect to the distribution parameters by drawing a sample through a deterministic transformation of parameter-free noise, e.g. ε ∼ N(0, 1).

Moreover, using this trick, Molchanov et al. [molchanov2017variational] were able to search for an unbounded approximation for the weights, as shown in (2), which corresponds to a Gaussian dropout model with learnable dropout parameters [srivastava2014dropout]. Here ⊙ represents the Hadamard product.

\mathbf{w} = \boldsymbol{\theta} \odot \left(\mathbf{1} + \sqrt{\boldsymbol{\alpha}} \odot \boldsymbol{\epsilon}\right), \quad \boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0}, \mathbf{I}) \qquad (2)

After placing a factorised log-uniform prior on the weights, the authors observed an effect similar to ARD [molchanov2017variational], however, without the need to modify the prior. Throughout inference, the irrelevant learnt weights tend to a delta function centred at zero, leaving the model only with the important non-zero weights. The relevance determination is achieved by optimising both the means and the variances; if both are close to zero, the corresponding weights can be pruned.
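As a concrete illustration, the sketch below (in PyTorch, not the authors' released code) shows the multiplicative Gaussian noise parametrisation of (2) together with a simple pruning mask; the threshold value and tensor shapes are assumptions made for illustration.

import torch

# One posterior sample of the weights under Eq. (2): w = theta * (1 + sqrt(alpha) * eps).
def sample_weights(theta: torch.Tensor, log_alpha: torch.Tensor) -> torch.Tensor:
    eps = torch.randn_like(theta)                    # parameter-free noise, eps ~ N(0, 1)
    return theta * (1.0 + torch.exp(0.5 * log_alpha) * eps)

# Weights with large alpha carry almost no signal relative to their noise and can be dropped.
def keep_mask(log_alpha: torch.Tensor, threshold: float = 3.0) -> torch.Tensor:
    return (log_alpha < threshold).float()

theta = torch.randn(64, 32, requires_grad=True)      # hypothetical weight means
log_alpha = torch.full((64, 32), -8.0, requires_grad=True)
w = sample_weights(theta, log_alpha)
print(keep_mask(log_alpha).mean())                   # fraction of weights kept after pruning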

III-B Cell-based Differentiable Neural Architecture Search

As shown above, Bayesian inference can be used to induce sparsity in the weight space; however, we also wish to find an architecture from some architecture search space.

The authors of DARTS [liu2018darts] define the search for an architecture as finding the specific continuous strengths associated with choosing operations in an over-parametrised directed acyclic graph (DAG), where the learnt strengths are then used to specify the architecture at test time. For computational feasibility, the search space of all potential architectures is simplified into finding cells. The cell structure is defined with respect to edges, indexed by pairs of information states inside the cell. Each information state is a 4-dimensional tensor with a batch, channel, height and width dimension. Two different cell types are considered: normal (N) cells preserve the input dimensionality, while reduction (R) cells decrease the spatial dimensionality but increase the number of channels [liu2018darts]. The cells can be interleaved and repeated to give the total number of cells. The information for a state inside the cell is a weighted sum of the outputs generated by the different candidate operations applied to the preceding states. Choosing one of the operations can be approximated by performing a softmax over the architecture variables, instead of an argmax, which provides the method with differentiable strengths of the potential operations, as shown in (3). The last state, which is the output of the cell, is then a concatenation of all the previous states, except the first two input states.

\bar{o}^{(i,j)}(\mathbf{x}) = \sum_{o \in \mathcal{O}} \frac{\exp\left(\alpha_{o}^{(i,j)}\right)}{\sum_{o' \in \mathcal{O}} \exp\left(\alpha_{o'}^{(i,j)}\right)} \, o(\mathbf{x}) \qquad (3)

After the search, each state is connected to the outputs of the two operations whose strengths have the highest magnitude. The learnt weights and strengths are discarded and the resultant architecture is retrained from scratch.
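For reference, a minimal sketch of such a mixed operation is given below (PyTorch); the candidate set and initialisation are illustrative assumptions, not the full operation set used later in this paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """One edge of a DARTS-style cell: a softmax-weighted sum of candidate operations, as in (3)."""
    def __init__(self, channels: int):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Identity(),                                            # skip-connection
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),  # a convolution candidate
            nn.MaxPool2d(3, stride=1, padding=1),
            nn.AvgPool2d(3, stride=1, padding=1),
        ])
        # one continuous strength per candidate operation on this edge
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(self.ops)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

edge = MixedOp(channels=16)
out = edge(torch.randn(2, 16, 8, 8))   # output has the same shape as the input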

DARTS has been heavily adopted by the NAS community due to its computational efficiency in comparison to other NAS methods. However, upon careful inspection it can be observed that it does not promote choosing a particular operation and often collapses to a mode in which, because the graph is over-parametrised with a variety of parallel operations [chu2019fair], the supergraph focuses on improving the performance of the whole graph without providing a dominant architecture. Additionally, others have observed [chu2019fair, chu2020noisy] that the method requires careful hyperparameter tuning, without which it might collapse into preferring only one operation type over the others.

IV VINNAS

In this Section, we first describe the search space assumptions of VINNAS in detail, followed by the objective that guides the exploration among architectures. Lastly, we present the VINNAS algorithm that couples everything together.

IV-A Search Space

Our method extends the idea behind gradient-based NAS, while using variational learning to solve the aforementioned defects in previous work. VINNAS builds its search space as an over-parametrised directed acyclic supergraph that contains the sought architecture template. Similarly to DARTS, we aim to search for two repeated cells, namely a normal and a reduction cell, which are then replicated as shown in Figure 1. Therefore, the supergraph contains several normal and reduction cells laid out in a sequence, each containing the parallel operation options. However, the supergraph is downscaled in the number of cells and channels in comparison to the network considered during evaluation, such that it can fit into GPU memory. Nevertheless, the pattern and the ratio of the numbers of normal and reduction cells are preserved in accordance with the model shown in Figure 1. To apply variational inference, and subsequently ARD through variational dropout, we give the structural strengths of the normal and reduction cells a probabilistic interpretation. The graphical model of the supergraph, which pairs its weights with the architecture strengths, is shown in Figure 2.

Fig. 2: Graphical model capturing the search space in terms of the structural random variables and the weights. Note that the weight parameters are discarded after the search.

For simplicity, we assume a fully factorisable log-uniform prior for the operation strengths. The prior biases the distributions of the operations' strengths towards zero, which avoids giving an advantage to certain operations over others. We similarly model the weights of the supergraph as random variables, such that the joint prior distribution factorises over the strengths and the weights. It is not analytically possible to find the true posterior; therefore, we resort to formulating an approximation. We again set factorisable approximations for both the strengths and the weights, such that the joint distribution factorises with respect to the optimisable parameters of each. The prior and the approximations are detailed in (4) and (5), respectively. The indices stand for the different states in the cells and for the available operations.

p(\mathbf{z}, \mathbf{w}) = \prod_{i,j,o} p\left(z_{o}^{(i,j)}\right) \prod_{k} p\left(w_{k}\right), \quad p\left(\log \left|z_{o}^{(i,j)}\right|\right) \propto \mathrm{const}, \quad p\left(\log \left|w_{k}\right|\right) \propto \mathrm{const} \qquad (4)
q_{\boldsymbol{\phi}}(\mathbf{z}, \mathbf{w}) = \prod_{i,j,o} \mathcal{N}\!\left(z_{o}^{(i,j)} \mid \mu_{o}^{(i,j)}, \sigma_{o}^{(i,j)\,2}\right) \prod_{k} \mathcal{N}\!\left(w_{k} \mid \theta_{k}, \alpha_{k}\theta_{k}^{2}\right) \qquad (5)

The approximate posteriors were selected as Gaussians with diagonal covariance matrices. We used the formulation by Molchanov et al. [molchanov2017variational] both for the operation strengths, during the search phase, and for the weights, during both the search and test phases. We aim to induce sparsity in the operation space, which results in most operation strengths in the DAG being zero, while the most relevant operations are expected to be non-zero. At the same time, the method induces sparsity in the weight space and thus motivates the individual operations to be extremely efficient in their learnt patterns. We believe Gaussians are suitable approximations, since increasing the amount of training data implies that the posterior over these random variables will be approximately Gaussian. Also, the Gaussian noise used in our method effectively disrupts the previously observed unfairness in operation selection during NAS, as partially demonstrated by [chu2020noisy] for the skip-connection operation. Circling back to (3), the information in each cell during the search is now calculated with respect to a sample of the operation strengths drawn from the inferred distributions. The second-level parameters, such as the individual means and variances, are assumed to have a non-informative uniform prior.
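A minimal sketch of this sampled mixed operation is given below (PyTorch); it assumes the multiplicative parametrisation of (2) for the strengths, and the shapes and names are illustrative.

import torch

def edge_output(op_outputs: torch.Tensor,       # (num_ops, N, C, H, W): stacked candidate outputs
                z_theta: torch.Tensor,          # (num_ops,): strength means
                z_log_alpha: torch.Tensor       # (num_ops,): strength log noise ratios
                ) -> torch.Tensor:
    eps = torch.randn_like(z_theta)                            # eps ~ N(0, 1)
    z = z_theta * (1.0 + torch.exp(0.5 * z_log_alpha) * eps)   # one posterior sample of the strengths
    return (z.view(-1, 1, 1, 1, 1) * op_outputs).sum(dim=0)    # weighted sum over candidate operations

ops = torch.randn(4, 2, 16, 8, 8)                              # 4 hypothetical candidate operations
out = edge_output(ops, torch.randn(4), torch.full((4,), -8.0))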

IV-B Search Objective

The goal of the search is to determine the right set of structural variables, or their corresponding parameters, such that they can later be used to construct the desired architecture. Therefore, the search objective is in fact a secondary objective to the primary objective of minimising (1) with respect to the unknown weight parameters implied by the chosen architecture, as shown in (6).

\boldsymbol{\phi}_{\mathbf{w}}^{*} = \arg\min_{\boldsymbol{\phi}_{\mathbf{w}}} \; -\mathbb{E}_{q_{\boldsymbol{\phi}_{\mathbf{w}}}(\mathbf{w})}\!\left[\log p(\mathbf{y} \mid \mathbf{x}, \mathbf{w}, \mathbf{a}^{*})\right] + \beta_{\mathbf{w}} \, D_{\mathrm{KL}}\!\left(q_{\boldsymbol{\phi}_{\mathbf{w}}}(\mathbf{w}) \,\|\, p(\mathbf{w})\right) \qquad (6)

The reparametrisation parameters of the strengths and the weights refer to those of the supergraph. Therefore, it is at the same time necessary to optimise the objective with respect to the structural parameters and the operations' weight parameters, which indicate their usefulness in the final architecture. Derived from the original ELBO in (1), optimising the supergraph with respect to its learnable parameters gives rise to the objective in (7) below.

\mathcal{L}(\boldsymbol{\phi}_{\mathbf{z}}, \boldsymbol{\phi}_{\mathbf{w}}) = -\mathbb{E}_{q(\mathbf{z})\,q(\mathbf{w})}\!\left[\log p(\mathbf{y} \mid \mathbf{x}, \mathbf{z}, \mathbf{w})\right] + \beta_{\mathbf{w}} \, D_{\mathrm{KL}}\!\left(q(\mathbf{w}) \,\|\, p(\mathbf{w})\right) + \beta_{\mathbf{z}} \, D_{\mathrm{KL}}\!\left(q(\mathbf{z}) \,\|\, p(\mathbf{z})\right) \qquad (7)

The first term again corresponds to the data-fitting term, which pushes the parameters towards maximising the expected log-likelihood of the targets under the variational distributions. The other two terms are regularisers which, due to the factorisation of the joint distributions and priors, can be separated and scaled by arbitrary constants β_w and β_z. As previously stated, β_w and β_z enable the trade-off between the data-fit and regularisation. Molchanov et al. [molchanov2017variational] approximated the KL divergence between the log-uniform prior and the Gaussian posterior with an accurate analytic expression in terms of the noise ratio. After the search, or after training the final architecture for evaluation, the variances are only used to determine which weights can be pruned and are otherwise not considered during evaluation.
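For completeness, a sketch of that analytic approximation is given below (PyTorch); the polynomial constants are the ones published by Molchanov et al., and the per-parameter terms are summed into a single scalar penalty.

import torch
import torch.nn.functional as F

def kl_log_uniform(log_alpha: torch.Tensor) -> torch.Tensor:
    """Approximate KL(q || p) for a Gaussian posterior and a log-uniform prior, summed over parameters."""
    k1, k2, k3 = 0.63576, 1.8732, 1.48695
    neg_kl = (k1 * torch.sigmoid(k2 + k3 * log_alpha)
              - 0.5 * F.softplus(-log_alpha) - k1)   # approximates -KL for each parameter
    return -neg_kl.sum()

print(kl_log_uniform(torch.full((10,), -8.0)))       # small alpha (low noise) gives a large penalty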

Additionally, we were inspired by [chu2019fair], which promotes confidence in selecting connections in a graph by explicitly minimising their entropy in a similar NAS setup. In our case, we want to achieve certainty in the operation selection across the supergraph, which is equivalent to minimising the joint entropy of the strengths across the potential operations. Applying a scheduled coefficient γ to the entropy term, the final search objective is formulated in (8).

\mathcal{L}_{\mathrm{VINNAS}} = -\mathbb{E}_{q(\mathbf{z})\,q(\mathbf{w})}\!\left[\log p(\mathbf{y} \mid \mathbf{x}, \mathbf{z}, \mathbf{w})\right] + \beta_{\mathbf{w}} \, D_{\mathrm{KL}}\!\left(q(\mathbf{w}) \,\|\, p(\mathbf{w})\right) + \beta_{\mathbf{z}} \, D_{\mathrm{KL}}\!\left(q(\mathbf{z}) \,\|\, p(\mathbf{z})\right) + \gamma \, \mathcal{H}(\mathbf{z}) \qquad (8)
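The sketch below (PyTorch) shows one way to compute the entropy term and assemble the full loss of (8); normalising the strength means with a softmax per edge is an assumption made for illustration.

import torch
import torch.nn.functional as F

def operation_entropy(z_theta: torch.Tensor) -> torch.Tensor:
    """z_theta: (num_edges, num_ops) strength means for one cell type; returns the mean per-edge entropy."""
    p = F.softmax(z_theta, dim=-1)
    return -(p * torch.log(p + 1e-12)).sum(dim=-1).mean()

def search_loss(nll, kl_w, kl_z, entropy, beta_w, beta_z, gamma):
    # data fit + scheduled KL regularisers + scheduled entropy term, as in (8)
    return nll + beta_w * kl_w + beta_z * kl_z + gamma * entropy

print(operation_entropy(torch.zeros(14, 8)))   # uniform strengths give the maximal entropy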

IV-C Algorithm

Our algorithm, shown in Algorithm 1, is based on SGD and relies on complete differentiation of all the operations. VINNAS iterates between two stages: (1, lines 6-8) optimisation of the weights' variational parameters and (2, lines 10-14) optimisation of the architecture parameters. This two-stage optimisation aims to avoid over-adaptation of the parameters, as suggested in [liu2018darts].

1: Initialise the variational parameters of the weights and the operation strengths
2: Initialise the scaling factors β_w, β_z, γ to zero
3: Initialise the best validation error
4: for each epoch in the search budget do
5:     Stage (1)
6:     Sample one mini-batch for updating the weights' parameters
7:     Compute the loss based on (8) with respect to the weights' parameters
8:     Update the weights' parameters by gradient descent
9:     Stage (2)
10:    if the weight warm-up epochs have passed then
11:        Sample one mini-batch for updating the architecture parameters
12:        Compute the loss based on (8) with respect to the architecture parameters
13:        Update the architecture parameters by gradient descent
14:    end if
15:    Compute the error on the validation portion of the data
16:    if the error improved on the best so far then
17:        Save the architecture parameters and update the best error
18:    end if
19:    Linearly increase β_w, β_z, γ
20: end for
21: Choose the operations based on the positive signal-to-noise ratio of the strengths

Algorithm 1 VINNAS

After the initialisation of the parameters, the optimisation loops over stages (1) and (2) using two same-sized portions of the dataset. However, the optimisation in stage (2) is not started from the very beginning, but only after a certain number of warm-up weight epochs, which are meant to train the weights of the individual operations first, to avoid oscillations and settling in local minima [liu2018darts]. The variance parameters are optimised as logarithms to guarantee numerical stability. We linearly increase the values of the scaling factors to force the cells to gradually choose the most relevant operations and weight patterns. To avoid stranding in a local minimum, we do not enforce the regularisation from the very start of the search, meaning the scaling factors are initialised as zero. After each iteration of (1) and (2), we compute the error on data sampled from the validation portion and save the architecture parameters if that error is lower than in previous iterations. The search is repeated until the search budget, which is defined as the number of epochs that the search is allowed to perform, is depleted. Note that the parameters of the weights are discarded after the search. The main outcome of the search algorithm is the set of architecture parameters, which is then used to perform the architecture selection that leads to the final architecture.

Signal-to-noise ratio (SNR) is a commonly used measure in signal processing to distinguish between useful information and unwanted noise contained in a signal. In the context of a NN architecture, the SNR can be used as an indicator of parameter importance: the higher the SNR, the more effective or important the parameter is to the model predictions for a given task. In this work, we propose to take the learnt variances into account when choosing the operations, which can be used together with the means to compute the positive SNR as the absolute value of the mean divided by the standard deviation. It can then be used as the measure based on which the right operations are chosen, instead of relying on the magnitudes of the means alone as in previous work [liu2018darts].
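A minimal sketch of this selection rule is given below (PyTorch); in the paper each state keeps the two strongest incoming operations, while the sketch simply scores every candidate on every edge. Shapes are illustrative.

import torch

def positive_snr(z_theta: torch.Tensor, z_log_sigma2: torch.Tensor) -> torch.Tensor:
    """Positive SNR per candidate operation: |mean| / standard deviation."""
    return z_theta.abs() / torch.exp(0.5 * z_log_sigma2)

z_theta = torch.randn(14, 8)                # hypothetical (num_edges, num_ops) strength means
z_log_sigma2 = torch.randn(14, 8) - 4.0     # hypothetical log-variances
snr = positive_snr(z_theta, z_log_sigma2)
best_op_per_edge = snr.argmax(dim=-1)       # strongest candidate operation on each edge
print(best_op_per_edge)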

Dataset        Method   Test Accuracy (%)           # Params (M)                Search Cost
                        Positive SNR   Magnitude    Positive SNR   Magnitude    (GPU days)
MNIST          VINNAS                                                           0.02
               Random                                                           0.0
FashionMNIST   VINNAS                                                           0.16
               Random                                                           0.0
CIFAR-10       VINNAS                                                           1.7
               Random                                                           0.0
TABLE II: Comparison of architectures found by VINNAS to random search.
Search method                        Principal algorithm   Test Accuracy (%)   # Params (M)   Search Cost (GPU days)
Yamada et al. [yamada2016deep]       hand-made             97.33               26.2           -
Li & Talwalkar [li2019random]        random                97.15               4.3            2.7
Liu et al. [liu2018darts]            gradient              97.24±0.09          3.4            1
Zoph et al. [zoph2016neural]         reinf. learn.         97.35               3.3            1800
Real et al. [real2019regularized]    genetic alg.          97.45±0.05          2.8            3150
Liu et al. [cai2018path]             Bayesian opt.         96.59±0.09          3.2            225
Zhou et al. [zhou2019bayesnas]       gradient              97.39±0.04          3.40±0.62      0.2
Chu et al. [chu2020noisy]            gradient              97.61               3.25           -
Chu et al. [chu2019fair]             gradient              97.46±0.05          3.32±0.46      -
Zela et al. [zela2019understanding]  gradient              97.05               -              -
VINNAS [Ours]                        gradient              97.66               1.18           1.7
TABLE III: Comparison of NAS methods on CIFAR-10.

V Experiments

To demonstrate the effectiveness of the proposed VINNAS method, we perform experiments on three different datasets, namely MNIST (M), FashionMNIST (F) and CIFAR-10 (C).

V-A Experimental Settings

For each dataset, we search for a separate network structure composed of operations commonly used in CNNs, namely: separable convolutions, dilated separable convolutions, a standard convolution, max pooling, average pooling, skip-connection and zero, meaning no connection, which together form the candidate operation set. Note that we clip the strength of the zero operation to avoid scaling problems with respect to the other operations. All operations are followed by BN and ReLU activation, except zero and skip-connection.

Each cell accepts inputs from the two previous cells. Each input is processed through a ReLU-convolution-BN block to map it to the input shape required by that particular cell. For M, we search for an architecture comprising a single reduction cell (R), and for F, we search for an architecture comprising 2 normal (N) and 2 reduction cells (NRRN). Both of these architectures have the same layout during evaluation; however, for F the number of channels is quadrupled during evaluation. For C, during the search phase we optimise a network consisting of 8 cells (NNRNNRNN) that is then scaled to 20 cells during evaluation (6N R 6N R 6N), along with the channel sizes, which are increased by a factor of 2.5. Each state always accepts 2 inputs processed through 2 operations. Each net also has a stem, which is a convolution followed by BN. At the end of the network, we perform average pooling followed by a linear classifier with the softmax activation.
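A minimal sketch of this skeleton is shown below (PyTorch); the cell modules stand in for the searched normal and reduction cells, and the channel bookkeeping is simplified for illustration.

import torch
import torch.nn as nn

class Skeleton(nn.Module):
    """Stem convolution + BN, a stack of searched cells, global average pooling and a linear classifier."""
    def __init__(self, cells: nn.ModuleList, in_channels: int, stem_channels: int, num_classes: int):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_channels, stem_channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(stem_channels),
        )
        self.cells = cells                      # placeholder for the searched normal/reduction cells
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(stem_channels, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.stem(x)
        for cell in self.cells:
            x = cell(x)
        x = self.pool(x).flatten(1)
        return self.classifier(x)               # logits; the softmax is applied inside the loss

net = Skeleton(nn.ModuleList([nn.Identity()]), in_channels=3, stem_channels=16, num_classes=10)
logits = net(torch.randn(2, 3, 32, 32))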

The search space complexity for each net grows as the number of candidate operations raised to the number of searchable connections, so it differs between M, F and C according to their numbers of cells and states. Weights from the search phase are not kept and we retrain the resultant architectures from scratch. We repeat each search and evaluation 3 times. We train the networks with respect to a single sample of the random variables drawn through the LRT. Instead of cherry-picking the found architectures through further evaluation and then selecting the resultant architecture by hand [liu2018darts], we report the results of the architectures found directly by VINNAS.

Search Settings

For optimising both the architecture parameters as well as the weight parameters, we use Adam [kingma2014adam] with different initial learning rates. We use cosine scheduling [loshchilov2016sgdr] for the learning rate of the weights' parameters and we keep the architecture's learning rate constant throughout the search. We initialise the scaling factors at zero and start applying and gradually linearly increasing them during the search process. We disable BN's learnable affine parameters and running statistics tracking. We initialise the operation strengths' parameters through random sampling. We utilise label smoothing [muller2019does] to prevent the architecture parameters from hard-committing to a certain pattern. To speed up the search, we not only search reduced architectures in terms of the number of channels and cells, but also search on 25%, 50% and 50% of the data for M, F and C respectively, while using 50% of that portion as the dataset for learning the architecture parameters. For M we use z-normalisation. For F and C we use random crops, flips and erasing [zhong2017random] together with input channel normalisation. We search for 20, 50 and 100 epochs for M, F and C respectively.

Evaluation Settings

During evaluation we scale up the found architectures in terms of channels and cells as described previously. We again use the Adam optimiser with varying learning rates and cosine learning rate scheduling. We similarly initialise the regularisation coefficient at zero and linearly increase it from a given epoch, as in the search phase, to avoid over-regularisation and clamping the weights to zero too soon during optimisation. We train on the full datasets for M, F and C for 100, 200 and 300 epochs respectively, and we preserve the data augmentation strategies during retraining.

For both the search and evaluation we initialise the weights' means with Xavier uniform initialisation [glorot2010understanding]. At the same time, we initialise all the log-variances to the same constant value. We use a batch size of 256 for all experiments. We encourage the reader to inspect the individual hyperparameter values, random seeds and scheduling in our publicly available implementation at https://github.com/iiml-ucl/vinnas.

Fig. 3: MNIST reduction cell with its positive SNR.
Fig. 4: FashionMNIST normal and reduction cells with their positive SNR.
Fig. 5: CIFAR-10 normal and reduction cells with their positive SNR.

V-B Evaluation

The evaluation is condensed in Tables II and III. The numbers in bold represent the score of the best performing model for the given selection method (positive SNR or magnitude) and dataset. The best performing found architectures are shown in Figs. 3, 4 and 5. We first perform random search on our search spaces for M, F and C. Note that the search spaces are vast and we deem it impossible to evaluate all architectures in the search space given our available compute resources; thus we sample 3 separate architectures from each search space and train them with the same hyperparameter settings as the found architectures to avoid any bias. The number of parameters is taken as the amount remaining after pruning the weights.

When comparing the found architectures for the different datasets in Table II, we noticed that in the case of M or F there are certain connections for which an operation could potentially be omitted completely, as its positive SNR is relatively small. We attribute this to the fact that these datasets are easy to generalise to, which can also be seen in the overall performance of the random search for these datasets. However, on CIFAR-10 it can be seen that the inferred importance of all the operations and of the structure is very high. The results also demonstrate that using the learnt uncertainty in the operation selection, in addition to the magnitude, benefits the operation selection. Compared with DARTS [liu2018darts], which only uses separable convolutions and max pooling everywhere, it can be observed that the found architectures are rich in the variety of operations that they employ and the search does not collapse into a mode where all the operations are the same. For future reference regarding deeper models, such as those for F and C, we observe that the cells of the best performing found architectures contain at least one skip-connection to enable efficient propagation of gradients and better generalisation.

The main limiting factor of this work is the GPU search cost, which is higher in comparison to other NAS methods due to the LRT, which requires two forward passes during both search and evaluation.

Most importantly, all the found architectures demonstrate good generalisation performance in terms of the measured test accuracy. Specifically, for the case of CIFAR-10, Table III shows that VINNAS found an architecture that is comparable to the state-of-the-art, however, with up to 3× fewer parameters.

VI Conclusion

In summary, our work introduces a new direction that combines probabilistic modelling with neural architecture search. Specifically, we give the operations' strengths a probabilistic interpretation by viewing them as learnable random variables. Automatic relevance determination-like priors are imposed on these variables, along with their corresponding operation weights, which incentivises automatic detection of pertinent operations and the zeroing-out of the others, without significant hyperparameter tuning. Additionally, we promote certainty in the operation selection through a custom loss function, which allows us to determine the most relevant operations in the architecture. We demonstrated the effectiveness of VINNAS on three different datasets and search spaces. On CIFAR-10 we achieve state-of-the-art accuracy with up to 3× fewer parameters.

In future work, we aim to explore a hierarchical Bayesian model for the architecture parameters, which could lead to architectures composed of more diverse cell types, instead of just two. Additionally, all of the evaluated NNs shared the same evaluation hyperparameters, and in the future we want to investigate an approach that can automatically determine suitable hyperparameters for the found architecture.

References