Bayesian Learning of Neural Network Architectures

01/14/2019, by Georgi Dikov, et al.

In this paper we propose a Bayesian method for estimating architectural parameters of neural networks, namely layer size and network depth. We do this by learning concrete distributions over these parameters. Our results show that regular networks with a learnt structure can generalise better on small datasets, while fully stochastic networks can be more robust to parameter initialisation. The proposed method relies on standard neural variational learning and, unlike randomised architecture search, does not require retraining of the model, thus keeping the computational overhead at a minimum.


1 Introduction

Much of the success of modern deep learning models is attributed to the development of powerful architectures that exploit certain regularities in the data (e.g., convolutional networks such as [Simonyan and Zisserman, 2014, Szegedy et al., 2015]) and alleviate issues with numerical optimisation (e.g., learning an identity mapping in very deep networks [He et al., 2016]). In fact, it has been shown [Saxe et al., 2011] that architecture alone can improve representation learning even with randomly initialised weights.

Traditionally, the architecture of a neural network is treated as a set of static hyperparameters, tuned according to performance on a held-out validation set. This viewpoint, however, requires that the network be initialised, trained until convergence and evaluated for each modification of the architecture: a time-consuming procedure that rules out an efficient, exhaustive hyperparameter search.

In this work, we propose a scalable Bayesian method for structure optimisation that treats hyperparameters such as the layer size and network depth as random variables, whose parameterised distributions are learnt jointly with the rest of the network weights. A Bayesian probabilistic approach to architecture learning is appealing for two main reasons: (i) the posterior distribution over the architectural parameters reveals whether or not the model has the capacity to represent the training data well; and (ii) imposing prior beliefs over the parameters allows expert knowledge to be incorporated into the model naturally, without introducing any hard constraints as a side effect. However, the exact posterior cannot be obtained in closed form due to the highly nonlinear nature of deep neural networks, and resorting to Markov chain Monte Carlo sampling is computationally prohibitive. Instead, we apply approximate variational inference to estimate a posterior distribution over the architectural variables, and maintain the differentiability of the model by means of a continuous relaxation of the discrete categorical distribution, the so-called concrete distribution [Maddison et al., 2016, Jang et al., 2016]. Thus we are able to efficiently evaluate a continuum of architectures. We will show empirically that ensembling predictions from networks of sampled architectures acts as a regulariser and mitigates overfitting.

In the next section we review the necessary background in approximate variational inference, present our model from a Bayesian viewpoint and briefly introduce the concrete categorical distribution. In Section 3 we show the mechanism of layer size and network depth learning and give an intuitive interpretation of the approach. Section 4 compares our method to existing ones and discusses their shortcomings. In Section 5 we evaluate multiple models in regression, classification and bandits tasks and finally we discuss potential consequences in Section 6.

2 Background and Model Statement

2.1 Approximate Variational Inference

Let $\mathbf{w}$ denote the weights of an $L$-layer network and $\boldsymbol{\alpha}$ the architectural parameters which are going to be learnt. Further, let $\mathcal{D} = \{(\mathbf{x}_i, y_i)\}_{i=1}^{N}$ be a labelled dataset. Then, in the framework of Bayesian reasoning, we define a prior distribution $p(\mathbf{w}, \boldsymbol{\alpha})$ and a likelihood model $p(\mathcal{D} \mid \mathbf{w}, \boldsymbol{\alpha})$, and we seek to infer the posterior distribution $p(\mathbf{w}, \boldsymbol{\alpha} \mid \mathcal{D})$. The latter, however, cannot be evaluated precisely due to the intractability of the normalisation constant $p(\mathcal{D})$. The variational Bayes approach reframes the problem of inferring the posterior distribution into an optimisation one, minimising an approximation error between a parameterised surrogate distribution and the posterior. For the sake of computational simplicity, throughout this work we will assume that the approximate posterior is fully factorisable:

q_{\theta,\phi}(\mathbf{w}, \boldsymbol{\alpha}) = q_\theta(\mathbf{w})\, q_\phi(\boldsymbol{\alpha})    (1)

and that the network weights in each layer $l$, $\mathbf{w}_l$, are independent and Gaussian distributed with parameters $\boldsymbol{\mu}_l$ and $\boldsymbol{\sigma}_l^2$, i.e. $q(w) = \mathcal{N}(w; \mu, \sigma^2)$ for each weight $w$. Note that relaxing the independence and/or the functional form assumption on the network weights can improve modelling performance, as shown by [Cremer et al., 2018, Pawlowski et al., 2017]. Nevertheless, we leave the extension of architecture learning in Bayesian neural networks with more sophisticated posterior approximations to future work. The prior distribution over the weights, $p(\mathbf{w})$, will be a zero-mean factorised Gaussian with the same fixed variance $\sigma_p^2$ for each weight, $p(w) = \mathcal{N}(w; 0, \sigma_p^2)$.

The specific forms of $q_\phi(\boldsymbol{\alpha})$ and $p(\boldsymbol{\alpha})$ will be elaborated in detail in Section 3, where we will consider learning the layer sizes and the overall network depth. Due to the discrete nature of these parameters, we cannot use backpropagation to learn their posteriors directly. We will show in Sections 2.2 and 2.3 how to circumvent this issue.

Let $\theta$ and $\phi$ represent the sets of variational parameters for the approximate marginals over the weights and the architecture, denoted $q_\theta(\mathbf{w})$ and $q_\phi(\boldsymbol{\alpha})$ respectively. One way to quantify the approximation error between the surrogate and the true posterior $p(\mathbf{w}, \boldsymbol{\alpha} \mid \mathcal{D})$ is to measure their Kullback-Leibler divergence [Kullback and Leibler, 1951]. It can be shown that the following relation holds [Jordan et al., 1999]:

\mathrm{KL}\big(q_{\theta,\phi}(\mathbf{w}, \boldsymbol{\alpha}) \,\|\, p(\mathbf{w}, \boldsymbol{\alpha} \mid \mathcal{D})\big) = \log p(\mathcal{D}) - \mathbb{E}_{q_{\theta,\phi}}\big[\log p(\mathcal{D} \mid \mathbf{w}, \boldsymbol{\alpha})\big] + \mathrm{KL}\big(q_{\theta,\phi}(\mathbf{w}, \boldsymbol{\alpha}) \,\|\, p(\mathbf{w}, \boldsymbol{\alpha})\big)    (2)

and hence, since the divergence on the left-hand side is non-negative,

\log p(\mathcal{D}) \geq \mathbb{E}_{q_{\theta,\phi}}\big[\log p(\mathcal{D} \mid \mathbf{w}, \boldsymbol{\alpha})\big] - \mathrm{KL}\big(q_{\theta,\phi}(\mathbf{w}, \boldsymbol{\alpha}) \,\|\, p(\mathbf{w}, \boldsymbol{\alpha})\big)    (3)

where, by the factorisation of Eq. 1, the right-hand side splits into

\mathcal{L}(\theta, \phi) = \mathbb{E}_{q_{\theta,\phi}}\big[\log p(\mathcal{D} \mid \mathbf{w}, \boldsymbol{\alpha})\big] - \mathrm{KL}\big(q_\theta(\mathbf{w}) \,\|\, p(\mathbf{w})\big) - \mathrm{KL}\big(q_\phi(\boldsymbol{\alpha}) \,\|\, p(\boldsymbol{\alpha})\big)    (4)

The quantity in Eq. 4, $\mathcal{L}(\theta, \phi)$, is called the Evidence Lower Bound (ELBO) and will be approximated with Monte Carlo (MC) sampling, since the prior, the approximate posterior and the likelihood distributions all have known densities, as we will see in Section 3. Given that the prior over the weights is a Gaussian, the KL-divergence term for the network weights can be computed analytically, which reduces the variance of the gradient estimates; the KL-divergence for the architectural parameters, however, will be estimated with MC sampling. Finally, using the approximations $q_\theta(\mathbf{w})$ and $q_\phi(\boldsymbol{\alpha})$ we can define a posterior predictive distribution over the labels and approximate it with MC sampling:

p(y^* \mid \mathbf{x}^*, \mathcal{D}) \approx \mathbb{E}_{q_{\theta,\phi}}\big[p(y^* \mid \mathbf{x}^*, \mathbf{w}, \boldsymbol{\alpha})\big] \approx \frac{1}{S} \sum_{s=1}^{S} p\big(y^* \mid \mathbf{x}^*, \mathbf{w}^{(s)}, \boldsymbol{\alpha}^{(s)}\big), \quad \mathbf{w}^{(s)} \sim q_\theta,\; \boldsymbol{\alpha}^{(s)} \sim q_\phi    (5)

Note that even if we treat the network weights as point estimates, we can still compute an approximate posterior distribution over $\boldsymbol{\alpha}$ and optimise it using the ELBO objective while performing MAP estimation over $\mathbf{w}$. That is, the approach of Bayesian architecture learning is applicable to regular neural networks as well; we will show such an example in Section 5.
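As a concrete illustration of the MC approximation above, the posterior predictive reduces to averaging network outputs over posterior draws. The following is a minimal numpy sketch, where `sample_params` and `forward` are hypothetical stand-ins for the model-specific posterior sampler and forward pass:

```python
import numpy as np

def predictive_mean(x, sample_params, forward, n_samples=100):
    """Monte Carlo estimate of the posterior predictive: average the network
    output over n_samples draws from the approximate posterior."""
    outputs = [forward(x, sample_params()) for _ in range(n_samples)]
    return np.mean(outputs, axis=0)
```

With stochastic architectures, each draw may correspond to a different effective layer size, so the average is effectively an ensemble over architectures.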

2.2 The Reparameterisation Trick

The reparameterisation trick [Kingma and Welling, 2013] refers to a technique for representing sampling from a probability distribution as a deterministic operation over the distributional parameters and an external source of independent noise. In the context of architecture learning, we would like to show that such a reparameterisation is possible for the architectural random variable $\alpha$ of some $\phi$-parameterised distribution $q_\phi(\alpha)$. If there is a deterministic and differentiable function $f$ such that $\alpha = f(\phi, \epsilon)$ with $\epsilon \sim p(\epsilon)$ guaranteeing that $\alpha \sim q_\phi(\alpha)$, we can compute the gradient $\partial \alpha / \partial \phi$ and use standard backpropagation to learn $\phi$.
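For a Gaussian this takes the familiar form $z = \mu + \sigma\epsilon$ with $\epsilon \sim \mathcal{N}(0, 1)$; a minimal numpy sketch (function name ours), with the analogous construction for the concrete distributions following in Section 2.3:

```python
import numpy as np

def reparameterise_gaussian(mu, sigma, rng):
    """Express a draw z ~ N(mu, sigma^2) as a deterministic, differentiable
    function of (mu, sigma) plus external standard normal noise eps."""
    eps = rng.standard_normal(np.shape(mu))
    return mu + sigma * eps
```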

2.3 The Concrete Categorical Distribution

Proposed by [Jang et al., 2016, Maddison et al., 2016], the Gumbel-softmax or concrete categorical distribution is a continuous relaxation of its discrete counterpart. It is fully reparameterisable, as sampling a $K$-dimensional probability vector $\mathbf{z}$ can be expressed as a deterministic function of its parameters, the probability vector $\boldsymbol{\pi}$, and an external source of randomness $\mathbf{g}$ which is Gumbel-distributed:

z_k = \frac{\exp\big((\log \pi_k + g_k)/\tau\big)}{\sum_{j=1}^{K} \exp\big((\log \pi_j + g_j)/\tau\big)}, \qquad g_k \sim \mathrm{Gumbel}(0, 1).

Here $\tau$ is a temperature hyperparameter controlling the smoothness of the approximation: for $\tau \to 0$ the samples become one-hot vectors, and for $\tau \to \infty$ they become uniform. In this work we will consider $\tau$ fixed. The density of the concrete categorical distribution is

p_{\boldsymbol{\pi},\tau}(\mathbf{z}) = (K-1)!\, \tau^{K-1} \prod_{k=1}^{K} \frac{\pi_k\, z_k^{-\tau-1}}{\sum_{j=1}^{K} \pi_j\, z_j^{-\tau}}    (6)
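The sampling path is straightforward to implement; the sketch below (our own minimal numpy version, not the authors' code) draws Gumbel noise via the inverse CDF $-\log(-\log u)$:

```python
import numpy as np

def sample_concrete(pi, tau, rng):
    """Relaxed one-hot sample from a concrete categorical distribution with
    class probabilities pi and temperature tau."""
    u = rng.uniform(size=len(pi))
    g = -np.log(-np.log(u))                # Gumbel(0, 1) noise
    logits = (np.log(pi) + g) / tau
    e = np.exp(logits - logits.max())      # numerically stable softmax
    return e / e.sum()
```

Low temperatures yield nearly one-hot samples, high temperatures nearly uniform ones.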

Analogously, in the binary case ($K = 2$) one can express a sample from a concrete Bernoulli distribution with parameter $\pi$ by perturbing the logit with noise $\ell$ from a Logistic distribution and squashing it through a sigmoid:

z = \sigma\Big(\big(\log \tfrac{\pi}{1-\pi} + \ell\big)/\tau\Big), \qquad \ell = \log u - \log(1-u),\; u \sim \mathrm{Uniform}(0, 1).

The functional form of its density, with odds $\lambda = \pi/(1-\pi)$, is given as:

p_{\pi,\tau}(z) = \frac{\tau\, \lambda\, z^{-\tau-1} (1-z)^{-\tau-1}}{\big(\lambda z^{-\tau} + (1-z)^{-\tau}\big)^2}    (7)

For more properties of the concrete distributions see the appendices in [Jang et al., 2016, Maddison et al., 2016].
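A corresponding sketch for the binary case (again our own minimal implementation, not the authors' code):

```python
import numpy as np

def sample_binary_concrete(pi, tau, rng):
    """Relaxed Bernoulli(pi) sample: perturb the logit of pi with Logistic
    noise and squash the result through a temperature-scaled sigmoid."""
    u = rng.uniform()
    noise = np.log(u) - np.log(1.0 - u)    # Logistic(0, 1) noise
    logit = np.log(pi) - np.log(1.0 - pi)
    return 1.0 / (1.0 + np.exp(-(logit + noise) / tau))
```

Regardless of the temperature, the sample lands above 0.5 exactly when the perturbed logit is positive, which happens with probability $\pi$; as $\tau \to 0$ the samples concentrate on $\{0, 1\}$.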

3 Adaptive Network Architecture

In this work we will focus on two important architectural hyperparameters, but analogous extensions to others are possible. First we will look into learning the size of an arbitrary layer, denoted $\mathbf{s}_l$, and then we will proceed with estimating the optimal depth of a network by means of independent layer-wise skip connections $d_l$. Following the independence assumption from Eq. 1, for a network of $L$ layers we have:

q_\phi(\boldsymbol{\alpha}) = \prod_{l=1}^{L} q_{\phi_l^s}(\mathbf{s}_l)\, q_{\phi_l^d}(d_l)    (8)

An analogous factorisation applies to the prior as well. In our work, it has the same functional form as the approximate posterior but with fixed parameters.

3.1 Layer Size

Let $\mathbf{s}$ be a concrete-categorically distributed random variable encoding the size of an arbitrary fully-connected layer with a maximum capacity of $K$ units (or filters, if the layer is convolutional). The integer layer size encoded in a sample $\mathbf{s}$ is then $k^* = \arg\max_k s_k$. In order to enforce the sampled size on the layer, we propose building a soft, differentiable mask $\mathbf{m}$ which multiplicatively gates the output of the layer:

f(\mathbf{x}) = a\big(\mathbf{m} \odot (W\mathbf{x})\big)    (9)

where we omit the bias for the sake of notational brevity and use $a$ to denote the activation function. Due to the fully-connected nature of the layer, there is in general no preference for which units should be used. However, one has to be consistent in selecting them across gradient updates, as this subset of units will represent the reduced-size layer and all others should be discarded, e.g. by deleting rows of $W$. To do this, we construct the mask as $\mathbf{m} = U\mathbf{s}$, where $U$ is an upper triangular matrix of ones, so that the top $k^*$ entries are approximately 1 (letting gradient updates through) and the rest approximately 0 (blocking gradient updates). Since $\mathbf{s}$ will never be an exactly one-hot vector in practice, the resulting mask will be soft. Note that in a fully Bayesian neural network, the approximate posterior on the parameters of all redundant (blocked) units will conform to the prior, essentially paying a portion of the divergence debt borrowed by the active units.

Before giving the explicit form of the approximate posterior $q(\mathbf{s})$, we argue that (i) the learnt distribution should be unimodal, so that a unique optimal layer size can be deduced, and (ii) it should provide a meaningful uncertainty estimate. As the probabilities of the concrete categorical distribution are not constrained to express unimodality, we suggest limiting the degrees of freedom by coupling the probabilities $\pi_1, \dots, \pi_K$ through a deterministic and differentiable function. One such candidate is the renormalised density of a truncated Normal distribution, denoted $\tilde{\mathcal{N}}$. By slight abuse of notation, we express $\boldsymbol{\pi}$ as a function of variational parameters $\mu$ and $\sigma$ and evaluate it at the points $k = 1, \dots, K$:

\tilde{\pi}_k = \tilde{\mathcal{N}}(k; \mu, \sigma)    (10)
\pi_k = \frac{\tilde{\pi}_k}{\sum_{j=1}^{K} \tilde{\pi}_j}    (11)

Besides the unimodality, this parameterisation is also advantageous in requiring a constant number of variational parameters regardless of the layer size. Throughout this work, the prior assumes the same parameterisation, with $\mu$ and $\sigma$ specified in advance. Care must be taken, however, when setting the temperature $\tau$: since the gradient is scaled with the inverse of $\tau$, very small values can lead to optimisation instability. We have observed good performance with a moderate constant temperature, found empirically. Finally, we note that the gradients of the weights and biases are multiplicatively stretched by the sampled mask vector. Therefore, our method can be interpreted as an auxiliary per-unit learning rate, modulating the error signal coming from the data log-likelihood term in the ELBO objective.
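The coupling of Eqs. 10 and 11 can be sketched as follows. Since the probabilities are renormalised over the grid $k = 1, \dots, K$, the truncation constant of the Normal density cancels and only the unnormalised density is needed (a sketch under that observation, function name ours):

```python
import numpy as np

def size_probabilities(mu, sigma, K):
    """Unimodal concrete-categorical probabilities pi_1..pi_K obtained by
    evaluating a Gaussian density at the integer sizes k = 1..K and
    renormalising; mu and sigma are the only free variational parameters."""
    k = np.arange(1, K + 1)
    dens = np.exp(-0.5 * ((k - mu) / sigma) ** 2)  # unnormalised N(k; mu, sigma^2)
    return dens / dens.sum()
```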

3.2 Network Depth

Inspired by [He et al., 2016], we infer the optimal depth of a feed-forward neural network by learning a bypass variable $d_l$ for each layer independently. Using the notation from above, we can express the layer output as

f_l(\mathbf{x}) = d_l\, a\big(W_l \mathbf{x}\big) + (1 - d_l)\, \mathbf{x}    (12)

We treat $d_l$ in a Bayesian manner and assume a concrete Bernoulli distribution for the form of the approximate posterior. Thus we learn a single variational parameter $\pi_l$ per layer and, again, keep the temperature hyperparameter fixed:

q_{\pi_l}(d_l) = \mathrm{ConcreteBernoulli}(\pi_l, \tau)    (13)

We set the prior to be another concrete Bernoulli distribution with a fixed parameter. Similarly to the concrete categorical case, the temperature hyperparameter cannot be made small enough for the sampled bypass coefficient to become a numerical 1 or 0. Therefore, during training the outputs of a skipped layer are only strongly inhibited, not completely shut off; as we will see, this still allows an optimal layer count to be detected.

One drawback of the presented approach is that it applies only to layers which do not change the dimensionality of their inputs, since the skip connection is implemented as a simple convex combination of the layer's input and output, as given in Eq. 12. Nevertheless, this method can be used in parallel with the adaptive layer size and thus enable intermediate dimensionality fluctuations. Analogously to the per-unit learning rate argument, we can view the skip connection as a modulation on the gradients of all units and interpret this method as an adaptive per-layer learning rate.
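A minimal sketch of the bypassed layer of Eq. 12 (function name ours); note that it indeed requires a square weight matrix:

```python
import numpy as np

def layer_with_skip(x, W, d, act=np.tanh):
    """Convex combination of a layer's output and its input, gated by a
    relaxed bypass variable d in (0, 1): d ~ 1 uses the layer, d ~ 0 skips it."""
    return d * act(W @ x) + (1.0 - d) * x
```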

4 Related Work

Neural network architecture search has long been a topic of research, and diverse methods such as evolutionary algorithms [Todd, 1988, Miller et al., 1989, Kitano, 1990], reinforcement learning [Zoph and Le, 2016] and Bayesian optimisation [Bergstra et al., 2013, Mendoza et al., 2016] have been applied. Despite their underlying differences, all these approaches share a common trait: they decouple the architecture design from the training. This carries a significant computational burden, and to the best of our knowledge we are the first to depart from this paradigm and merge weight and architectural hyperparameter optimisation into a single forward- and backpropagation training cycle.

In [LeCun et al., 1990, Hassibi and Stork, 1993], unimportant weights are identified and removed from the architecture. A major limitation of such pruning is that the initial network architecture can only shrink. Our approach is similar in that it has an upper limit on the network size, but it also allows for growth after an initial contraction, should new evidence support it. Furthermore, the method presented in this work is principled in its inclusion of expert knowledge, in the form of a fixed prior probability for each layer, and only requires manual tuning of the temperature constant $\tau$.

5 Experiments

5.1 Regression on Toy Data

Point-estimate Weights

In this first toy data experiment we demonstrate learning a suitable layer size in a single-layer neural network with 50 units and ReLU activation functions. We set a very conservative prior on the size variable, with a small mean and standard deviation, and record the change in the approximate posterior over time. Figure 1 depicts qualitatively the probabilities of the concrete categorical distribution, and three snapshots show the current fit over the dataset.

Figure 1: Change in the posterior probabilities $\pi_k$ over time (as used in Eqs. 10 and 11). Below the diagram, three snapshots show the fit of the training data: the more units are released, the better the network is able to account for the non-linearity of the data. The optimisation converges to stable values of $\mu$ and $\sigma$. The temperature hyperparameter is kept fixed.

In this example, we generate 2000 points from a one-dimensional noisy periodic function. Due to the large number of data points, the total loss is largely dominated by the data likelihood term, and the increasing divergence between the approximate posterior and the prior acts only as a weak regulariser. Consequently, the allocation of more units stops once the data is well approximated. Note that this would not happen if the prior parameter were set to a large value, e.g. 40, as there would be no incentive for the model to converge to a simpler solution. We will see shortly that this is no longer the case once we treat the network weights in a Bayesian way as well.

Next, we initialise a deep neural network with 11 layers, 10 of which are subject to the bypassing mechanism. In order to enforce the usage of more than one layer we limit the size of each to 5 units and we use again a ReLU activation function. Figure 2 shows the change in the probability of skipping a layer over time. The posterior allows for a clear interpretation that a rigid network of 5 layers will be able to reliably fit the data.

Figure 2: Change in the posterior probabilities for the skip variables (see Eq. 13) over time. Five of the layers are bypassed with high probability, indicating that a network with 5 hidden layers of 5 units each is enough to fit the data. The temperature hyperparameter is kept fixed for each layer.
Bayesian Weights

We now construct a fully Bayesian neural network with independently normally distributed weights and biases. In Bayesian neural networks the KL-divergence between the approximate posterior and the prior acts as a strong regulariser on the parameters, and with small datasets and overly parameterised models the noise in the parameters dominates. The aim of this experiment is to show that the presented framework of architecture optimisation mitigates this issue by not only extending inadequately small architectures but also reducing oversized ones. Figures 3(a) and 3(b) show the change in posterior for two different priors, one placing its mass on a small layer size and one on a large layer size. Notice that in both cases the variational parameter $\mu$ converges to approximately the same value, suggesting that the method is robust to inappropriately set prior distributions.

(a)
(b)
Figure 3: Change in posterior over the size of a single-layer Bayesian neural network, for (a) a prior favouring a small layer size and (b) a prior favouring a large one. The temperature hyperparameter is kept fixed.

In addition, we performed experiments where the layer size and the network depth are jointly learnt. In cases where the architectural prior favours very few units and layers, as in Figure 1, the network first allocates more layers, as this is an easier way to increase capacity than adding more units to a layer. This has, however, one important consequence: a very deep but narrow Bayesian neural network can be computationally inconvenient, as the variance in the output becomes intractably large. One way to alleviate this problem would be to balance the network depth and layer size, e.g. by choosing an appropriate prior connecting the size and skip variables. We leave this to future research.

5.2 Regression on UCI Datasets

We explored the robustness in performance of Bayesian neural networks on several real-world datasets [Dheeru and Karra Taniskidou, 2017]. We trained shallow and deep rigid networks and their architecture-regularised counterparts for 200 epochs with a small batch size of 8. The shallow model comprises a single ReLU-activated layer with 50 units, and the deep one stacks 5 of them. In all cases the prior distributions over the structural variables were initialised with fixed parameters, both for the size mechanism and for the layer-bypassing one. All network weights have a standard normal prior, and the posterior approximation over the weights is initialised from the prior as well. As in the previous experiments, the temperature parameters for the layer size and the network depth are kept fixed. The datasets chosen for this experiment are multidimensional (between 6 and 13 features) and contain a fairly small number of samples (between 300 and 1500), which results in very noisy predictions from the overparameterised models.

We show that learning the structure has significant benefits in performance, measured as root mean squared error (RMSE) and log-likelihood on a held-out test set. The experiments have been repeated 20 times. In Fig. 4 the RMSE of the depth- and size-adaptive models is lower, meaning that they generalise better, and their standard deviations are narrower, signifying robustness to initialisation. The log-likelihood results in Fig. 5 show that the structure-regularised models are less uncertain about their predictions. The deep rigid models, however, fail to fit the data, as the noise in the network weights prevails. Moreover, both rigid models are highly dependent on the particular parameter initialisation, which is reflected in the large standard deviations in the box plots. In contrast, the performance of the adaptive models is consistent across independent repetitions of the experiment.

Figure 4: Test set RMSE performance on 5 UCI datasets for single-layer rigid, single-layer adaptive, deep rigid and deep adaptive Bayesian neural networks. Lower is better.
Figure 5: Test set log-likelihood performance on 5 UCI datasets for single-layer rigid, single-layer adaptive, deep rigid and deep adaptive Bayesian neural networks. Higher is better.

5.3 Contextual Bandits

In this experiment we set up a discrete decision-making task in which an agent's action triggers a reward from the environment, i.e. the bandit. At each time step the agent's action is conditioned on a context which is independent of all previous ones. We hereby aim to show the versatility of the adaptive architecture approach in an online learning scenario, as changing the quality and quantity of the data changes the requirements on the network structure.

In the bandit task the goal of the agent is to maximise the expected received reward, or equivalently, to minimise the expected regret. The latter is defined as the difference in the rewards received by an oracle and the agent. In order to perform optimally, the agent learns an approximation to the bandit’s intrinsic reward function and uses it to pick an action. The current context, performed action and received reward are then kept in a data buffer.

The reward approximation function is parameterised as a Bayesian neural network with weights $\mathbf{w}$ and a prior $p(\mathbf{w})$. Furthermore, let $p(r \mid \mathbf{x}, a, \mathbf{w})$ be the likelihood of a reward $r$ given a context $\mathbf{x}$ and an action $a$. Then, using variational inference, we can define a Bayesian objective and learn an approximate posterior $q(\mathbf{w})$. Using the likelihood term, we can now define the optimal action as the one that maximises the expected reward. After performing the action we update $q(\mathbf{w})$ and repeat for the next context sample. This iterative approach is called Thompson sampling [Thompson, 1933] and was developed as an efficient way to trade off exploration and exploitation in the framework of Bayesian decision making.

In the following we compare agents with purely greedy, randomised and (adaptive) Bayesian reward-estimation models. The purely greedy agent is deterministic in nature and always picks the action with the highest reward estimate for a given context. The randomised or $\epsilon$-greedy agent performs the estimated best action with probability $1-\epsilon$ and otherwise chooses a random one. This way, despite the agent's deterministic reward model, it will still explore potentially better options. Nevertheless, if $\epsilon$ is not annealed during the interaction with the bandit, the agent will never achieve a 0 expected regret, even with a perfect reward model. The Bayesian agent, on the other hand, explores more actively in the beginning, when little data has been seen, and transitions automatically into an exploitation regime once the uncertainty in the posterior becomes small enough. The speed at which this transition happens depends on the prior, the initialisation and the variance in the gradients.
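The two action-selection rules can be contrasted in a short sketch; `reward_model` and `sample_weights` are hypothetical stand-ins for the agents' reward networks and posterior sampler:

```python
import numpy as np

def thompson_action(context, actions, sample_weights, reward_model):
    """Thompson sampling step: draw one posterior weight sample and act
    greedily with respect to the sampled reward model."""
    w = sample_weights()
    return max(actions, key=lambda a: reward_model(context, a, w))

def eps_greedy_action(context, actions, w, reward_model, eps, rng):
    """Epsilon-greedy baseline: explore uniformly with probability eps,
    otherwise act greedily under the point-estimate model."""
    if rng.uniform() < eps:
        return actions[rng.integers(len(actions))]
    return max(actions, key=lambda a: reward_model(context, a, w))
```

In Thompson sampling the exploration comes entirely from the posterior uncertainty in `sample_weights`, so it anneals itself as data accumulates, whereas a fixed $\epsilon$ keeps exploring forever.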

Following [Blundell et al., 2015] we evaluate the agents on the Mushroom UCI dataset [Dheeru and Karra Taniskidou, 2017], consisting of more than 8000 mushrooms described as categorical feature vectors. The task is to decide whether or not to consume a given mushroom. If a poisonous mushroom is consumed, the agent receives a randomised reward: with 50% chance each, either a large negative penalty or the same reward as for an edible one. If the consumed mushroom is edible the reward is positive 5, and all rejected samples receive a reward of 0. In this experiment we measure the cumulative regret over the course of 30 000 interactions. Both the greedy and Bayesian agents are parameterised by 2-layer neural networks with 100 units and ReLU activations in each layer. The adaptive Bayesian agent has a prior centred at 50 units with a broad standard deviation of 20. For the sake of computational efficiency, we do not retrain the reward model at each new bandit interaction but only fine-tune it with one epoch on the current data buffer, whose size is limited to the last 4096 samples. We used a learning rate of 0.0005 and initialised the standard deviations of the Bayesian weights at 0.02. The reported results are the average of 5 independent runs of the experiment.

Throughout the experiments, the rigid Bayesian agent consistently encountered stability issues: after about 20 000 interactions the reward estimates became so unreliable that the model settled for the suboptimal solution of rejecting all observed mushrooms. Fig. 6 shows the cumulative regret over time. The failure of the rigid Bayesian model is due to a numerical instability arising from large gradients caused by wrong reward guesses, as can be seen in the plot of the reward RMSE in Fig. 7. Clearly, the suboptimal behaviour of the rigid Bayesian agent is remedied by the adaptive size regularisation.

In addition, we show the benefit of the learnt architecture by initialising a new rigid network from the converged posterior approximation over the size, in this case two layers with 34 and 20 units respectively. It performs best among the Bayesian and greedy agents, with the only exception being the purely greedy agent. We attribute the latter's surprising success to chance and conjecture that a more challenging dataset would expose its lack of principled exploration.

Figure 6: Cumulative regret, aggregated over 30 000 randomly presented context vectors. The estimated reward is modelled by 2-layer rigid and adaptive-size Bayesian neural networks. The rigid network consistently exhibits instability after about 17 000 steps, while the adaptive one remains stable. The best performance among all Bayesian models is obtained by a rigid network whose architecture is initialised from the converged structural parameters of the adaptive network. As baselines, $\epsilon$-greedy and purely greedy agents are evaluated.
Figure 7: Reward RMSE for the rigid and adaptive agents. The instability in the estimate results in suboptimal behaviour in action picking and hence a substantial increase in cumulative regret.

5.4 Image Classification

To demonstrate the broad applicability of the proposed adaptive architecture method, we apply it to the filter count hyperparameter in Bayesian convolutional neural networks. The extension from fully-connected layers to the output channels of a convolutional layer is straightforward. Similarly, the adaptive network depth regularisation remains unchanged, though in this case the number of channels of the previous layer has to match that of the current one. All experiments are performed on three popular 10-class datasets of increasing discrimination difficulty: MNIST [LeCun et al., 2010], Fashion MNIST [Xiao et al., 2017] and CIFAR-10 [Krizhevsky et al., 2014]. Their training sets comprise 60 000, 60 000 and 50 000 samples respectively, and all presented results are based on the average of 100 samples from the model predictive distribution over the held-out 10 000 test samples.

We check the advantage of the adaptive size regularisation in a fairly "wide" model architecture consisting of three Bayesian convolutional layers, each followed by a ReLU non-linearity and a max-pooling operation, and two Bayesian fully-connected layers. The first two convolutional layers have a window size of 5 and the third of 3; the layers host 81, 64 and 64 filters respectively, and padding is added to preserve the input dimensionality. After the convolutional layers, the data is flattened, processed by a ReLU-activated fully-connected layer of size 64 and fed into a softmax output layer. For the adaptive network we apply the size regularisation after each convolutional layer. The priors over the size parameters are centred at a fraction of the maximum filter count, with a fixed temperature. All configurations are trained for 200 epochs using early stopping, the Bayesian layers have a standard normal prior and the standard deviations of the network weights are initialised to a small constant. Additionally, we create a deep architecture with 9 convolutional layers, grouped into 3 blocks of 3 consecutive layers with 32 filters each (16 for the first block only) and a max-pooling operation at the end of each block. For the adaptive depth networks, the second and third layer in each block are subject to the bypass mechanism. We set a very conservative skip prior probability and keep the temperature constant. At the end of the third block, the data is flattened and passed through the fully-connected ReLU and softmax output layers as described above. All other training configurations remain the same.

We evaluate all four neural network configurations in two experimental scenarios: in the first we learn the parameters from the full training dataset, and in the second we reduce each training set to 1000 randomly chosen samples. Table 1 shows the test set accuracy on the full datasets (top) and on the reduced ones (bottom) for the Bayesian models. There is a clear advantage of the adaptive networks over the rigid ones, and it is amplified by the difficulty of the dataset: the improvement in test set accuracy on the reduced CIFAR-10 is almost 4%. We remark, however, that even the best of these results are not representative of the state of the art; the purpose of the experiment is to compare the influence of the adaptive architecture method in a fairly generic setup.

Dataset    Rigid   Adaptive size   Deep rigid   Adaptive depth
Full datasets:
MNIST      99.34   99.40           99.46        99.42
Fashion    91.41   91.13           91.14        91.22
CIFAR-10   73.31   74.06           68.51        69.63
Reduced datasets (1000 samples):
MNIST      94.47   95.67           95.72        94.81
Fashion    79.69   81.18           80.32        80.83
CIFAR-10   34.98   38.95           33.83        37.49
Table 1: Test set accuracy on the full (top) and reduced (bottom) datasets for “wide” rigid and adaptive as well as “deep” rigid and adaptive Bayesian convolutional neural networks.

6 Conclusion

In this work we introduced a novel method for learning a neural network architecture by including discrete hyperparameters, such as the layer size and the network depth, in the Bayesian framework. We used parameterised concrete distributions over the architectural variables and variational inference to approximate their posterior distributions. This allowed us to learn the network structure without significant computational overhead, to sweep through a continuous hyperparameter space and to incorporate external knowledge in the form of prior distributions. The interpretability of the approximate posterior distribution over the layer size and network depth parameters provides a tool for identifying architectural misspecifications and choosing optimal values for the layer dimensions. We showed empirically the benefits of the method in predictive tasks on regression and classification datasets, where regularised network structures demonstrated superior test set performance.

Acknowledgements

We thank Botond Cseke and Atanas Mirchev for their astute remarks and invaluable advice for improving the quality of this work.

References