1 Introduction
Bayesian Neural Networks (BNNs) are increasingly the de facto approach for modeling stochastic functions. By treating the weights in a neural network as random variables, and performing posterior inference on these weights, BNNs can avoid overfitting in the regime of small data, provide well-calibrated posterior uncertainty estimates, and model a large class of stochastic functions with heteroskedastic and multimodal noise. These properties have resulted in BNNs being adopted in applications such as active learning [11, 8, 3]. While there have been many recent advances in training BNNs [11, 3, 27, 18, 10], model selection in BNNs has received relatively less attention. Unfortunately, the consequences of a poor choice of architecture are severe: too few nodes, and the BNN will not be flexible enough to model the function of interest; too many nodes, and the BNN predictions will have large variance because the posterior uncertainty in the weights will remain large. In other approaches to modeling stochastic functions, such as Gaussian Processes (GPs), such concerns can be addressed by optimizing continuous kernel parameters; in BNNs, the number of nodes in a layer is a discrete quantity. Practitioners typically perform model selection via onerous searches over different layer sizes.
In this work, we demonstrate that we can perform computationally efficient and statistically effective model selection in Bayesian neural networks by placing Horseshoe (HS) priors [5] over the variance of weights incident to each node in the network. The HS prior has heavy tails and supports both zero values and large values. Fixing the mean of the incident weights to be zero, nodes with small variance parameters are effectively turned off—all incident weights will be close to zero—while nodes with large variance parameters can be interpreted as active. In this way, we can perform model selection over the number of nodes required in a Bayesian neural network.
While it mimics a spike-and-slab approach that would assign a discrete on-off variable to each node, the continuous relaxation provided by the Horseshoe prior keeps the model differentiable; with an appropriate parameterization, we can take advantage of recent advances in variational inference (e.g. [16]) for training. We demonstrate that our approach avoids underfitting even when the required number of nodes in the network is grossly overestimated; we learn compact network structures without sacrificing—and sometimes improving—predictive performance.
2 Bayesian Neural Networks
A deep neural network with $L$ hidden layers is parameterized by a set of weight matrices $\mathcal{W} = \{W_l\}_{l=1}^{L+1}$, with each weight matrix $W_l$ of size $K_l \times K_{l+1}$, where $K_l$ is the number of units in layer $l$. The neural network maps an input $x$ to a response $f(\mathcal{W}, x)$ by recursively applying the transformation $a_l = g(W_l^T a_{l-1})$, where the vector $a_{l-1}$ is the input into layer $l$, the initial input $a_0 = x$, and $g$ is some pointwise non-linearity, for instance the rectified-linear function, $g(z) = \max(0, z)$.

A Bayesian neural network captures uncertainty in the weight parameters $\mathcal{W}$ by endowing them with distributions $\mathcal{W} \sim p(\mathcal{W})$. Given a dataset of $N$ observation-response pairs $\mathcal{D} = \{x_n, y_n\}_{n=1}^{N}$, we are interested in estimating the posterior distribution,
$$p(\mathcal{W} \mid \mathcal{D}) = \frac{p(\mathcal{W}) \prod_{n=1}^{N} p(y_n \mid f(\mathcal{W}, x_n))}{p(\mathcal{D})}, \qquad (1)$$
and leveraging the learned posterior for predicting responses to unseen data $x^*$, $p(y^* \mid x^*, \mathcal{D}) = \mathbb{E}_{p(\mathcal{W} \mid \mathcal{D})}[p(y^* \mid f(\mathcal{W}, x^*))]$. The prior $p(\mathcal{W})$ allows one to encode problem-specific beliefs as well as general properties about weights. In the past, authors have used fully factorized Gaussians [11] on each weight, structured Gaussians [18, 7] on each layer, as well as a two-component scale mixture of Gaussians [3] on each weight. The scale mixture prior has been shown to encourage weight sparsity. In this paper, we show that by using carefully constructed infinite scale mixtures of Gaussians, we can induce heavy-tailed priors over network weights. Unlike previous work, we force all weights incident into a unit to share a common prior, allowing us to induce sparsity at the unit level and prune away units that do not help explain the data well.
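For concreteness, the feed-forward map above can be sketched as follows (a minimal NumPy illustration; the function names and the hand-picked weights are ours, not from the paper — in a BNN each `W` would instead be a latent random variable governed by the posterior of Equation 1):

```python
import numpy as np

def relu(z):
    """Rectified-linear non-linearity, g(z) = max(0, z)."""
    return np.maximum(z, 0.0)

def forward(x, weights):
    """Recursively apply a_l = g(W_l a_{l-1}); the final layer is linear."""
    a = x
    for W in weights[:-1]:
        a = relu(W @ a)
    return weights[-1] @ a

# A tiny 2-2-1 network with hand-picked (non-random) weights.
weights = [np.array([[1.0, 0.0], [0.0, 1.0]]),  # hidden layer
           np.array([[1.0, -1.0]])]             # output layer
y = forward(np.array([1.0, -2.0]), weights)     # relu([1, -2]) = [1, 0] -> [1.0]
```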
3 Automatic Model Selection through Horseshoe Priors
Let the node weight vector $w_{kl}$ denote the set of all weights incident into unit $k$ of hidden layer $l$. We assume that each node weight vector is conditionally independent and distributed according to a Gaussian scale mixture,
$$w_{kl} \mid \tau_{kl}, \upsilon_l \sim \mathcal{N}(0, (\tau_{kl}^2 \upsilon_l^2)\,\mathbb{I}), \qquad \tau_{kl} \sim C^{+}(0, b_0), \qquad \upsilon_l \sim C^{+}(0, b_g). \qquad (2)$$
Here, $\mathbb{I}$ is an identity matrix, $a \sim C^{+}(0, b)$ is the Half-Cauchy distribution with density $p(a \mid b) = 2 / (\pi b (1 + a^2/b^2))$ for $a > 0$, $\tau_{kl}$ is a unit-specific scale parameter, while the scale parameter $\upsilon_l$ is shared by all units in the layer.

The distribution over weights in Equation 2 is called the horseshoe prior [5]. It exhibits Cauchy-like flat, heavy tails while maintaining an infinitely tall spike at zero. Consequently, it has the desirable property of allowing sufficiently large node weight vectors to escape unshrunk—by having a large scale parameter—while providing severe shrinkage to smaller weights. This is in contrast to Lasso-style regularizers and their Bayesian counterparts, which provide uniform shrinkage to all weights. By forcing the weights incident on a unit to share scale parameters, the prior in Equation 2 induces sparsity at the unit level, turning off units that are unnecessary for explaining the data well. Intuitively, the shared layer-wide scale $\upsilon_l$ pulls all units in layer $l$ to zero, while the heavy-tailed unit-specific scales $\tau_{kl}$ allow some of the units to escape the shrinkage.
Parameterizing for More Robust Inference: Decomposing the Cauchy Distribution
While a direct parameterization of the Half-Cauchy distribution in Equation 2 is possible, it leads to challenges during variational learning. Standard exponential-family variational approximations struggle to capture the thick Cauchy tails, while a Cauchy approximating family leads to high-variance gradients. Instead, we use a more convenient auxiliary variable parameterization [32],
$$a \sim C^{+}(0, b) \iff a^2 \mid \lambda \sim \text{Inv-Gamma}\!\left(\tfrac{1}{2}, \tfrac{1}{\lambda}\right), \quad \lambda \sim \text{Inv-Gamma}\!\left(\tfrac{1}{2}, \tfrac{1}{b^2}\right), \qquad (3)$$

where $a \sim \text{Inv-Gamma}(\alpha, \beta)$ is the Inverse Gamma distribution with density $p(a) = \frac{\beta^{\alpha}}{\Gamma(\alpha)} a^{-\alpha-1} e^{-\beta/a}$ for $a > 0$. Since the number of output units is fixed by the problem at hand, there is no need for a sparsity-inducing prior on the output layer. We place independent Gaussian priors, with vague hyperpriors, on the output layer weights.

The joint distribution of the Horseshoe Bayesian neural network is then given by,
$$p(\mathcal{D}, \theta) = \prod_{n=1}^{N} p(y_n \mid f(\mathcal{W}, x_n)) \; p(\theta), \qquad (4)$$

where $p(y_n \mid f(\mathcal{W}, x_n))$ is an appropriate likelihood function, and $p(\theta)$ collects the priors over all latent variables: the horseshoe prior of Equation 2 (in its auxiliary-variable form, Equation 3) over the hidden-layer node weight vectors $w_{kl}$, their unit-specific scales $\tau_{kl}$ and layer-wide scales $\upsilon_l$ with auxiliary variables $\lambda_{kl}$ and $\vartheta_l$, and the Gaussian prior with a vague hyperprior on the output layer weights.
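The equivalence in Equation 3 can be checked numerically. In this sketch (our own; it uses the standard fact that if $G \sim \text{Gamma}(\alpha, \text{rate}=\beta)$ then $1/G \sim \text{Inv-Gamma}(\alpha, \beta)$), we sample a Half-Cauchy directly and via the auxiliary construction, and compare medians — the median of $C^{+}(0, b)$ is exactly $b$:

```python
import numpy as np

rng = np.random.default_rng(1)
b, n = 1.0, 200_000

# Direct sampling: a ~ C+(0, b).
direct = np.abs(b * rng.standard_cauchy(n))

# Auxiliary construction of Equation 3:
#   lambda ~ Inv-Gamma(1/2, 1/b^2),  a^2 | lambda ~ Inv-Gamma(1/2, 1/lambda).
# numpy's gamma takes a shape and a *scale* (= 1/rate) argument.
lam = 1.0 / rng.gamma(0.5, b**2, size=n)   # Inv-Gamma(1/2, 1/b^2)
a_sq = 1.0 / rng.gamma(0.5, lam, size=n)   # Inv-Gamma(1/2, 1/lambda)
aux = np.sqrt(a_sq)

# Both samplers target the same Half-Cauchy, so their medians agree (~ b).
```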
Parameterizing for More Robust Inference: Non-Centered Parameterization
The horseshoe prior of Equation 2 exhibits strong correlations between the weights $w_{kl}$ and the scales $\tau_{kl} \upsilon_l$. Indeed, its favorable sparsity-inducing properties stem from this coupling. However, an unfortunate consequence is a strongly coupled posterior that exhibits pathological funnel-shaped geometries [2, 12] and is difficult to reliably sample or approximate. Fully factorized approximations are particularly problematic and can lead to non-sparse solutions, erasing the benefits of using the horseshoe prior.
Recent work [2, 12] suggests that the problem can be alleviated by adopting non-centered parameterizations. Consider a reformulation of Equation 2,

$$\beta_{kl} \sim \mathcal{N}(0, \mathbb{I}), \qquad w_{kl} = \tau_{kl} \upsilon_l \beta_{kl}, \qquad (5)$$
where the distributions on the scales $\tau_{kl}$ and $\upsilon_l$ are left unchanged. Such a parameterization is referred to as non-centered since the scales and weights are sampled from independent prior distributions and are marginally uncorrelated. The coupling between the two is now introduced by the likelihood, when conditioning on observed data. Non-centered parameterizations are known to lead to simpler posterior geometries [2]. Empirically, we find that adopting a non-centered parameterization significantly improves the quality of our posterior approximation and helps us better find sparse solutions. Figure 1 summarizes the conditional dependencies assumed by the centered and non-centered Horseshoe Bayesian neural network models.
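The difference between the two parameterizations can be illustrated with prior samples. In this sketch (ours, with arbitrary sizes), the marginal distribution of $w = \tau \beta$ is the same either way, but in the centered form $|w|$ carries strong a priori information about $\tau$, while in the non-centered form $\beta$ is independent of $\tau$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
tau = np.abs(rng.standard_cauchy(n))   # heavy-tailed scales

# Centered: w | tau ~ N(0, tau^2); w and tau are coupled a priori.
w_centered = tau * rng.standard_normal(n)

# Non-centered: beta ~ N(0, 1) independently of tau; w = tau * beta
# has the same marginal distribution as w_centered.
beta = rng.standard_normal(n)

big = tau > np.median(tau)
# |w| is much larger in the half of samples where tau is large ...
ratio = np.median(np.abs(w_centered[big])) / np.median(np.abs(w_centered[~big]))
# ... while beta looks identical in both groups.
gap = abs(np.median(np.abs(beta[big])) - np.median(np.abs(beta[~big])))
```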
4 Learning Bayesian Neural Networks with Horseshoe priors
We use variational inference to approximate the intractable posterior $p(\theta \mid \mathcal{D})$. By exploiting recently proposed stochastic extensions, we are able to scale to large architectures and datasets, and to deal with non-conjugacy.
We proceed by selecting a tractable family of distributions $q(\theta \mid \phi)$, with free variational parameters $\phi$. We then optimize $\phi$ such that the Kullback-Leibler divergence between the approximation and the true posterior, $\text{KL}(q(\theta \mid \phi) \,\|\, p(\theta \mid \mathcal{D}))$, is minimized. This is equivalent to maximizing the lower bound on the marginal likelihood (or evidence) $p(\mathcal{D})$, $\mathcal{L}(\phi) = \mathbb{E}_{q(\theta \mid \phi)}[\ln p(\mathcal{D}, \theta)] + \mathbb{H}[q(\theta \mid \phi)]$.
Approximating Family
We use a fully factorized variational family,

$$q(\theta \mid \phi) = \prod_{ijl} q(\beta_{ijl} \mid \phi_{\beta_{ijl}}) \prod_{kl} q(\tau_{kl} \mid \phi_{\tau_{kl}})\, q(\lambda_{kl} \mid \phi_{\lambda_{kl}}) \prod_{l} q(\upsilon_l \mid \phi_{\upsilon_l})\, q(\vartheta_l \mid \phi_{\vartheta_l})\; q(\kappa \mid \phi_{\kappa})\, q(\rho \mid \phi_{\rho}). \qquad (6)$$
We restrict the variational distribution for the non-centered weight $\beta_{ijl}$ between unit $i$ in layer $l-1$ and unit $j$ in layer $l$ to the Gaussian family, $q(\beta_{ijl}) = \mathcal{N}(\mu_{ijl}, \sigma^2_{ijl})$. We will use $\beta$ to denote the set of all non-centered weights in the network. The non-negative scale parameters $\tau_{kl}$ and $\upsilon_l$ and the variance of the output layer weights $\kappa$ are constrained to the log-Normal family, $q(\ln \tau_{kl}) = \mathcal{N}(\mu_{\tau_{kl}}, \sigma^2_{\tau_{kl}})$, $q(\ln \upsilon_l) = \mathcal{N}(\mu_{\upsilon_l}, \sigma^2_{\upsilon_l})$, and $q(\ln \kappa) = \mathcal{N}(\mu_{\kappa}, \sigma^2_{\kappa})$. We do not impose a distributional constraint on the variational approximations of the auxiliary variables $\lambda_{kl}$, $\vartheta_l$, or $\rho$, but we will see that, conditioned on the remaining variables, the optimal variational family for these latent variables follows inverse Gamma distributions.
Evidence Lower Bound
The resulting evidence lower bound (ELBO),

$$\mathcal{L}(\phi) = \mathbb{E}_{q(\theta \mid \phi)}[\ln p(\mathcal{D} \mid \theta)] - \text{KL}(q(\theta \mid \phi) \,\|\, p(\theta)), \qquad (7)$$
is challenging to evaluate. The non-linearities introduced by the neural network and the potential lack of conjugacy between the neural-network-parameterized likelihoods and the Horseshoe priors render the first expectation in Equation 7 intractable. Consequently, the traditional prescription of optimizing the ELBO by cycling through a series of fixed-point updates is no longer available.
4.1 Black Box Variational Inference
Recent progress in black box variational inference [16, 27, 26, 31] provides a recipe for subverting this difficulty. These techniques provide noisy unbiased estimates of the gradient $\nabla_{\phi} \mathcal{L}(\phi)$ by approximating the offending expectations with unbiased Monte Carlo estimates and relying on either score function estimators [34, 26] or reparameterization gradients [16, 27, 31] to differentiate through the sampling process. With the unbiased gradients in hand, stochastic gradient ascent can be used to optimize the ELBO. In practice, reparameterization gradients exhibit significantly lower variance than their score function counterparts and are typically favored for differentiable models. Reparameterization gradients rely on the existence of a parameterization that separates the source of randomness from the parameters with respect to which the gradients are sought. For our Gaussian variational approximations, the well-known non-centered parameterization, $w = \mu + \sigma \epsilon$ with $\epsilon \sim \mathcal{N}(0, 1)$, allows us to compute Monte Carlo gradients,

$$\nabla_{\{\mu, \sigma\}} \mathbb{E}_{q(w \mid \mu, \sigma)}[f(w)] = \mathbb{E}_{\mathcal{N}(\epsilon \mid 0, 1)}\big[\nabla_{\{\mu, \sigma\}} f(\mu + \sigma \epsilon)\big], \qquad (8)$$
for any differentiable function $f$. Further, as shown in [15], the variance of the gradient estimator can be provably lowered by noting that the weights in a layer only affect the ELBO through the layer's preactivations, and by directly sampling from the relatively lower-dimensional variational posterior over preactivations.
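Equation 8 can be sanity-checked on a toy objective. With $f(w) = w^2$ and $w \sim \mathcal{N}(\mu, \sigma^2)$ we have $\mathbb{E}[f(w)] = \mu^2 + \sigma^2$, so the exact gradients are $2\mu$ and $2\sigma$; this sketch (our own illustration, not code from the paper) recovers them by Monte Carlo through the reparameterization $w = \mu + \sigma\epsilon$:

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma = 1.5, 0.7

# Reparameterize: w = mu + sigma * eps with eps ~ N(0, 1).
eps = rng.standard_normal(1_000_000)
w = mu + sigma * eps

# f(w) = w^2, so df/dw = 2w; the chain rule through w = mu + sigma*eps gives
#   d f / d mu    = 2w * 1
#   d f / d sigma = 2w * eps
grad_mu = np.mean(2.0 * w)           # estimates 2 * mu = 3.0
grad_sigma = np.mean(2.0 * w * eps)  # estimates 2 * sigma = 1.4
```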
Variational distribution on preactivations
Recall that the preactivation of node $k$ in layer $l$ in our non-centered model is $u_{kl} = \tau_{kl} \upsilon_l \, \beta_{kl}^{T} a_{l-1}$. The variational posterior for the preactivations is given by,
$$u_{kl} = \tau_{kl} \upsilon_l \, v_{kl}, \qquad v_{kl} \sim \mathcal{N}\big(\mu_{kl}^{T} a_{l-1}, \; (a_{l-1}^2)^{T} \sigma_{kl}^2\big), \qquad (9)$$

where $a_{l-1}$ is the input to layer $l$, $\mu_{kl}$ and $\sigma_{kl}^2$ are the means and variances of the variational posterior over weights incident into node $k$, and $a_{l-1}^2$ denotes a pointwise squaring of the input $a_{l-1}$. Since the variational posteriors of $\tau_{kl}$ and $\upsilon_l$ are restricted to the log-Normal family, it follows that their product is also log-Normal, $q(\ln \tau_{kl} \upsilon_l) = \mathcal{N}(\mu_{\tau_{kl}} + \mu_{\upsilon_l}, \; \sigma^2_{\tau_{kl}} + \sigma^2_{\upsilon_l})$.
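The Gaussian part of this computation amounts to a few lines of code. In the sketch below (ours; `mu` and `sigma2` are hypothetical variational means and variances of the incident weights, and the scales $\tau_{kl}\upsilon_l$ are omitted for clarity), the preactivation is sampled directly from its low-dimensional Gaussian instead of sampling every individual weight, which is what lowers the gradient variance [15]:

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_preactivations(a, mu, sigma2, rng, n_samples=1):
    """For fully factorized q(beta) = N(mu, sigma2), the linear preactivation
    v = beta^T a is Gaussian with mean mu^T a and variance (a^2)^T sigma2;
    sample it directly rather than sampling each weight."""
    mean = a @ mu             # shape: (n_units,)
    var = (a ** 2) @ sigma2   # pointwise-squared input times weight variances
    eps = rng.standard_normal((n_samples,) + mean.shape)
    return mean + np.sqrt(var) * eps

a = np.array([0.5, -1.0, 2.0])                         # input into the layer
mu = np.array([[0.2, -0.1], [0.4, 0.3], [-0.5, 0.1]])  # 3 inputs -> 2 units
sigma2 = np.full((3, 2), 0.1)
v = sample_preactivations(a, mu, sigma2, rng, n_samples=50_000)
```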
Algorithm
We now have all the tools necessary for optimizing Equation 7. By recursively sampling from the variational posterior of Equation 9 for each layer of the network, we are able to forward propagate information through the network. Owing to the reparameterizations (Equation 8), we are also able to differentiate through the sampling process and use reverse-mode automatic differentiation tools [20] to compute the relevant gradients. With the gradients in hand, we optimize $\mathcal{L}(\phi)$ with respect to the variational weights $\phi_{\beta}$, per-unit scales $\phi_{\tau_{kl}}$, per-layer scales $\phi_{\upsilon_l}$, and the variational scale for the output layer weights $\phi_{\kappa}$, using Adam [14]. Conditioned on these, the optimal variational posteriors of the auxiliary variables $\lambda_{kl}$, $\vartheta_l$, and $\rho$ follow Inverse Gamma distributions. Fixed-point updates that maximize $\mathcal{L}(\phi)$ with respect to $\phi_{\lambda_{kl}}$, $\phi_{\vartheta_l}$, and $\phi_{\rho}$, holding the other variational parameters fixed, are available. The overall algorithm involves cycling between gradient and fixed-point updates to maximize the ELBO in a coordinate ascent fashion.
5 Related Work
Early work on Bayesian neural networks can be traced back to [4, 19, 23]. These early approaches relied on Laplace approximations or Markov Chain Monte Carlo (MCMC) for inference. They do not scale well to modern architectures or the large datasets required to learn them. Recent advances in stochastic variational methods [3, 27], black-box variational and alpha-divergence minimization [10, 26], and probabilistic backpropagation [11] have reinvigorated interest in BNNs by allowing inference to scale to larger architectures and larger datasets.

Work on learning structure in BNNs remains relatively nascent. In [1], the authors use a cascaded Indian buffet process to learn the structure of sigmoidal belief networks. While interesting, their approach appears susceptible to poor local optima, and their proposed Markov Chain Monte Carlo based inference does not scale well. More recently, [3] introduce a mixture-of-Gaussians prior on the weights, with one mixture tightly concentrated around zero, thus approximating a spike-and-slab prior over weights. Their goal of turning off edges is very different from our approach, which performs model selection over the appropriate number of nodes. Further, our proposed Horseshoe prior can be seen as an extension of their work, where we employ an infinite scale mixture-of-Gaussians. Beyond providing stronger sparsity, this is attractive because it obviates the need to directly specify the mixture component variances or the mixing proportion, as is required by the prior proposed in [3]. Only the prior scales of the variances need to be specified, and in our experiments we found results to be relatively robust to the values of these scale hyperparameters. Recent work [25] indicates that further gains may be possible through a more careful tuning of the scale parameters. Others [15, 6] have noticed connections between Dropout [30] and approximate variational inference. In particular, [21] show that the interpretation of Gaussian dropout as performing variational inference in a network with log-uniform priors over weights leads to sparsity in weights. This is an interesting but orthogonal approach, wherein sparsity stems from variational optimization instead of the prior.
There also exists work on learning structure in non-Bayesian neural networks. Early work [17, 9] pruned networks by analyzing second-order derivatives of the objective. More recently, [33] describe applications of structured sparsity not only for optimizing filters and layers but also computation time. Closest to our work in spirit are [24], [28] and [22], who use group sparsity to prune groups of weights—e.g., the weights incident to a node. However, these approaches do not model the uncertainty in weights and provide uniform shrinkage to all parameters. Our horseshoe prior approach similarly provides group shrinkage while still allowing large weights for groups that are active.
6 Experiments
In this section, we present experiments that evaluate various aspects of the proposed Bayesian neural network with horseshoe priors (HSBNN). We begin with experiments on synthetic data that showcase the model's ability to guard against underfitting and to recover the underlying model. We then proceed to benchmark performance on standard regression and classification tasks. For the regression problems we use Gaussian likelihoods with an unknown precision $\gamma$, $p(y_n \mid f(\mathcal{W}, x_n), \gamma) = \mathcal{N}(y_n \mid f(\mathcal{W}, x_n), \gamma^{-1})$. We place a vague prior on the precision and approximate the posterior over $\gamma$ using a Gamma distribution. The corresponding variational parameters are learned via a gradient update during learning. We use a Categorical likelihood for the classification problems. In a preliminary study, we found that larger minibatch sizes improved performance, and we use the same batch size in all experiments. The hyperparameters $b_0$ and $b_g$ are both set to one.
6.1 Experiments on simulated data
Robustness to underfitting
We begin with a one-dimensional non-linear regression problem shown in Figure 2. To explore the effect of additional modeling capacity on performance, we sample twenty points uniformly at random from the function and train single hidden layer Bayesian neural networks of increasing width. We compare HSBNN against a BNN with Gaussian priors on weights, training both for the same number of iterations. The performance of the BNN with Gaussian priors quickly deteriorates with increasing capacity as a result of underfitting the limited amount of training data. In contrast, by pruning away additional capacity, HSBNN is more robust to model misspecification, showing only a marginal drop in predictive performance with an increasing number of units.

Non-centered parameterization
Next, we explore the benefits of the non-centered parameterization. We consider a simple two-dimensional classification problem generated by sampling data uniformly at random and using a 2-2-1 network, whose parameters are known a priori, to generate the class labels. We train three Bayesian neural networks with a single hidden layer on this data: with Gaussian priors, with horseshoe priors but employing a centered parameterization, and with the non-centered horseshoe prior. Each model is trained until convergence. We find that all three models are able to easily fit the data and provide high predictive accuracy. However, the structures learned by the three models are very different. In Figure 2 we visualize the distribution of weights incident onto a unit. Unsurprisingly, the BNN with Gaussian priors does not exhibit sparsity. In contrast, the models employing the horseshoe prior are able to prune units away by setting all incident weights to tiny values. It is interesting to note that even for this highly stylized example the centered parameterization struggles to recover the true structure of the underlying network. The non-centered parameterization, however, does significantly better and prunes away all but two units. Further experiments provided in the supplement demonstrate the same effect for wider 100-unit networks. The non-centered parameterized model is again able to recover the two active units.
6.2 Classification and Regression experiments
We benchmark classification performance on the MNIST dataset. Additional experiments on a gesture recognition task are available in the supplement. We compare HSBNN against the variational matrix Gaussian (VMG) [18], a BNN with a two-component scale mixture prior on weights (SMBNN) proposed in [3], and a BNN with Gaussian priors on weights (BNN). VMG uses a structured variational approximation, while the other approaches all use fully factorized approximations and differ only in the type of prior used. These approaches constitute the state-of-the-art in variational learning for Bayesian neural networks.
MNIST
We preprocessed the images in the MNIST digits dataset by dividing the pixel values by 126. We explored networks with varying widths and depths, all employing rectified-linear units. For HSBNN we used Adam and did not use a validation set to monitor validation performance or to tune hyperparameters. We used the parameter settings recommended in the original papers for the competing methods. Figure 3 summarizes our findings. We showcase results for three architectures with two hidden layers of increasing width. Across architectures, we find our performance to be significantly better than BNN, comparable to SMBNN, and worse than VMG. The stronger performance of VMG likely stems from the structured matrix-variate variational approximation it employs.

More interestingly, we clearly see the sparsity-inducing effects of the horseshoe prior. Recall that under the horseshoe prior, $w_{kl} = \tau_{kl} \upsilon_l \beta_{kl}$. As the scales tend to zero, the corresponding units (and all incident weights) are pruned away. SMBNN also encourages sparsity, but on weights, not nodes. Further, the horseshoe prior, with its thicker tails and taller spike at the origin, encourages stronger sparsity. To see this, we compared the 2-norms of the inferred expected node weight vectors found by SMBNN and HSBNN (Figure 3). For HSBNN the inferred scales are tiny for most units, with a few notable outliers that escape unshrunk. This causes the corresponding weight vectors to be zero for the majority of units, suggesting that the model is able to effectively "turn off" extra capacity. In contrast, the node weight vectors recovered by SMBNN (and BNN) are less tightly concentrated at zero. We also plot the density of the node weight vector with the smallest norm in each of the three architectures. Note that with increasing architecture size (modeling capacity) the density peaks more strongly at zero, suggesting that the model is more confident in turning off the unit and not using the extra modeling capacity. To further explore the implications of node versus weight sparsity, we visualize the first-layer weights learned by SMBNN and HSBNN in Figure 3. Weight sparsity in SMBNN encourages fundamentally different filters that pick up edges at different orientations. In contrast, HSBNN's node sparsity encourages filters that correspond to digits or superpositions of digits, and may lead to more interpretable networks. The stronger sparsity afforded by the horseshoe is again evident when visualizing filters with the lowest norms: HSBNN filters are nearly all black when scaled with respect to the SMBNN filters.

Regression

We also compare the performance of our model on regression datasets from the UCI repository. We follow the experimental protocol proposed in [11, 18] and train a single hidden layer network with rectified-linear units for all but the larger "Protein" and "Year" datasets, for which we train a wider network. For the smaller datasets we train on a randomly subsampled subset, evaluate on the remainder, and repeat this process. For "Protein" we perform 5 replications and for "Year" we evaluate on a single split. Here, we only benchmark against VMG, which has previously been shown to outperform alternatives [18]. Table 1 summarizes our results. Despite our fully factorized variational approximation, we remain competitive with VMG in terms of both root mean squared error (RMSE) and predictive log-likelihoods, and even outperform it on some datasets. A more careful selection of the scale hyperparameters [25] and the use of structured variational approximations similar to VMG will likely help improve results further, and constitute interesting directions for future work.
Dataset       N (d)          VMG (RMSE)   HSBNN (RMSE)   VMG (Test ll)   HSBNN (Test ll)
Boston        506 (13)
Concrete      1,030 (8)
Energy        768 (8)
Kin8nm        8,192 (8)
Naval         11,934 (16)
Power Plant   9,568 (4)
Protein       45,730 (9)
Wine          1,599 (11)
Yacht         308 (6)
Year          515,345 (90)
7 Discussion and Conclusion
In Section 6, we demonstrated that a properly parameterized horseshoe prior on the scales of the weights incident to each node is a computationally efficient tool for model selection in Bayesian neural networks. Decomposing the horseshoe prior into inverse gamma distributions and using a non-centered representation ensured a degree of robustness to poor local optima. While we have seen that the horseshoe prior is an effective tool for model selection, one might wonder about more common alternatives. We lay out a few obvious choices and contrast their deficiencies. One starting point is to observe that a node can be pruned if all its incident weights are zero (in this case, it can only pass on the same bias term to the rest of the network). Such sparsity can be encouraged by a simple exponential prior on the weight scale, but without heavy tails all scales are forced artificially low and prediction suffers, as has been noted in the context of learning sparse neural networks [21, 33]. In contrast, simply using a heavy-tailed prior on the scale parameter, such as a half-Cauchy, will not apply any pressure to set small scales to zero, and we will not obtain sparsity. Both the shrinkage to zero and the heavy tails of the horseshoe prior are necessary to obtain the model selection that we require. Importantly, using a continuous prior with the appropriate statistical properties is simple to incorporate into existing inference, unlike an explicit spike-and-slab model. Another alternative is to observe that a node can be pruned if the product $w_{kl}^{T} a_{l-1}$ is nearly constant for all inputs $a_{l-1}$—having small weights is sufficient to achieve this property; having weights that are orthogonal to the variation in $a_{l-1}$ is another. Thus, instead of putting a prior over the scale of $w_{kl}$, one could put a prior over the scale of the variation in $w_{kl}^{T} a_{l-1}$. While we believe this is more general, we found that such a formulation has many more local optima and is thus harder to optimize.
References
 [1] R. P. Adams, H. M. Wallach, and Z. Ghahramani. Learning the structure of deep sparse graphical models. In AISTATS, 2010.
 [2] M. Betancourt and M. Girolami. Hamiltonian Monte Carlo for hierarchical models. Current Trends in Bayesian Methodology with Applications, 79:30, 2015.
 [3] C. Blundell, J. Cornebise, K. Kavukcuoglu, and D. Wierstra. Weight uncertainty in neural networks. In ICML, pages 1613–1622, 2015.
 [4] W. L. Buntine and A. S. Weigend. Bayesian backpropagation. Complex systems, 5(6):603–643, 1991.
 [5] C. M. Carvalho, N. G. Polson, and J. G. Scott. Handling sparsity via the horseshoe. In AISTATS, 2009.

 [6] Y. Gal and Z. Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In ICML, 2016.
 [7] Y. Gal and Z. Ghahramani. A theoretically grounded application of dropout in recurrent neural networks. In NIPS, 2016.
 [8] Y. Gal, R. Islam, and Z. Ghahramani. Deep Bayesian active learning with image data. In Bayesian Deep Learning workshop, NIPS, 2016.
 [9] B. Hassibi, D. G. Stork, and G. J. Wolff. Optimal brain surgeon and general network pruning. In Neural Networks, 1993., IEEE Intl. Conf. on, pages 293–299. IEEE, 1993.
 [10] J. Hernández-Lobato, Y. Li, M. Rowland, T. Bui, D. Hernández-Lobato, and R. Turner. Black-box alpha divergence minimization. In ICML, pages 1511–1520, 2016.
 [11] J. M. Hernández-Lobato and R. P. Adams. Probabilistic backpropagation for scalable learning of Bayesian neural networks. In ICML, 2015.
 [12] J. B. Ingraham and D. S. Marks. Bayesian sparsity for intractable distributions. arXiv:1602.03807, 2016.
 [13] A. Joshi, S. Ghosh, M. Betke, S. Sclaroff, and H. Pfister. Personalizing gesture recognition using hierarchical Bayesian neural networks. In CVPR, 2017.
 [14] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014.
 [15] D. P. Kingma, T. Salimans, and M. Welling. Variational dropout and the local reparameterization trick. In NIPS, 2015.
 [16] D. P. Kingma and M. Welling. Stochastic gradient VB and the variational autoencoder. In ICLR, 2014.
 [17] Y. LeCun, J. S. Denker, and S. A. Solla. Optimal brain damage. In NIPS, pages 598–605, 1990.
 [18] C. Louizos and M. Welling. Structured and efficient variational deep learning with matrix Gaussian posteriors. In ICML, pages 1708–1716, 2016.
 [19] D. J. MacKay. A practical Bayesian framework for backpropagation networks. Neural computation, 4(3):448–472, 1992.
 [20] D. Maclaurin, D. Duvenaud, and R. P. Adams. Autograd: Effortless gradients in numpy. In ICML AutoML Workshop, 2015.
 [21] D. Molchanov, A. Ashukha, and D. Vetrov. Variational dropout sparsifies deep neural networks. arXiv:1701.05369, 2017.
 [22] K. Murray and D. Chiang. Autosizing neural networks: With applications to ngram language models. arXiv:1508.05051, 2015.
 [23] R. M. Neal. Bayesian learning via stochastic dynamics. In NIPS, 1993.
 [24] T. Ochiai, S. Matsuda, H. Watanabe, and S. Katagiri. Automatic node selection for deep neural networks using group lasso regularization. arXiv:1611.05527, 2016.

 [25] J. Piironen and A. Vehtari. On the hyperprior choice for the global shrinkage parameter in the horseshoe prior. In AISTATS, 2017.
 [26] R. Ranganath, S. Gerrish, and D. M. Blei. Black box variational inference. In AISTATS, pages 814–822, 2014.
 [27] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, pages 1278–1286, 2014.
 [28] S. Scardapane, D. Comminiello, A. Hussain, and A. Uncini. Group sparse regularization for deep neural networks. Neurocomputing, 241:81–89, 2017.
 [29] Y. Song, D. Demirdjian, and R. Davis. Tracking body and hands for gesture recognition: Natops aircraft handling signals database. In Automatic Face & Gesture Recognition and Workshops (FG 2011), 2011 IEEE International Conference on, pages 500–506. IEEE, 2011.
 [30] N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. JMLR, 15(1):1929–1958, 2014.
 [31] M. Titsias and M. Lázaro-Gredilla. Doubly stochastic variational Bayes for non-conjugate inference. In ICML, pages 1971–1979, 2014.
 [32] M. P. Wand, J. T. Ormerod, S. A. Padoan, R. Frühwirth, et al. Mean field variational Bayes for elaborate distributions. Bayesian Analysis, 6(4):847–900, 2011.
 [33] W. Wen, C. Wu, Y. Wang, Y. Chen, and H. Li. Learning structured sparsity in deep neural networks. In NIPS, pages 2074–2082, 2016.
 [34] R. J. Williams. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine learning, 8(34):229–256, 1992.
Appendix A Fixed point updates
The ELBO corresponding to the non-centered HS model is,

$$\mathcal{L}(\phi) = \mathbb{E}_{q(\theta \mid \phi)}[\ln p(\mathcal{D} \mid \theta)] + \mathbb{E}_{q(\theta \mid \phi)}[\ln p(\theta)] + \mathbb{H}[q(\theta \mid \phi)]. \qquad (10)$$
With our choices of the variational approximating families, all the entropies are available in closed form. We rely on Monte Carlo estimates to evaluate the expectation involving the likelihood, $\mathbb{E}_{q(\theta \mid \phi)}[\ln p(\mathcal{D} \mid \theta)]$.
The auxiliary variables $\lambda_{kl}$, $\vartheta_l$, and $\rho$ all follow inverse Gamma distributions. Here we derive the result for $\vartheta_l$; the others follow analogously. Consider,

$$\ln q(\vartheta_l) \propto \mathbb{E}_{q(\upsilon_l)}\big[\ln p(\upsilon_l^2 \mid \vartheta_l) + \ln p(\vartheta_l \mid b_g)\big] = -2 \ln \vartheta_l - \left(\frac{1}{b_g^2} + \mathbb{E}_{q(\upsilon_l)}\!\left[\frac{1}{\upsilon_l^2}\right]\right)\frac{1}{\vartheta_l} + \text{const}, \qquad (11)$$

from which we see that,

$$q(\vartheta_l) = \text{Inv-Gamma}\!\left(1, \; \frac{1}{b_g^2} + \mathbb{E}_{q(\upsilon_l)}\!\left[\frac{1}{\upsilon_l^2}\right]\right). \qquad (12)$$

Since $q(\ln \upsilon_l) = \mathcal{N}(\mu_{\upsilon_l}, \sigma_{\upsilon_l}^2)$, it follows that $\mathbb{E}_{q(\upsilon_l)}[\upsilon_l^{-2}] = \exp(-2\mu_{\upsilon_l} + 2\sigma_{\upsilon_l}^2)$. We can thus calculate the necessary fixed-point updates for $q(\vartheta_l)$ conditioned on $\mu_{\upsilon_l}$ and $\sigma_{\upsilon_l}^2$. Our algorithm uses these fixed-point updates given estimates of $\mu_{\upsilon_l}$ and $\sigma_{\upsilon_l}^2$ after each Adam step.
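The update derived above translates into a couple of lines of code. In this sketch (ours; the function names are illustrative), the log-Normal moment $\mathbb{E}[\upsilon^{-2}] = \exp(-2\mu_{\upsilon} + 2\sigma_{\upsilon}^2)$ feeds directly into the Inv-Gamma fixed point of Equation 12, and the moment formula is cross-checked by Monte Carlo:

```python
import numpy as np

def lognormal_E_inv_square(mu, sigma2):
    """E[x^{-2}] under q(x) = log-Normal(mu, sigma2),
    using E[x^t] = exp(t*mu + t^2*sigma2/2) with t = -2."""
    return np.exp(-2.0 * mu + 2.0 * sigma2)

def fixed_point_update(mu_v, sigma2_v, b_g):
    """Optimal q(vartheta) = Inv-Gamma(1, 1/b_g^2 + E_q[1/upsilon^2]),
    given log-Normal q(upsilon) with parameters (mu_v, sigma2_v)."""
    shape = 1.0
    scale = 1.0 / b_g ** 2 + lognormal_E_inv_square(mu_v, sigma2_v)
    return shape, scale

# Monte Carlo check of the closed-form moment for log-Normal(0.3, 0.04).
rng = np.random.default_rng(5)
x = np.exp(0.3 + 0.2 * rng.standard_normal(1_000_000))
mc = np.mean(x ** -2)  # should approach exp(-2*0.3 + 2*0.04) = exp(-0.52)
```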
Appendix B Additional Experiments
B.1 Simulated Data
Here we provide an additional experiment with the data setup of Section 6.1. We use the same data, but train larger networks with 100 units each. Figure 4 shows the inferred weights under the different models. Observe that the non-centered HSBNN is again able to prune away extra capacity and recover two active nodes.
B.2 Further Exploration of Model Selection Properties
Here we provide additional results that illustrate the model selection abilities of HSBNN. First, we visualize the norms of the inferred node weight vectors found by BNN, SMBNN and HSBNN for networks of increasing size. Note that as we increase capacity, the model selection abilities of HSBNN become more obvious: as opposed to the other approaches, HSBNN exhibits clear inflection points, and it is evident that the model is using only a fraction of its available capacity.
As a reference, we compare against SMBNN. We visualize the density of the inferred node weight vectors under the two models for the three network sizes. For each network we show the density of the units with the smallest norms from either layer. Note that in all three cases HSBNN produces weights that are more tightly concentrated around zero. Moreover, for HSBNN the concentration around zero becomes sharper with increasing modeling capacity (larger architectures), again indicating that we are pruning away additional capacity.
B.3 Gesture Recognition
We also experimented with a gesture recognition dataset [29] that consists of 24 unique aircraft handling signals performed by 20 different subjects, each for 20 repetitions. The task consists of recognizing these gestures from kinematic, tracking and video data; however, we only use the kinematic and tracking data. A couple of example gestures are visualized in Figure 7.
A 12-dimensional vector of body features (angular joint velocities for the right and left elbows and wrists), as well as an 8-dimensional vector of hand features (probability values for hand shapes for the left and right hands), collected by Song et al. [29], are provided as features for all frames of all videos in the dataset. We additionally used the 20-dimensional per-frame tracking features made available in [29]. We constructed features to represent each gesture by first extracting frames by sampling uniformly in time and then concatenating the per-frame features of the selected frames to produce 600-dimensional feature vectors.

This is a much smaller dataset than MNIST, and recent work [13] has demonstrated that a BNN with Gaussian priors performs well on this task. Figure 7 compares the performance of HSBNN with competing methods. We train a two layer HSBNN with each layer containing 400 units. The error rates reported are a result of averaging over 5 random 75/25 splits of the dataset. Similar to MNIST, HSBNN significantly outperforms BNN and is competitive with VMG and SMBNN. We also see strong sparsity, just as in MNIST.