Bayesian Neural Networks (BNNs) are increasingly the de-facto approach for modeling stochastic functions. By treating the weights in a neural network as random variables, and performing posterior inference on these weights, BNNs can avoid overfitting in the regime of small data, provide well-calibrated posterior uncertainty estimates, and model a large class of stochastic functions with heteroskedastic and multi-modal noise. These properties have resulted in BNNs being adopted in applications such as active learning [8, 11].
Despite this growing adoption, model selection in BNNs has received relatively little attention. Unfortunately, the consequences of a poor choice of architecture are severe: too few nodes, and the BNN will not be flexible enough to model the function of interest; too many nodes, and the BNN predictions will have large variance, because the posterior uncertainty in the weights will remain large. In other approaches to modeling stochastic functions, such as Gaussian Processes (GPs), such concerns can be addressed by optimizing continuous kernel parameters; in BNNs, the number of nodes in a layer is a discrete quantity, and practitioners typically perform model selection via onerous searches over different layer sizes.
In this work, we demonstrate that we can perform computationally efficient and statistically effective model selection in Bayesian neural networks by placing Horseshoe (HS) priors [5] over the variance of weights incident to each node in the network. The HS prior has heavy tails and supports both zero values and large values. Fixing the mean of the incident weights to be zero, nodes with small variance parameters are effectively turned off (all incident weights will be close to zero), while nodes with large variance parameters can be interpreted as active. In this way, we can perform model selection over the number of nodes required in a Bayesian neural network.
While this mimics a spike-and-slab approach that would assign a discrete on-off variable to each node, the continuous relaxation provided by the Horseshoe prior keeps the model differentiable; with appropriate parameterization, we can take advantage of recent advances in variational inference for training. We demonstrate that our approach avoids under-fitting even when the required number of nodes in the network is grossly over-estimated; we learn compact network structures without sacrificing, and sometimes improving, predictive performance.
2 Bayesian Neural Networks
A deep neural network with $L$ hidden layers is parameterized by a set of weight matrices $\mathcal{W} = \{W_l\}_{l=1}^{L+1}$, with each weight matrix $W_l$ being of size $(K_{l-1}+1) \times K_l$, where $K_l$ is the number of units in layer $l$. The neural network maps an input $x$ to a response $f(\mathcal{W}, x)$ by recursively applying the transformation $u_l = h(W_l^{T}[u_{l-1}, 1])$, where the vector $u_{l-1}$ is the input into layer $l$, the initial input is $u_0 = x$, and $h$ is some point-wise non-linearity, for instance the rectified-linear function, $h(a) = \max(0, a)$.
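The recursion above can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's implementation; the helper names (`relu`, `forward`) and the convention of appending a constant 1 to carry the bias follow the $(K_{l-1}+1) \times K_l$ weight shapes described above:

```python
import numpy as np

def relu(a):
    # Point-wise rectified-linear non-linearity h(a) = max(0, a)
    return np.maximum(0.0, a)

def forward(weights, x):
    """Map input x through the network; each W_l has shape
    (K_{l-1} + 1, K_l), the extra row multiplying an appended bias input."""
    u = x
    for W in weights[:-1]:
        u = relu(np.append(u, 1.0) @ W)   # hidden layers: affine map + ReLU
    return np.append(u, 1.0) @ weights[-1]  # linear output layer

rng = np.random.default_rng(0)
# Toy network: 2 inputs, one hidden layer of 4 units, 2 outputs.
ws = [rng.normal(size=(3, 4)), rng.normal(size=(5, 2))]
y = forward(ws, np.array([0.5, -1.0]))
```

The shapes chase through as $2 \to 3 \to 4 \to 5 \to 2$ once the bias entry is appended before each layer.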
A Bayesian neural network captures uncertainty in the weight parameters by endowing them with a prior distribution $p(\mathcal{W})$. Given a dataset of $N$ observation-response pairs $\mathcal{D} = \{(x_n, y_n)\}_{n=1}^{N}$, we are interested in estimating the posterior distribution,
$$p(\mathcal{W} \mid \mathcal{D}) = \frac{\prod_{n=1}^{N} p(y_n \mid f(\mathcal{W}, x_n)) \, p(\mathcal{W})}{p(\mathcal{D})},$$
and leveraging the learned posterior for predicting responses to unseen data $x_*$, $p(y_* \mid x_*, \mathcal{D}) = \int p(y_* \mid f(\mathcal{W}, x_*)) \, p(\mathcal{W} \mid \mathcal{D}) \, d\mathcal{W}$. The prior $p(\mathcal{W})$ allows one to encode problem-specific beliefs as well as general properties about the weights. In the past, authors have used fully factorized Gaussians on each weight, structured Gaussians [18, 7] on each layer, as well as a two-component scale mixture of Gaussians [3] on each weight. The scale mixture prior has been shown to encourage weight sparsity. In this paper, we show that by using carefully constructed infinite scale mixtures of Gaussians, we can induce heavy-tailed priors over network weights. Unlike previous work, we force all weights incident into a unit to share a common prior, allowing us to induce sparsity at the unit level and prune away units that do not help explain the data well.
3 Automatic Model Selection through Horseshoe Priors
Let the node weight vector $w_{kl}$ denote the set of all weights incident into unit $k$ of hidden layer $l$. We assume that each node weight vector is conditionally independent and distributed according to a Gaussian scale mixture,
$$w_{kl} \mid \tau_{kl}, \upsilon_l \sim \mathcal{N}\big(0, (\tau_{kl}^2 \upsilon_l^2)\,\mathbb{I}\big), \qquad \tau_{kl} \sim C^{+}(0, b_0), \qquad \upsilon_l \sim C^{+}(0, b_g). \qquad (2)$$
Here $\mathbb{I}$ is an identity matrix, $C^{+}(0, b)$ is the Half-Cauchy distribution with density $p(a \mid b) = 2 / \big(\pi b (1 + (a/b)^2)\big)$ for $a > 0$, $\tau_{kl}$ is a unit-specific scale parameter, and the scale parameter $\upsilon_l$ is shared by all units in the layer.
The distribution over weights in Equation 2 is called the horseshoe prior [5]. It exhibits Cauchy-like flat, heavy tails while maintaining an infinitely tall spike at zero. Consequently, it has the desirable property of allowing sufficiently large node weight vectors to escape un-shrunk (by having a large scale parameter) while providing severe shrinkage to smaller weights. This is in contrast to Lasso-style regularizers and their Bayesian counterparts, which provide uniform shrinkage to all weights. By forcing the weights incident on a unit to share scale parameters, the prior in Equation 2 induces sparsity at the unit level, turning off units that are unnecessary for explaining the data well. Intuitively, the shared layer-wide scale $\upsilon_l$ pulls all units in layer $l$ towards zero, while the heavy-tailed unit-specific scales $\tau_{kl}$ allow some of the units to escape the shrinkage.
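The shrinkage profile described above can be illustrated numerically. In the sketch below (the scale `b` and sample count are arbitrary choices of ours), scales sampled from a Half-Cauchy have median exactly `b`, yet leave substantial mass both far below it (units shrunk towards zero) and far above it (units escaping un-shrunk):

```python
import numpy as np

rng = np.random.default_rng(0)
b = 1.0
# Half-Cauchy C+(0, b) samples: absolute value of a Cauchy(0, b) draw.
tau = np.abs(b * rng.standard_cauchy(200_000))

med = np.median(tau)                 # median of C+(0, b) is exactly b
frac_tiny = np.mean(tau < 0.1 * b)   # strongly shrunk draws
frac_huge = np.mean(tau > 10.0 * b)  # draws escaping un-shrunk
```

A neat property of the Half-Cauchy makes the two tails symmetric on a log scale: $P(\tau < b/10) = 1 - \tfrac{2}{\pi}\arctan(10) = P(\tau > 10b) \approx 6.3\%$, so roughly 6% of units sit an order of magnitude below the median while another 6% sit an order of magnitude above it.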
Parameterizing for More Robust Inference: Decomposing the Cauchy Distribution
While a direct parameterization of the Half-Cauchy distribution in Equation 2 is possible, it leads to challenges during variational learning. Standard exponential family variational approximations struggle to capture the thick Cauchy tails, while a Cauchy approximating family leads to high-variance gradients. Instead, we use a more convenient auxiliary variable parameterization [32],
$$a \sim C^{+}(0, b) \iff a^2 \mid \lambda \sim \text{Inv-Gamma}\left(\tfrac{1}{2}, \tfrac{1}{\lambda}\right), \quad \lambda \sim \text{Inv-Gamma}\left(\tfrac{1}{2}, \tfrac{1}{b^2}\right),
$$
where $\text{Inv-Gamma}(a, b)$ is the Inverse Gamma distribution with density $p(x) = \frac{b^{a}}{\Gamma(a)} x^{-a-1} e^{-b/x}$ for $x > 0$. Since the number of output units is fixed by the problem at hand, there is no need for a sparsity-inducing prior on the output layer. We place independent Gaussian priors, with vague hyper-priors, on the output layer weights.
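The equivalence between the Half-Cauchy and its two-stage inverse-Gamma decomposition can be checked numerically. In the sketch below, `inv_gamma` is a helper of ours (not a library API) that draws Inv-Gamma variates as reciprocals of Gamma draws, and `b` and the sample size are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
b, n = 2.0, 200_000

def inv_gamma(shape, scale, size):
    # If G ~ Gamma(shape=a, rate=s), then 1/G ~ Inv-Gamma(a, s).
    # numpy's gamma takes a *scale* theta = 1/rate, hence the reciprocal.
    return 1.0 / rng.gamma(shape, 1.0 / scale, size)

# Two-stage sampling: lambda ~ Inv-Gamma(1/2, 1/b^2),
# then tau^2 | lambda ~ Inv-Gamma(1/2, 1/lambda).
lam = inv_gamma(0.5, 1.0 / b**2, n)
tau = np.sqrt(inv_gamma(0.5, 1.0 / lam, n))

# Direct sampling of the same marginal: tau ~ C+(0, b).
tau_direct = np.abs(b * rng.standard_cauchy(n))
```

Both routes should produce the same marginal; in particular, both empirical medians should sit near $b$, the median of $C^{+}(0, b)$.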
The joint distribution of the Horseshoe Bayesian neural network is then given by,
$$p(\mathcal{D}, \theta) = \prod_{n=1}^{N} p\big(y_n \mid f(\mathcal{W}, x_n)\big) \, p(\theta),$$
where $p(y_n \mid f(\mathcal{W}, x_n))$ is an appropriate likelihood function, and $p(\theta)$ collects the priors over the remaining latent variables: the hidden-layer weights with their scales $\tau_{kl}$ and $\upsilon_l$ and corresponding auxiliary variables $\lambda_{kl}$ and $\vartheta_l$, and the output-layer weights with Gaussian prior variance $\kappa^2$ and auxiliary variable $\rho$.
Parameterizing for More Robust Inference: Non-Centered Parameterization
The horseshoe prior of Equation 2 exhibits strong correlations between the weights $w_{kl}$ and the scales $\tau_{kl}\upsilon_l$. Indeed, its favorable sparsity-inducing properties stem from this coupling. However, an unfortunate consequence is a strongly coupled posterior that exhibits pathological funnel-shaped geometries [2, 12] and is difficult to reliably sample or approximate. Fully factorized approximations are particularly problematic and can lead to non-sparse solutions, erasing the benefits of using the horseshoe prior.
We instead adopt a non-centered parameterization,
$$\beta_{kl} \sim \mathcal{N}(0, \mathbb{I}), \qquad w_{kl} = \tau_{kl} \upsilon_l \, \beta_{kl},$$
where the distributions on the scales are left unchanged. Such a parameterization is referred to as non-centered, since the scales and weights are sampled from independent prior distributions and are marginally uncorrelated. The coupling between the two is now introduced by the likelihood when conditioning on observed data. Non-centered parameterizations are known to lead to simpler posterior geometries [2]. Empirically, we find that adopting a non-centered parameterization significantly improves the quality of our posterior approximation and helps us better find sparse solutions. Figure 1 summarizes the conditional dependencies assumed by the centered and the non-centered Horseshoe Bayesian neural network models.
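A small simulation makes the contrast concrete (illustrative only; the correlation diagnostics and sample sizes are ad hoc choices of ours). In the non-centered form, the sampled quantity $\beta$ is a priori uncorrelated with the scale, while the effective weight $w = \tau\beta$ remains strongly tied to it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
tau = np.abs(rng.standard_cauchy(n))   # half-Cauchy scale draws

# Centered form: w | tau ~ N(0, tau^2); weight and scale coupled a priori.
w_centered = rng.normal(0.0, tau)

# Non-centered form: beta ~ N(0, 1) independent of tau, with w = tau * beta.
beta = rng.normal(0.0, 1.0, n)
w_noncentered = tau * beta

# A priori, beta carries no information about the scale ...
corr_beta = np.corrcoef(np.abs(beta), np.log(tau))[0, 1]
# ... whereas the magnitude of the effective weight tracks the scale closely.
corr_w = np.corrcoef(np.log(np.abs(w_noncentered)), np.log(tau))[0, 1]
```

This is exactly why a factorized approximation over $(\beta, \tau)$ is far less damaging than one over $(w, \tau)$: the independence assumed by the approximation already holds under the non-centered prior.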
4 Learning Bayesian Neural Networks with Horseshoe priors
We use variational inference to approximate the intractable posterior $p(\theta \mid \mathcal{D})$. By exploiting recently proposed stochastic extensions, we are able to scale to large architectures and datasets and to deal with non-conjugacy.
We proceed by selecting a tractable family of distributions $q(\theta \mid \phi)$ with free variational parameters $\phi$. We then optimize $\phi$ such that the Kullback-Leibler divergence between the approximation and the true posterior, $\mathrm{KL}\big(q(\theta \mid \phi) \,\|\, p(\theta \mid \mathcal{D})\big)$, is minimized. This is equivalent to maximizing the lower bound on the marginal likelihood (or evidence), $\mathcal{L}(\phi) \leq \ln p(\mathcal{D})$.
Approximating Family We use a fully factorized variational family,
$$q(\theta \mid \phi) = \prod_{ijl} q(\beta_{ijl} \mid \phi) \prod_{kl} q(\tau_{kl}^2 \mid \phi)\, q(\lambda_{kl} \mid \phi) \prod_{l} q(\upsilon_{l}^2 \mid \phi)\, q(\vartheta_{l} \mid \phi)\; q(\kappa^2 \mid \phi)\, q(\rho \mid \phi).$$
We restrict the variational distribution for the non-centered weight $\beta_{ijl}$, between unit $i$ in layer $l-1$ and unit $j$ in layer $l$, to the Gaussian family, $q(\beta_{ijl} \mid \phi) = \mathcal{N}(\mu_{ijl}, \sigma^2_{ijl})$. We will use $\beta$ to denote the set of all non-centered weights in the network. The non-negative scale parameters $\tau_{kl}$ and $\upsilon_l$ and the variance of the output layer weights $\kappa$ are constrained to the log-Normal family, $q(\tau_{kl}^2 \mid \phi)$, $q(\upsilon_l^2 \mid \phi)$, and $q(\kappa^2 \mid \phi)$. We do not impose a distributional constraint on the variational approximations of the auxiliary variables $\lambda_{kl}$, $\vartheta_l$, or $\rho$, but we will see that, conditioned on the remaining variables, the optimal variational family for these latent variables follows an inverse Gamma distribution.
Evidence Lower Bound The resulting evidence lower bound (ELBO),
$$\mathcal{L}(\phi) = \mathbb{E}_{q(\theta \mid \phi)}\big[\ln p(\mathcal{D} \mid \theta)\big] - \mathrm{KL}\big(q(\theta \mid \phi) \,\|\, p(\theta)\big), \qquad (7)$$
is challenging to evaluate. The non-linearities introduced by the neural network and the potential lack of conjugacy between the neural-network-parameterized likelihood and the Horseshoe priors render the first expectation in Equation 7 intractable. Consequently, the traditional prescription of optimizing the ELBO by cycling through a series of fixed point updates is no longer available.
4.1 Black Box Variational Inference
Black box variational inference [26] provides a recipe for subverting this difficulty. These techniques provide noisy unbiased estimates of the gradient by approximating the offending expectations with unbiased Monte-Carlo estimates and relying on either score function estimators [34, 26] or reparameterization gradients [16, 27, 31] to differentiate through the sampling process. With the unbiased gradients in hand, stochastic gradient ascent can be used to optimize the ELBO. In practice, reparameterization gradients exhibit significantly lower variance than their score function counterparts and are typically favored for differentiable models. Reparameterization gradients rely on the existence of a parameterization that separates the source of randomness from the parameters with respect to which the gradients are sought. For our Gaussian variational approximations, the well-known non-centered parameterization, $w = \mu + \sigma \epsilon$ with $\epsilon \sim \mathcal{N}(0, \mathbb{I})$, allows us to compute Monte-Carlo gradients,
$$\nabla_{\mu, \sigma} \, \mathbb{E}_{q}\big[g(w)\big] = \mathbb{E}_{\mathcal{N}(\epsilon \mid 0, \mathbb{I})}\big[\nabla_{\mu, \sigma} \, g(\mu + \sigma \epsilon)\big], \qquad (8)$$
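Equation 8 can be sketched for a toy choice $g(w) = w^2$, where the exact gradients are known in closed form (the choice of $g$ and the sample size are illustrative assumptions of ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def g(w):        # any differentiable function of the weight
    return w**2

def grad_g(w):   # its derivative, g'(w) = 2w
    return 2.0 * w

# E_q[g(w)] with q = N(mu, sigma^2), reparameterized as w = mu + sigma * eps.
mu, sigma, n = 1.5, 0.5, 400_000
eps = rng.normal(size=n)
w = mu + sigma * eps
grad_mu = np.mean(grad_g(w))           # d/dmu   E[g] = E[g'(w)]
grad_sigma = np.mean(grad_g(w) * eps)  # d/dsigma E[g] = E[g'(w) * eps]
```

Here $\mathbb{E}_q[w^2] = \mu^2 + \sigma^2$, so the two Monte-Carlo estimates should approach $2\mu$ and $2\sigma$ respectively as the sample count grows.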
for any differentiable function $g$ and $\epsilon \sim \mathcal{N}(0, \mathbb{I})$. Further, as shown in [15], the variance of the gradient estimator can be provably lowered by noting that the weights in a layer affect $\mathcal{L}(\phi)$ only through the layer's pre-activations, and by directly sampling from the relatively lower-dimensional variational posterior over pre-activations.
Variational distribution on pre-activations
Recall that the pre-activation of node $k$ in layer $l$ in our non-centered model is $b_{kl} = \tau_{kl} \upsilon_l \, (\beta_{kl}^{T} a)$, where $a$ is the input to layer $l$. The variational posterior for the pre-activations is given by,
$$q(b_{kl} \mid \tau_{kl}, \upsilon_l) = \mathcal{N}\big(b_{kl} \mid \tau_{kl} \upsilon_l \, \mu_{kl}^{T} a, \;\; \tau_{kl}^2 \upsilon_l^2 \, (\sigma^2_{kl})^{T} a^2\big), \qquad (9)$$
where $\mu_{kl}$ and $\sigma^2_{kl}$ are the means and variances of the variational posterior over weights incident into node $k$, and $a^2$ denotes a point-wise squaring of the input $a$. Since the variational posteriors of $\tau_{kl}$ and $\upsilon_l$ are restricted to the log-Normal family, it follows that the product $\tau_{kl}\upsilon_l$ is itself log-Normally distributed and straightforward to sample.
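A sketch of sampling pre-activations directly in this fashion is below. The function and variable names are ours; the $\beta$ contribution uses the local-reparameterization mean $\mu^{T}a$ and variance $(\sigma^2)^{T}a^2$, and the log-Normal scale product is sampled by exponentiating a Gaussian draw:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_preactivations(a, mu, sig2, scale_mu, scale_sig):
    """Sample node pre-activations b_kl directly (local reparameterization).
    a: layer input; mu, sig2: variational means/variances of the
    non-centered weights (one column per node); scale_mu, scale_sig:
    log-Normal parameters of the per-node scale product tau * upsilon."""
    mean = a @ mu                 # E[beta^T a] per node
    var = (a**2) @ sig2           # Var[beta^T a] per node
    b_tilde = mean + np.sqrt(var) * rng.normal(size=mean.shape)
    # tau * upsilon is log-Normal: exponentiate a reparameterized Gaussian.
    scale = np.exp(scale_mu + scale_sig * rng.normal(size=mean.shape))
    return scale * b_tilde

a = np.array([1.0, -0.5, 2.0])                    # 3-dimensional layer input
mu = rng.normal(size=(3, 4))                      # 4 nodes in this layer
sig2 = np.full((3, 4), 0.01)
b = sample_preactivations(a, mu, sig2, np.zeros(4), np.full(4, 1e-3))
```

Sampling the 4-dimensional pre-activation vector directly avoids ever materializing samples of the 12 individual weights, which is the source of the variance reduction.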
We now have all the tools necessary for optimizing Equation 7. By recursively sampling from the variational posterior of Equation 9 for each layer of the network, we are able to forward propagate information through the network. Owing to the reparameterizations (Equation 8), we are also able to differentiate through the sampling process and use reverse mode automatic differentiation tools [20] to compute the relevant gradients. With the gradients in hand, we optimize $\mathcal{L}(\phi)$ with respect to the variational weight parameters $\{\mu_{ijl}, \sigma^2_{ijl}\}$, the per-unit scales $\tau_{kl}$, the per-layer scales $\upsilon_l$, and the variational scale for the output layer weights $\kappa$, using Adam [14]. Conditioned on these, the optimal variational posteriors of the auxiliary variables $\lambda_{kl}$, $\vartheta_l$, and $\rho$ follow inverse Gamma distributions, and fixed point updates that maximize $\mathcal{L}(\phi)$ with respect to them, holding the other variational parameters fixed, are available. The overall algorithm thus involves cycling between gradient and fixed point updates to maximize the ELBO in a coordinate ascent fashion.
5 Related Work
Bayesian neural networks have a long history [4, 19, 23]. These early approaches relied on Laplace approximations or Markov chain Monte Carlo (MCMC) for inference; they do not scale well to modern architectures or the large datasets required to learn them. Recent advances in stochastic variational methods [3, 27], black-box variational inference and alpha-divergence minimization [10, 26], and probabilistic backpropagation [11] have reinvigorated interest in BNNs by allowing inference to scale to larger architectures and larger datasets.
Work on learning structure in BNNs remains relatively nascent. In [1], the authors use a cascaded Indian buffet process to learn the structure of sigmoidal belief networks. While interesting, their approach appears susceptible to poor local optima, and their proposed Markov chain Monte Carlo inference does not scale well. More recently, [3] introduced a mixture-of-Gaussians prior on the weights, with one mixture component tightly concentrated around zero, thus approximating a spike and slab prior over weights. Their goal of turning off edges is very different from our approach, which performs model selection over the appropriate number of nodes. Further, our proposed Horseshoe prior can be seen as an extension of their work, in which we employ an infinite scale mixture-of-Gaussians. Beyond providing stronger sparsity, this is attractive because it obviates the need to directly specify the mixture component variances or the mixing proportion, as is required by the prior proposed in [3]. Only the prior scales of the variances need to be specified, and in our experiments we found results to be relatively robust to the values of these scale hyper-parameters. Recent work [25] indicates that further gains may be possible by a more careful tuning of the scale parameters. Others [15, 6] have noticed connections between Dropout [30] and approximate variational inference. In particular, [21] show that interpreting Gaussian dropout as variational inference in a network with log-uniform priors over weights leads to sparsity in weights. This is an interesting but orthogonal approach, wherein sparsity stems from the variational optimization rather than from the prior.
There is also work on learning structure in non-Bayesian neural networks. Early work [17, 9] pruned networks by analyzing second-order derivatives of the objective. More recently, [33] describe applications of structured sparsity not only for optimizing filters and layers but also computation time. Closest to our work in spirit are [22], [24], and [28], who use group sparsity to prune groups of weights, e.g. the weights incident to a node. However, these approaches do not model the uncertainty in the weights and provide uniform shrinkage to all parameters. Our horseshoe prior approach similarly provides group shrinkage while still allowing large weights for groups that are active.
6 Experiments
In this section, we present experiments that evaluate various aspects of the proposed Bayesian neural network with horseshoe priors (HS-BNN). We begin with experiments on synthetic data that showcase the model's ability to guard against under-fitting and to recover the underlying model. We then proceed to benchmark performance on standard regression and classification tasks. For the regression problems we use Gaussian likelihoods with an unknown precision $\gamma$, $p(y_n \mid f(\mathcal{W}, x_n), \gamma) = \mathcal{N}(y_n \mid f(\mathcal{W}, x_n), \gamma^{-1})$. We place a vague Gamma prior on the precision and approximate the posterior over $\gamma$ using a Gamma distribution, whose variational parameters are learned via a gradient update during learning. We use a Categorical likelihood for the classification problems. In a preliminary study, we found that larger mini-batch sizes improved performance, and we use the same batch size in all experiments. The hyper-parameters $b_0$ and $b_g$ are both set to one.
6.1 Experiments on simulated data
Robustness to under-fitting
We begin with a one-dimensional non-linear regression problem, shown in Figure 2. To explore the effect of additional modeling capacity on performance, we sample twenty points uniformly at random from the target function and train single hidden layer Bayesian neural networks of increasing width. We compare HS-BNN against a BNN with Gaussian priors on the weights, training both for the same number of iterations. The performance of the BNN with Gaussian priors quickly deteriorates with increasing capacity as a result of under-fitting the limited amount of training data. In contrast, HS-BNN, by pruning away additional capacity, is more robust to model misspecification, showing only a marginal drop in predictive performance with an increasing number of units.
Next, we explore the benefits of the non-centered parameterization. We consider a simple two-dimensional classification problem generated by sampling data uniformly at random and using a 2-2-1 network, whose parameters are known a priori, to generate the class labels. We train three Bayesian neural networks with a single hidden layer on this data: one with Gaussian priors, one with horseshoe priors employing a centered parameterization, and one with the non-centered horseshoe prior. Each model is trained until convergence. We find that all three models are able to easily fit the data and provide high predictive accuracy. However, the structures learned by the three models are very different. In Figure 2 we visualize the distribution of weights incident onto a unit. Unsurprisingly, the BNN with Gaussian priors does not exhibit sparsity. In contrast, the models employing the horseshoe prior are able to prune units away by setting all incident weights to tiny values. It is interesting to note that even for this highly stylized example the centered parameterization struggles to recover the true structure of the underlying network. The non-centered parameterization, however, does significantly better and prunes away all but two units. Further experiments provided in the supplement demonstrate the same effect for wider 100-unit networks; the non-centered parameterized model is again able to recover the two active units.
6.2 Classification and Regression experiments
We benchmark classification performance on the MNIST dataset. Additional experiments on a gesture recognition task are available in the supplement. We compare HS-BNN against the variational matrix Gaussian (VMG) [18], a BNN with the two-component scale mixture prior on the weights (SM-BNN) proposed in [3], and a BNN with Gaussian priors (BNN) on the weights. VMG uses a structured variational approximation, while the other approaches all use fully factorized approximations and differ only in the type of prior used. These approaches constitute the state of the art in variational learning for Bayesian neural networks.
We preprocessed the images in the MNIST digits dataset by dividing the pixel values by 126. We explored networks of varying widths and depths, all employing rectified linear units. For HS-BNN we used Adam with a fixed learning rate and number of epochs. We did not use a validation set to monitor validation performance or to tune hyper-parameters. For the competing methods, we used the parameter settings recommended in the original papers. Figure 3 summarizes our findings. We showcase results for three architectures, each with two hidden layers of rectified linear units and increasing width. Across architectures, we find our performance to be significantly better than BNN, comparable to SM-BNN, and worse than VMG. The gap with respect to VMG likely stems from the structured matrix-variate variational approximation employed by VMG.
More interestingly, we clearly see the sparsity-inducing effects of the horseshoe prior. Recall that under the horseshoe prior, $w_{kl} \mid \tau_{kl}, \upsilon_l \sim \mathcal{N}(0, (\tau_{kl}^2 \upsilon_l^2)\,\mathbb{I})$. As the scales tend to zero, the corresponding units (and all incident weights) are pruned away. SM-BNN also encourages sparsity, but on weights, not nodes. Further, the horseshoe prior, with its thicker tails and taller spike at the origin, encourages stronger sparsity. To see this, we compared the 2-norms of the inferred expected node weight vectors found by SM-BNN and HS-BNN (Figure 3). For HS-BNN the inferred scales are tiny for most units, with a few notable outliers that escape un-shrunk. This causes the corresponding weight vectors to be close to zero for the majority of units, suggesting that the model is able to effectively "turn off" extra capacity. In contrast, the node weight vectors recovered by SM-BNN (and BNN) are less tightly concentrated at zero. We also plot the density of the node weight vector with the smallest norm in each of the three architectures. Note that with increasing architecture size (modeling capacity) the density peaks more strongly at zero, suggesting that the model is more confident in turning off the unit and not using the extra modeling capacity. To further explore the implications of node versus weight sparsity, we visualize the expected first-layer weights learned by SM-BNN and HS-BNN in Figure 3. Weight sparsity in SM-BNN encourages fundamentally different filters that pick up edges at different orientations. In contrast, HS-BNN's node sparsity encourages filters that correspond to digits, or superpositions of digits, and may lead to more interpretable networks. The stronger sparsity afforded by the horseshoe is again evident when visualizing the filters with the lowest norms: HS-BNN filters are nearly all black when scaled with respect to the SM-BNN filters.
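The node-level diagnostic used above (comparing 2-norms of expected node weight vectors) is simple to compute. The sketch below counts "active" units with an ad hoc relative threshold; the threshold value and the toy weight matrix are our assumptions, not the paper's protocol:

```python
import numpy as np

def active_units(W, rel_threshold=0.01):
    """Count units whose incident-weight 2-norm exceeds a fraction of the
    largest norm -- a simple diagnostic for unit-level sparsity."""
    norms = np.linalg.norm(W, axis=0)  # one norm per unit (column)
    return int(np.sum(norms > rel_threshold * norms.max())), norms

# A toy posterior-mean weight matrix: 2 "active" units, 3 pruned ones.
rng = np.random.default_rng(0)
W = np.hstack([rng.normal(0, 1.0, (10, 2)),    # large incident weights
               rng.normal(0, 1e-4, (10, 3))])  # shrunk incident weights
k, norms = active_units(W)
```

On posterior-mean weights from a horseshoe-pruned network, the norm histogram is strongly bimodal, so the count is insensitive to the exact threshold.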
Regression We also compare the performance of our model on regression datasets from the UCI repository. We follow the experimental protocol proposed in [11, 18] and train a single hidden layer network with rectified linear units for all but the larger "Protein" and "Year" datasets, for which we train a wider single hidden layer network. For the smaller datasets we train on a randomly subsampled subset, evaluate on the remainder, and repeat this process several times. For "Protein" we perform 5 replications, and for "Year" we evaluate on a single split. Here, we only benchmark against VMG, which has previously been shown to outperform the alternatives [18]. Table 1 summarizes our results. Despite our fully factorized variational approximation, we remain competitive with VMG in terms of both root mean squared error (RMSE) and predictive log-likelihood, and we even outperform it on some datasets. A more careful selection of the scale hyper-parameters and the use of structured variational approximations similar to VMG would likely improve results further, and both constitute interesting directions for future work.
|Dataset|N (d)|VMG (RMSE)|HS-BNN (RMSE)|VMG (Test ll)|HS-BNN (Test ll)|
|Power Plant|9568 (4)| | | | |
7 Discussion and Conclusion
In Section 6, we demonstrated that a properly parameterized horseshoe prior on the scales of the weights incident to each node is a computationally efficient tool for model selection in Bayesian neural networks. Decomposing the horseshoe prior into inverse Gamma distributions and using a non-centered representation ensured a degree of robustness to poor local optima. While we have seen that the horseshoe prior is an effective tool for model selection, one might wonder about more common alternatives; we lay out a few obvious choices and contrast their deficiencies. One starting point is to observe that a node can be pruned if all its incident weights are zero (in this case, it can only pass on the same bias term to the rest of the network). Such sparsity can be encouraged by a simple exponential prior on the weight scale, but without heavy tails all scales are forced artificially low and prediction suffers, as has been noted in the context of learning sparse neural networks [21, 33]. In contrast, simply using a heavy-tailed prior on the scale parameter, such as a half-Cauchy, will not apply any pressure to set small scales to zero, and we will not obtain sparsity. Both the shrinkage toward zero and the heavy tails of the horseshoe prior are necessary to achieve the model selection that we require. Importantly, using a continuous prior with the appropriate statistical properties is simple to incorporate into existing inference, unlike an explicit spike and slab model. Another alternative is to observe that a node can be pruned if the product $w_{kl}^{T}a$ is nearly constant for all inputs $a$: having small weights is sufficient to achieve this property; weights that are orthogonal to the variation in $a$ are another. Thus, instead of putting a prior over the scale of $w_{kl}$, one could put a prior over the scale of the variation in $w_{kl}^{T}a$. While we believe this formulation is more general, we found that it has many more local optima and is thus harder to optimize.
-  R. P. Adams, H. M. Wallach, and Z. Ghahramani. Learning the structure of deep sparse graphical models. In AISTATS, 2010.
-  M. Betancourt and M. Girolami. Hamiltonian monte carlo for hierarchical models. Current trends in Bayesian methodology with applications, 79:30, 2015.
-  C. Blundell, J. Cornebise, K. Kavukcuoglu, and D. Wierstra. Weight uncertainty in neural networks. In ICML, pages 1613–1622, 2015.
-  W. L. Buntine and A. S. Weigend. Bayesian back-propagation. Complex systems, 5(6):603–643, 1991.
-  C. M. Carvalho, N. G. Polson, and J. G. Scott. Handling sparsity via the horseshoe. In AISTATS, 2009.
-  Y. Gal and Z. Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In ICML, 2016.
-  Y. Gal and Z. Ghahramani. A theoretically grounded application of dropout in recurrent neural networks. In NIPS, 2016.
-  Y. Gal, R. Islam, and Z. Ghahramani. Deep Bayesian active learning with image data. In Bayesian Deep Learning workshop, NIPS, 2016.
-  B. Hassibi, D. G. Stork, and G. J. Wolff. Optimal brain surgeon and general network pruning. In Neural Networks, 1993., IEEE Intl. Conf. on, pages 293–299. IEEE, 1993.
-  J. Hernandez-Lobato, Y. Li, M. Rowland, T. Bui, D. Hernández-Lobato, and R. Turner. Black-box alpha divergence minimization. In ICML, pages 1511–1520, 2016.
-  J. M. Hernández-Lobato and R. P. Adams. Probabilistic backpropagation for scalable learning of bayesian neural networks. In ICML, 2015.
-  J. B. Ingraham and D. S. Marks. Bayesian sparsity for intractable distributions. arXiv:1602.03807, 2016.
-  A. Joshi, S. Ghosh, M. Betke, S. Sclaroff, and H. Pfister. Personalizing gesture recognition using hierarchical bayesian neural networks. In CVPR, 2017.
-  D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014.
-  D. P. Kingma, T. Salimans, and M. Welling. Variational dropout and the local reparameterization trick. In NIPS, 2015.
-  D. P. Kingma and M. Welling. Stochastic gradient VB and the variational auto-encoder. In ICLR, 2014.
-  Y. LeCun, J. S. Denker, and S. A. Solla. Optimal brain damage. In NIPS, pages 598–605, 1990.
-  C. Louizos and M. Welling. Structured and efficient variational deep learning with matrix Gaussian posteriors. In ICML, pages 1708–1716, 2016.
-  D. J. MacKay. A practical Bayesian framework for backpropagation networks. Neural computation, 4(3):448–472, 1992.
-  D. Maclaurin, D. Duvenaud, and R. P. Adams. Autograd: Effortless gradients in numpy. In ICML AutoML Workshop, 2015.
-  D. Molchanov, A. Ashukha, and D. Vetrov. Variational dropout sparsifies deep neural networks. arXiv:1701.05369, 2017.
-  K. Murray and D. Chiang. Auto-sizing neural networks: With applications to n-gram language models. arXiv:1508.05051, 2015.
-  R. M. Neal. Bayesian learning via stochastic dynamics. In NIPS, 1993.
-  T. Ochiai, S. Matsuda, H. Watanabe, and S. Katagiri. Automatic node selection for deep neural networks using group lasso regularization. arXiv:1611.05527, 2016.
-  J. Piironen and A. Vehtari. On the hyperprior choice for the global shrinkage parameter in the horseshoe prior. In AISTATS, 2017.
-  R. Ranganath, S. Gerrish, and D. M. Blei. Black box variational inference. In AISTATS, pages 814–822, 2014.
-  D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, pages 1278–1286, 2014.
-  S. Scardapane, D. Comminiello, A. Hussain, and A. Uncini. Group sparse regularization for deep neural networks. Neurocomputing, 241:81–89, 2017.
-  Y. Song, D. Demirdjian, and R. Davis. Tracking body and hands for gesture recognition: Natops aircraft handling signals database. In Automatic Face & Gesture Recognition and Workshops (FG 2011), 2011 IEEE International Conference on, pages 500–506. IEEE, 2011.
-  N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. JMLR, 15(1):1929–1958, 2014.
-  M. Titsias and M. Lázaro-gredilla. Doubly stochastic variational Bayes for non-conjugate inference. In ICML, pages 1971–1979, 2014.
-  M. P. Wand, J. T. Ormerod, S. A. Padoan, R. Fuhrwirth, et al. Mean field variational Bayes for elaborate distributions. Bayesian Analysis, 6(4):847–900, 2011.
-  W. Wen, C. Wu, Y. Wang, Y. Chen, and H. Li. Learning structured sparsity in deep neural networks. In NIPS, pages 2074–2082, 2016.
-  R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256, 1992.
Appendix A Fixed point updates
The ELBO corresponding to the non-centered HS model is,
$$\mathcal{L}(\phi) = \mathbb{E}_{q(\theta \mid \phi)}\big[\ln p(\mathcal{D} \mid \theta)\big] + \mathbb{E}_{q(\theta \mid \phi)}\big[\ln p(\theta)\big] + \mathbb{H}\big[q(\theta \mid \phi)\big].$$
With our choices of variational approximating families, all the entropies are available in closed form. We rely on Monte-Carlo estimates to evaluate the expectation involving the likelihood, $\mathbb{E}_{q}[\ln p(\mathcal{D} \mid \theta)]$.
The auxiliary variables $\lambda_{kl}$, $\vartheta_l$, and $\rho$ all follow inverse Gamma distributions. Here we derive the update for $\lambda_{kl}$; the others follow analogously. Consider,
$$q(\lambda_{kl}) \propto \exp\left\{ \mathbb{E}_{q(\tau_{kl})}\big[\ln p(\tau_{kl}^2 \mid \lambda_{kl}) + \ln p(\lambda_{kl})\big] \right\},$$
from which we see that,
$$q(\lambda_{kl}) = \text{Inv-Gamma}\left(1, \; \mathbb{E}_{q(\tau_{kl})}\left[\tfrac{1}{\tau_{kl}^2}\right] + \tfrac{1}{b_0^2}\right).$$
Since $q(\tau_{kl}^2)$ is log-Normal with parameters $\mu_{\tau_{kl}}$ and $\sigma^2_{\tau_{kl}}$, it follows that $\mathbb{E}[1/\tau_{kl}^2] = \exp(-\mu_{\tau_{kl}} + \sigma^2_{\tau_{kl}}/2)$. We can thus calculate the necessary fixed point updates for $q(\lambda_{kl})$ conditioned on $\mu_{\tau_{kl}}$ and $\sigma^2_{\tau_{kl}}$. Our algorithm applies these fixed point updates, given the current estimates of $\mu_{\tau_{kl}}$ and $\sigma^2_{\tau_{kl}}$, after each Adam step.
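The resulting fixed-point update can be written compactly. This is a sketch with our own variable names; it uses the log-Normal identity $\mathbb{E}[1/\tau^2] = \exp(-\mu + \sigma^2/2)$ for a log-Normal $\tau^2$ with parameters $(\mu, \sigma^2)$:

```python
import math

def update_q_lambda(mu_log_tau2, sig2_log_tau2, b0):
    """Fixed-point update for q(lambda) = Inv-Gamma(shape, rate), given a
    log-Normal q(tau^2) with parameters (mu, sigma^2) and hyper-scale b0."""
    e_inv_tau2 = math.exp(-mu_log_tau2 + 0.5 * sig2_log_tau2)  # E[1/tau^2]
    shape = 1.0                      # 1/2 (prior) + 1/2 (likelihood term)
    rate = e_inv_tau2 + 1.0 / b0**2  # E[1/tau^2] + 1/b0^2
    return shape, rate

# Degenerate check: if q(tau^2) concentrates on 1 (mu=0, sigma^2=0) and
# b0 = 1, then E[1/tau^2] = 1 and the rate is 1 + 1 = 2.
shape, rate = update_q_lambda(0.0, 0.0, 1.0)
```

The quantity that feeds back into the gradient step for $\tau_{kl}$ is then $\mathbb{E}_{q(\lambda)}[1/\lambda] = \text{shape}/\text{rate}$, which is available in closed form for the inverse Gamma.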
Appendix B Additional Experiments
b.1 Simulated Data
Here we provide an additional experiment with the data setup of Section 6.1. We use the same data, but train larger networks with 100 units each. Figure 4 shows the inferred weights under the different models. Observe that the non-centered HS-BNN is again able to prune away extra capacity and recover two active nodes.
b.2 Further Exploration of Model Selection Properties
Here we provide additional results that illustrate the model selection abilities of HS-BNN. First, we visualize the norms of the inferred node weight vectors found by BNN, SM-BNN, and HS-BNN for networks of increasing size. Note that as we increase capacity, the model selection abilities of HS-BNN become more obvious: in contrast to the other approaches, the norms exhibit clear inflection points, and it is evident that the model is using only a fraction of its available capacity.
As a reference we compare against SM-BNN. We visualize the density of the inferred node weight vectors under the two models for the three network sizes. For each network we show the density of the units with the smallest norms from either layer. Note that in all three cases HS-BNN produces weights that are more tightly concentrated around zero. Moreover, for HS-BNN the concentration around zero becomes sharper with increasing modeling capacity (larger architectures), again indicating that we are pruning away additional capacity.
b.3 Gesture Recognition
We also experimented with a gesture recognition dataset [29] that consists of 24 unique aircraft handling signals performed by 20 different subjects, each for 20 repetitions, giving 24 × 20 × 20 = 9,600 gesture examples in total. The task consists of recognizing these gestures from kinematic, tracking, and video data; we use only the kinematic and tracking data. A couple of example gestures are visualized in Figure 7.
A 12-dimensional vector of body features (angular joint velocities for the right and left elbows and wrists), as well as an 8-dimensional vector of hand features (probability values for hand shapes for the left and right hands), collected by Song et al., are provided as features for all frames of all videos in the dataset. We additionally used the 20-dimensional per-frame tracking features made available in [29]. We constructed features to represent each gesture by first extracting 15 frames by sampling uniformly in time and then concatenating the 40-dimensional per-frame features of the selected frames to produce 600-dimensional feature vectors.
This is a much smaller dataset than MNIST, and recent work [13] has demonstrated that a BNN with Gaussian priors performs well on this task. Figure 7 compares the performance of HS-BNN with competing methods. We train a two-layer HS-BNN with 400 units in each layer. The error rates reported are averaged over 5 random 75/25 splits of the dataset. As with MNIST, HS-BNN significantly outperforms BNN and is competitive with VMG and SM-BNN. We also observe strong sparsity, just as with MNIST.