Walsh-Hadamard Variational Inference for Bayesian Deep Learning

05/27/2019 ∙ by Simone Rossi, et al.

Over-parameterized models, such as DeepNets and ConvNets, form a class of models that are routinely adopted in a wide variety of applications, and for which Bayesian inference is desirable but extremely challenging. Variational inference offers the tools to tackle this challenge in a scalable way and with some degree of flexibility on the approximation, but for over-parameterized models this is challenging due to the over-regularization property of the variational objective. Inspired by the literature on kernel methods, and in particular on structured approximations of distributions of random matrices, this paper proposes Walsh-Hadamard Variational Inference (WHVI), which uses Walsh-Hadamard-based factorization strategies to reduce the parameterization and accelerate computations, thus avoiding over-regularization issues with the variational objective. Extensive theoretical and empirical analyses demonstrate that WHVI yields considerable speedups and model reductions compared to other techniques to carry out approximate inference for over-parameterized models, and ultimately show how advances in kernel methods can be translated into advances in approximate Bayesian inference.




1 Introduction and Motivation

Since its inception, Variational Inference (vi; Jordan et al., 1999) has continuously gained popularity as a scalable and flexible approximate inference scheme for a variety of models for which exact Bayesian inference is intractable. Bayesian neural networks (Mackay, 1994; Neal, 1996) are a good example of models for which inference is intractable, and for which vi – and approximate inference in general – is challenging due to the nontrivial form of the posterior distribution and the large dimensionality of the parameter space (Graves, 2011; Gal and Ghahramani, 2016). Recent advances in vi make it possible to deal effectively with these issues in various ways. A flexible class of posterior approximations can be constructed using, e.g., normalizing flows (Rezende and Mohamed, 2015), whereas large parameter spaces have pushed research in the direction of Bayesian compression (Louizos et al., 2017; Molchanov et al., 2017).

Employing vi is notoriously challenging for over-parameterized statistical models. In this paper, we focus in particular on Bayesian Deep Neural Networks (dnns) and Bayesian Convolutional Neural Networks (cnns) as typical examples of over-parameterized models. Let's consider a supervised learning task with input vectors and corresponding labels collected in X and Y, respectively; furthermore, let's consider dnns with weight matrices W, likelihood p(Y | X, W), and prior p(W). Following standard variational arguments, after introducing an approximation q(W) to the posterior, it is possible to obtain a lower bound to the log-marginal likelihood as follows:

log p(Y | X) ≥ E_{q(W)}[ log p(Y | X, W) ] − KL[ q(W) ‖ p(W) ].   (1)
The first term acts as a model-fitting term, whereas the second acts as a regularizer, penalizing solutions where the posterior is far from the prior. It is easy to verify that the kl term can be the dominant one in the objective for over-parameterized models. For example, a mean-field posterior approximation turns the kl term into a sum of as many univariate kl terms as there are model parameters, which can dominate the overall objective when the number of parameters far exceeds the number of data points. As a result, the optimization focuses on keeping the approximate posterior close to the prior, disregarding the other term, which measures the data fit. This issue has been observed in a variety of deep models (Bowman et al., 2016), and proposed remedies include gradually including the kl term throughout the optimization (Bowman et al., 2016; Sønderby et al., 2016) and improving the initialization of variational parameters (Rossi et al., 2019). Alternatively, other approximate inference methods for deep models with connections to vi have been proposed, notably Monte Carlo Dropout (mcd; Gal and Ghahramani, 2016) and Noisy Natural Gradients (nng; Zhang et al., 2018).
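To make the imbalance concrete, a minimal sketch (with hypothetical layer and dataset sizes, not taken from the paper) evaluates the closed-form KL between a univariate mean-field Gaussian posterior and a standard Normal prior, summed over the weights of a single dense layer:

```python
import math

def kl_gaussian(mu, sigma):
    """KL( N(mu, sigma^2) || N(0, 1) ) for a single weight."""
    return 0.5 * (sigma**2 + mu**2 - 1.0 - 2.0 * math.log(sigma))

# Hypothetical sizes: one 1024 x 1024 layer vs. 50k training points.
n_params = 1024 * 1024
n_data = 50_000

# A posterior only mildly different from the prior still accumulates a
# KL penalty that can swamp the per-datapoint log-likelihood term.
kl_total = n_params * kl_gaussian(0.2, 0.5)
print(kl_total / n_data)  # KL penalty per data point
```

With these (illustrative) values the per-datapoint KL penalty is several nats, larger than typical per-datapoint log-likelihoods, which is the over-regularization effect described above.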

In this paper, we propose a novel strategy to cope with model over-parameterization when using variational inference, inspired by the literature on kernel methods. Our proposal is to reparameterize the variational posterior over model parameters by means of a structured decomposition based on random matrix theory (Tropp, 2011), which has inspired a number of fundamental contributions in the literature on approximations for kernel methods, such as fastfood (Le et al., 2013) and Orthogonal Random Features (orf; Yu et al., 2016b). The key operation within our proposal is the Walsh-Hadamard transform, and this is why we name our proposal Walsh-Hadamard Variational Inference (whvi).

Without loss of generality, consider Bayesian dnns with weight matrices W of size D × D. Compared with mean-field vi, whvi has a number of attractive properties. The number of parameters is reduced from O(D²) to O(D), reducing the over-regularization effect of the kl term in the variational objective. We derive expressions for the reparameterization and the local reparameterization tricks, showing that the computational complexity is reduced from O(D²) to O(D log D). Finally, unlike mean-field vi, whvi induces a matrix-variate distribution to approximate the posterior over the weights, thus increasing flexibility at a cost that is log-linear in D instead of quadratic.

We can think of our proposal as a specific factorization of the weight matrix, so one might speculate that other tensor factorizations of the weight matrix (Novikov et al., 2015) could yield the same benefits. Our comparison against various matrix-factorization alternatives, however, shows that whvi is superior to other parameterizations with the same complexity. Furthermore, while matrix-variate posterior approximations have been proposed in the vi literature (Louizos and Welling, 2016), they come at the expense of increased complexity, whereas our proposal keeps the complexity log-linear in D.

Through a wide range of experiments on dnns and cnns, we demonstrate that our approach makes it possible to run variational inference on complex over-parameterized models, while being competitive with state-of-the-art alternatives. Ultimately, our proposal shows how advances in kernel methods can be instrumental in improving vi, much like previous works showed how kernel methods can improve Markov chain Monte Carlo sampling (Sejdinovic et al., 2014; Strathmann et al., 2015) and statistical testing (Gretton et al., 2008, 2012; Zaremba et al., 2013).

2 Walsh-Hadamard Variational Inference

2.1 Background on Structured Approximations of Kernel Matrices

whvi is inspired by a line of work that developed from random feature expansions for kernel machines (Rahimi and Recht, 2008), which we briefly review here. A positive-definite kernel function k(x, x′) induces a mapping φ(x), which can be infinite-dimensional depending on the choice of k. Among the large literature on scalable kernel machines, random feature expansion techniques aim at constructing a finite approximation to φ. For many kernel functions (Rahimi and Recht, 2008; Cho and Saul, 2009), this approximation is built by applying a nonlinear transformation to a random projection Ωx, where Ω has entries distributed as N(0, 1). If the matrix of training points is N × D and we aim to construct D random features, that is, Ω is D × D, this requires a number of operations in O(ND²), which can be prohibitive when D is large.

fastfood (Le et al., 2013) tackles the issue of large-dimensional problems by replacing the matrix Ω with a structured random matrix for which the space complexity is reduced from O(D²) to O(D) and the time complexity of performing products with input vectors is reduced from O(D²) to O(D log D). In fastfood, the matrix Ω is replaced by the following:

Ω ∝ S H G Π H B,
where Π is a permutation matrix and H is the Walsh-Hadamard matrix (defined recursively by H₂ = [[1, 1], [1, −1]] and H_{2D} = [[H_D, H_D], [H_D, −H_D]], possibly scaled by D^{−1/2} to make it orthonormal), whereas G and B are diagonal random matrices with standard Normal and Rademacher ({±1}) entries, respectively. S is also a diagonal matrix with i.i.d. entries, chosen such that the elements of Ω obtained by this series of operations are approximately independent and follow a standard Normal (see Tropp (2011) for more details). fastfood inspired a series of other works on kernel approximations (Yu et al., 2016a; Bojarski et al., 2017), whereby Gaussian random matrices are approximated by a series of products between diagonal Rademacher and Walsh-Hadamard matrices.
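The recursive construction of H and a fastfood-style structured matrix can be sketched as follows (a small NumPy illustration; the matrices B, Π, G, S are drawn as in the text, except that the entries of S are simplified to standard Normals rather than the length-correcting distribution used by fastfood):

```python
import numpy as np

def hadamard(d):
    """Walsh-Hadamard matrix of size d x d (d a power of 2), built with
    the recursion H_{2d} = [[H_d, H_d], [H_d, -H_d]]."""
    assert d > 0 and d & (d - 1) == 0, "d must be a power of 2"
    H = np.array([[1.0]])
    while H.shape[0] < d:
        H = np.block([[H, H], [H, -H]])
    return H

d = 8
H = hadamard(d)
# Scaled by d^{-1/2}, H is orthonormal: (H/sqrt(d)) (H/sqrt(d))^T = I.
assert np.allclose((H / np.sqrt(d)) @ (H / np.sqrt(d)).T, np.eye(d))

# fastfood-style structured matrix: diagonal Rademacher (B), permutation
# (Pi), Gaussian diagonal (G), and a scaling diagonal (S, simplified here).
rng = np.random.default_rng(0)
B = np.diag(rng.choice([-1.0, 1.0], size=d))
Pi = np.eye(d)[rng.permutation(d)]
G = np.diag(rng.standard_normal(d))
S = np.diag(rng.standard_normal(d))
V = S @ H @ G @ Pi @ H @ B / np.sqrt(d)
print(V.shape)  # a d x d matrix stored via O(d) diagonal parameters
```

Only the diagonals and the permutation need to be stored, which is where the O(D) space complexity comes from; the dense H here is for clarity only, since products with H can be computed in O(D log D) with the fast transform.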

Figure 1: Normalized covariance of g and vec(W).

Table 1: Complexity (space and time) of various approaches to vi: a standard neural network, mean-field Gaussian vi, a Gaussian matrix-variate posterior, tensor factorization, and whvi. Note: the entries depend on the dimensionality of the feature map, the number and rank of the tensor cores, and the number of pseudo-data used to sample from a matrix Gaussian distribution (see Louizos and Welling, 2016).

2.2 From fastfood to Walsh-Hadamard Variational Inference


fastfood and its variants yield cheap approximations to Gaussian random matrices with pseudo-independent entries, zero mean, and unit variance. The question we address in this paper is whether we can use these types of approximations as cheap approximating distributions for vi. By considering a prior for the elements of the diagonal matrix G = diag(g) and a variational posterior q(g) = N(μ, Σ), we can obtain a class of approximate posteriors with some desirable properties, as discussed next. Let W be the weight matrix of a dnn at a given layer, and consider

W = S₁ H diag(g) H S₂,   with g ∼ q(g) = N(μ, Σ).

The choice of a Gaussian q(g) and the linearity of the operations induce a parameterization of a matrix-variate Gaussian distribution for W, which is controlled by S₁ and S₂ if we assume that we can optimize these diagonal matrices. Note that we have dropped the permutation matrix Π; we will show later that this is not critical for performance, while it speeds up computations.
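A sample of W under this parameterization can be sketched as follows (NumPy, with a diagonal Σ and a dense H for clarity; the dimensions and values are illustrative, and in practice the two products with H would use the fast transform):

```python
import numpy as np

def hadamard(d):
    """Walsh-Hadamard matrix via the Sylvester recursion (d a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < d:
        H = np.block([[H, H], [H, -H]])
    return H

def sample_whvi_weight(mu, log_sigma, s1, s2, rng):
    """Draw W = S1 H diag(g) H S2 with g ~ N(mu, diag(sigma^2))."""
    d = mu.shape[0]
    H = hadamard(d)
    g = mu + np.exp(log_sigma) * rng.standard_normal(d)  # reparameterized g
    return np.diag(s1) @ H @ np.diag(g) @ H @ np.diag(s2)

d = 16
rng = np.random.default_rng(1)
mu, log_sigma = np.zeros(d), np.full(d, -1.0)            # variational parameters
s1, s2 = rng.standard_normal(d), rng.standard_normal(d)  # optimized diagonals
W = sample_whvi_weight(mu, log_sigma, s1, s2, rng)
# D x D weights parameterized by only 4D numbers (mu, sigma, s1, s2).
assert W.shape == (d, d)
```

The D × D weight matrix is thus controlled by O(D) numbers, which is the parameter reduction discussed in the introduction.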

For a generic matrix-variate Gaussian distribution W ∼ MN(M, U, V), we have

vec(W) ∼ N(vec(M), V ⊗ U),

where M is the mean matrix, U and V are two positive-definite covariance matrices among rows and columns, and ⊗ denotes the Kronecker product. In whvi, as diag(g) is linear in g, vec(W) can be rewritten in terms of μ and Σ as follows:

vec(W) ∼ N(Aμ, AΣAᵀ),   with A the matrix such that vec(S₁ H diag(g) H S₂) = Ag.

This rewriting shows that the choice of q(g) = N(μ, Σ) yields a Gaussian over vec(W), proving that whvi assumes a matrix-variate distribution q(W), as also shown in Figure 1.

We report the expression for the mean and leave the full derivation of U and V to the Appendix. For the mean, by linearity we have M = S₁ H diag(μ) H S₂, whereas U and V are quadratic forms in Σ whose entries involve column-wise vectorizations of products of S₁, H, and S₂, together with the trace operator.

The mean of the structured matrix-variate posterior assumed by whvi can span a D-dimensional linear subspace within the whole D²-dimensional parameter space, and its orientation is controlled by the matrices S₁ and S₂; more details on this geometric intuition can be found in the Appendix.

Matrix-variate Gaussian posteriors for variational inference were introduced by Louizos and Welling (2016); however, assuming full covariance matrices U and V is memory and computationally intensive (quadratic and cubic in D, respectively). whvi captures covariances across weights (see Figure 1) while keeping memory requirements linear in D and complexity log-linear in D.

2.3 Reparameterizations in whvi for Stochastic Optimization

The so-called reparameterization trick (Kingma and Welling, 2014) is a standard way to make the variational lower bound in Equation 1 a deterministic function of the variational parameters, so that gradient-based optimization can be carried out despite the stochasticity of the objective. As we assume a Gaussian posterior for g, writing g = μ + Σ^{1/2}ε with ε ∼ N(0, I) separates the stochastic component (ε) from the deterministic ones (μ and Σ).

Considering input vectors h to a given layer, an improvement over this approach is to work directly with the distribution of the product Wh. This is also known as the local reparameterization trick (Kingma et al., 2015), and it reduces the variance of the stochastic gradients in the optimization, thus improving convergence. Because W is linear in g, the product Wh follows a Gaussian distribution (Gupta and Nagar, 1999), and a sample from it can be computed efficiently thanks to the Walsh-Hadamard transform as

W̃h = W(μ)h + W(Σ^{1/2}ε)h,   with W(u) = S₁ H diag(u) H S₂ a linear matrix-valued function.
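The key fact enabling this is that W(u) = S₁ H diag(u) H S₂ is linear in u, so sampling g first and then multiplying gives exactly the same result as combining the deterministic and stochastic parts separately. A small numerical check of this identity (NumPy, dense H, illustrative values):

```python
import numpy as np

def hadamard(d):
    H = np.array([[1.0]])
    while H.shape[0] < d:
        H = np.block([[H, H], [H, -H]])
    return H

def W_of(u, s1, s2, H):
    """Linear matrix-valued function W(u) = S1 H diag(u) H S2."""
    return np.diag(s1) @ H @ np.diag(u) @ H @ np.diag(s2)

d = 8
rng = np.random.default_rng(0)
H = hadamard(d)
s1, s2 = rng.standard_normal(d), rng.standard_normal(d)
mu, sigma = rng.standard_normal(d), np.abs(rng.standard_normal(d))
h = rng.standard_normal(d)
eps = rng.standard_normal(d)

# Local reparameterization: by linearity of W(.) in its argument,
# W(mu + sigma*eps) h = W(mu) h + W(sigma*eps) h.
lhs = W_of(mu + sigma * eps, s1, s2, H) @ h
rhs = W_of(mu, s1, s2, H) @ h + W_of(sigma * eps, s1, s2, H) @ h
assert np.allclose(lhs, rhs)
```

In practice each product with H would be carried out by the fast Walsh-Hadamard transform, so the whole sample costs O(D log D) per input vector.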

2.4 Alternative Structures and Comparison with Tensor Factorization

The parameterization of W in whvi leaves space for several possible alternatives, which we compare in Table 2. For all of them, g is learned variationally, and the remaining diagonal matrices (if any) are either optimized or treated variationally (Gaussian mean-field). Figure 2 shows the behavior of these alternatives when applied to a network with relu activation functions. With the exception of the simplest and most constrained alternative, all parameterizations converge quite easily, and the comparison with mcd shows that the proposed whvi indeed performs better both in terms of error rate and mnll.

Figure 2: Test error rate and test mnll with different structures for the weights, benchmarked on the drive dataset.

Table 2: List of alternative structures and test performance (test error and mnll) on the drive dataset; colors are coded to match the ones used in the adjacent figure.

whvi effectively imposes a factorization of W, where parameters are either optimized or treated variationally. Tensor decompositions for dnns and cnns have been proposed by Novikov et al. (2015); there, W is decomposed into K small matrices (tensor cores), such that

W(i, j) = G₁[i₁, j₁] G₂[i₂, j₂] ⋯ G_K[i_K, j_K],

where each core slice G_k[i_k, j_k] is a matrix of dimensions r_{k−1} × r_k (with r₀ = r_K = 1). We adapt this idea to compare whvi with another factorization approach to obtain an approximate posterior over W. In order to match the space and time complexity of whvi (Table 1), we choose the number of cores and their ranks accordingly. Also, to match the number of variational parameters, all internal cores are learned with a fully factorized Gaussian posterior, while the remaining ones are optimized. Given the same asymptotic complexity, Figure 3 reports the results when fitting a 2-hidden-layer network with whvi and with the tensor decomposition. Not only does whvi reach better solutions in terms of test performance, but its optimization is also faster. We speculate that this is due to the redundant variational parameterization induced by the tensor cores, which makes the optimization landscape highly multi-modal, leading to slow convergence.

Figure 3: Comparison between the Hadamard factorization in whvi and the tensor factorization. The number in parentheses is the hidden dimension. The dataset used is drive.

2.5 Extensions

Concatenating or Reshaping Parameters for whvi

For the sake of presentation, so far we have assumed square weight matrices of size D × D with D a power of 2, but we can easily extend whvi to handle parameters of any shape. One possibility is to use whvi with a larger square matrix, a subset of whose elements represents W. Alternatively, a suitable value of D can be chosen so that W is a concatenation by row/column of square matrices of size D × D, padding if necessary.

When one of the dimensions is one, so that the parameter matrix is a vector, this latter approach is not ideal, as whvi would fall back on mean-field vi. whvi can be extended to handle these cases efficiently by reshaping the parameter vector of length D into an (approximately) square matrix of size √D × √D, again padding if necessary. Thanks to the reshaping, whvi uses O(√D) parameters to model a posterior over the D elements of the vector, and allows for computations in O(√D log D) rather than O(D). This is possible by reshaping the vector that multiplies the weights in a similar way. In the Appendix, we explore this idea to infer parameters of Gaussian processes linearized using large numbers of random features.

Normalizing Flows

To better model approximate distributions for variational inference, Rezende and Mohamed (2015) introduced Normalizing Flows (nf). In the general setting, consider a set of invertible, continuous, and differentiable functions f_k with parameters λ_k. Given z₀ ∼ q₀(z₀), the sample z₀ is transformed with a chain of K flows to z_K = f_K ∘ ⋯ ∘ f₁(z₀). The variational lower bound slightly differs from Equation 1 to take into account the log-determinant of the Jacobian of the transformations, yielding a new variational objective as follows:

E_{q₀(z₀)}[ log p(Y | X, z_K) + log p(z_K) − log q₀(z₀) + Σ_{k=1}^{K} log |det ∂f_k/∂z_{k−1}| ].

Setting the initial distribution q₀ to a fully factorized Gaussian and assuming a Gaussian prior on the generated z_K, the kl term is analytically tractable. The transformation f_k is generally chosen to allow for fast computation of the determinant of its Jacobian. The parameters of the initial density as well as the flow parameters λ_k are optimized. In our case, we consider q₀ as the distribution over the elements of g. This approach increases the flexibility of the variational posterior in whvi, which is no longer Gaussian, while still capturing covariances across weights. This comes at the expense of losing the possibility of employing the local reparameterization trick.
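For concreteness, one flow family used later in the experiments is the planar flow of Rezende and Mohamed (2015), f(z) = z + u·tanh(wᵀz + b), whose log-determinant is cheap to evaluate. A single-sample sketch (NumPy; the flow parameters here are random illustrative values, and the small scale keeps the maps invertible):

```python
import numpy as np

def planar_flow(z, u, w, b):
    """One planar flow step f(z) = z + u * tanh(w.z + b) and its
    log |det Jacobian| = log |1 + h'(w.z + b) u.w| (Rezende & Mohamed, 2015)."""
    a = w @ z + b
    f = z + u * np.tanh(a)
    psi = (1.0 - np.tanh(a) ** 2) * w      # h'(a) * w
    logdet = np.log(np.abs(1.0 + u @ psi))
    return f, logdet

rng = np.random.default_rng(0)
d = 8
z = rng.standard_normal(d)                 # sample from the initial Gaussian q0
logdet_sum = 0.0
for _ in range(10):                        # chain of 10 flows
    u = rng.standard_normal(d) * 0.1
    w = rng.standard_normal(d) * 0.1
    z, ld = planar_flow(z, u, w, 0.0)
    logdet_sum += ld
# log q_K(z_K) = log q_0(z_0) - logdet_sum
```

The accumulated log-determinants are exactly the correction term entering the flow-based variational objective above.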

3 Experiments

3.1 Toy example

Figure 4: Regression toy example trained using whvi with a Gaussian vector (1541 parameters) and with planar normalizing flows (10 flows, for a total of 4141 parameters), mfg (35k parameters) and Monte Carlo dropout (mcd; Gal and Ghahramani, 2016) (17154 parameters). The two shaded areas represent, respectively, the 99th and the 75th percentile of the predictions.

We generated a 1D toy regression problem with 128 inputs, removing inputs on a predefined interval; the targets are noisy realizations of a random function. We model these data using a dnn with 2 hidden layers of 128 features and cosine activations. We test four models: mean-field Gaussian vi (mfg), Monte Carlo dropout (mcd), and two variants of whvi: g-whvi, with a Gaussian posterior, and nf-whvi, with planar normalizing flows (Rezende and Mohamed, 2015). As Figure 4 shows, whvi offers a sensible modeling of the uncertainty on the input domain, whereas mfg and mcd seem to be slightly over-confident.

3.2 Bayesian Neural Networks

Table 3: Test rmse and test mnll on regression datasets for mcd, nng, and whvi.

We conduct a series of comparisons with state-of-the-art vi schemes for Bayesian dnns; see the Appendix for the list of datasets used in the experiments. We compare whvi with mcd and noisy-kfac (also referred to as nng; Zhang et al., 2018). mcd draws on a formal connection between dropout and vi with Bernoulli-like posteriors, while the more recent noisy-kfac yields a matrix-variate Gaussian distribution using noisy natural gradients. In whvi, the last layer assumes a fully factorized Gaussian posterior.

The data is randomly divided into 90%/10% training/test splits eight times. We standardize the input features while keeping the targets unnormalized. The network has two hidden layers with 128 features and relu activations. Similarly to the experimental setup of Louizos and Welling (2016) and Zhang et al. (2018), the network output is rescaled by the empirical mean and standard deviation of the training targets, thus forcing the network to learn standardized realizations of the targets.

We report the test rmse and the average predictive test negative log-likelihood (mnll). A selection of results is shown in Table 3; an extended version is available in the Appendix. On the majority of the datasets, whvi outperforms mcd and noisy-kfac. Empirically, these results demonstrate the value of whvi, which offers a competitive parameterization of a matrix-variate Gaussian posterior while requiring log-linear time in D.

whvi builds its computational efficiency on the Fast Walsh-Hadamard Transform (fwht), which allows one to cut the complexity of a D-dimensional Walsh-Hadamard matrix-vector multiplication from a naive O(D²) down to O(D log D). To empirically validate this claim, we extended pytorch (Paszke et al., 2017) with a custom c++/cuda kernel which implements a batched version of the fwht. The box-plots in Figure 5 report the inference time on the entire test split of eight heterogeneously sized datasets as a function of the number of hidden features in a two-layer dnn.
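A reference (non-cuda) implementation of the transform makes the O(D log D) butterfly structure explicit; the following NumPy sketch works on batched inputs along the last axis and checks the result against the dense matrix product:

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform over the last axis: O(d log d)
    butterflies instead of the O(d^2) dense matrix product."""
    x = np.asarray(x, dtype=float).copy()
    d = x.shape[-1]
    assert d & (d - 1) == 0, "dimension must be a power of 2"
    h = 1
    while h < d:
        for i in range(0, d, 2 * h):
            a = x[..., i:i + h].copy()
            b = x[..., i + h:i + 2 * h].copy()
            x[..., i:i + h] = a + b
            x[..., i + h:i + 2 * h] = a - b
        h *= 2
    return x

# Check against the explicit (Sylvester-ordered) Hadamard matrix product.
d = 16
H = np.array([[1.0]])
while H.shape[0] < d:
    H = np.block([[H, H], [H, -H]])
rng = np.random.default_rng(0)
v = rng.standard_normal(d)
assert np.allclose(fwht(v), H @ v)
```

The custom cuda kernel mentioned above performs the same butterflies in parallel over the batch dimension; this pure-Python version is only a correctness reference.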

Figure 5: Inference time on the test split with a batch size of 128, 64 Monte Carlo samples, averaged over 1000 repetitions. The workstation used is equipped with two 16c/32t Intel Xeon cpus, four NVIDIA Tesla P100 (16 GB) and 512 GB of RAM. Each experiment is performed on a gpu fully dedicated to it.

3.3 Bayesian Convolutional Neural Networks

Table 4: Test performance (test error and test mnll) of different Bayesian cnns on cifar10: vgg16 with noisy k-fac, ffg, mcd, and whvi (with and without batch normalization), alexnet with whvi and nf-whvi, and resnet18 with whvi and nf-whvi (with and without batch normalization).

Figure 6: Reliability diagram and expected calibration error (ece) of vgg16, AlexNet and resnet with whvi (DeGroot and Fienberg, 1983; Niculescu-Mizil and Caruana, 2005; Naeini et al., 2015).

We continue the experimental evaluation of whvi by analyzing its performance on cnns. For this experiment, we replace all fully-connected layers in the cnn with the whvi parameterization, while the convolutional filters are treated variationally using mcd. In this setup, we fit AlexNet (Krizhevsky et al., 2012), vgg16 (Simonyan and Zisserman, 2014) and resnet-18 (He et al., 2016) on cifar10. Using whvi, we can reduce the number of parameters in the linear layers without affecting either the test performance or the calibration properties of the resulting model, as shown in Figure 6 and Table 4. As a reference, AlexNet with its original 23.3m parameters is reduced to just 2.3m (9.9%) when using g-whvi and to 2.4m (10.2%) with whvi and 3 planar flows. Even though we lose the benefits of the local reparameterization, the higher flexibility of normalizing flows allows us to reach better test performance with respect to the Gaussian posterior. This might be improved even further using more complex families of normalizing flows (Rezende and Mohamed, 2015; Van den Berg et al., 2018; Kingma et al., 2016; Louizos and Welling, 2017).

We also tested whvi for the convolutional filters, but the resulting models had too few parameters to obtain any interesting results. For AlexNet, we obtained a model with just 189k parameters, which corresponds to a sparsity of 99.2% with respect to the original model. As a reference, Wen et al. (2016) were able to reach sparsity only up to 60% in the convolutional layers without impacting performance. We are currently investigating ways to adapt whvi into an effective method for reducing convolutional parameters.

Additional results and insights

We refer the reader to the Appendix for an extended version of the results, including new applications of whvi to Gaussian processes.

Related Work

In the early sections of the paper, we have already briefly reviewed some of the literature on vi and Bayesian dnns and cnns; here we complement the literature by including other relevant works that have connections with whvi for Bayesian deep learning.

Our work takes inspiration from the literature on random features for kernel approximation (Rahimi and Recht, 2008), and fastfood (Le et al., 2013) in particular. Random feature expansions have had a wide impact on the literature on kernel methods. Such approximations have been successfully used to scale a variety of models, such as Support Vector Machines (Rahimi and Recht, 2008), Gaussian processes (Lázaro-Gredilla et al., 2010) and Deep Gaussian processes (Cutajar et al., 2017; Gal and Ghahramani, 2016). This has contributed to bridging the gap between (Deep) Gaussian processes and Bayesian dnns and cnns (Neal, 1996; Duvenaud et al., 2014; Cutajar et al., 2017; Gal and Ghahramani, 2015), an active area of research that aims to gain a better understanding of deep learning models through the use of kernel methods (de G. Matthews et al., 2018; Dunlop et al., 2018; Garriga-Alonso et al., 2019).

While random feature expansions improve scalability of kernel methods with respect to the size of the data set, fastfood was designed to lower the complexity with respect to the dimensionality of the input (Le et al., 2013). fastfood was later extended to improve its efficiency in the works on Structured Orthogonal Random Features (sorf) (Yu et al., 2016a) and random spinners (Bojarski et al., 2017). fastfood and sorf have been applied to the problem of handling large dimensional convolutional features in the works on Deep Fried cnns (Yang et al., 2015) and Deep Convolutional Gaussian Processes (Tran et al., 2019), respectively.

This paper focuses in particular on dnns and cnns as working examples to present whvi. Recent advances in understanding dnns have investigated the effect of over-parameterization and how model compression can be used during or after training (Hubara et al., 2016; Louizos et al., 2017; Zhu and Gupta, 2018). Neyshabur et al. (2015) empirically show that bigger and wider neural networks converge better and are more resilient to overfitting, while Neyshabur et al. (2019) derive a capacity bound that relates model performance (error rate) to network size. As we discussed in the introduction, over-parameterization is reflected in over-regularization of the variational objective, leading the optimization to converge to solutions where the posterior falls back to the prior. Several works have dealt with this issue by either rescaling the kl divergence in the variational objective (Higgins et al., 2017; Burgess et al., 2018), or by gradually including it during optimization (Bowman et al., 2016; Sønderby et al., 2016; Rossi et al., 2019).

4 Discussion and Conclusions

Inspired by the literature on scalable kernel methods, this paper proposed Walsh-Hadamard Variational Inference (whvi). whvi offers a novel parameterization of the variational posterior, which is particularly attractive for over-parameterized models, such as modern dnns and cnns. whvi assumes a matrix-variate posterior distribution, which therefore captures covariances across weights. Crucially, unlike previous work on matrix-variate posteriors for vi, this is achieved with a low parameterization and fast computations, bypassing the over-regularization issues of vi for over-parameterized models. The large experimental campaign demonstrates that whvi is a strong competitor of other variational approaches for such models, while offering considerable speedups.

One limitation of the parameterization induced by whvi is that its mean cannot span the whole D²-dimensional space. A remedy could be to augment the parameterization with an unconstrained mean component, so that the mean can span the whole space, while the structured part would still model covariances across weights cheaply. However, a numerical study of the rmse between the weights induced by whvi and arbitrary weight matrices, reported in the Appendix, shows a constant behavior with respect to D.

The key operation that accelerates computations in whvi is the Walsh-Hadamard transform. This has obvious connections with other matrix/vector operations, such as the Discrete Fourier Transform, so we are currently investigating whether it is possible to generalize whvi to other kinds of transforms to increase flexibility.

We are currently investigating other extensions where we capture the covariance between weights across layers, by either sharing matrices across layers, or by concatenating all weights into a single matrix which is then treated using whvi, with the necessary adaptations to handle the sequential nature of computations. Finally, we are looking into employing whvi for other models, such as deep generative models.


MF gratefully acknowledges support from the AXA Research Fund.


  • Bojarski et al. (2017) M. Bojarski, A. Choromanska, K. Choromanski, F. Fagan, C. Gouy-Pailler, A. Morvan, N. Sakr, T. Sarlos, and J. Atif. Structured Adaptive and Random Spinners for Fast Machine Learning Computations. In A. Singh and J. Zhu, editors, Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, volume 54 of Proceedings of Machine Learning Research, pages 1020–1029, Fort Lauderdale, FL, USA, 20–22 Apr 2017. PMLR.
  • Bowman et al. (2016) S. R. Bowman, L. Vilnis, O. Vinyals, A. Dai, R. Jozefowicz, and S. Bengio. Generating Sentences from a Continuous Space. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 10–21. Association for Computational Linguistics, 2016.
  • Burgess et al. (2018) C. P. Burgess, I. Higgins, A. Pal, L. Matthey, N. Watters, G. Desjardins, and A. Lerchner. Understanding disentangling in β-VAE. CoRR, abs/1804.03599, 2018.
  • Cho and Saul (2009) Y. Cho and L. K. Saul. Kernel Methods for Deep Learning. In Y. Bengio, D. Schuurmans, J. D. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 342–350. Curran Associates, Inc., 2009.
  • Cutajar et al. (2017) K. Cutajar, E. V. Bonilla, P. Michiardi, and M. Filippone. Random feature expansions for deep Gaussian processes. In D. Precup and Y. W. Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 884–893, International Convention Centre, Sydney, Australia, Aug. 2017. PMLR.
  • de G. Matthews et al. (2018) A. G. de G. Matthews, J. Hron, M. Rowland, R. E. Turner, and Z. Ghahramani. Gaussian Process Behaviour in Wide Deep Neural Networks. In International Conference on Learning Representations, 2018.
  • DeGroot and Fienberg (1983) M. H. DeGroot and S. E. Fienberg. The comparison and evaluation of forecasters. Journal of the Royal Statistical Society. Series D (The Statistician), 32(1/2):12–22, 1983. ISSN 00390526, 14679884.
  • Ding and Cook (2014) S. Ding and D. Cook. Dimension folding PCA and PFC for matrix-valued predictors. Statistica Sinica, 24(1):463–492, 2014.
  • Dunlop et al. (2018) M. M. Dunlop, M. A. Girolami, A. M. Stuart, and A. L. Teckentrup. How Deep Are Deep Gaussian Processes? Journal of Machine Learning Research, 19(1):2100–2145, Jan. 2018. ISSN 1532-4435.
  • Duvenaud et al. (2014) D. K. Duvenaud, O. Rippel, R. P. Adams, and Z. Ghahramani. Avoiding pathologies in very deep networks. In Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, AISTATS 2014, Reykjavik, Iceland, April 22-25, 2014, volume 33 of JMLR Workshop and Conference Proceedings, pages 202–210. JMLR.org, 2014.
  • Gal and Ghahramani (2015) Y. Gal and Z. Ghahramani. Bayesian Convolutional Neural Networks with Bernoulli Approximate Variational Inference. CoRR, abs/1506.02158, 2015.
  • Gal and Ghahramani (2016) Y. Gal and Z. Ghahramani. Dropout As a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ICML’16, pages 1050–1059. JMLR.org, 2016.
  • Garriga-Alonso et al. (2019) A. Garriga-Alonso, C. E. Rasmussen, and L. Aitchison. Deep Convolutional Networks as shallow Gaussian Processes. In International Conference on Learning Representations, 2019.
  • Graves (2011) A. Graves. Practical Variational Inference for Neural Networks. In J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. Pereira, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 2348–2356. Curran Associates, Inc., 2011.
  • Gretton et al. (2008) A. Gretton, K. Fukumizu, C. H. Teo, L. Song, B. Schölkopf, and A. J. Smola. A Kernel Statistical Test of Independence. In J. C. Platt, D. Koller, Y. Singer, and S. T. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 585–592. Curran Associates, Inc., 2008.
  • Gretton et al. (2012) A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, and A. Smola. A Kernel Two-sample Test. Journal of Machine Learning Research, 13:723–773, Mar. 2012. ISSN 1532-4435.
  • Gupta and Nagar (1999) A. K. Gupta and D. K. Nagar. Matrix variate distributions. Chapman and Hall/CRC, 1999.
  • He et al. (2016) K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image Recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 770–778, 2016.
  • Higgins et al. (2017) I. Higgins, L. Matthey, A. Pal, C. Burgess, X. Glorot, M. Botvinick, S. Mohamed, and A. Lerchner. beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework. In International Conference on Learning Representations, 2017.
  • Hubara et al. (2016) I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv, and Y. Bengio. Binarized neural networks. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 4107–4115. Curran Associates, Inc., 2016.
  • Jordan et al. (1999) M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An Introduction to Variational Methods for Graphical Models. Machine Learning, 37(2):183–233, Nov. 1999.
  • Kingma and Welling (2014) D. P. Kingma and M. Welling. Auto-Encoding Variational Bayes. In Proceedings of the Second International Conference on Learning Representations (ICLR 2014), Apr. 2014.
  • Kingma et al. (2015) D. P. Kingma, T. Salimans, and M. Welling. Variational Dropout and the Local Reparameterization Trick. In Advances in Neural Information Processing Systems 28, pages 2575–2583. Curran Associates, Inc., 2015.
  • Kingma et al. (2016) D. P. Kingma, T. Salimans, R. Jozefowicz, X. Chen, I. Sutskever, and M. Welling. Improved Variational Inference with Inverse Autoregressive Flow. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 4743–4751. Curran Associates, Inc., 2016.
  • Krizhevsky et al. (2012) A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet Classification with Deep Convolutional Neural Networks. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1097–1105. Curran Associates, Inc., 2012.
  • Lázaro-Gredilla et al. (2010) M. Lázaro-Gredilla, J. Quinonero-Candela, C. E. Rasmussen, and A. R. Figueiras-Vidal. Sparse Spectrum Gaussian Process Regression. Journal of Machine Learning Research, 11:1865–1881, 2010.
  • Le et al. (2013) Q. Le, T. Sarlos, and A. Smola. Fastfood - Approximating Kernel Expansions in Loglinear Time. In 30th International Conference on Machine Learning (ICML), 2013.
  • Louizos and Welling (2016) C. Louizos and M. Welling. Structured and Efficient Variational Deep Learning with Matrix Gaussian Posteriors. In M. F. Balcan and K. Q. Weinberger, editors, Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pages 1708–1716, New York, New York, USA, 20–22 Jun 2016. PMLR.
  • Louizos and Welling (2017) C. Louizos and M. Welling. Multiplicative Normalizing Flows for Variational Bayesian Neural Networks. In D. Precup and Y. W. Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 2218–2227, International Convention Centre, Sydney, Australia, 06–11 Aug 2017. PMLR.
  • Louizos et al. (2017) C. Louizos, K. Ullrich, and M. Welling. Bayesian Compression for Deep Learning. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 3288–3298. Curran Associates, Inc., 2017.
  • Mackay (1994) D. J. C. Mackay. Bayesian methods for backpropagation networks. In E. Domany, J. L. van Hemmen, and K. Schulten, editors, Models of Neural Networks III, chapter 6, pages 211–254. Springer, 1994.
  • Matthews et al. (2017) A. G. d. G. Matthews, M. van der Wilk, T. Nickson, K. Fujii, A. Boukouvalas, P. León-Villagrá, Z. Ghahramani, and J. Hensman. GPflow: A Gaussian process library using TensorFlow. Journal of Machine Learning Research, 18(40):1–6, Apr. 2017.
  • Molchanov et al. (2017) D. Molchanov, A. Ashukha, and D. Vetrov. Variational Dropout Sparsifies Deep Neural Networks. In D. Precup and Y. W. Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 2498–2507, International Convention Centre, Sydney, Australia, 06–11 Aug 2017. PMLR.
  • Naeini et al. (2015) M. P. Naeini, G. F. Cooper, and M. Hauskrecht. Obtaining well calibrated probabilities using Bayesian binning. In AAAI, pages 2901–2907. AAAI Press, 2015.
  • Neal (1996) R. M. Neal. Bayesian Learning for Neural Networks. Springer-Verlag, Berlin, Heidelberg, 1996. ISBN 0387947248.
  • Neyshabur et al. (2015) B. Neyshabur, R. Tomioka, and N. Srebro. In Search of the Real Inductive Bias: On the Role of Implicit Regularization in Deep Learning. In ICLR (Workshop), 2015.
  • Neyshabur et al. (2019) B. Neyshabur, Z. Li, S. Bhojanapalli, Y. LeCun, and N. Srebro. The role of over-parametrization in generalization of neural networks. In International Conference on Learning Representations, 2019.
  • Niculescu-Mizil and Caruana (2005) A. Niculescu-Mizil and R. Caruana. Predicting Good Probabilities with Supervised Learning. In Proceedings of the 22nd International Conference on Machine Learning, ICML ’05, pages 625–632, New York, NY, USA, 2005. ACM.
  • Novikov et al. (2015) A. Novikov, D. Podoprikhin, A. Osokin, and D. P. Vetrov. Tensorizing Neural Networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 442–450. Curran Associates, Inc., 2015.
  • Paszke et al. (2017) A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer. Automatic differentiation in PyTorch. In NIPS-W, 2017.
  • Rahimi and Recht (2008) A. Rahimi and B. Recht. Random Features for Large-Scale Kernel Machines. In J. C. Platt, D. Koller, Y. Singer, and S. T. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 1177–1184. Curran Associates, Inc., 2008.
  • Rezende and Mohamed (2015) D. Rezende and S. Mohamed. Variational inference with normalizing flows. In F. Bach and D. Blei, editors, Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 1530–1538, Lille, France, 07–09 Jul 2015. PMLR.
  • Rossi et al. (2019) S. Rossi, P. Michiardi, and M. Filippone. Good Initializations of Variational Bayes for Deep Models. In K. Chaudhuri and R. Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 5487–5497, Long Beach, California, USA, 09–15 Jun 2019. PMLR.
  • Sejdinovic et al. (2014) D. Sejdinovic, H. Strathmann, M. L. Garcia, C. Andrieu, and A. Gretton. Kernel Adaptive Metropolis-Hastings. In E. P. Xing and T. Jebara, editors, Proceedings of the 31st International Conference on Machine Learning, volume 32 of Proceedings of Machine Learning Research, pages 1665–1673, Bejing, China, 22–24 Jun 2014. PMLR.
  • Simonyan and Zisserman (2014) K. Simonyan and A. Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. CoRR, abs/1409.1556, 2014.
  • Sønderby et al. (2016) C. K. Sønderby, T. Raiko, L. Maaløe, S. K. Sønderby, and O. Winther. Ladder Variational Autoencoders. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 3738–3746. Curran Associates, Inc., 2016.
  • Strathmann et al. (2015) H. Strathmann, D. Sejdinovic, S. Livingstone, Z. Szabo, and A. Gretton. Gradient-free Hamiltonian Monte Carlo with Efficient Kernel Exponential Families. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 955–963. Curran Associates, Inc., 2015.
  • Surjanovic and Bingham S. Surjanovic and D. Bingham. Virtual library of simulation experiments: Test functions and datasets. Retrieved May 22, 2019, from http://www.sfu.ca/~ssurjano.
  • Tran et al. (2019) G.-L. Tran, E. V. Bonilla, J. Cunningham, P. Michiardi, and M. Filippone. Calibrating Deep Convolutional Gaussian Processes. In K. Chaudhuri and M. Sugiyama, editors, Proceedings of Machine Learning Research, volume 89 of Proceedings of Machine Learning Research, pages 1554–1563. PMLR, 16–18 Apr 2019.
  • Tropp (2011) J. A. Tropp. Improved Analysis of the subsampled Randomized Hadamard Transform. Advances in Adaptive Data Analysis, 3(1-2):115–126, 2011.
  • Van den Berg et al. (2018) R. Van den Berg, L. Hasenclever, J. M. Tomczak, and M. Welling. Sylvester Normalizing Flows for Variational Inference. In UAI ’18: Proceedings of the Thirty-Fourth Conference on Uncertainty in Artificial Intelligence, 2018.
  • Wen et al. (2016) W. Wen, C. Wu, Y. Wang, Y. Chen, and H. Li. Learning Structured Sparsity in Deep Neural Networks. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 2074–2082. Curran Associates, Inc., 2016.
  • Yang et al. (2015) Z. Yang, M. Moczulski, M. Denil, N. d. Freitas, A. Smola, L. Song, and Z. Wang. Deep fried convnets. In 2015 IEEE International Conference on Computer Vision (ICCV), pages 1476–1483, Dec 2015.
  • Yu et al. (2016) F. X. Yu, A. T. Suresh, K. M. Choromanski, D. N. Holtmann-Rice, and S. Kumar. Orthogonal Random Features. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 1975–1983. Curran Associates, Inc., 2016.
  • Zaremba et al. (2013) W. Zaremba, A. Gretton, and M. Blaschko. B-test: A Non-parametric, Low Variance Kernel Two-sample Test. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 755–763. Curran Associates, Inc., 2013.
  • Zhang et al. (2018) G. Zhang, S. Sun, D. Duvenaud, and R. Grosse. Noisy Natural Gradient as Variational Inference. In J. Dy and A. Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 5852–5861, Stockholmsmässan, Stockholm Sweden, 10–15 Jul 2018. PMLR.
  • Zhu and Gupta (2018) M. Zhu and S. Gupta. To Prune, or Not to Prune: Exploring the Efficacy of Pruning for Model Compression. In ICLR (Workshop). OpenReview.net, 2018.

Appendix A Matrix-variate posterior distribution induced by whvi

We derive the parameters of the matrix-variate distribution of the weight matrix given by whvi,


The mean is obtained directly from the linearity of the expectation. The two covariance matrices are non-identifiable, in the sense that rescaling one by any factor c > 0 and the other by c⁻¹ yields the same distribution. We therefore constrain the parameters to fix this scale. The covariance matrices verify the following second-order expectations (see, e.g., Section 1 in the Appendix of Ding and Cook (2014))

Recall that the Walsh-Hadamard matrix is symmetric. Denoting by a root of and considering , we have


If we define the matrix whose rows are the column-wise vectorizations of the corresponding matrices, we have

Using (12), we can then write a formula for a root of :


Similarly for , we have


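As a concrete illustration of the matrix-variate Gaussian manipulated above (a generic numerical sketch under standard definitions, not the paper's implementation), a sample from MN(M, U, V) can be drawn through root factors A and B of the two covariances, and the same construction makes the scale non-identifiability explicit:

```python
import numpy as np

def sample_matrix_normal(M, A, B, rng):
    """Sample W ~ MN(M, U, V) with U = A @ A.T and V = B @ B.T.

    M : (n, p) mean matrix
    A : (n, n) root of the row covariance
    B : (p, p) root of the column covariance
    """
    Z = rng.standard_normal(M.shape)  # i.i.d. N(0, 1) entries
    return M + A @ Z @ B.T

n, p = 4, 3
M = np.zeros((n, p))
A = np.linalg.cholesky(np.eye(n) + 0.1 * np.ones((n, n)))
B = np.linalg.cholesky(np.eye(p))

rng = np.random.default_rng(0)
W = sample_matrix_normal(M, A, B, rng)

# Non-identifiability: scaling the row root by c and the column root by 1/c
# leaves the distribution (and here, with the same seed, the sample) unchanged.
c = 2.5
rng2 = np.random.default_rng(0)  # same seed -> same Z
W_scaled = sample_matrix_normal(M, c * A, B / c, rng2)
assert np.allclose(W, W_scaled)
```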
Appendix B A geometric interpretation of whvi

The matrix introduced in Section 2.2 expresses the linear relationship between the weights and the variational random vector. Recall its definition:


We show that such a decomposition can be explicitly formulated.


Let be a matrix such that , where is given by . Then a -decomposition of can be formulated as


where is the column of , , , , and .


Equation (16) derives directly from block matrix and vector operations. As the triangular factor is clearly lower triangular (in fact, diagonal), let us prove that the other factor has orthogonal columns. Defining the matrix as follows, we have:

Figure 7: Diagrammatic representation of whvi. The cube represents the high-dimensional parameter space. The variational posterior (mean in orange) evolves during optimization in the (blue) subspace whose orientation (red) is controlled by the scaling parameters.

This decomposition gives direct insight into the role of the Walsh-Hadamard transforms: with O(d log d) complexity, they perform fast rotations of vectors living in a low-dimensional space (the plane in Figure 7) into the high-dimensional space (the cube in Figure 7). Treated as parameters, the scaling vectors control the orientation of the subspace by distortion of the canonical axes.
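The O(d log d) rotations mentioned above rely on the fast Walsh-Hadamard transform, which applies H to a vector without materializing the d × d matrix; a minimal sketch, assuming d is a power of two and using the unnormalized transform:

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform of a vector of length 2^k.

    Computes H @ x in O(d log d) time via butterfly updates, without
    ever building the d x d Hadamard matrix H.
    """
    x = np.asarray(x, dtype=float).copy()
    d = x.shape[0]
    h = 1
    while h < d:
        for i in range(0, d, 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b  # butterfly: (a+b, a-b)
        h *= 2
    return x

# Agrees with the explicit Hadamard matrix on a small example.
H2 = np.array([[1, 1], [1, -1]])
H4 = np.kron(H2, H2)
v = np.array([1.0, 2.0, 3.0, 4.0])
assert np.allclose(fwht(v), H4 @ v)
```

In practice the inner loops would be vectorized or run on GPU, but the access pattern is exactly the one sketched here.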

We empirically evaluate the minimum rmse, as a proxy for a measure of average distance, between the whvi subspace and any given target point. More precisely, we compute


Figure 8 shows this quantity evaluated for target matrices sampled with i.i.d. entries, for increasing dimension. The bounded behavior suggests that whvi can approximate any given matrix with an error that does not grow with the dimension.

Figure 8: Distribution of the minimum rmse between the whvi subspace and a sample matrix with i.i.d. entries. For each dimension, the orange dots represent 20 repetitions. The median distance is displayed in black. A few outliers (with distance greater than 3.0) appeared, possibly due to imperfect numerical optimization; they were kept for the calculation of the median but are not displayed.
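For fixed scaling parameters, the minimum-distance computation above reduces to a least-squares projection onto a linear subspace; the following sketch (with illustrative names, and omitting the additional optimization over the scaling vectors that the full experiment performs) shows the idea:

```python
import numpy as np

def min_rmse_to_subspace(A, w):
    """Minimum RMSE between span(A) and a target vector w.

    Solves min_g ||A @ g - w||_2 by least squares and converts the
    residual norm into a root-mean-square error.
    """
    g, *_ = np.linalg.lstsq(A, w, rcond=None)
    residual = A @ g - w
    return np.sqrt(np.mean(residual ** 2))

rng = np.random.default_rng(1)
d = 16
A = rng.standard_normal((d, 4))    # a 4-dimensional subspace of R^d
w_in = A @ rng.standard_normal(4)  # a point lying inside the subspace
w_out = rng.standard_normal(d)     # a generic target point

assert min_rmse_to_subspace(A, w_in) < 1e-8   # zero distance up to round-off
assert min_rmse_to_subspace(A, w_out) >= 0.0  # generic targets have residual
```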

Appendix C Additional details on Normalizing Flows

In the general setting, given a probabilistic model with observations y, latent variables z and model parameters θ, the variational lower bound to the log-marginal likelihood is defined as

log p(y | θ) ≥ E_{q(z)}[ log p(y | z, θ) ] − KL[ q(z) ‖ p(z) ],

where p(y | z, θ) is the likelihood function with model parameters θ and p(z) is the prior on the latents. The objective is then to minimize the negative variational bound (nelbo):

nelbo = −E_{q(z)}[ log p(y | z, θ) ] + KL[ q(z) ‖ p(z) ].
Consider an invertible, continuous and differentiable function f. Given z_0 ∼ q_0(z_0), then z_1 = f(z_0) follows the density q_1 defined as

q_1(z_1) = q_0(z_0) |det ∂f/∂z_0|⁻¹.
As a consequence, after K transformations the log-density of the final distribution is

log q_K(z_K) = log q_0(z_0) − Σ_{k=1}^{K} log |det ∂f_k/∂z_{k−1}|.
We shall define f_k as the k-th transformation, which takes as input the output z_{k−1} of the previous flow and has parameters λ_k. The final variational objective is

nelbo = E_{q_0(z_0)}[ log q_0(z_0) − Σ_{k=1}^{K} log |det ∂f_k/∂z_{k−1}| − log p(y, z_K | θ) ].
Setting the initial distribution q_0 to a fully factorized Gaussian and assuming a Gaussian prior on the generated z_K, the kl term is analytically tractable. A possible family of transformations is the planar flow (Rezende and Mohamed, 2015). For the planar flow, f is defined as

f(z) = z + u h(wᵀz + b),
where u and w are vectors, b is a scalar, and h is a smooth elementwise nonlinearity. This is equivalent to a residual layer with a single-neuron mlp, as argued by Kingma et al. (2016). The log-determinant of the Jacobian of f is

log |det ∂f/∂z| = log |1 + uᵀ h′(wᵀz + b) w|.
Alternatives can be found in (Rezende and Mohamed, 2015; Van den Berg et al., 2018; Kingma et al., 2016; Louizos and Welling, 2017).
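The planar-flow update and its log-Jacobian correction described above can be sketched as follows (using tanh for the nonlinearity h, as in Rezende and Mohamed (2015), and omitting the constraint on u and w that guarantees invertibility; parameter values here are arbitrary illustrations):

```python
import numpy as np

def planar_flow(z, u, w, b):
    """One planar-flow step f(z) = z + u * h(w^T z + b), with h = tanh.

    Returns the transformed sample and the change-of-variables term
    log|1 + u^T h'(w^T z + b) w| used in the flow objective.
    """
    a = w @ z + b
    f_z = z + u * np.tanh(a)
    h_prime = 1.0 - np.tanh(a) ** 2  # derivative of tanh
    log_det = np.log(np.abs(1.0 + h_prime * (u @ w)))
    return f_z, log_det

rng = np.random.default_rng(0)
d = 3
z = rng.standard_normal(d)
log_q = -0.5 * (z @ z + d * np.log(2 * np.pi))  # log N(z; 0, I)

# Chain K steps, accumulating the correction:
# log q_K(z_K) = log q_0(z_0) - sum_k log|det df_k/dz|.
for _ in range(4):
    u, w, b = rng.standard_normal(d), rng.standard_normal(d), rng.standard_normal()
    z, log_det = planar_flow(z, u, w, b)
    log_q -= log_det

assert np.isfinite(log_q)
```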

Appendix D Additional results

Regression datasets:

name        task   n.      d-in     d-out
boston      Regr.  506     13       1
concrete    Regr.  1030    8        1
energy      Regr.  768     8        2
kin8nm      Regr.  8192    8        1
naval       Regr.  11934   16       2
powerplant  Regr.  9568    4        1
protein     Regr.  45730   9        1
yacht       Regr.  308     6        1
borehol     Regr.  200000  8        1
hartman6    Regr.  30000   6        1
rastrigin5  Regr.  10000   5        1
robot       Regr.  150000  8        1
otlcircuit  Regr.  20000   6        1

Classification datasets:

name        task    n.      d-in     d-out
EEG         Class.  14980   14       2
Magic       Class.  19020   10       2
MiniBoo     Class.  130064  50       2
Letter      Class.  20000   16       26
Drive       Class.  58509   48       11
MoCap       Class.  78095   37       5
CIFAR10     Class.  60000   3×28×28  10

Table 5: List of datasets used in the experimental campaign.

d.1 Results - Gaussian Processes with Random Feature Expansion

We test whvi for scalable gp inference, focusing on gps with random feature expansions (Lázaro-Gredilla et al., 2010). In gp models, latent variables are given a gp prior; the assumption of zero mean can easily be relaxed. Given a random feature expansion of the kernel matrix, the latent variables can be rewritten as:


The random features are constructed by randomly projecting the input matrix using a Gaussian random matrix and applying a nonlinear transformation, which depends on the choice of the kernel function. The resulting model is now linear and, for regression problems with Gaussian observation noise, solving gps for regression becomes equivalent to solving standard linear regression problems. For a given set of random features, we treat the weights of the resulting linear layer variationally and evaluate the performance of whvi.

By reshaping the vector of parameters of the linear model into a matrix, whvi allows the linearized gp model to reduce the number of parameters to optimize (see Table 6). We compare whvi with two alternatives: one is vi of the Fourier-features gp expansion, which uses fewer random features to match the number of parameters used in whvi; the other is the sparse Gaussian process implementation of gpflow (Matthews et al., 2017), with a number of inducing points (rounded up) chosen to match the number of parameters used in whvi.
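For reference, the random-feature construction and the resulting linear model can be sketched as follows (a generic illustration of rbf random Fourier features followed by a Bayesian linear fit, not the paper's code; the lengthscale, noise variance, and data here are arbitrary assumptions):

```python
import numpy as np

def rbf_random_features(X, n_features, lengthscale, rng):
    """Random Fourier features for the RBF kernel (Rahimi and Recht, 2008):
    phi(x)^T phi(x') approximates k(x, x')."""
    d = X.shape[1]
    Omega = rng.standard_normal((d, n_features)) / lengthscale  # spectral draws
    b = rng.uniform(0.0, 2 * np.pi, n_features)                 # random phases
    return np.sqrt(2.0 / n_features) * np.cos(X @ Omega + b)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

Phi = rbf_random_features(X, n_features=256, lengthscale=1.0, rng=rng)

# Bayesian linear regression on the features: posterior mean of the weights
# under prior w ~ N(0, I) and Gaussian noise with variance sigma2.
sigma2 = 0.01
A = Phi.T @ Phi + sigma2 * np.eye(Phi.shape[1])
w_mean = np.linalg.solve(A, Phi.T @ y)
y_pred = Phi @ w_mean
assert y_pred.shape == y.shape
```

whvi replaces the free Gaussian posterior over these linear weights with its Walsh-Hadamard factorization, which is where the parameter reduction comes from.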

We report the results on five datasets (see Table 5). The datasets are generated from space-filling evaluations of well-known functions in the analysis of computer experiments (see, e.g., Surjanovic and Bingham). The split into training and testing points is uniformly random, 20% versus 80%. The input variables are rescaled between 0 and 1. The output values are standardized for training. All gps have the same prior (centered gp with rbf covariance) and are initialized with equal hyperparameter values for the lengthscales, the gp variance, and the Gaussian likelihood standard deviation (prior observation noise). Training is performed with 12000 steps of the Adam optimizer; the observation noise is fixed for the first 10000 steps. The same learning rate is used for all datasets except hartman6, which uses a different one. Sparse gps are run with a whitened representation of the inducing points.

The results are shown in Figure 9 with diagonal covariance for the three variational posteriors, and with full covariance in Figure 10. In both the mean-field and full-covariance settings, this variant of whvi, which reshapes the weight vector into a matrix, largely outperforms direct vi of the Fourier features. However, this improvement of random feature inference for gps is not enough to reach the performance of vi using inducing points. Inducing-point approximations are based on the Nyström approximation of kernel matrices, which is known to yield lower approximation error on the entries of the kernel matrix than random feature approximations; we attribute the lower performance of whvi relative to inducing-point approximations in this experiment to this difference.
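The Nyström construction underlying inducing-point approximations can be sketched as follows (an illustrative version with randomly chosen landmark points; the jitter value and the use of data points as landmarks are assumptions for this sketch):

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """Exact RBF kernel matrix k(A, B)."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / lengthscale ** 2)

def nystrom_features(X, Z, lengthscale=1.0, jitter=1e-8):
    """Nystrom feature map phi(X) = K_xz @ L^{-T}, where L is the Cholesky
    factor of K_zz, so that phi(X) @ phi(X).T = K_xz @ K_zz^{-1} @ K_zx."""
    K_zz = rbf_kernel(Z, Z, lengthscale) + jitter * np.eye(len(Z))
    L = np.linalg.cholesky(K_zz)
    K_xz = rbf_kernel(X, Z, lengthscale)
    # Solve against the Cholesky factor instead of forming an explicit root.
    return np.linalg.solve(L, K_xz.T).T

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
Z = X[rng.choice(100, size=20, replace=False)]  # random landmark points

Phi = nystrom_features(X, Z)
K_approx = Phi @ Phi.T
K_exact = rbf_kernel(X, X)
# The Nystrom approximation never overshoots the exact kernel diagonal
# (the residual K_xx - K_xz K_zz^{-1} K_zx is positive semi-definite).
assert np.all(np.diag(K_approx) <= np.diag(K_exact) + 1e-6)
```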

                        Space   Time
Mean-field Gaussian rf
whvi rf
Inducing points

Note: is the number of pseudo-data/inducing points and is the number of random features used in the kernel approximation.

Table 6: Complexity of gps with random feature and inducing-point approximations. In the case of random features, we include both the complexity of computing the random features and the complexity of treating the linear-combination weights variationally (using vi and whvi).


Figure 9: Comparison of test errors with respect to the number of model parameters (diagonal covariance).


Figure 10: Comparison of test errors with respect to the number of model parameters (full covariance).

d.2 Extended results - dnns

Figure 11: Analysis of model capacity for different features and hidden layers.

Being able to increase the width and depth of a model without drastically increasing the number of variational parameters is one of the competitive advantages of whvi. Figure 11 shows the behavior of whvi for different network configurations. At test time, increasing the number of hidden layers and the number of hidden features allows the model to avoid overfitting while delivering better performance. This evidence is also supported by the analysis of the test mnll during optimization of the elbo, as shown in Figure 14 and Figure 12.

Thanks to the structure that whvi imposes on the weight matrices, expanding and deepening the model is beneficial not only at convergence but throughout the entire learning procedure. While the analysis of the optimization problem is not a primary focus of this work, the behavior of the training elbo reported in Figure 13 shows that performance at convergence depends exclusively on the model size. Furthermore, the derived elbo is still a valid lower bound on the true log-marginal likelihood and, therefore, a suitable objective function for model selection. Unlike the setting addressed in (Rossi et al., 2019), during our experiments we did not experience problems with the initialization of the variational parameters. We claim that this is possible thanks to both the reduced number of parameters and the effect of the Walsh-Hadamard transform.