Subspace Inference for Bayesian Deep Learning

by Pavel Izmailov, et al.

Bayesian inference was once a gold standard for learning with neural networks, providing accurate full predictive distributions and well calibrated uncertainty. However, scaling Bayesian inference techniques to deep neural networks is challenging due to the high dimensionality of the parameter space. In this paper, we construct low-dimensional subspaces of parameter space, such as the first principal components of the stochastic gradient descent (SGD) trajectory, which contain diverse sets of high performing models. In these subspaces, we are able to apply elliptical slice sampling and variational inference, which struggle in the full parameter space. We show that Bayesian model averaging over the induced posterior in these subspaces produces accurate predictions and well calibrated predictive uncertainty for both regression and image classification.





1 Introduction

Bayesian methods were once the state-of-the-art approach for inference with neural networks (MacKay, 2003; Neal, 1996a). However, the parameter spaces for modern deep neural networks are extremely high dimensional, posing challenges to standard Bayesian inference procedures.

Figure 1: Predictive distribution and samples in the parameter space for subspace inference on a synthetic regression problem in a random subspace (a, b) and a subspace containing a near-constant-loss (log posterior) curve between two independently trained solutions (c, d) (see Garipov et al., 2018, for details). In panels (a, c), data points are shown with red circles, the shaded region represents the $3\sigma$-region of the predictive distribution at each point, the predictive mean is shown with a thick blue line, and sample trajectories are shown with thin blue lines. Panels (b, d) show contour plots of the posterior log-density within the corresponding subspace; magenta circles represent samples from the posterior in the subspace. In the rich subspace containing the near-constant-loss curve, the samples produce better uncertainty estimates and more diverse trajectories. We use a small fully-connected network; see Section 5.1 for more details.

In this paper, we propose a different approach to approximate Bayesian inference in deep learning models: we design a low-dimensional subspace of the weight space and perform posterior inference over the parameters within this subspace. We call this approach Subspace Inference (SI).¹

¹PyTorch code is available at

It is our contention that the subspace can be chosen to contain a diverse variety of representations, corresponding to different high quality predictions, over which Bayesian model averaging leads to accuracy gains and well-calibrated uncertainties.

In Figure 1, we visualize samples from the approximate posterior and the corresponding predictive distributions obtained by performing subspace inference in a ten-dimensional random subspace, and in a rich two-dimensional subspace containing a low-loss curve between two independently trained SGD solutions (see Garipov et al., 2018), on a synthetic one-dimensional regression problem. The predictive distribution corresponding to the random subspace does not capture the diverse set of possible functions required for greater uncertainty away from the data, whereas sampling from the posterior in the rich curve subspace provides meaningful uncertainty over functions.

Our paper is structured as follows. We begin with a discussion of related work in Section 2. In Section 3, we describe the proposed method for inference in low-dimensional subspaces of the parameter space. In Section 4, we discuss possible choices of the low-dimensional subspaces. In particular, we consider random subspaces, subspaces corresponding to the first principal components of the SGD trajectory (Maddox et al., 2019), and subspaces containing low-loss curves between independently trained solutions (Garipov et al., 2018).

We analyze the effects of using different subspaces and approximate inference methods by visualizing uncertainty on a regression problem in Section 5.1. We then apply the proposed method to a range of UCI regression datasets in Section 5.2, as well as CIFAR-10 and CIFAR-100 classification problems in Section 5.3, achieving consistently strong performance in terms of both test accuracy and likelihood. Although the dimensionality of the weight space for modern neural networks is extraordinarily large, we show that surprisingly low-dimensional subspaces contain a rich diversity of representations. For example, we can construct 5-dimensional subspaces where Bayesian model averaging leads to notable performance gains on a 36-million-dimensional WideResNet trained on CIFAR-100.

We summarize subspace inference in Algorithm 1. We note that this procedure uses three modular steps: (1) construct a subspace; (2) posterior inference in the subspace; and (3) form a Bayesian model average. Different design choices are possible for each step. For example, choices for the subspace include a random subspace, a PCA subspace, or a mode connected subspace. Many other choices are also possible. For posterior inference, one can use deterministic approximations over the parameters in the subspace, such as a variational method, or MCMC.

  Input: data, model
  1. Construct subspace (e.g. using Algorithm 2; Section 4)
  2. Perform posterior inference within the subspace (Section 3)
  3. Form a Bayesian model average (Sections 3.2, 3.3)
Algorithm 1: Bayesian Subspace Inference
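The three modular steps above can be sketched as a generic driver. This is an illustrative interface of ours, not the authors' code; the function and argument names are assumptions, and each component can be swapped for the choices discussed in Sections 3 and 4.

```python
import numpy as np

def subspace_inference(data, construct_subspace, infer_posterior, predict, x_test):
    """The three modular steps of Algorithm 1 (illustrative interface):
    (1) construct a subspace, (2) approximate the posterior within it,
    (3) form a Bayesian model average over the induced posterior."""
    w_hat, P = construct_subspace(data)              # step 1: shift + projection
    z_samples = infer_posterior(data, w_hat, P)      # step 2: inference over z
    preds = [predict(w_hat + P @ z, x_test) for z in z_samples]
    return np.mean(preds, axis=0)                    # step 3: model average

# Toy instantiation with plug-in components:
out = subspace_inference(
    data=None,
    construct_subspace=lambda d: (np.zeros(2), np.eye(2)),
    infer_posterior=lambda d, w, P: [np.array([1.0, 1.0]), np.array([3.0, 3.0])],
    predict=lambda w, x: float(w.sum()),
    x_test=None,
)
assert out == 4.0  # average of the predictions 2.0 and 6.0
```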

2 Related Work

Maddox et al. (2019) proposed SWAG, which forms an approximate Gaussian posterior over neural network weights, with a mean and a low-rank plus diagonal covariance matrix formed from a partial trajectory of the SGD iterates with a modified learning rate schedule. SWAG provides scalable Bayesian model averaging, with compelling accuracy and calibration results on CIFAR and ImageNet. The low-rank part of the SWAG covariance defines a distribution over a low-dimensional subspace spanned by the first principal components of the SGD iterates.

Silva and Kalaitzis (2015) consider the related problem of Bayesian inference using projected methods for constrained latent variable models, with applications to probabilistic PCA (Roweis, 1998; Bishop, 1999).

Pradier et al. (2018) propose to perform variational inference (VI) in a subspace formed by an auto-encoder trained on a set of models generated from fast geometric ensembling (Garipov et al., 2018); this approach requires training several models and fitting an auto-encoder, leading to limited scalability.

Similarly, Karaletsos et al. (2018) propose to use a meta-prior in a low-dimensional space to perform variational inference for BNNs. This approach can be viewed as a generalization of hyper-networks (Ha et al., 2017). Alternatively, both Titsias (2017) and Krueger et al. (2017) propose Bayesian versions of hyper-networks to store meta-models of parameters.

Patra and Dunson (2018) provide theoretical guarantees for Bayesian inference in the setting of constrained posteriors. Their method samples from the unconstrained posterior before using a mapping into the constrained parameter space. In their setting, the constraints are chosen a priori; on the other hand, we choose the constraints (e.g. the subspace) after performing unconstrained inference via SGD.

Bayesian coresets (Huggins et al., 2016) use a weighted subset of the full dataset, and Bayesian compressed regression (Guhaniyogi and Dunson, 2015) uses random projections of the data inputs in linear regression settings. Both are designed for efficient inference, but unlike our subspace inference, these methods operate solely in data space rather than in parameter space.

3 Inference Within a Subspace

In this section we discuss how to perform Bayesian inference within a given subspace of a neural network. In Section 4 we will propose approaches for effectively constructing such subspaces.

3.1 Model Definition

We consider a model, $f(x; w)$, with weight parameters $w \in \mathbb{R}^{D}$. The model has an associated likelihood for the dataset, $\mathcal{D}$, given by $p(\mathcal{D} \mid w)$.

We perform inference in a $K$-dimensional subspace defined by

$$\mathcal{S} = \{ w : w = \hat{w} + P z \}, \qquad (1)$$

where $\hat{w} \in \mathbb{R}^{D}$, $P \in \mathbb{R}^{D \times K}$, and $z \in \mathbb{R}^{K}$. With a fixed shift $\hat{w}$ and projection matrix $P$, which define the subspace, the free parameters of the model, over which we perform inference, are now simply $z$. We describe choices for $\hat{w}$ and $P$ in Section 4.

Figure 2: Illustration of the subspace $\mathcal{S}$ with shift vector $\hat{w}$ and basis vectors given by the columns of $P$, with a contour plot of the posterior log-density over the parameters $z$.

The new model has the likelihood function

$$p(\mathcal{D} \mid z) = p(\mathcal{D} \mid w = \hat{w} + P z), \qquad (2)$$

where the right-hand side represents the likelihood for the original model with parameters $w = \hat{w} + P z$ and data $\mathcal{D}$. We can then perform Bayesian inference over the low-dimensional subspace parameters $z$. We illustrate the subspace parameterization, as well as the posterior log-density over the parameters $z$, in Figure 2.

We emphasize that the new model (2) is not a reparameterization of the original model, as the mapping from the full parameter space to the subspace is not invertible. For this reason, we consider the subspace model parameterized by as a different model that shares many functional properties with the original model (see Section A.1 for an extended discussion). We discuss potential benefits of using the subspace model (2) in Section A.2.
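A minimal sketch of the subspace parameterization, assuming flattened weight vectors; the function and variable names here are ours, for illustration only:

```python
import numpy as np

def subspace_to_weights(z, w_hat, P):
    """Map subspace parameters z to full network weights w = w_hat + P z
    (a sketch with flattened weight vectors; names are illustrative)."""
    return w_hat + P @ z

# Toy example: D = 5 full weight dimensions, K = 2 subspace dimensions.
rng = np.random.default_rng(0)
w_hat = rng.normal(size=5)    # shift vector, e.g. an SWA solution
P = rng.normal(size=(5, 2))   # basis vectors as columns
# At z = 0 the subspace model recovers the shift solution itself; the
# map is affine and not invertible from weights back to z.
assert np.allclose(subspace_to_weights(np.zeros(2), w_hat, P), w_hat)
```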

3.2 Bayesian Model Averaging

We can sample from an induced posterior over the weight parameters $w$ in the original space by first sampling from the posterior over the parameters $z$ in the subspace, using an approximate inference method of choice, and then transforming those samples into the original space as $w = \hat{w} + P z$.

To perform Bayesian model averaging on new test data points $x_*$, we can compute a Monte Carlo estimate of the integral

$$p(y_* \mid \mathcal{D}, x_*) = \int p(y_* \mid x_*, z)\, p(z \mid \mathcal{D})\, dz \approx \frac{1}{J} \sum_{j=1}^{J} p(y_* \mid x_*, z_j), \quad z_j \sim p(z \mid \mathcal{D}). \qquad (3)$$

Using the Monte Carlo estimate of the integral in (3) produces mixtures of Gaussian predictive distributions for regression tasks with Gaussian likelihoods, and mixtures of categorical distributions for classification tasks.
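The Monte Carlo model average can be sketched as follows. This is an illustrative implementation of ours (names are assumptions), averaging per-sample predictions of models lifted back into the full weight space:

```python
import numpy as np

def bayesian_model_average(predict, z_samples, w_hat, P, x_test):
    """Monte Carlo estimate of the predictive distribution: average the
    predictions of models w = w_hat + P z over posterior samples of z."""
    preds = [predict(w_hat + P @ z, x_test) for z in z_samples]
    return np.mean(preds, axis=0)

rng = np.random.default_rng(1)
w_hat = rng.normal(size=3)
P = rng.normal(size=(3, 2))
z_samples = rng.normal(size=(100, 2))
x_test = rng.normal(size=(4, 3))
predict = lambda w, x: x @ w  # toy linear "model"
avg = bayesian_model_average(predict, z_samples, w_hat, P, x_test)
# For a linear model, the averaged prediction equals the prediction at the mean z.
assert np.allclose(avg, predict(w_hat + P @ z_samples.mean(0), x_test))
```

For a neural network the average is instead a genuine mixture (of Gaussians in regression, of categorical distributions in classification), since the predictions are nonlinear in the weights.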

3.3 Approximate Inference Procedures

Our goal is to approximate the posterior $p(z \mid \mathcal{D})$ over the free parameters $z$ in the subspace, in order to perform a Bayesian model average. As we can set the number of parameters, $K$, to be much smaller than the dimensionality $D$ of the full parameter space, performing Bayesian inference becomes considerably more tractable in the subspace. We can make use of a wide range of approximate inference procedures, even if we are working with a large modern neural network.

In particular, we can use powerful and exact full-batch MCMC methods to approximately sample from $p(z \mid \mathcal{D})$, such as Hamiltonian Monte Carlo (HMC) (Neal et al., 2011) or elliptical slice sampling (ESS) (Murray et al., 2010). ESS, which relies heavily on prior sampling, was initially introduced for sampling from posteriors with informative Gaussian process priors; however, ESS has special relevance for subspace inference, since these subspaces are specifically constructed to be centred on good regions of the loss, where a wide range of priors will provide reasonable samples. Alternatively, we can form a deterministic approximation $q(z) \approx p(z \mid \mathcal{D})$, for example using a Laplace or variational approach, and then sample from $q(z)$. The low dimensionality of the problem allows us to choose very flexible variational families, such as RealNVP (Dinh et al., 2017), to approximate the posterior.

Ultimately, the inference procedure is an experimental design choice, and we are free to use a wide range of approximate inference techniques.
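As a concrete example of the MCMC option, below is a minimal sketch of one elliptical slice sampling transition (Murray et al., 2010), assuming a zero-mean Gaussian prior over $z$; the function and variable names are ours:

```python
import numpy as np

def ess_step(z, log_lik, prior_sample, rng):
    """One elliptical slice sampling transition (Murray et al., 2010),
    assuming a zero-mean Gaussian prior; prior_sample() draws nu ~ N(0, Sigma)."""
    nu = prior_sample()
    log_y = log_lik(z) + np.log(rng.uniform())        # slice threshold
    theta = rng.uniform(0.0, 2.0 * np.pi)
    lo, hi = theta - 2.0 * np.pi, theta               # initial bracket
    while True:
        z_new = z * np.cos(theta) + nu * np.sin(theta)
        if log_lik(z_new) > log_y:
            return z_new
        if theta < 0.0:                               # shrink the bracket
            lo = theta
        else:
            hi = theta
        theta = rng.uniform(lo, hi)

# Toy 1-D check: prior N(0, 1), likelihood N(1; z, 1) -> posterior N(0.5, 0.5).
rng = np.random.default_rng(0)
log_lik = lambda z: -0.5 * (z[0] - 1.0) ** 2
prior_sample = lambda: rng.normal(size=1)
z, samples = np.zeros(1), []
for _ in range(3000):
    z = ess_step(z, log_lik, prior_sample, rng)
    samples.append(z[0])
assert abs(np.mean(samples[500:]) - 0.5) < 0.15
```

Note the transition has no step-size tuning: the bracket shrinks toward the current state, so it always terminates with an accepted point.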

3.4 Prior Choice

There is a significant practical difference between Bayesian model averaging (Section 3.2) and standard training (regularized maximum likelihood estimation) for a range of priors $p(z)$, including vague priors. The exact specification of the prior, if sufficiently diffuse, is not crucial for good performance or for the benefits of Bayesian model averaging in deep learning. What matters is not the prior over parameters in isolation, but how this prior interacts with the functional form of the model. The neural network induces a structured prior distribution over functions, even when combined with a vague prior over its parameters. For subspace inference specifically, the subspace is constructed to be centred on a good region of the loss, such that a wide range of priors will provide coverage of weights corresponding to high-performing networks. We discuss reasonable choices of priors for the various subspaces in Section 4.

3.5 Preventing Posterior Concentration With Fixed Temperature Posteriors

In the model proposed in Section 3.1, there are only $K$ parameters, as opposed to the $D$ parameters of the full weight space, while the number of observed data points is unchanged. In this setting, the posterior can overly concentrate around the maximum likelihood estimate (MLE), becoming too constrained by the data and leading to overconfident uncertainty estimates.

To address the issue of premature posterior concentration in the subspace, we propose to introduce a temperature hyperparameter $T$ that scales the likelihood. In particular, we use the tempered posterior

$$p_T(z \mid \mathcal{D}) \propto p(\mathcal{D} \mid z)^{1/T}\, p(z). \qquad (4)$$

When $T = 1$ the true posterior is recovered, and as $T \to \infty$, the tempered posterior approaches the prior $p(z)$.

The temperature $T$ is a hyperparameter that can be determined through cross-validation. We study the effect of temperature on the performance of subspace inference in Section F.1. When the temperature is small, the posterior concentrates around the MLE and subspace inference fails to improve upon maximum likelihood training. As $T$ becomes large, subspace inference produces increasingly less confident predictions. In Section F.1 we show that good performance can be achieved with a broad range of $T$.

Tempered posteriors are often used in Bayesian inference algorithms to enhance multi-modal exploration (e.g., Geyer and Thompson, 1995; Neal, 1996b). Similarly, Watanabe (2013) uses a tempered posterior to recover the expected generalization error of Bayesian models.
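The tempered log posterior is straightforward to implement. A minimal sketch, assuming the form likelihood$^{1/T}$ × prior described above, with a Gaussian example whose tempered variance is known in closed form (names are ours):

```python
import numpy as np

def tempered_log_posterior(z, log_lik, log_prior, T):
    """Unnormalized tempered log posterior: (1/T) * log p(D|z) + log p(z)."""
    return log_lik(z) / T + log_prior(z)

# Gaussian check: likelihood N(0; z, 1) and prior N(z; 0, 1) give a tempered
# posterior N(0, (1/T + 1)^{-1}); T -> infinity recovers the prior variance 1.
log_lik = lambda z: -0.5 * z ** 2
log_prior = lambda z: -0.5 * z ** 2
for T, var in [(1.0, 0.5), (1e6, 1.0)]:
    # The quadratic coefficient (twice the value at z = 1) equals -(1/T + 1),
    # i.e. minus the tempered posterior precision.
    curv = 2.0 * tempered_log_posterior(1.0, log_lik, log_prior, T)
    assert np.isclose(-1.0 / curv, var, rtol=1e-3)
```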

4 Subspace Construction

In the previous section we showed how to perform inference in a given subspace $\mathcal{S}$. We now discuss various ways to construct $\mathcal{S}$.

4.1 Random Subspaces

To construct a simple random subspace, we draw the columns of the projection matrix $P$ as random Gaussian vectors in the weight space. We then rescale each of these vectors to a fixed norm. Random subspaces require only drawing random normal numbers and so are quick to generate and form, but contain little information about the model. In related work, Li et al. (2018a) train networks from scratch in a random subspace without a shift vector, requiring projections into much higher dimensions than are considered in this paper.

We use the weights of a network pre-trained with stochastic weight averaging (SWA) (Izmailov et al., 2018) as the shift vector $\hat{w}$. In particular, we run SGD with a high constant learning rate from a pre-trained solution, and form $\hat{w}$ as the average of the SGD iterates.

Since the log-likelihood as a function of the neural network parameters appears approximately quadratic in random subspaces (Izmailov et al., 2018), and the subspace is centred on a good solution, a reasonable prior for $z$ is an isotropic Gaussian $\mathcal{N}(0, \sigma^2 I)$.
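Constructing the random subspace can be sketched in a few lines; this is our illustrative code (the fixed column norm is an assumption), centring the subspace on an SWA solution:

```python
import numpy as np

def random_subspace(w_swa, K, rng, norm=1.0):
    """Draw K Gaussian directions in weight space and rescale each column
    to a common norm; shift the subspace to the (e.g. SWA) solution w_swa.
    The choice norm=1.0 is illustrative, not the paper's setting."""
    D = w_swa.shape[0]
    P = rng.normal(size=(D, K))
    P *= norm / np.linalg.norm(P, axis=0, keepdims=True)
    return w_swa, P

rng = np.random.default_rng(0)
w_hat, P = random_subspace(np.zeros(1000), K=5, rng=rng)
assert P.shape == (1000, 5)
assert np.allclose(np.linalg.norm(P, axis=0), 1.0)
```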

4.2 PCA of the SGD Trajectory

Intuitively, we want the subspace over which we perform inference to (1) contain a diverse set of models that produce meaningfully different predictions and (2) be cheap to construct. Garipov et al. (2018) and Izmailov et al. (2018) argue that the subspace spanned by the SGD trajectory satisfies both (1) and (2). They run SGD starting from a pre-trained solution with a high constant learning rate and then ensemble predictions or average the weights of the iterates. Further, Maddox et al. (2019) showed that fitting the SGD iterates with a Gaussian distribution with a low-rank plus diagonal covariance for scalable Bayesian model averaging provides well-calibrated uncertainty estimates. Finally, Li et al. (2018b) and Maddox et al. (2019) used the first few PCA components of the SGD trajectory for loss-surface visualization. These observations motivate inference directly in the subspace spanned by the SGD trajectory.

We propose to use the first few PCA components of the SGD trajectory to define the basis of the subspace. As in Izmailov et al. (2018), we run SGD with a high constant learning rate from a pre-trained solution and capture snapshots of the weights $w_i$ at the end of each epoch. We store the deviations $a_i = w_i - \hat{w}$ for the last $M$ epochs; the number $M$ here is determined by the amount of memory we can use.² To side-step any memory issues, we could use an online PCA technique instead, such as frequent directions (Ghashami et al., 2016). We then run PCA based on randomized SVD (Halko et al., 2011)³ on the matrix $A$ comprised of the vectors $a_i$, and use the first $K$ principal components to define the subspace (Section 3.1). As for the random subspace, we use the SWA solution (Izmailov et al., 2018) for the shift vector $\hat{w}$. We summarize this procedure in Algorithm 2.

²We use in our experiments.
³Implemented in sklearn.decomposition.TruncatedSVD.

  Input: w_0: pretrained weights; η: learning rate; T: number of steps; c: moment update frequency; M: maximum number of columns in deviation matrix; K: rank of PCA approximation
  Output: P: projection matrix for subspace
  ŵ ← w_0  [Initialize mean]
  for i = 1, 2, ..., T do
      w_i ← w_{i−1} − η ∇L(w_{i−1})  [SGD update]
      if MOD(i, c) = 0 then
          n ← i / c  [Number of models]
          ŵ ← (n · ŵ + w_i) / (n + 1)  [Update mean]
          if NUM_COLS(A) = M then REMOVE_COL(A[:, 1])
          APPEND_COL(A, w_i − ŵ)  [Store deviation]
  U, Σ, V^T ← SVD_K(A)  [Truncated SVD of rank K]
  P ← U Σ
Algorithm 2: Subspace Construction with PCA
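A compact sketch of the PCA subspace construction, using exact SVD in place of the randomized/truncated SVD mentioned above (function and variable names are ours, and the toy "trajectory" is random rather than an actual SGD run):

```python
import numpy as np

def pca_subspace(snapshots, K):
    """Build a PCA subspace from SGD weight snapshots (rows of `snapshots`).

    Returns the mean (SWA-style) shift w_hat and a projection matrix P whose
    columns are the first K principal directions, scaled by their singular
    values (here with exact SVD standing in for randomized SVD)."""
    w_hat = snapshots.mean(axis=0)
    A = snapshots - w_hat                       # deviation matrix, M x D
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    P = (s[:K, None] * Vt[:K]).T                # D x K, scaled by singular values
    return w_hat, P

rng = np.random.default_rng(0)
# Toy trajectory: 20 snapshots of a 50-dimensional weight vector.
snapshots = rng.normal(size=(20, 50))
w_hat, P = pca_subspace(snapshots, K=5)
assert P.shape == (50, 5)
# Column norms equal the corresponding singular values of the deviation matrix.
s = np.linalg.svd(snapshots - snapshots.mean(0), compute_uv=False)
assert np.allclose(np.linalg.norm(P, axis=0), s[:5])
```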

Maddox et al. (2019) showed empirically that the log-likelihood in the subspace looks locally approximately quadratic, so a reasonable choice of prior is $\mathcal{N}(0, I)$ when the PCA vectors are scaled to have norms proportional to the singular values of the deviation matrix $A$, as in Algorithm 2. We note that this prior is centred around a set of good solutions because of the shift parameter $\hat{w}$ used in constructing the subspace.

Relationship to Eigenvalues of the Hessian

Li et al. (2018b) and Gur-Ari et al. (2019) argue that the first principal components of the SGD trajectory correspond to the top eigenvectors of the Hessian of the loss, and that these eigenvectors change slowly during training. This observation suggests that these principal components capture many of the sharp directions of the loss surface, corresponding to large Hessian eigenvalues. We expect, then, that our PCA subspace should include variation in the type of functions that it contains. See Appendix C for more details, as well as a computation of Hessian and Fisher eigenvalues through a GPU-accelerated Lanczos algorithm (Gardner et al., 2018).

4.3 Curve Subspaces

Garipov et al. (2018) proposed a method to find paths of near-constant low loss (and consequently high posterior density) in the weight space between converged SGD solutions starting from different random initializations. These curves lie in two-dimensional subspaces of the weight space. We visualize the loss surface in such a subspace for a synthetic regression problem in Figure 1 (d). This curve subspace provides an example of a rich subspace containing diverse high-performing models, and stress-tests the inference procedure's ability to explore a highly non-Gaussian distribution.

To parameterize the curve subspace, we set $\hat{w} = w_{1/2}$ and $P = (w_1 - w_{1/2},\; w_2 - w_{1/2})$, where $w_1$ and $w_2$ are the endpoints, and $w_{1/2}$ is the midpoint of the curve.

In this case, the posterior in the subspace is clearly non-Gaussian. However, a vague but centred Gaussian prior is reasonable as a simple choice with our parameterization of the curve subspace.
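The curve-subspace parameterization can be sketched as follows. This is illustrative code of ours; it only builds the plane through the endpoints and midpoint, not the curve-finding procedure of Garipov et al. (2018) itself:

```python
import numpy as np

def curve_subspace(w_left, w_mid, w_right):
    """Two-dimensional subspace spanning the plane of a mode-connecting
    curve, defined by its endpoints and midpoint."""
    w_hat = w_mid
    P = np.stack([w_left - w_mid, w_right - w_mid], axis=1)
    return w_hat, P

rng = np.random.default_rng(0)
w1, w_half, w2 = rng.normal(size=(3, 100))
w_hat, P = curve_subspace(w1, w_half, w2)
# Both endpoints and the midpoint lie in the subspace.
assert np.allclose(w_hat + P @ np.array([1.0, 0.0]), w1)
assert np.allclose(w_hat + P @ np.array([0.0, 1.0]), w2)
assert np.allclose(w_hat + P @ np.zeros(2), w_half)
```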

4.4 Computational Cost of Subspace Construction

We consider the cost of constructing each of the subspaces described in Section 4.1-4.3. We note that constructing any subspace in our approach is a one-time computation.

The random subspace (Section 4.1) is virtually free to construct, as it only requires sampling independent Gaussian vectors.

To construct the PCA subspace (Section 4.2), we run SVD on the deviation matrix $A$, which is a one-time computation and is very fast. In particular, exact SVD takes $\mathcal{O}(M^2 D)$ time, while randomized SVD takes $\mathcal{O}(M D \log K)$ (see Section 1.4.1 of Halko et al., 2011). For our largest examples, the number of parameters $D$ in the model is in the tens of millions while $M$ is at most a few dozen; thus, taking the exact SVD is linear in the number of parameters. For example, using standard hardware (a Dell XPS 15 laptop with an Intel Core i7 and 16 GB of RAM), it takes 4 minutes to perform exact SVD on our largest model, WideResNet on CIFAR-100, with 36 million parameters. By comparison, it takes approximately eight hours on an NVIDIA 1080Ti GPU to train the same WideResNet on CIFAR-100 to completion.

The curve subspace (Section 4.3) is the most expensive to construct, as it requires pre-training the two solutions corresponding to the endpoints of the curve, and then running the curve-finding procedure in Garipov et al. (2018), which in total is roughly the cost of training a single DNN of the same architecture.

Constructing the subspace is in general very fast and readily applicable to large deep networks, with minimal overhead compared to standard training.

5 Experiments

We evaluate subspace inference empirically using the random, PCA, and curve subspace construction methods discussed in Section 4, in conjunction with the approximate posterior inference methods discussed in Section 3.3. In particular, we experiment with the No-U-Turn Sampler (NUTS) (Hoffman and Gelman, 2014), elliptical slice sampling (ESS) (Murray et al., 2010), and variational inference (VI) with both a fully factorized Gaussian approximating family and a Real-valued Non-Volume Preserving flow (RealNVP) (Dinh et al., 2017) family. Section B contains more details on each of the approximate inference methods. We use the priors for $z$ discussed in Section 4.

We show that approximate Bayesian inference within a subspace provides good predictive uncertainties on regression problems, first visually in Section 5.1 and then quantitatively on UCI datasets in Section 5.2.

We then apply subspace inference to large-scale image classification on CIFAR-10 and CIFAR-100 and obtain results competitive with state-of-the-art scalable Bayesian deep learning methods.

5.1 Visualizing Regression Uncertainty

Figure 3: Predictive distributions for visualizing uncertainty in regression. Data (red circles), predictive mean (dark blue line), sample posterior functions (light blue lines), and the $3\sigma$-region about the mean (shaded region). Elliptical slice sampling (ESS) with either a PCA or curve subspace provides uncertainty that intuitively grows away from the data, unlike variational inference in the full parameter space or ESS with a random subspace. This intuitive behaviour matches a GP with an RBF kernel, except the GP is less confident for extrapolation.

We want predictive uncertainty to grow as we move away from the data. Far away from the data there are many possible functions that are consistent with our observations, and thus there should be greater uncertainty. However, this intuitive behaviour is difficult to achieve with Bayesian neural networks (Foong et al., 2019). Further, log-likelihoods on benchmark datasets (see Section 5.2) do not necessarily test this behaviour, as over-confident methods can obtain better likelihoods (Yao et al., 2019).

We use a small fully-connected architecture with a few hidden layers. The network takes two copies of the input (the redundancy helps with training) and outputs a single real value $y$. To generate the data, we randomly set the weights of a network with this same architecture, and evaluate its predictions for points sampled uniformly in three disjoint intervals. We add Gaussian noise to the outputs. We show the data with red circles in Figure 3.

We train an SWA solution (Izmailov et al., 2018) and construct three subspaces: a ten-dimensional random subspace, a ten-dimensional PCA subspace, and a two-dimensional curve subspace (see Section 4). We then run each of the inference methods listed in Section B in each of the subspaces. We visualize the predictive distributions in the observed space for each combination of method and subspace in Figure 8 of Appendix D, and the posterior density overlaid with samples of the subspace parameters in Figure 9.

In Figure 9, the shape of the posterior in the random and PCA subspaces is close to Gaussian, and all approximate inference methods produce reasonable samples. In the mode-connecting curve subspace the posterior has a more complex shape, and the variational methods were unable to produce a reasonable fit. The simple variational approach is highly constrained by its Gaussian representation of the posterior. RealNVP in principle has the flexibility to fit many posterior approximating distributions, but perhaps lacks the inductive biases to easily fit these types of curvy distributions in practice, especially when trained with the variational ELBO. On the other hand, certain MCMC methods, such as elliptical slice sampling, can effectively navigate these distributions.

In the top row of Figure 3 we visualize the predictive distributions for elliptical slice sampling in each of the subspaces. In order to represent this uncertainty, the posterior in the subspace must assign mass to settings of the weights that give rise to models making a diversity of predictions in these regions. In the random subspace, the predictive uncertainty does not significantly grow far away from the data, suggesting that it does not capture a diversity of models. On the other hand, the PCA subspace captures a diverse collection of models, with uncertainty growing away from the data. Finally, the predictive distribution for the curve subspace is the most adaptive, suggesting that it contains the greatest variety of models corresponding to weights with high posterior probabilities.

In the bottom row of Figure 3 we visualize the predictive distributions for simple variational inference applied in the original parameter space (as opposed to a subspace), SWAG (Maddox et al., 2019), and a Gaussian process with an RBF kernel. The SWAG predictive distribution is similar to the predictive distribution of ESS in the PCA subspace, as both methods attempt to approximate the posterior in the subspace containing the principal components of the SGD trajectory; however, ESS operates directly in the subspace, and is not constrained to a Gaussian representation. Variational inference interestingly appears to underfit the data, failing to adapt the posterior mean to structured variations in the observed points, and provides relatively homogeneous uncertainty estimates, except very far from the data. We hypothesize that underfitting happens because the KL divergence term between the approximate posterior and prior distributions overpowers the data-fit term in the variational lower bound objective, as we have many more parameters than data points in this problem (see Blundell et al. (2015) for details about VI). In the bottom right panel of Figure 3 we also consider a Gaussian process (GP) with an RBF kernel, presently the gold standard for regression uncertainty. The GP provides a reasonable predictive distribution in a neighbourhood of the data, but arguably becomes underconfident for extrapolation, with uncertainty quickly blowing up away from the data.

5.2 UCI Regression

We next compare our subspace inference methods on UCI regression tasks to a variety of methods for approximate Bayesian inference with neural networks.⁴ To measure performance we compute the Gaussian test likelihood (details in Appendix E.1.1). We follow convention and parameterize our neural network models so that for an input $x$ they produce two outputs, a predictive mean $\mu(x)$ and a predictive variance $\sigma^2(x)$. For all datasets we tune the temperature $T$ (Section 3.5) by maximizing the average likelihood over random validation splits (see Appendix F.1 for a discussion of the effect of temperature).

⁴We use the pre-processing from

Figure 4: Test log-likelihoods for subspace inference and baselines on UCI regression datasets. Subspace inference (SI) with PCA achieves as good or better test log-likelihoods compared to SGD and SWAG, and is competitive with DVI (which does not immediately scale to larger problems or networks). We report the mean over random splits of the data, plus or minus one standard deviation. For consistency with the literature, we report normalized log-likelihoods on large datasets (elevators, pol, protein, skillcraft, keggD, keggU; see Section 5.2.1) and unnormalized log-likelihoods on small datasets (naval, concrete, yacht, boston, energy; see Section 5.2.2).

5.2.1 Large UCI Regression Datasets

We experiment with large regression datasets from UCI: elevators, keggdirected, keggundirected, pol, protein and skillcraft. We follow the experimental framework of Wilson et al. (2016).

On all datasets except skillcraft we use a feedforward network with five hidden layers of sizes [1000, 1000, 500, 50, 2], ReLU activations, and two outputs parameterizing the predictive mean $\mu(x)$ and variance $\sigma^2(x)$. On skillcraft, we use a smaller architecture, [1000, 500, 50, 2], as in Wilson et al. (2016). We additionally learn a global noise variance $s^2$, so that the predictive variance at $x$ is $\sigma^2(x) + s^2$, where $\sigma^2(x)$ is the variance output from the final layer of the network. We use softplus parameterizations for both variances to ensure positivity, initializing the global variance at the total variance of the targets in the dataset.
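The heteroscedastic likelihood described above can be sketched as follows; this is our illustrative code (the raw-parameter names are assumptions), combining the per-input variance with a learned global noise variance, both made positive via softplus:

```python
import numpy as np

def softplus(a):
    """Numerically simple softplus, keeping variances strictly positive."""
    return np.log1p(np.exp(a))

def predictive_nll(mu, sigma2_raw, s2_raw, y):
    """Gaussian negative log-likelihood with predictive variance
    softplus(sigma2_raw) + softplus(s2_raw): per-input plus global noise."""
    var = softplus(sigma2_raw) + softplus(s2_raw)
    return 0.5 * (np.log(2.0 * np.pi * var) + (y - mu) ** 2 / var)

# Larger residuals incur a larger negative log-likelihood at fixed variance.
assert predictive_nll(0.0, 0.0, 0.0, 0.0) < predictive_nll(0.0, 0.0, 0.0, 2.0)
```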

We compare subspace inference to deep kernel learning (DKL) (Wilson et al., 2016) with a spectral mixture base kernel (Wilson and Adams, 2013), SGD-trained networks with a Bayesian final layer as in Riquelme et al. (2018), and two approximate Gaussian processes: orthogonally decoupled variational Gaussian processes (OrthVGP) (Salimbeni et al., 2018) and Fastfood approximate kernels (FF)⁵ (Yang et al., 2015). We show the test log-likelihoods and RMSEs in Tables 5 and 6 of Appendix E.1.3, and the 95% credible intervals in Table 7. We summarize the test log-likelihoods in Figure 4. Subspace inference outperforms SGD by a large margin on elevators, pol and protein, and is competitive on the other datasets. Compared to SWAG, subspace inference typically improves results by a small margin.

⁵Results for FF are from Wilson et al. (2016).

Finally, we plot the coverage of the 95% predictive intervals in Figure 10. Again, subspace inference performs at least as well as the SGD baseline, and has substantially better calibration on elevators and protein.

5.2.2 Small UCI Regression Datasets

We compare subspace inference to state-of-the-art approximate BNN inference methods, including deterministic variational inference (DVI) (Wu et al., 2019), deep GPs (DGP) with two GP layers trained via expectation propagation (Bui et al., 2016), and variational inference with the re-parameterization trick (Kingma and Welling, 2013). We follow the set-up of Bui et al. (2016) and use a fully-connected network with a single hidden layer of 50 units. We present the test log-likelihoods, RMSEs and test calibration results in Tables 2, 3 and 4 of Appendix E.1.2. We additionally visualize test log-likelihoods in Figure 4 and calibration in Appendix Figure 10.

Our first observation is that SGD provides a surprisingly strong baseline, sometimes outperforming DVI. The subspace methods outperform SGD and DVI on naval, concrete and yacht and are competitive on the other datasets.

5.3 Image Classification

Figure 5: Posterior log-density surfaces, ESS samples (shown with magenta circles), and VI approximation posterior distribution (-region shown with blue dashed line) in (a) random, (b) PCA and (c) curve subspaces for PreResNet-164 on CIFAR-100. In panel (b) the dashed black line shows the -region of the SWAG predictive distribution.

Next, we test the proposed method on state-of-the-art convolutional networks on the CIFAR datasets. As in Section 5.1, we construct five-dimensional random and five-dimensional PCA subspaces around a trained SWA solution. We also construct a two-dimensional curve subspace by connecting our SWA solution to another independently trained SWA solution. We select the temperature value used in all the image classification experiments with random and PCA subspaces, and the value used in the curve subspace (Section 3.5), by cross-validation using VGG-16 on CIFAR-100.
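The PCA subspace construction above can be sketched in a few lines. This is a hedged approximation: we assume deviations are taken around the SWA mean and that principal directions are scaled by their singular values, which may differ from the authors' exact normalization:

```python
import numpy as np

def pca_subspace(trajectory, k=5):
    """Build a k-dimensional affine subspace from SGD weight iterates.

    trajectory: (T, D) array of flattened weight snapshots collected
    along the SGD trajectory. Returns the SWA mean and a (k, D) matrix
    whose rows span the top-k principal directions of the deviations,
    scaled here by their singular values (our design choice).
    """
    w_swa = trajectory.mean(axis=0)            # SWA solution
    deviations = trajectory - w_swa            # deviation matrix
    _, s, Vt = np.linalg.svd(deviations, full_matrices=False)
    P = s[:k, None] * Vt[:k]
    return w_swa, P

def to_weights(z, w_swa, P):
    """Map subspace coordinates z to full weights: w = w_SWA + z P."""
    return w_swa + z @ P
```

Inference then runs entirely in the low-dimensional coordinates z, with `to_weights` called whenever the network must be evaluated.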

We visualize the samples from ESS in each of the subspaces in Figure 5. For VI we also visualize the -region of the closed-form approximate posterior in the random and PCA subspaces. As we can see, ESS is able to capture the shape of the posterior in each of the subspaces.

In the PCA subspace we also visualize the SWAG approximate posterior distribution. SWAG overestimates the variance along the first principal component (horizontal axis in Figure 5b) of the SGD trajectory (see also Figure 5 in Maddox et al. (2019)). VI in the subspace provides a better fit to the posterior distribution, as it directly approximates the posterior, while SWAG approximates the SGD trajectory.

We report the accuracy and negative log-likelihood for each of the subspaces in Table 1. Going from random, to PCA, to curve subspaces provides progressively better results, due to increasing diversity and quality of models within each subspace. In the remaining experiments we use the PCA subspace, as it generally provides good performance at a much lower computational cost than the curve subspace.

Random PCA Curve
Accuracy (%)
Table 1: Negative log-likelihood and accuracy for PreResNet-164 for the 5-dimensional random, 5-dimensional PCA, and 2-dimensional curve subspaces. We report mean and stdev over independent runs.

We next apply ESS and simple VI in the PCA subspace on VGG-16, PreResNet-164 and WideResNet28x10 on CIFAR-10 and CIFAR-100. We report the results in Appendix Tables 8 and 9. Subspace inference is competitive with SWAG and consistently outperforms most of the other baselines, including MC-dropout (Gal and Ghahramani, 2016), temperature scaling (Guo et al., 2017) and KFAC-Laplace (Ritter et al., 2018).

6 Conclusion

Bayesian methods were once a gold standard for inference with neural networks. Indeed, a Bayesian model average provides an entirely different mechanism for making predictions than standard training, with benefits in accuracy and uncertainty representation. However, efficient and practically useful Bayesian inference has remained a critical challenge for the exceptionally high dimensional parameter spaces in modern deep neural networks. In this paper, we have developed a subspace inference approach to side-step the dimensionality challenges in Bayesian deep learning: by constructing subspaces where the neural network has sufficient variability, we can easily apply standard approximate Bayesian inference methods, with strong practical results.

In particular, we have demonstrated that simple affine subspaces constructed from the principal components of the SGD trajectory contain enough variability for practically effective Bayesian model averaging, often out-performing full parameter space inference techniques. Such subspaces can be surprisingly low dimensional (e.g., 5 dimensions), and combined with approximate inference methods, such as slice sampling, which are good at automatically exploring posterior densities, but would be entirely intractable in the original parameter space. We further improve performance with subspaces constructed from low-loss curves connecting independently trained solutions (Garipov et al., 2018), with additional computation. Subspace inference is particularly effective at representing growing and shrinking uncertainty as we move away and towards data, which has been a particular challenge for Bayesian deep learning methods.

Crucially, our approach is modular and easily adaptable for exploring both different subspaces and approximate inference techniques, enabling fast exploration for new problems and architectures. There are many exciting directions for future work. It could be possible to explicitly train subspaces to select for high variability in the functional outputs of the model, while retaining quality solutions. One could also develop Bayesian PCA approaches (Minka, 2001) to automatically select the dimensionality of the subspace. Moreover, the intuitive behaviour of the regression uncertainty for subspace inference could be harnessed in a number of active learning problems, such as Bayesian optimization and probabilistic model-based reinforcement learning, helping to address the current issues of input dimensionality in these settings.

The ability to perform approximate Bayesian inference in low-dimensional subspaces of deep neural networks is a step towards scalable, modular, and interpretable Bayesian deep learning.


WJM, PI, PK and AGW were supported by an Amazon Research Award, Facebook Research, and NSF IIS-1563887. WJM was additionally supported by an NSF Graduate Research Fellowship under Grant No. DGE-1650441.


Appendix A Additional Discussion of Subspace Inference

A.1 Loss of Measure

When mapping into a lower-dimensional subspace of the true parameter space, we lose the ability to invert the transform, and thus measure (i.e., volume of the distribution) is lost. Consider the following simple example in R^2. First, form a spherical density p(w) = N(w; 0, I), and then fix one coordinate along a slice, say w_2 = 0, so that only w_1 varies. The support of the resulting distribution has no area, since it represents a line with no width. For this reason, it is more correct to consider the subspace model (2) as a different model that shares many of the same functional properties as the fully parametrized model, rather than a re-parametrized version of the same model. Indeed, we cannot construct a Jacobian matrix to represent the density in the subspace.
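The lack of a change-of-variables correction can be made concrete with a small sketch (the dimensions and projection matrix here are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 10, 2                        # full and subspace dimensionality
w_hat = rng.normal(size=D)          # subspace offset (e.g. a SWA solution)
P = rng.normal(size=(d, D))         # projection matrix

# The affine map z -> w_hat + P.T @ z has Jacobian P.T, a D x d matrix.
# It is not square, so it has no inverse and no |det J| term for a
# change of variables: the image of the map is a d-dimensional plane
# with zero volume in R^D.
J = P.T
rank = np.linalg.matrix_rank(J)     # the image is d-dimensional
```

Since the Jacobian is rectangular, the subspace density cannot be pulled back to a density over the full weight space.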

A.2 Potential Benefits of Subspace Inference

Quicker Exploration of the Posterior

Reducing the dimensionality of parameter space enables significantly faster mixing of MCMC chains. For example, the expected number of likelihood evaluations needed for an accepted sample drawn using Metropolis-Hastings or Hamiltonian Monte Carlo (HMC) respectively grows as O(d^2) and O(d^(5/4)) in the dimension d. If the dimensionality of the subspace grows only as O(log D) in the full dimension D, for example, then we would expect the runtime of Metropolis-Hastings to produce independent samples to grow at a modest O(log^2 D). We also note that the structure of the subspace may be much more amenable to exploration than the original posterior, requiring less time to effectively cover. For example, Maddox et al. (2019) show that the loss in the subspace constructed from the principal components of SGD iterates is approximately locally quadratic. On the other hand, if the subspace has a complicated structure, it can now be traversed using ambitious exploration methods which do not scale to higher dimensional spaces, such as parallel tempering (Geyer and Thompson, 1995).

Potential Lack of Degeneracies

In the subspace, the model will be free of many of the degeneracies present in most DNNs. We would therefore expect the posterior to concentrate around a single point, and to be more amenable in general to approximate inference strategies.

Appendix B Approximate Inference Methods

We can use MCMC methods to approximately sample from the posterior over the subspace parameters, or we can perform a deterministic approximation to that posterior, for example using a Laplace or variational approach, and then sample from the approximation. We particularly consider the following methods in our experiments, although there are many other possibilities. The inference procedure is an experimental design choice.

Slice Sampling

As the dimensionality of the subspace is low, gradient-free methods such as slice sampling (Neal, 2003) and elliptical slice sampling (ESS) (Murray et al., 2010) can be used to sample from the projected posterior distribution. Elliptical slice sampling is designed to have no tuning parameters, and only requires a Gaussian prior in the subspace; we use an open-source Python implementation. For networks that cannot evaluate all of the training data in memory at a single time, it is possible to sum the loss over mini-batches to compute the full log probability, without storing gradients.
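The ESS update can be implemented in a few lines, following Murray et al. (2010). This is our sketch, not the implementation used in the paper, and it assumes the prior is the same Gaussian that `prior_sample` draws from:

```python
import numpy as np

def ess_step(z, log_lik, prior_sample, rng):
    """One elliptical slice sampling update (Murray et al., 2010).

    z: current state; log_lik: log-likelihood function;
    prior_sample: draws nu ~ N(0, Sigma), the Gaussian prior.
    """
    nu = prior_sample(rng)
    log_y = log_lik(z) + np.log(rng.uniform())      # slice height
    theta = rng.uniform(0.0, 2 * np.pi)
    lo, hi = theta - 2 * np.pi, theta               # initial bracket
    while True:
        z_new = z * np.cos(theta) + nu * np.sin(theta)
        if log_lik(z_new) > log_y:
            return z_new
        # shrink the bracket towards theta = 0 and retry
        if theta < 0:
            lo = theta
        else:
            hi = theta
        theta = rng.uniform(lo, hi)
```

The loop always terminates, since theta -> 0 recovers the current state z, which lies above the slice height by construction.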


NUTS

The No-U-Turn Sampler (NUTS) (Hoffman and Gelman, 2014) is an HMC method (Neal, 2011) that dynamically tunes the hyper-parameters (step size and number of leapfrog steps) of HMC; we use the implementation in Pyro (Bingham et al., 2018). NUTS has the advantage of being nearly black-box: only a joint likelihood and its gradients need to be defined. However, full gradient calls are required, which can be difficult to cache and are a constant factor slower than a full likelihood calculation.
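NUTS automates the leapfrog dynamics underlying HMC. The basic HMC step it builds on can be sketched as follows (plain HMC with fixed hyper-parameters, not NUTS itself, and not the Pyro implementation used in the paper):

```python
import numpy as np

def hmc_step(z, log_prob, grad_log_prob, rng, step_size=0.1, n_leapfrog=20):
    """One Hamiltonian Monte Carlo update with leapfrog integration.
    NUTS automates the choice of step_size and n_leapfrog."""
    p = rng.normal(size=z.shape)                            # fresh momentum
    z_new = z.copy()
    p_new = p + 0.5 * step_size * grad_log_prob(z_new)      # half step
    for _ in range(n_leapfrog - 1):
        z_new = z_new + step_size * p_new
        p_new = p_new + step_size * grad_log_prob(z_new)
    z_new = z_new + step_size * p_new
    p_new = p_new + 0.5 * step_size * grad_log_prob(z_new)  # final half step
    # Metropolis correction for leapfrog discretization error
    log_accept = (log_prob(z_new) - 0.5 * p_new @ p_new) \
               - (log_prob(z) - 0.5 * p @ p)
    if np.log(rng.uniform()) < log_accept:
        return z_new
    return z
```

Each step costs `n_leapfrog` gradient evaluations, which is the "full gradient calls" overhead noted above.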

Simple Variational Inference

One can perform variational inference in the subspace using the fully-factorized Gaussian posterior approximation family for , from which we can sample to form a Bayesian model average. Fully-factorized Gaussians are among the simplest and the most common variational families. Unlike ESS or NUTS, VI can be trained with mini-batches (Hoffman et al., 2013), but is often practically constrained in the distributions it can represent.
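A toy NumPy version of this fully-factorized Gaussian VI with the reparameterization trick is sketched below; the hyper-parameters and the use of a generic log-posterior gradient are our assumptions, not the paper's implementation:

```python
import numpy as np

def fit_gaussian_vi(grad_log_post, d, rng, steps=2000, lr=0.05, n_mc=16):
    """Fit q(z) = N(mu, diag(sigma^2)) to a posterior by stochastic
    gradient ascent on the ELBO, using the reparameterization trick
    z = mu + sigma * eps with eps ~ N(0, I)."""
    mu, log_sigma = np.zeros(d), np.zeros(d)
    for _ in range(steps):
        eps = rng.normal(size=(n_mc, d))
        z = mu + np.exp(log_sigma) * eps
        g = np.array([grad_log_post(zi) for zi in z])       # (n_mc, d)
        grad_mu = g.mean(axis=0)
        # the Gaussian entropy contributes +1 to the log-sigma gradient
        grad_log_sigma = (g * eps * np.exp(log_sigma)).mean(axis=0) + 1.0
        mu += lr * grad_mu
        log_sigma += lr * grad_log_sigma
    return mu, np.exp(log_sigma)
```

Because the gradient estimator only needs mini-batch log-posterior gradients, this procedure scales to large datasets, unlike ESS or NUTS.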


Normalizing Flows

Normalizing flows, such as RealNVP (Dinh et al., 2017), parametrize the variational family with invertible neural networks, allowing flexible non-Gaussian posterior approximations.
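The building block of RealNVP is the affine coupling layer, sketched below. The `s_t` function is a hypothetical stand-in for the learned scale/shift network:

```python
import numpy as np

def coupling_forward(z, s_t):
    """One RealNVP affine coupling layer (Dinh et al., 2017).

    The first half of z passes through unchanged and parameterizes an
    affine transform of the second half via s_t(z1) -> (log_scale, shift).
    The Jacobian is triangular, so its log-determinant is just the sum
    of the log scales, keeping the flow's density tractable."""
    z1, z2 = np.split(z, 2)
    log_s, t = s_t(z1)
    y2 = z2 * np.exp(log_s) + t
    return np.concatenate([z1, y2]), np.sum(log_s)

def coupling_inverse(y, s_t):
    """Exact inverse of the coupling layer, needed to evaluate densities."""
    y1, y2 = np.split(y, 2)
    log_s, t = s_t(y1)
    z2 = (y2 - t) * np.exp(-log_s)
    return np.concatenate([y1, z2])
```

Stacking several such layers, alternating which half is transformed, yields an expressive yet invertible variational distribution.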

Appendix C Eigen-Gaps of the Fisher and Hessian Matrices

In Figure 6 we can see similar behavior for the eigenvalues of both the Hessian and the empirical Fisher information matrix at the end of training. To compute these eigenvalues, we used a GPU-enabled Lanczos method in GPyTorch (Gardner et al., 2018) on a pre-trained PreResNet164. Given the gap between the top eigenvalues and the rest for both the Hessian and Fisher matrices, we might expect that the parameter updates of gradient descent would primarily lie in the subspace spanned by the top eigenvalues. Li et al. (2018a) and Gur-Ari et al. (2019) empirically (and with a similar theoretical basis) found that the training dynamics of SGD (and thus the parameter updates) primarily lie within the subspace formed by the top eigenvalues (and eigenvectors). Additionally, in Figure 7 we show that the eigenvalues of the trajectory decay rapidly (on a log scale) across several different architectures; this rapid decay suggests that most of the parameter updates occur in a low dimensional subspace (at least near convergence), supporting the claims of Gur-Ari et al. (2019). In future work it would be interesting to further explore the connections between the eigenvalues of the Hessian (and Fisher) matrices of DNNs and the covariance matrix of the trajectory of the SGD iterates.
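The key property Lanczos exploits is that only matrix-vector products are needed, never the matrix itself. This can be illustrated on a toy matrix with a spectral gap (assuming SciPy is available; in a real network the matvec would be a Hessian-vector product from automatic differentiation, not an explicit matrix):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

# Toy "Hessian" with a spectral gap: a few large eigenvalues above a
# crowded bulk, mimicking the structure seen in Figure 6.
rng = np.random.default_rng(0)
D = 200
Q, _ = np.linalg.qr(rng.normal(size=(D, D)))
eigs = np.concatenate([np.array([100.0, 80.0, 60.0]),
                       rng.uniform(0.0, 1.0, D - 3)])
H = (Q * eigs) @ Q.T

# eigsh only calls matvec, so H never needs to be materialized for a DNN.
op = LinearOperator((D, D), matvec=lambda v: H @ v)
top = eigsh(op, k=3, which="LA", return_eigenvectors=False)
```

For a network, the matvec would cost roughly two backward passes per call, making the top of the spectrum accessible even when D is in the millions.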

Figure 6: Plot of 300 eigenvalues of the Fisher and Hessian matrices for a PreResNet164 on CIFAR100. A clear separation exists between the top 20 or so eigenvalues and the rest, which are crowded together.
Figure 7: Eigenvalues of the trajectory covariance (explained variance proportion) estimated from randomized SVD across three architectures on CIFAR-10 and CIFAR-100, plotted on a log scale. The spectrum decays extremely quickly, approaching 0 within around 10-20 components.

Appendix D Additional Regression Uncertainty Visualizations

In Figure 8 we present the predictive distribution plots for all the inference methods and subspaces. We additionally visualize the samples over posterior density surfaces for each of the methods in Figure 9.

Figure 8: Regression predictive distributions across inference methods and subspaces. Data is shown with red circles, the dark blue line shows predictive mean, the lighter blue lines show sample predictive functions, and the shaded region represents standard deviations of the predictive distribution at each point.
Figure 9: Posterior log-density surfaces and samples (magenta circles) for the synthetic regression problem across different subspaces and sampling methods.

Appendix E UCI Regression Experimental Details

E.1 Setup

In all experiments, we replicated over 20 trials reserving 90% of the data for training and the other 10% for testing, following the set-up of Bui et al. (2016) and Wilson et al. (2016).

E.1.1 Gaussian Test Likelihood

In Bayesian model averaging, we compute a Gaussian estimator based on sample statistics: the test predictive distribution is approximated by a single Gaussian whose mean and variance moment-match the mixture of per-sample predictive distributions, where the samples are drawn from the approximate posterior (see Section 3.2). This is the same estimator used in Wu et al. (2019) and Lakshminarayanan et al. (2017).
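Assuming the estimator moment-matches the mixture of per-sample Gaussian predictive distributions, as in Lakshminarayanan et al. (2017), it can be sketched as:

```python
import numpy as np

def gaussian_test_log_lik(y, means, variances):
    """Moment-matched Gaussian test log-likelihood from posterior samples.

    means[t] and variances[t] are the predictive mean and variance of
    the network at a test input under posterior sample t. The mixture
    over samples is matched to a single Gaussian N(mu_bar, var_bar)."""
    means = np.asarray(means)
    variances = np.asarray(variances)
    mu_bar = means.mean()
    var_bar = (variances + means ** 2).mean() - mu_bar ** 2
    return -0.5 * (np.log(2 * np.pi * var_bar) + (y - mu_bar) ** 2 / var_bar)
```

Note that `var_bar` combines the average predictive variance with the spread of the predictive means, so disagreement between posterior samples inflates the reported uncertainty.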

E.1.2 Small Regression

For the small UCI regression datasets, we use the architecture from Wu et al. (2019) with one hidden layer with 50 units. We manually tune the learning rate and weight decay, and use a batch size that scales with the dataset size. All models predict heteroscedastic uncertainty (i.e., output a variance). In Table 2, we compare subspace inference methods to deterministic VI (DVI, Wu et al. (2019)) and deep Gaussian processes with expectation propagation (DGP1-50, Bui et al. (2016)). ESS and VI in the PCA subspace outperform DVI on two out of five datasets.

E.1.3 Large-Scale Regression

For the large-scale UCI regression tasks, we manually tuned hyper-parameters (batch size, learning rate, and epochs) to match the RMSE of the SGD DNN results in Table 1 of Wilson et al. (2016), starting with the parameters in the authors' released code. We used heteroscedastic regression with a global variance parameter: the likelihood of each data point is Gaussian with the network's predicted mean, and with variance given by the network's predicted variance plus a global variance term optimized in tandem with the network. The global term is analogous to optimizing the jitter in Gaussian process regression. We additionally tried fitting models without a global variance parameter (e.g., standard heteroscedastic regression as is used in the SGD networks in Wilson et al. (2016)), but found that they were typically more over-confident.

Figure 10: Coverage of 95% prediction interval for models trained on UCI datasets. In most cases, subspace inference produces closer to 95% coverage than models trained using SGD or SWAG.
boston 506 13 -2.752 0.132 -2.719 0.132 -2.716 0.133 -2.761 0.132 -2.41 0.02 -2.33 0.06 -2.43 0.03
concrete 1030 8 -3.178 0.198 -3.007 0.086 -2.994 0.095 -3.013 0.086 -3.06 0.01 -3.13 0.03 -3.04 0.02
energy 768 8 -1.736 1.613 -1.563 1.243 -1.715 1.588 -1.679 1.488 -1.01 0.06 -1.32 0.03 -2.38 0.02
naval 11934 16 6.567 0.185 6.541 0.095 6.708 0.105 6.708 0.105 6.29 0.04 3.60 0.33 5.87 0.29
yacht 308 6 -0.418 0.426 -0.225 0.400 -0.396 0.419 -0.404 0.418 -0.47 0.03 -1.39 0.14 -1.68 0.04
Table 2: Unnormalized test log-likelihoods on small UCI datasets for Subspace Inference (SI), as well as direct comparisons to the numbers reported in deterministic variational inference (DVI, Wu et al. (2019)) and Deep Gaussian Processes with expectation propagation (DGP1-50, Bui et al. (2016)), and variational inference (VI) with the re-parameterization trick (Kingma et al., 2015).
boston 3.504 0.975 3.453 0.953 3.457 0.951 3.517 0.981
concrete 5.194 0.446 5.194 0.448 5.142 0.418 5.233 0.417
energy 1.602 0.275 1.598 0.274 1.587 0.272 1.594 0.273
naval 0.001 0.000 0.001 0.000 0.001 0.000 0.001 0.000
yacht 0.973 0.374 0.972 0.375 0.973 0.375 0.973 0.375
Table 3: RMSE on small UCI datasets. Subspace Inference (SI) typically performs comparably to SGD and SWAG.

Following Wilson et al. (2016), for the UCI regression tasks with more than 6,000 data points, we used networks with the following structure: [1000, 1000, 500, 50, 2], while for skillcraft, we used a network with: [1000, 500, 50, 2]. We used a learning rate of , doubling the learning rate of bias parameters, a batch size of , momentum of , and weight decay of , training for 200 epochs. For skillcraft, we only trained for 100 epochs, using a learning rate of and for keggD, we used a learning rate of . We used a standard normal prior with variance of in the subspace.


In Table 5, we report RMSE results compared to two types of approximate Gaussian processes (Salimbeni et al., 2018; Yang et al., 2015); note that the results for OrthVGP are from Appendix Table F of Salimbeni et al. (2018), scaled by the standard deviation of the respective dataset. For the comparisons using Bayesian final layers (Riquelme et al., 2018), we trained SGD nets with the same architecture and used the second-to-last layer (ignoring the final hidden-unit layer of width two, as it performed considerably worse) for the Bayesian approach, and then followed the same hyper-parameter setup as in the authors' codebase.

We repeated each model over 10 random train/test splits; each test set consisted of 10% of the full dataset. All data was pre-processed to have mean zero and variance one.

boston 506 13 0.986 0.018 0.985 0.017 0.984 0.017 0.986 0.018
concrete 1030 8 0.864 0.029 0.941 0.021 0.934 0.019 0.933 0.024
energy 768 8 0.947 0.026 0.953 0.027 0.949 0.027 0.951 0.027
naval 11934 16 0.948 0.051 0.978 0.006 0.967 0.008 0.967 0.008
yacht 308 6 0.895 0.069 0.948 0.040 0.898 0.067 0.898 0.067
Table 4: Calibration on small-scale UCI datasets for Subspace Inference (SI). Bolded numbers are those closest to 95% of the predicted coverage.
elevators 16599 18 0.0952
keggD 48827 20 0.1198
keggU 63608 27 0.1172
protein 45730 9 0.46071
skillcraft 3338 19
pol 15000 26 6.61749
Table 5: RMSE comparison amongst methods on larger UCI regression tasks, as well as direct comparisons to the numbers reported in deep kernel learning with a spectral mixture kernel (DKL, (Wilson et al., 2016)), orthogonally decoupled variational GPs (OrthVGP, Salimbeni et al. (2018)), FastFood kernel GPs (FF, Yang et al. (2015) from Wilson et al. (2016)), and Bayesian final layers (NL, Riquelme et al. (2018)). Subspace based inference typically outperforms SGD and approximate GPs and is competitive with DKL.
elevators 16599 18 -0.4479
keggD 48827 20 1.0224
keggU 63608 27 0.7007
protein 45730 9 -0.9138
skillcraft 3338 19
pol 15000 26 0.1586
Table 6: Normalized test log-likelihoods on larger UCI datasets. Subspace methods outperform an approximate GP approach (OrthVGP), SGD, and Bayesian final layers (NL), often outperforming SWAG.
elevators 16599 18
keggD 48827 20
keggU 63608 27
protein 45730 9
pol 15000 26
skillcraft 3338 19
Table 7: Calibration on large-scale UCI datasets. Bolded numbers are those closest to the 95% predicted coverage.
Dataset Model PCA + VI (SI) PCA + ESS (SI) SWA SWAG KFAC-Laplace SWA-Dropout SWA-Temp
CIFAR-10 PreResNet-164
CIFAR-10 WideResNet28x10
CIFAR-100 VGG-16
CIFAR-100 PreResNet-164
CIFAR-100 WideResNet28x10
Table 8: NLL for various versions of subspace inference, SWAG, temperature scaling, and dropout.
Dataset Model PCA + VI (SI) PCA + ESS (SI) SWA SWAG KFAC-Laplace SWA-Dropout SWA-Temp
CIFAR-10 PreResNet-164
CIFAR-10 WideResNet28x10
CIFAR-100 VGG-16
CIFAR-100 PreResNet-164
CIFAR-100 WideResNet28x10
Table 9: Accuracy for various versions of subspace inference, SWAG, temperature scaling, and dropout.

Appendix F Image Classification Results

For the experiments on CIFAR datasets we follow the framework of Maddox et al. (2019). We report the negative log-likelihood and accuracy for our method and baselines in Tables 8 and 9.

F.1 Effect of Temperature

We study the effect of the temperature parameter defined in (4) on the performance of subspace inference. We run elliptical slice sampling in a 5-dimensional PCA subspace for a PreResNet-164 on CIFAR-100. We show test performance as a function of the temperature parameter in Figure 11, panels (a) and (b). Bayesian model averaging achieves strong results over a broad range of temperatures. We also observe that the temperature has a larger effect on uncertainty estimates, and consequently NLL, than on predictive accuracy.

We then repeat the same experiment on UCI elevators using the setting described in Section 5.2.1. We show the results in Figure 11 panels (c), (d). Again, we observe that the performance is almost constant and close to optimal in a certain range of temperatures, and the effect of temperature on likelihood is larger compared to RMSE.
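The tempering studied here amounts to a one-line modification of the log posterior. This sketch assumes, as is standard, that (4) raises only the likelihood to the power 1/T:

```python
def tempered_log_posterior(z, log_lik, log_prior, T=1.0):
    """Tempered log posterior: the likelihood is raised to the power
    1/T, so T > 1 flattens the likelihood and inflates posterior
    uncertainty, while T < 1 sharpens it. T = 1 recovers the
    untempered posterior."""
    return log_lik(z) / T + log_prior(z)
```

Because only the likelihood term is scaled, the prior's regularizing effect grows relative to the data as T increases, which is why temperature primarily moves the uncertainty estimates rather than the predictive accuracy.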

Figure 11: (a): Test negative log-likelihood and (b): accuracy as a function of temperature in (4) for PreResNet-164 on CIFAR-100. (c): Test negative log-likelihood and (d): RMSE as a function of temperature for our regression architecture (see Section 5.2) on UCI elevators. We used ESS in a 5-dimensional PCA subspace to construct this plot. The dark blue line and shaded region show the mean ± 1 standard deviation over independent runs of the procedure.