Bayesian methods were once the state-of-the-art approach for inference with neural networks (MacKay, 2003; Neal, 1996a). However, the parameter spaces for modern deep neural networks are extremely high dimensional, posing challenges to standard Bayesian inference procedures.
Figure caption: Contour plots of the posterior log-density within the corresponding subspaces; magenta circles represent samples from the posterior in the subspace. In the rich subspace containing the near-constant-loss curve, the samples produce better uncertainty estimates and more diverse trajectories. We use a small fully-connected network; see Section 5.1 for more details.
In this paper, we propose a different approach to approximate Bayesian inference in deep learning models: we design a low-dimensional subspace of the weight space and perform posterior inference over the parameters within this subspace. We call this approach Subspace Inference (SI). PyTorch code is available at https://github.com/wjmaddox/drbayes.
We contend that the subspace can be chosen to contain a rich variety of representations, corresponding to different high-quality predictions, over which Bayesian model averaging leads to accuracy gains and well-calibrated uncertainties.
In Figure 1, we visualize samples from the approximate posterior, and the corresponding predictive distributions, obtained by performing subspace inference in a ten-dimensional random subspace and in a rich two-dimensional subspace containing a low-loss curve between two independently trained SGD solutions (see Garipov et al., 2018), on a synthetic one-dimensional regression problem. The predictive distribution corresponding to the random subspace does not capture the diverse set of possible functions required for greater uncertainty away from the data, but sampling from the posterior in the rich curve subspace provides meaningful uncertainty over functions.
Our paper is structured as follows. We begin with a discussion of related work in Section 2. In Section 3, we describe the proposed method for inference in low-dimensional subspaces of the parameter space. In Section 4, we discuss possible choices of the low-dimensional subspaces. In particular, we consider random subspaces, subspaces corresponding to the first principal components of the SGD trajectory (Maddox et al., 2019), and subspaces containing low-loss curves between independently trained solutions (Garipov et al., 2018).
We analyze the effects of using different subspaces and approximate inference methods by visualizing uncertainty on a regression problem in Section 5.1. We then apply the proposed method to a range of UCI regression datasets in Section 5.2, as well as CIFAR-10 and CIFAR-100 classification problems in Section 5.3, achieving consistently strong performance in terms of both test accuracy and likelihood. Although the dimensionality of the weight space for modern neural networks is extraordinarily large, we show that surprisingly low-dimensional subspaces contain a rich diversity of representations. For example, we can construct five-dimensional subspaces where Bayesian model averaging leads to notable performance gains on a 36-million-dimensional WideResNet trained on CIFAR-100.
We summarize subspace inference in Algorithm 1. We note that this procedure uses three modular steps: (1) construct a subspace; (2) posterior inference in the subspace; and (3) form a Bayesian model average. Different design choices are possible for each step. For example, choices for the subspace include a random subspace, a PCA subspace, or a mode connected subspace. Many other choices are also possible. For posterior inference, one can use deterministic approximations over the parameters in the subspace, such as a variational method, or MCMC.
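As a concrete illustration of the three modular steps, the sketch below substitutes a toy quadratic log-posterior and a naive random-walk Metropolis sampler for a real network and a real inference method; all names and sizes here are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: construct a subspace w = w_hat + P z (here: a random subspace).
d, k = 1000, 5                        # full and subspace dimensionalities
w_hat = rng.standard_normal(d)        # stands in for a pre-trained solution
P = rng.standard_normal((d, k)) / np.sqrt(d)

# Toy stand-in for the subspace log-posterior log p(z | D).
def log_posterior(z):
    return -0.5 * z @ z

# Step 2: posterior inference in the subspace (naive random-walk Metropolis).
z, samples = np.zeros(k), []
for _ in range(2000):
    proposal = z + 0.5 * rng.standard_normal(k)
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(z):
        z = proposal
    samples.append(z.copy())
z_samples = np.array(samples[1000:])  # discard burn-in

# Step 3: Bayesian model average of a (toy) prediction over weight samples.
def predict(w):
    return w[:10].sum()

bma_prediction = np.mean([predict(w_hat + P @ z_j) for z_j in z_samples])
```

Because the steps are modular, each stand-in above can be swapped for any of the subspace and inference choices discussed in Sections 3 and 4.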
2 Related Work
Maddox et al. (2019) proposed SWAG, which forms an approximate Gaussian posterior over neural network weights, with a mean and low-rank-plus-diagonal covariance matrix formed from a partial trajectory of the SGD iterates under a modified learning rate schedule. SWAG provides scalable Bayesian model averaging, with compelling accuracy and calibration results on CIFAR and ImageNet. The low-rank part of the SWAG covariance defines a distribution over a low-dimensional subspace spanned by the first principal components of the SGD iterates.
Silva and Kalaitzis (2015) consider the related problem of Bayesian inference using projected methods for constrained latent variable models, with applications to probabilistic PCA (Roweis, 1998; Bishop, 1999).
Pradier et al. (2018) propose to perform variational inference (VI) in a subspace formed by an auto-encoder trained on a set of models generated from fast geometric ensembling (Garipov et al., 2018); this approach requires training several models and fitting an auto-encoder, leading to limited scalability.
Similarly, Karaletsos et al. (2018) propose to use a meta-prior in a low-dimensional space to perform variational inference for BNNs. This approach can be viewed as a generalization of hyper-networks (Ha et al., 2017). Alternatively, both Titsias (2017) and Krueger et al. (2017) propose Bayesian versions of hyper-networks to store meta-models of parameters.
Patra and Dunson (2018) provide theoretical guarantees for Bayesian inference in the setting of constrained posteriors. Their method samples from the unconstrained posterior before using a mapping into the constrained parameter space. In their setting, the constraints are chosen a priori; on the other hand, we choose the constraints (e.g. the subspace) after performing unconstrained inference via SGD.
Related methods use random projections of the data inputs in linear regression settings for the purpose of efficient inference; unlike our subspace inference, however, they operate solely in data space rather than in parameter space.
3 Inference Within a Subspace
In this section we discuss how to perform Bayesian inference within a given subspace of a neural network. In Section 4 we will propose approaches for effectively constructing such subspaces.
3.1 Model Definition
We consider a model, $f(x; w)$, with weight parameters $w \in \mathbb{R}^d$. The model has an associated likelihood for the dataset $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^n$, given by $p(\mathcal{D} \mid w) = \prod_{i=1}^{n} p(y_i \mid f(x_i; w))$.
We perform inference in a $k$-dimensional subspace defined by
$$w = \hat{w} + Pz, \qquad (1)$$
where $\hat{w} \in \mathbb{R}^d$ is a shift vector, $P \in \mathbb{R}^{d \times k}$ is a projection matrix, and $z \in \mathbb{R}^k$. With $\hat{w}$ and the projection matrix $P$ fixed, which together define the subspace, the free parameters of the model, over which we perform inference, are simply $z \in \mathbb{R}^k$. We describe choices for $\hat{w}$ and $P$ in Section 4.
The new model has the likelihood function
$$p(\mathcal{D} \mid z) = p(\mathcal{D} \mid w = \hat{w} + Pz), \qquad (2)$$
where the right-hand side represents the likelihood of the original model, $f(x; \hat{w} + Pz)$, with parameters $\hat{w} + Pz$ and data $\mathcal{D}$. We can then perform Bayesian inference over the low-dimensional subspace parameters $z$. We illustrate the subspace parameterization as well as the posterior log-density over the subspace parameters in Figure 2.
We emphasize that the new model (2) is not a reparameterization of the original model, as the mapping from the full parameter space to the subspace is not invertible. For this reason, we consider the subspace model parameterized by as a different model that shares many functional properties with the original model (see Section A.1 for an extended discussion). We discuss potential benefits of using the subspace model (2) in Section A.2.
3.2 Bayesian Model Averaging
We can sample from an induced posterior over the weight parameters $w$ in the original space by first sampling $\tilde{z}$ from the posterior over the parameters in the subspace, $p(z \mid \mathcal{D})$, using an approximate inference method of choice, and then transforming those samples into the original space as $\tilde{w} = \hat{w} + P\tilde{z}$.
To perform Bayesian model averaging at a new test point $x_*$, we can compute a Monte Carlo estimate of the integral
$$p(y_* \mid x_*, \mathcal{D}) = \int p(y_* \mid x_*, z)\, p(z \mid \mathcal{D})\, dz \approx \frac{1}{J} \sum_{j=1}^{J} p(y_* \mid x_*, \tilde{z}_j), \qquad \tilde{z}_j \sim p(z \mid \mathcal{D}). \qquad (3)$$
Using the Monte Carlo estimate of the integral in (3) produces mixtures of Gaussian predictive distributions for regression tasks with Gaussian likelihoods, and categorical distributions for classification tasks.
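For regression with Gaussian likelihoods, the Monte Carlo estimate in (3) is a uniform mixture of Gaussians. A minimal sketch of the resulting predictive log-likelihood, assuming each posterior sample supplies a predictive mean and variance (names are illustrative):

```python
import numpy as np

def mixture_log_likelihood(y, means, variances):
    """log of (1/J) * sum_j N(y; mu_j, var_j), computed stably in log-space."""
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    log_components = (-0.5 * np.log(2.0 * np.pi * variances)
                      - 0.5 * (y - means) ** 2 / variances)
    m = log_components.max()          # log-sum-exp shift for stability
    return m + np.log(np.exp(log_components - m).mean())
```

For two symmetric components at ±1 with unit variance, evaluating at y = 0 gives the same value as a single unit Gaussian evaluated at distance 1.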
3.3 Approximate Inference Procedures
Our goal is to approximate the posterior over the free parameters in the subspace, $p(z \mid \mathcal{D})$, in order to perform a Bayesian model average. As we can set the number of parameters $k$ to be much smaller than the dimensionality $d$ of the full parameter space, performing Bayesian inference becomes considerably more tractable in the subspace. We can therefore make use of a wide range of approximate inference procedures, even when working with a large modern neural network.
In particular, we can use powerful full-batch MCMC methods to approximately sample from $p(z \mid \mathcal{D})$, such as Hamiltonian Monte Carlo (HMC) (Neal et al., 2011) or elliptical slice sampling (ESS) (Murray et al., 2010). ESS relies heavily on prior sampling and was initially introduced for posteriors with informative Gaussian process priors; however, it has special relevance for subspace inference, since our subspaces are specifically constructed to be centred on good regions of the loss, where a wide range of priors will provide reasonable samples. Alternatively, we can form a deterministic approximation $q(z) \approx p(z \mid \mathcal{D})$, for example using a Laplace or variational approach, and then sample from $q(z)$. The low dimensionality of the problem allows us to choose very flexible variational families, such as RealNVP (Dinh et al., 2017), to approximate the posterior.
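As one example, a single transition of elliptical slice sampling for a posterior with a zero-mean Gaussian prior can be sketched as follows; this is the generic ESS step of Murray et al. (2010), not the paper's code.

```python
import numpy as np

def elliptical_slice_step(z, log_lik, prior_sample, rng):
    """One ESS transition for a posterior proportional to
    exp(log_lik(z)) * N(z; 0, Sigma); prior_sample() draws from N(0, Sigma)."""
    nu = prior_sample()                          # auxiliary prior draw
    log_y = log_lik(z) + np.log(rng.uniform())   # slice height
    theta = rng.uniform(0.0, 2.0 * np.pi)
    lo, hi = theta - 2.0 * np.pi, theta
    while True:
        z_prop = z * np.cos(theta) + nu * np.sin(theta)
        if log_lik(z_prop) > log_y:
            return z_prop
        # shrink the bracket towards theta = 0 (the current state) and retry
        if theta < 0.0:
            lo = theta
        else:
            hi = theta
        theta = rng.uniform(lo, hi)

# Toy usage: with prior N(0, I) and likelihood N(z; 0, I),
# the posterior is N(0, 0.5 * I).
rng = np.random.default_rng(0)

def toy_log_lik(z):
    return -0.5 * z @ z

z = np.zeros(2)
samples = []
for _ in range(1000):
    z = elliptical_slice_step(z, toy_log_lik,
                              lambda: rng.standard_normal(2), rng)
    samples.append(z)
samples = np.array(samples)
```

Note that the step has no tunable step size: proposals always lie on an ellipse through the current state and a prior draw, which is why a subspace centred on a good region of the loss suits this sampler well.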
Ultimately, the inference procedure is an experimental design choice, and we are free to use a wide range of approximate inference techniques.
3.4 Prior Choice
There is a significant practical difference between Bayesian model averaging (Section 3.2) and standard training (regularized maximum likelihood estimation) for a range of priors $p(z)$, including vague priors. The exact specification of the prior, if sufficiently diffuse, is not crucial for good performance or for the benefits of Bayesian model averaging in deep learning. What matters is not the prior over parameters in isolation, but how this prior interacts with the functional form of the model: the neural network induces a structured distribution over functions, even when combined with a vague prior over its parameters. For subspace inference specifically, the subspace is constructed to be centred on a good region of the loss, so that a wide range of priors will provide coverage of weights corresponding to high-performing networks. We discuss reasonable choices of priors for the various subspaces in Section 4.
3.5 Preventing Posterior Concentration With Fixed Temperature Posteriors
In the model proposed in Section 3.1, there are only $k$ parameters, as opposed to the $d$ parameters of the full weight space, while the number of observed data points is unchanged. In this setting, the posterior can overly concentrate around the maximum likelihood estimate (MLE), becoming too constrained by the data and leading to overconfident uncertainty estimates.
To address the issue of premature posterior concentration in the subspace, we propose to introduce a temperature hyperparameter $T$ that scales the likelihood. In particular, we use the tempered posterior
$$p_T(z \mid \mathcal{D}) \propto p(\mathcal{D} \mid z)^{1/T}\, p(z). \qquad (4)$$
When $T = 1$ the true posterior is recovered, and as $T \to \infty$ the tempered posterior approaches the prior $p(z)$.
The temperature $T$ is a hyperparameter that can be determined through cross-validation. We study the effect of temperature on the performance of subspace inference in Section F.1. When the temperature is small, the posterior concentrates around the MLE and subspace inference fails to improve upon maximum likelihood training; as $T$ becomes large, subspace inference produces increasingly less confident predictions. As shown in Section F.1, good performance can be achieved across a broad range of intermediate values of $T$.
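The tempered log-posterior can be sketched directly in log-space; here we assume a zero-mean Gaussian prior over the subspace parameters (the prior and names are illustrative).

```python
import numpy as np

def tempered_log_posterior(z, log_likelihood, T, sigma=1.0):
    """log p_T(z | D) up to a constant: (1/T) * log p(D | z) + log p(z),
    with an assumed prior z ~ N(0, sigma^2 I)."""
    log_prior = -0.5 * np.sum(z ** 2) / sigma ** 2
    return log_likelihood(z) / T + log_prior
```

At T = 1 the usual log-posterior is recovered; as T grows the likelihood term vanishes and only the prior remains, matching the limiting behaviour described above.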
4 Subspace Construction
In the previous section we showed how to perform inference within a given subspace. We now discuss various ways to construct such subspaces.
4.1 Random Subspaces
To construct a simple random subspace, we draw the columns of the projection matrix $P$ as random Gaussian vectors in the weight space, and then rescale each vector to a fixed norm. Random subspaces require only drawing random normal samples, so they are quick to generate, but they contain little information about the model. In related work, Li et al. (2018a) train networks from scratch in a random subspace without a shift vector, requiring projections into much higher dimensions than are considered in this paper.
We use the weights of a network pre-trained with stochastic weight averaging (SWA) (Izmailov et al., 2018) as the shift vector $\hat{w}$. In particular, we run SGD with a high constant learning rate from a pre-trained solution and set $\hat{w}$ to the average of the SGD iterates.
Since the log-likelihood as a function of the neural network parameters appears approximately quadratic in random subspaces (Izmailov et al., 2018), and the subspace is centred on a good solution, a reasonable prior for the subspace parameters is a zero-mean Gaussian, $z \sim \mathcal{N}(0, \sigma^2 I)$.
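A minimal sketch of this construction, taking the SWA weights as the shift and Gaussian columns rescaled to unit norm for the projection (the rescaling constant is a design choice, not necessarily the one used in the paper):

```python
import numpy as np

def random_subspace(w_swa, k, rng):
    """Random subspace w = w_swa + P z, with Gaussian columns rescaled to
    unit norm (one possible choice of rescaling)."""
    d = w_swa.size
    P = rng.standard_normal((d, k))
    P /= np.linalg.norm(P, axis=0, keepdims=True)  # rescale each column
    return w_swa, P
```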
4.2 PCA of the SGD Trajectory
Intuitively, we want the subspace over which we perform inference to (1) contain a diverse set of models that produce meaningfully different predictions and (2) be cheap to construct. Garipov et al. (2018) and Izmailov et al. (2018) argue that the subspace spanned by the SGD trajectory satisfies both (1) and (2). They run SGD starting from a pre-trained solution with a high constant learning rate and then ensemble the predictions or average the weights of the iterates. Further, Maddox et al. (2019) showed that fitting the SGD iterates with a Gaussian distribution with a low-rank-plus-diagonal covariance provides scalable Bayesian model averaging with well-calibrated uncertainty estimates. Finally, Li et al. (2018b) and Maddox et al. (2019) used the first few PCA components of the SGD trajectory for loss surface visualization. These observations motivate performing inference directly in the subspace spanned by the SGD trajectory.
We propose to use the first few PCA components of the SGD trajectory to define the basis of the subspace. As in Izmailov et al. (2018), we run SGD with a high constant learning rate from a pre-trained solution and capture a snapshot $w_i$ of the weights at the end of each epoch. We store the deviations $a_i = w_i - \hat{w}$ for the last $M$ epochs, where $M$ is determined by the amount of memory available; to side-step memory constraints, any online PCA technique could be used instead, such as frequent directions (Ghashami et al., 2016). We then run PCA based on randomized SVD (Halko et al., 2011), as implemented in sklearn.decomposition.TruncatedSVD, on the matrix $A$ comprised of the vectors $a_i$, and use the first $k$ principal components to define the subspace (Section 3.1). As for the random subspace, we use the SWA solution (Izmailov et al., 2018) for the shift vector. We summarize this procedure in Algorithm 2.
Maddox et al. (2019) showed empirically that the log-likelihood in this subspace looks locally approximately quadratic, so a reasonable choice of prior is again a zero-mean Gaussian, $z \sim \mathcal{N}(0, \sigma^2 I)$, when the PCA vectors are rescaled to have norms proportional to the singular values of the matrix $A$, as in Algorithm 2. We note that this prior is centred on a set of good solutions because of the shift parameter $\hat{w}$ used in constructing the subspace.
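A rough sketch of this construction, as an idealized version of Algorithm 2; the exact scaling of the components is an assumption here, and for large networks the exact SVD would be replaced by a randomized one.

```python
import numpy as np

def pca_subspace(trajectory, k):
    """Subspace from the top-k principal components of SGD weight snapshots.

    trajectory: (M, d) array of snapshots w_1, ..., w_M. The SWA mean is the
    shift; column i of P is the i-th principal direction scaled by its
    singular value (an assumed normalization).
    """
    W = np.asarray(trajectory, dtype=float)
    w_swa = W.mean(axis=0)                    # SWA shift vector
    A = W - w_swa                             # deviation matrix
    # Exact thin SVD; a randomized SVD (e.g. sklearn's TruncatedSVD)
    # scales better when d is large.
    _, s, Vt = np.linalg.svd(A, full_matrices=False)
    P = (s[:k, None] * Vt[:k]).T / np.sqrt(max(len(W) - 1, 1))
    return w_swa, P                           # subspace: w = w_swa + P z
```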
Relationship to Eigenvalues of the Hessian
Prior work argues that the first principal components of the SGD trajectory correspond to the top eigenvectors of the Hessian of the loss, and that these eigenvectors change slowly during training. This observation suggests that the principal components capture many of the sharp directions of the loss surface, corresponding to large Hessian eigenvalues. We therefore expect our PCA subspace to include meaningful variation in the types of functions it contains. See Appendix C for more details, as well as a computation of Hessian and Fisher eigenvalues through a GPU-accelerated Lanczos algorithm (Gardner et al., 2018).
4.3 Curve Subspaces
Garipov et al. (2018) proposed a method to find paths of near-constant low loss (and consequently high posterior density) in the weight space between converged SGD solutions starting from different random initializations. These curves lie in two-dimensional subspaces of the weight space. We visualize the loss surface in such a subspace for a synthetic regression problem in Figure 1 (d). The curve subspace provides an example of a rich subspace containing diverse high-performing models, and stress-tests the inference procedure by requiring it to explore a highly non-Gaussian distribution.
To parameterize the curve subspace, we set the shift to the midpoint of the endpoints, $\hat{w} = \frac{1}{2}(w_0 + w_2)$, and use the basis vectors $v_1 = w_2 - \hat{w}$ and $v_2 = w_1 - \hat{w}$, where $w_0$ and $w_2$ are the endpoints and $w_1$ is the midpoint of the curve.
In this case, the posterior in the subspace is clearly non-Gaussian. However, a vague but centred Gaussian prior is reasonable as a simple choice with our parameterization of the curve subspace.
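One simple way to build the two-dimensional curve subspace is to centre it at the midpoint of the two endpoints and span the endpoint direction and the bend direction; this is an illustrative choice, and the paper's exact normalization may differ.

```python
import numpy as np

def curve_subspace(w0, w1, w2):
    """Two-dimensional curve subspace from endpoints w0, w2 and curve
    midpoint w1, centred at the midpoint of the endpoints."""
    shift = 0.5 * (w0 + w2)       # centre of the subspace
    v1 = w2 - shift               # direction along the endpoints
    v2 = w1 - shift               # direction towards the curve's bend
    P = np.stack([v1, v2], axis=1)
    return shift, P               # subspace: w = shift + P z
```

With this choice, z = (±1, 0) recovers the two endpoints and z = (0, 1) recovers the curve midpoint.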
4.4 Computational Cost of Subspace Construction
The random subspace (Section 4.1) is virtually free to construct, as it only requires sampling independent Gaussian vectors.
To construct the PCA subspace (Section 4.2), we run SVD on the deviation matrix $A$, which is a one-time computation and is very fast. In particular, exact thin SVD takes $\mathcal{O}(M^2 d)$ time, while randomized SVD takes $\mathcal{O}(Md \log k)$ time (see Section 1.4.1 of Halko et al. (2011)). For our largest examples, the number of snapshots $M$ is orders of magnitude smaller than the number of parameters $d$; thus taking the exact SVD is effectively linear in the number of parameters. For example, using standard hardware (a Dell XPS-15 laptop with an Intel Core i7 and 16GB RAM), it takes 4 minutes to perform exact SVD for our largest model, a WideResNet on CIFAR-100 with 36 million parameters. By comparison, it takes approximately eight hours on an NVIDIA 1080Ti GPU to train the same WideResNet on CIFAR-100 to completion.
The curve subspace (Section 4.3) is the most expensive to construct, as it requires pre-training the two solutions corresponding to the endpoints of the curve, and then running the curve-finding procedure in Garipov et al. (2018), which in total is roughly the cost of training a single DNN of the same architecture.
Constructing the subspace is in general very fast and readily applicable to large deep networks, with minimal overhead compared to standard training.
5 Experiments

We evaluate subspace inference empirically using the random, PCA, and curve subspace construction methods discussed in Section 4, in conjunction with the approximate posterior inference methods discussed in Section 3.3. In particular, we experiment with the No-U-Turn Sampler (NUTS) (Hoffman and Gelman, 2014), elliptical slice sampling (ESS) (Murray et al., 2010), and variational inference with a fully factorized Gaussian approximating family (VI) and with a Real-valued Non-Volume Preserving flow (RealNVP) family (Dinh et al., 2017). Section B contains more details on each of the approximate inference methods. We use the priors discussed in Section 4.
We show that approximate Bayesian inference within a subspace provides good predictive uncertainties on regression problems, first visually in Section 5.1 and then quantitatively on UCI datasets in Section 5.2.
We then apply subspace inference to large-scale image classification on CIFAR-10 and CIFAR-100 and obtain results competitive with state-of-the-art scalable Bayesian deep learning methods.
5.1 Visualizing Regression Uncertainty
We want predicted uncertainty to grow as we move away from the data. Far away from the data there are many possible functions that are consistent with our observations and thus there should be greater uncertainty. However, this intuitive behaviour is difficult to achieve with Bayesian neural networks (Foong et al., 2019). Further, log-likelihoods on benchmark datasets (see Section 5.2) do not necessarily test this behaviour, where over-confident methods can obtain better likelihoods (Yao et al., 2019).
We use a small fully-connected architecture. The network takes two inputs (the redundancy helps with training) and outputs a single real value. To generate the data, we randomly set the weights of a network with this same architecture and evaluate its predictions at points sampled uniformly within three disjoint intervals, adding Gaussian noise to the outputs. We show the data with red circles in Figure 3.
We train an SWA solution (Izmailov et al., 2018) and construct three subspaces: a ten-dimensional random subspace, a ten-dimensional PCA subspace, and a two-dimensional curve subspace (see Section 4). We then run each of the inference methods listed in Section B in each of the subspaces. We visualize the predictive distributions in the observed space for each combination of method and subspace in Figure 8 (Appendix D), and the posterior density overlaid with samples of the subspace parameters in Figure 9.
In Figure 9, the shape of the posterior in the random and PCA subspaces is close to Gaussian, and all approximate inference methods produce reasonable samples. In the mode-connecting curve subspace the posterior has a more complex shape, and the variational methods were unable to produce a reasonable fit. The simple variational approach is highly constrained by its Gaussian representation of the posterior. RealNVP in principle has the flexibility to fit many posterior approximating distributions, but perhaps lacks the inductive biases to easily fit these kinds of curved distributions in practice, especially when trained with the variational ELBO. On the other hand, certain MCMC methods, such as elliptical slice sampling, can effectively navigate these distributions.
In the top row of Figure 3 we visualize the predictive distributions for elliptical slice sampling in each of the subspaces. To represent growing uncertainty away from the data, the posterior in the subspace must assign mass to settings of the weights that give rise to models making a diversity of predictions in these regions. In the random subspace, the predictive uncertainty does not significantly grow away from the data, suggesting that the subspace does not capture a diversity of models. The PCA subspace, on the other hand, captures a diverse collection of models, with uncertainty growing away from the data. Finally, the predictive distribution for the curve subspace is the most adaptive, suggesting that it contains the greatest variety of models corresponding to weights with high posterior probability.
In the bottom row of Figure 3 we visualize the predictive distributions for simple variational inference applied in the original parameter space (as opposed to a subspace), SWAG (Maddox et al., 2019), and a Gaussian process with an RBF kernel. The SWAG predictive distribution is similar to that of ESS in the PCA subspace, as both methods approximate the posterior in the subspace containing the principal components of the SGD trajectory; however, ESS operates directly in the subspace and is not constrained to a Gaussian representation. Variational inference, interestingly, appears to underfit the data, failing to adapt the posterior mean to structured variations in the observed points, and provides relatively homogeneous uncertainty estimates except very far from the data. We hypothesize that this underfitting occurs because the KL divergence between the approximate posterior and the prior overpowers the data-fit term in the variational lower bound, as we have many more parameters than data points in this problem (see Blundell et al. (2015) for details on VI). In the bottom right panel of Figure 3 we also consider a Gaussian process (GP) with an RBF kernel, presently the gold standard for regression uncertainty. The GP provides a reasonable predictive distribution in a neighbourhood of the data, but arguably becomes underconfident for extrapolation, with uncertainty quickly blowing up away from the data.
5.2 UCI Regression
We next compare our subspace inference methods to a variety of methods for approximate Bayesian inference with neural networks on UCI regression tasks. (We use the pre-processing from https://github.com/hughsalimbeni/bayesian_benchmarks.) To measure performance we compute Gaussian test likelihoods (details in Appendix E.1.1). We follow convention and parameterize our neural network models so that for an input they produce two outputs: a predictive mean and a predictive variance. For all datasets we tune the temperature $T$ (Section 3.5) by maximizing the average likelihood over random validation splits (see Appendix F.1 for a discussion of the effect of temperature).
5.2.1 Large UCI Regression Datasets
We experiment with large regression datasets from UCI: elevators, keggdirected, keggundirected, pol, protein and skillcraft. We follow the experimental framework of Wilson et al. (2016).
On all datasets except skillcraft we use a feedforward network with five hidden layers of sizes [1000, 1000, 500, 50, 2], ReLU activations, and two outputs parameterizing the predictive mean and variance. On skillcraft, we use a smaller architecture, [1000, 500, 50, 2], as in Wilson et al. (2016). We additionally learn a global noise variance $\sigma^2$, so that the predictive variance at an input $x$ is $\sigma^2 + \sigma_f^2(x)$, where $\sigma_f^2(x)$ is the variance output from the final layer of the network. We use softplus parameterizations of unconstrained parameters for both variances to ensure positivity, initializing the global variance at the total variance of the targets in the dataset.
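The softplus parameterization of the predictive variance described above can be sketched as follows; the rho parameter names are illustrative unconstrained quantities, not the paper's variable names.

```python
import numpy as np

def softplus(x):
    """log(1 + exp(x)); numerically fine for moderate x."""
    return np.log1p(np.exp(x))

def predictive_variance(rho_point, rho_global):
    """Total predictive variance: softplus of the per-point (final-layer)
    variance parameter plus softplus of the global noise parameter."""
    return softplus(rho_point) + softplus(rho_global)
```

Because softplus is strictly positive for any real input, both variance terms remain valid no matter what values the unconstrained parameters take during training.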
We compare subspace inference to deep kernel learning (DKL) (Wilson et al., 2016) with a spectral mixture base kernel (Wilson and Adams, 2013), SGD-trained networks with a Bayesian final layer as in Riquelme et al. (2018), and two approximate Gaussian process methods: orthogonally decoupled variational Gaussian processes (OrthVGP) (Salimbeni et al., 2018) and Fastfood approximate kernels (FF) (Yang et al., 2015; results for FF are from Wilson et al. (2016)). We show the test log-likelihoods and RMSEs in Tables 5 and 6 (Appendix E.1.3), and the 95% credible intervals in Table 7. We summarize the test log-likelihoods in Figure 4. Subspace inference outperforms SGD by a large margin on elevators, pol, and protein, and is competitive on the other datasets. Compared to SWAG, subspace inference typically improves results by a small margin.
Finally, we plot the coverage of the 95% predictive intervals in Figure 10. Again, subspace inference performs at least as well as the SGD baseline, and has substantially better calibration on elevators and protein.
5.2.2 Small UCI Regression Datasets
We compare subspace inference to the state-of-the-art approximate BNN inference methods including deterministic variational inference (DVI) (Wu et al., 2019), deep GPs (DGP) with two GP layers trained via expectation propagation (Bui et al., 2016), and variational inference with the re-parameterization trick (Kingma and Welling, 2013). We follow the set-up of Bui et al. (2016) and use a fully-connected network with a single hidden layer with 50 units. We present the test log-likelihoods, RMSEs and test calibration results in Appendix E.1.2 - Tables 2, 3 and 4. We additionally visualize test log-likelihoods in Figure 4 and calibrations in Appendix Figure 10.
Our first observation is that SGD provides a surprisingly strong baseline, sometimes outperforming DVI. The subspace methods outperform SGD and DVI on naval, concrete and yacht and are competitive on the other datasets.
5.3 Image Classification
Next, we test the proposed method with state-of-the-art convolutional networks on the CIFAR datasets. Similarly to Section 5.1, we construct five-dimensional random and five-dimensional PCA subspaces around a trained SWA solution. We also construct a two-dimensional curve subspace by connecting our SWA solution to another independently trained SWA solution. In all image classification experiments we use fixed temperature values for the random, PCA, and curve subspaces (Section 3.5), chosen by cross-validation using VGG-16 on CIFAR-100.
We visualize the samples from ESS in each of the subspaces in Figure 5. For VI we also visualize a high-density region of the closed-form approximate posterior in the random and PCA subspaces. As we can see, ESS is able to capture the shape of the posterior in each of the subspaces.
In the PCA subspace we also visualize the SWAG approximate posterior distribution. SWAG overestimates the variance along the first principal component of the SGD trajectory (the horizontal axis in Figure 5b; see also Figure 5 in Maddox et al. (2019)). VI in the subspace provides a better fit of the posterior distribution, as it directly approximates the posterior, while SWAG approximates the SGD trajectory.
We report the accuracy and negative log-likelihood for each of the subspaces in Table 1. Going from random, to PCA, to curve subspaces provides progressively better results, due to increasing diversity and quality of models within each subspace. In the remaining experiments we use the PCA subspace, as it generally provides good performance at a much lower computational cost than the curve subspace.
We next apply ESS and simple VI in the PCA subspace to VGG-16, PreResNet-164, and WideResNet28x10 on CIFAR-10 and CIFAR-100. We report the results in Appendix Tables 8 and 9. Subspace inference is competitive with SWAG and consistently outperforms most of the other baselines, including MC-dropout (Gal and Ghahramani, 2016), temperature scaling (Guo et al., 2017), and KFAC-Laplace (Ritter et al., 2018).
6 Discussion

Bayesian methods were once a gold standard for inference with neural networks. Indeed, a Bayesian model average provides an entirely different mechanism for making predictions than standard training, with benefits in accuracy and uncertainty representation. However, efficient and practically useful Bayesian inference has remained a critical challenge for the exceptionally high-dimensional parameter spaces of modern deep neural networks. In this paper, we have developed a subspace inference approach that side-steps these dimensionality challenges: by constructing subspaces where the neural network has sufficient variability, we can easily apply standard approximate Bayesian inference methods, with strong practical results.
In particular, we have demonstrated that simple affine subspaces constructed from the principal components of the SGD trajectory contain enough variability for practically effective Bayesian model averaging, often out-performing full parameter space inference techniques. Such subspaces can be surprisingly low dimensional (e.g., 5 dimensions), and combined with approximate inference methods, such as slice sampling, which are good at automatically exploring posterior densities, but would be entirely intractable in the original parameter space. We further improve performance with subspaces constructed from low-loss curves connecting independently trained solutions (Garipov et al., 2018), with additional computation. Subspace inference is particularly effective at representing growing and shrinking uncertainty as we move away and towards data, which has been a particular challenge for Bayesian deep learning methods.
Crucially, our approach is modular and easily adaptable for exploring both different subspaces and approximate inference techniques, enabling fast exploration for new problems and architectures. There are many exciting directions for future work. It could be possible to explicitly train subspaces to select high variability in the functional outputs of the model, while retaining quality solutions. One could also develop Bayesian PCA approaches (Minka, 2001)
to automatically select the dimensionality of the subspace. Moreover, the intuitive behaviour of the regression uncertainty for subspace inference could be harnessed in a number of active learning problems, such as Bayesian optimization and probabilistic model-based reinforcement learning, helping to address the current issues of input dimensionality in these settings.
The ability to perform approximate Bayesian inference in low-dimensional subspaces of deep neural networks is a step towards scalable, modular, and interpretable Bayesian deep learning.
Acknowledgments

WJM, PI, PK and AGW were supported by an Amazon Research Award, Facebook Research, and NSF IIS-1563887. WJM was additionally supported by an NSF Graduate Research Fellowship under Grant No. DGE-1650441.
References
- Bingham et al. (2018) Bingham, E., Chen, J. P., Jankowiak, M., Obermeyer, F., Pradhan, N., Karaletsos, T., Singh, R., Szerlip, P., Horsfall, P., and Goodman, N. D. (2018). Pyro: Deep universal probabilistic programming. Journal of Machine Learning Research.
- Bishop (1999) Bishop, C. M. (1999). Bayesian PCA. In Advances in neural information processing systems, pages 382–388.
- Blundell et al. (2015) Blundell, C., Cornebise, J., Kavukcuoglu, K., and Wierstra, D. (2015). Weight uncertainty in neural networks. arXiv preprint arXiv:1505.05424.
- Bui et al. (2016) Bui, T., Hernández-Lobato, D., Hernandez-Lobato, J., Li, Y., and Turner, R. (2016). Deep gaussian processes for regression using approximate expectation propagation. In International Conference on Machine Learning, pages 1472–1481.
- Dinh et al. (2017) Dinh, L., Sohl-Dickstein, J., and Bengio, S. (2017). Density estimation using Real NVP. In International Conference on Learning Representations.
- Foong et al. (2019) Foong, A. Y. K., Li, Y., Hernández-Lobato, J. M., and Turner, R. E. (2019). ’In-Between’ Uncertainty in Bayesian Neural Networks. arXiv e-prints, page arXiv:1906.11537.
- Gal and Ghahramani (2016) Gal, Y. and Ghahramani, Z. (2016). Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In International Conference on Machine Learning, pages 1050–1059.
- Gardner et al. (2018) Gardner, J., Pleiss, G., Weinberger, K. Q., Bindel, D., and Wilson, A. G. (2018). Gpytorch: Blackbox matrix-matrix gaussian process inference with gpu acceleration. In Advances in Neural Information Processing Systems, pages 7587–7597.
- Garipov et al. (2018) Garipov, T., Izmailov, P., Podoprikhin, D., Vetrov, D. P., and Wilson, A. G. (2018). Loss surfaces, mode connectivity, and fast ensembling of DNNs. In Advances in Neural Information Processing Systems. arXiv:1802.10026.
- Geyer and Thompson (1995) Geyer, C. J. and Thompson, E. A. (1995). Annealing Markov chain Monte Carlo with applications to ancestral inference. Journal of the American Statistical Association, 90(431):909–920.
- Ghashami et al. (2016) Ghashami, M., Liberty, E., Phillips, J. M., and Woodruff, D. P. (2016). Frequent directions: Simple and deterministic matrix sketching. SIAM Journal on Computing, 45(5):1762–1792.
- Guhaniyogi and Dunson (2015) Guhaniyogi, R. and Dunson, D. B. (2015). Bayesian compressed regression. Journal of the American Statistical Association, 110(512):1500–1514.
- Guo et al. (2017) Guo, C., Pleiss, G., Sun, Y., and Weinberger, K. Q. (2017). On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1321–1330. JMLR. org.
- Gur-Ari et al. (2019) Gur-Ari, G., Roberts, D. A., and Dyer, E. (2019). Gradient descent happens in a tiny subspace. https://openreview.net/forum?id=ByeTHsAqtX.
- Ha et al. (2017) Ha, D., Dai, A., and Le, Q. V. (2017). Hypernetworks. In International Conference on Learning Representations. arXiv preprint arXiv:1609.09106.
- Halko et al. (2011) Halko, N., Martinsson, P.-G., and Tropp, J. A. (2011). Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM review, 53(2):217–288.
- Hoffman et al. (2013) Hoffman, M. D., Blei, D. M., Wang, C., and Paisley, J. (2013). Stochastic variational inference. The Journal of Machine Learning Research, 14(1):1303–1347.
- Hoffman and Gelman (2014) Hoffman, M. D. and Gelman, A. (2014). The no-u-turn sampler: adaptively setting path lengths in hamiltonian monte carlo. Journal of Machine Learning Research, 15(1):1593–1623.
- Huggins et al. (2016) Huggins, J., Campbell, T., and Broderick, T. (2016). Coresets for scalable Bayesian logistic regression. In Advances in Neural Information Processing Systems, pages 4080–4088.
- Izmailov et al. (2018) Izmailov, P., Podoprikhin, D., Garipov, T., Vetrov, D., and Wilson, A. G. (2018). Averaging weights leads to wider optima and better generalization. In Uncertainty in Artificial Intelligence.
- Karaletsos et al. (2018) Karaletsos, T., Dayan, P., and Ghahramani, Z. (2018). Probabilistic meta-representations of neural networks. arXiv preprint arXiv:1810.00555.
- Kingma et al. (2015) Kingma, D. P., Salimans, T., and Welling, M. (2015). Variational dropout and the local reparameterization trick. In Advances in Neural Information Processing Systems, pages 2575–2583.
- Kingma and Welling (2013) Kingma, D. P. and Welling, M. (2013). Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114.
- Krueger et al. (2017) Krueger, D., Huang, C.-W., Islam, R., Turner, R., Lacoste, A., and Courville, A. (2017). Bayesian hypernetworks. arXiv preprint arXiv:1710.04759.
- Lakshminarayanan et al. (2017) Lakshminarayanan, B., Pritzel, A., and Blundell, C. (2017). Simple and scalable predictive uncertainty estimation using deep ensembles. In Advances in Neural Information Processing Systems, pages 6402–6413.
- Li et al. (2018a) Li, C., Farkhoor, H., Liu, R., and Yosinski, J. (2018a). Measuring the intrinsic dimension of objective landscapes. In International Conference on Learning Representations. arXiv preprint arXiv:1804.08838.
- Li et al. (2018b) Li, H., Xu, Z., Taylor, G., Studer, C., and Goldstein, T. (2018b). Visualizing the Loss Landscape of Neural Nets. In Advances in Neural Information Processing Systems. arXiv: 1712.09913.
- MacKay (2003) MacKay, D. J. (2003). Information theory, inference and learning algorithms. Cambridge university press.
- Maddox et al. (2019) Maddox, W., Garipov, T., Izmailov, P., Vetrov, D., and Wilson, A. G. (2019). A simple baseline for bayesian uncertainty in deep learning. arXiv preprint arXiv:1902.02476.
- Minka (2001) Minka, T. P. (2001). Automatic choice of dimensionality for pca. In Advances in neural information processing systems, pages 598–604.
- Murray et al. (2010) Murray, I., Prescott Adams, R., and MacKay, D. J. (2010). Elliptical slice sampling. In Artificial Intelligence and Statistics.
- Neal (1996a) Neal, R. M. (1996a). Bayesian learning for neural networks, volume 118. Springer Science & Business Media.
- Neal (1996b) Neal, R. M. (1996b). Sampling from multimodal distributions using tempered transitions. Statistics and computing, 6(4):353–366.
- Neal et al. (2003) Neal, R. M. et al. (2003). Slice sampling. The annals of statistics, 31(3):705–767.
- Neal et al. (2011) Neal, R. M. et al. (2011). Mcmc using hamiltonian dynamics. Handbook of markov chain monte carlo, 2(11):2.
- Patra and Dunson (2018) Patra, S. and Dunson, D. B. (2018). Constrained bayesian inference through posterior projections. arXiv preprint arXiv:1812.05741.
- Pradier et al. (2018) Pradier, M. F., Pan, W., Yao, J., Ghosh, S., and Doshi-velez, F. (2018). Projected BNNs: Avoiding weight-space pathologies by learning latent representations of neural network weights. arXiv preprints, page arXiv:1811.07006.
- Riquelme et al. (2018) Riquelme, C., Tucker, G., and Snoek, J. (2018). Deep bayesian bandits showdown. In International Conference on Learning Representations.
- Ritter et al. (2018) Ritter, H., Botev, A., and Barber, D. (2018). A scalable laplace approximation for neural networks. In International Conference on Learning Representations.
- Roweis (1998) Roweis, S. T. (1998). EM algorithms for PCA and SPCA. In Advances in neural information processing systems, pages 626–632.
- Salimbeni et al. (2018) Salimbeni, H., Cheng, C.-A., Boots, B., and Deisenroth, M. (2018). Orthogonally decoupled variational gaussian processes. In Advances in Neural Information Processing Systems, pages 8725–8734.
- Silva and Kalaitzis (2015) Silva, R. and Kalaitzis, A. (2015). Bayesian inference via projections. Statistics and Computing, 25(4):739–753.
- Titsias (2017) Titsias, M. K. (2017). Learning model reparametrizations: Implicit variational inference by fitting mcmc distributions. arXiv preprint arXiv:1708.01529.
- Watanabe (2013) Watanabe, S. (2013). A widely applicable bayesian information criterion. Journal of Machine Learning Research, 14(Mar):867–897.
- Wilson and Adams (2013) Wilson, A. and Adams, R. (2013). Gaussian process kernels for pattern discovery and extrapolation. In International Conference on Machine Learning, pages 1067–1075.
- Wilson et al. (2016) Wilson, A. G., Hu, Z., Salakhutdinov, R., and Xing, E. P. (2016). Deep kernel learning. In Artificial Intelligence and Statistics, pages 370–378.
- Wu et al. (2019) Wu, A., Nowozin, S., Meeds, E., Turner, R. E., Hernández-Lobato, J. M., and Gaunt, A. L. (2019). Deterministic Variational Inference for Robust Bayesian Neural Networks. In International Conference on Learning Representations. arXiv:1810.03958.
- Yang et al. (2015) Yang, Z., Wilson, A., Smola, A., and Song, L. (2015). A la carte–learning fast kernels. In Artificial Intelligence and Statistics, pages 1098–1106.
- Yao et al. (2019) Yao, J., Pan, W., Ghosh, S., and Doshi-Velez, F. (2019). Quality of Uncertainty Quantification for Bayesian Neural Network Inference. arXiv preprints, page arXiv:1906.09686.
Appendix A Additional Discussion of Subspace Inference
A.1 Loss of Measure
When mapping into a lower-dimensional subspace of the true parameter space, we lose the ability to invert the transform, and thus measure (i.e., the volume of the distribution) is lost. Consider the following simple example in $\mathbb{R}^2$. First, form a spherical density, $p(z_1, z_2) = \mathcal{N}(0, I_2)$, and then fix $z_2 = 0$ along a slice, so that the density is supported only on the line $\{(z_1, 0)\}$. The support of the resulting distribution has no area, since it represents a line with no width. For this reason, it is more correct to consider the subspace model (2) as a different model that shares many of the same functional properties as the fully parametrized model, rather than a re-parametrized version of the same model. Indeed, we cannot construct a square Jacobian matrix to represent the density in the subspace.
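This non-invertibility can be seen directly from the affine subspace map $\theta = \hat{\theta} + Pz$. The numpy sketch below (our illustration, with hypothetical names `theta_hat` and `P`) checks that the tall projection matrix has rank $d < D$, so points inside the subspace map back exactly while a generic point in $\mathbb{R}^D$ has no preimage:

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 10, 2                       # full and subspace dimensions

# Affine subspace map: theta = theta_hat + P @ z
theta_hat = rng.normal(size=D)
P = rng.normal(size=(D, d))

z = rng.normal(size=d)
theta = theta_hat + P @ z

# P is a tall matrix of rank d < D, so the map R^d -> R^D is not invertible
assert np.linalg.matrix_rank(P) == d < D

# Points in the image of the map can be recovered exactly...
z_rec, *_ = np.linalg.lstsq(P, theta - theta_hat, rcond=None)
assert np.allclose(z_rec, z)

# ...but a generic perturbed theta has no preimage: the image is a
# d-dimensional affine slice with zero volume in R^D.
theta_off = theta + rng.normal(size=D)
z_proj, *_ = np.linalg.lstsq(P, theta_off - theta_hat, rcond=None)
assert not np.allclose(theta_hat + P @ z_proj, theta_off)
```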
A.2 Potential Benefits of Subspace Inference
Quicker Exploration of the Posterior
Reducing the dimensionality of the parameter space enables significantly faster mixing of MCMC chains. For example, the expected number of likelihood evaluations needed for an accepted sample drawn using Metropolis-Hastings or Hamiltonian Monte Carlo (HMC) respectively grows as $\mathcal{O}(d^2)$ and $\mathcal{O}(d^{5/4})$ in the dimension $d$. If the dimensionality of the subspace grows as $\mathcal{O}(\sqrt{D})$ in the number of model parameters $D$, for example, then we would expect the runtime of Metropolis-Hastings to produce independent samples to grow at a modest $\mathcal{O}(D)$. We also note that the structure of the subspace may be much more amenable to exploration than the original posterior, requiring less time to cover effectively. For example, Maddox et al. (2019) show that the loss in the subspace constructed from the principal components of SGD iterates is approximately locally quadratic. On the other hand, if the subspace has a complicated structure, it can now be traversed using ambitious exploration methods that do not scale to higher-dimensional spaces, such as parallel tempering (Geyer and Thompson, 1995).
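As a quick sanity check on this arithmetic (taking the quadratic Metropolis-Hastings scaling at face value, with an assumed unit constant), a subspace dimension of $\sqrt{D}$ indeed yields cost linear in $D$:

```python
# Illustrative scaling check: assume Metropolis-Hastings needs about
# c * d^2 likelihood evaluations per independent sample in dimension d.
def mh_cost(d, c=1.0):
    return c * d ** 2

D = 10_000                  # number of network parameters
d = int(D ** 0.5)           # subspace dimension growing as sqrt(D)

# Cost in the subspace is linear in D, versus quadratic in the full space.
assert mh_cost(d) == D
assert mh_cost(D) / mh_cost(d) == D
```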
Potential Lack of Degeneracies
In the subspace, the model will be free of many of the degeneracies present in most DNNs. We would therefore expect the posterior to concentrate closer to a single point, and to be more amenable in general to approximate inference strategies.
Appendix B Approximate Inference Methods
We can use MCMC methods to approximately sample from the posterior $p(z \mid \mathcal{D})$ over the subspace parameters $z$, or we can form a deterministic approximation $q(z) \approx p(z \mid \mathcal{D})$, for example using a Laplace or variational approach, and then sample from $q(z)$. We particularly consider the following methods in our experiments, although there are many other possibilities: the inference procedure is an experimental design choice.
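To make the setup concrete, here is a minimal sketch (ours, not from the released codebase) of how a subspace posterior can be exposed to any of the methods below. The names `w_hat`, `P`, and `log_posterior` are illustrative, and a toy linear-Gaussian model stands in for a network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model y = X w + noise, with weights confined to a subspace:
# w(z) = w_hat + P z   (w_hat: shift vector, P: D x d projection matrix).
N, D, d = 50, 20, 2
X = rng.normal(size=(N, D))
w_true = rng.normal(size=D)
y = X @ w_true + 0.1 * rng.normal(size=N)

w_hat = rng.normal(size=D)
P = rng.normal(size=(D, d))

def log_posterior(z, noise_var=0.01, prior_var=1.0):
    """Unnormalized log p(z | data): Gaussian likelihood plus Gaussian prior on z."""
    w = w_hat + P @ z                  # map subspace point to full weights
    resid = y - X @ w
    log_lik = -0.5 * resid @ resid / noise_var
    log_prior = -0.5 * z @ z / prior_var
    return log_lik + log_prior

# Any sampler or variational method only ever touches the d-dimensional z.
assert np.isfinite(log_posterior(np.zeros(d)))
```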
Elliptical Slice Sampling
As the dimensionality of the subspace is low, gradient-free methods such as slice sampling (Neal et al., 2003) and elliptical slice sampling (ESS) (Murray et al., 2010) can be used to sample from the projected posterior distribution. Elliptical slice sampling is designed to have no tuning parameters, and only requires a Gaussian prior in the subspace. (We use the Python implementation at https://github.com/jobovy/bovy_mcmc/blob/master/bovy_mcmc/elliptical_slice.py.) For networks that cannot evaluate all of the training data in memory at one time, it is straightforward to sum the loss over mini-batches when computing the full log probability, without storing gradients.
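The mini-batch accumulation can be sketched as follows; a toy Gaussian log-likelihood stands in for a network loss, since the batching pattern, not the model, is the point:

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(size=1000)           # stand-in per-example data
mu, var = 0.0, 1.0                  # stand-in model parameters

def log_lik_full(data, batch_size=128):
    """Sum the log-likelihood over mini-batches. Slice sampling needs no
    gradients, so only a running scalar is stored, never a full graph."""
    total = 0.0
    for i in range(0, len(data), batch_size):
        batch = data[i:i + batch_size]
        total += np.sum(-0.5 * ((batch - mu) ** 2 / var
                                + np.log(2 * np.pi * var)))
    return total

# Batched evaluation matches the one-shot computation.
one_shot = np.sum(-0.5 * ((y - mu) ** 2 / var + np.log(2 * np.pi * var)))
assert np.isclose(log_lik_full(y), one_shot)
```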
No-U-Turn Sampler
The No-U-Turn Sampler (NUTS) (Hoffman and Gelman, 2014) is an HMC method (Neal et al., 2011) that dynamically tunes the hyper-parameters (step size and number of leapfrog steps) of HMC. (We use the implementation in Pyro (Bingham et al., 2018).) NUTS has the advantage of being nearly black-box: only a joint likelihood and its gradients need to be defined. However, full gradient calls are required, which can be difficult to cache and are a constant factor slower than a full likelihood calculation.
Simple Variational Inference
One can perform variational inference in the subspace using a fully-factorized Gaussian approximating family, $q(z) = \mathcal{N}(\mu, \operatorname{diag}(\sigma^2))$, from which we can sample to form a Bayesian model average. Fully-factorized Gaussians are among the simplest and most common variational families. Unlike ESS or NUTS, VI can be trained with mini-batches (Hoffman et al., 2013), but is often practically constrained in the distributions it can represent.
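A sketch of the reparameterization-trick ELBO estimate for such a fully-factorized Gaussian family, with a standard-normal stand-in for the subspace posterior (all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5  # subspace dimension

# Fully-factorized Gaussian variational family q(z) = N(mu, diag(sigma^2)).
mu = np.zeros(d)
log_sigma = np.zeros(d)

def log_target(z):
    # Stand-in for the (unnormalized) subspace log-posterior: here
    # an unnormalized standard normal, log p~(z) = -0.5 ||z||^2.
    return -0.5 * np.sum(z ** 2)

def elbo_estimate(n_samples=1000):
    """Monte Carlo ELBO via the reparameterization trick:
    z = mu + sigma * eps, with eps ~ N(0, I)."""
    sigma = np.exp(log_sigma)
    eps = rng.normal(size=(n_samples, d))
    z = mu + sigma * eps
    log_q = np.sum(-0.5 * (eps ** 2 + np.log(2 * np.pi)) - log_sigma, axis=1)
    log_p = np.array([log_target(zi) for zi in z])
    return np.mean(log_p - log_q)

# When q matches the target exactly, the ELBO equals the target's log
# normalizer, (d/2) * log(2*pi), with zero Monte Carlo variance here.
assert np.isclose(elbo_estimate(), d / 2 * np.log(2 * np.pi))
```

In practice `log_target` would be the subspace log-posterior, and `mu` and `log_sigma` would be optimized by stochastic gradient ascent on this estimate.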
Normalizing Flows
Normalizing flows, such as RealNVP (Dinh et al., 2017), parametrize the variational family with invertible neural networks, enabling flexible non-Gaussian posterior approximations.
Appendix C Eigen-Gaps of the Fisher and Hessian Matrices
In Figure 6 we can see similar behavior for the eigenvalues of both the Hessian and the empirical Fisher information matrix at the end of training. To compute these eigenvalues, we used a GPU-enabled Lanczos method in GPyTorch (Gardner et al., 2018) on a pre-trained PreResNet164. Given the gap between the top eigenvalues and the rest for both the Hessian and Fisher matrices, we might expect that the parameter updates of gradient descent would primarily lie in the subspace spanned by the top eigenvalues. Li et al. (2018a) and Gur-Ari et al. (2019) empirically (and with a similar theoretical basis) found that the training dynamics of SGD (and thus the parameter updates) primarily lie within the subspace formed by the top eigenvalues (and eigenvectors). Additionally, in Figure 7 we show that the eigenvalues of the trajectory decay rapidly (on a log scale) across several different architectures; this rapid decay suggests that most of the parameter updates occur in a low dimensional subspace (at least near convergence), supporting the claims of Gur-Ari et al. (2019). In future work it would be interesting to further explore the connections between the eigenvalues of the Hessian (and Fisher) matrices of DNNs and the covariance matrix of the trajectory of the SGD iterates.
Appendix D Additional Regression Uncertainty Visualizations
In Figure 8 we present the predictive distribution plots for all the inference methods and subspaces. We additionally visualize the samples over posterior density surfaces for each of the methods in Figure 9.
Appendix E UCI Regression Experimental Details
E.1.1 Gaussian Test Likelihood
E.1.2 Small Regression
For the small UCI regression datasets, we use the architecture from Wu et al. (2019) with one hidden layer of 50 units. We manually tune the learning rate and weight decay, and use a batch size chosen based on the dataset size $N$. All models predict heteroscedastic uncertainty (i.e., they output a variance). In Table 2, we compare subspace inference methods to deterministic VI (DVI; Wu et al., 2019) and deep Gaussian processes with expectation propagation (DGP1-50; Bui et al., 2016). ESS and VI in the PCA subspace outperform DVI on two out of five datasets.
E.1.3 Large-Scale Regression
For the large-scale UCI regression tasks, we manually tuned hyper-parameters (batch size, learning rate, and epochs) to match the RMSE of the SGD DNN results in Table 1 of Wilson et al. (2016), starting with the parameters in the authors' released code. We used heteroscedastic regression with a global variance parameter; i.e., the likelihood for a data point is $p(y \mid x) = \mathcal{N}\!\left(y;\, \mu_\theta(x),\, \sigma_\theta^2(x) + \sigma^2\right)$, optimizing $\sigma^2$ in tandem with the network (which outputs both the mean $\mu_\theta(x)$ and variance $\sigma_\theta^2(x)$). Here $\sigma^2$ can be viewed as a global variance parameter, analogous to optimizing the jitter in Gaussian process regression. We additionally tried fitting models without a global variance parameter (i.e., standard heteroscedastic regression, as is used in the SGD networks of Wilson et al. (2016)), but found that they were typically more over-confident.
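A sketch of this likelihood (with illustrative names): the network's predicted variance `var_net` is floored by the learned global variance, which behaves like jitter and prevents an over-confident network from being penalized too harshly:

```python
import numpy as np

def gaussian_nll(y, mu, var_net, log_var_global):
    """Negative log-likelihood with a learned global variance added to the
    network's per-point variance: y ~ N(mu(x), var_net(x) + sigma_g^2)."""
    var = var_net + np.exp(log_var_global)
    return 0.5 * np.mean((y - mu) ** 2 / var + np.log(2 * np.pi * var))

# The global term floors the predictive variance: with a non-trivial
# floor, an over-confident network's NLL drops dramatically.
y = np.array([0.0, 1.0, 2.0])
mu = np.array([0.1, 0.9, 2.2])
var_net = np.array([1e-6, 1e-6, 1e-6])   # an over-confident network
assert gaussian_nll(y, mu, var_net, log_var_global=np.log(0.05)) < \
       gaussian_nll(y, mu, var_net, log_var_global=np.log(1e-6))
```

In training, `log_var_global` would simply be one extra scalar parameter optimized jointly with the network weights.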
| dataset | N | D | SGD | PCA+ESS (SI) | PCA+VI (SI) | SWAG | DVI | DGP1-50 | VI |
|---|---|---|---|---|---|---|---|---|---|
| boston | 506 | 13 | -2.752 ± 0.132 | -2.719 ± 0.132 | -2.716 ± 0.133 | -2.761 ± 0.132 | -2.41 ± 0.02 | -2.33 ± 0.06 | -2.43 ± 0.03 |
| concrete | 1030 | 8 | -3.178 ± 0.198 | -3.007 ± 0.086 | -2.994 ± 0.095 | -3.013 ± 0.086 | -3.06 ± 0.01 | -3.13 ± 0.03 | -3.04 ± 0.02 |
| energy | 768 | 8 | -1.736 ± 1.613 | -1.563 ± 1.243 | -1.715 ± 1.588 | -1.679 ± 1.488 | -1.01 ± 0.06 | -1.32 ± 0.03 | -2.38 ± 0.02 |
| naval | 11934 | 16 | 6.567 ± 0.185 | 6.541 ± 0.095 | 6.708 ± 0.105 | 6.708 ± 0.105 | 6.29 ± 0.04 | 3.60 ± 0.33 | 5.87 ± 0.29 |
| yacht | 308 | 6 | -0.418 ± 0.426 | -0.225 ± 0.400 | -0.396 ± 0.419 | -0.404 ± 0.418 | -0.47 ± 0.03 | -1.39 ± 0.14 | -1.68 ± 0.04 |
| dataset | SGD | PCA+ESS (SI) | PCA+VI (SI) | SWAG |
|---|---|---|---|---|
| boston | 3.504 ± 0.975 | 3.453 ± 0.953 | 3.457 ± 0.951 | 3.517 ± 0.981 |
| concrete | 5.194 ± 0.446 | 5.194 ± 0.448 | 5.142 ± 0.418 | 5.233 ± 0.417 |
| energy | 1.602 ± 0.275 | 1.598 ± 0.274 | 1.587 ± 0.272 | 1.594 ± 0.273 |
| naval | 0.001 ± 0.000 | 0.001 ± 0.000 | 0.001 ± 0.000 | 0.001 ± 0.000 |
| yacht | 0.973 ± 0.374 | 0.972 ± 0.375 | 0.973 ± 0.375 | 0.973 ± 0.375 |
Following Wilson et al. (2016), for the UCI regression tasks with more than 6,000 data points, we used networks with the following structure: [1000, 1000, 500, 50, 2], while for skillcraft, we used a network with: [1000, 500, 50, 2]. We used a learning rate of , doubling the learning rate of bias parameters, a batch size of , momentum of , and weight decay of , training for 200 epochs. For skillcraft, we only trained for 100 epochs, using a learning rate of and for keggD, we used a learning rate of . We used a standard normal prior with variance of in the subspace.
In Table 5, we report RMSE results compared to two types of approximate Gaussian processes (Salimbeni et al., 2018; Yang et al., 2015); note that the results for OrthVGP are from Appendix Table F of Salimbeni et al. (2018), scaled by the standard deviation of the respective dataset. For the comparisons using Bayesian final layers (Riquelme et al., 2018), we trained SGD networks with the same architecture and used the second-to-last layer (ignoring the final hidden layer of width two, as it performed considerably worse) for the Bayesian approach, and then followed the same hyper-parameter setup as in the authors' codebase (https://github.com/tensorflow/models/tree/master/research/deep_contextual_bandits).
We repeated each model over 10 random train/test splits; each test set consisted of 10% of the full dataset. All data was pre-processed to have mean zero and variance one.
| dataset | N | D | SGD | PCA+ESS (SI) | PCA+VI (SI) | SWAG |
|---|---|---|---|---|---|---|
| boston | 506 | 13 | 0.986 ± 0.018 | 0.985 ± 0.017 | 0.984 ± 0.017 | 0.986 ± 0.018 |
| concrete | 1030 | 8 | 0.864 ± 0.029 | 0.941 ± 0.021 | 0.934 ± 0.019 | 0.933 ± 0.024 |
| energy | 768 | 8 | 0.947 ± 0.026 | 0.953 ± 0.027 | 0.949 ± 0.027 | 0.951 ± 0.027 |
| naval | 11934 | 16 | 0.948 ± 0.051 | 0.978 ± 0.006 | 0.967 ± 0.008 | 0.967 ± 0.008 |
| yacht | 308 | 6 | 0.895 ± 0.069 | 0.948 ± 0.040 | 0.898 ± 0.067 | 0.898 ± 0.067 |
Appendix F Image Classification Results
F.1 Effect of Temperature
We study the effect of the temperature parameter defined in (4) on the performance of subspace inference. We run elliptical slice sampling in a low-dimensional PCA subspace for a PreResNet-164 on CIFAR-100. We show test performance as a function of the temperature in Figure 11, panels (a) and (b). Bayesian model averaging achieves strong results over a broad range of temperatures. We also observe that the temperature has a larger effect on uncertainty estimates, and consequently on NLL, than on predictive accuracy.
We then repeat the same experiment on UCI elevators using the setting described in Section 5.2.1. We show the results in Figure 11, panels (c) and (d). Again, we observe that the performance is nearly constant and close to optimal over a range of temperatures, and that the effect of temperature on the likelihood is larger than its effect on RMSE.
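The tempering above can be sketched as follows, assuming the common form $p_T(z \mid \mathcal{D}) \propto p(\mathcal{D} \mid z)^{1/T}\, p(z)$ for the tempered posterior in (4):

```python
def tempered_log_posterior(log_lik, log_prior, T):
    """Tempered posterior, assuming p_T(z | D) ∝ p(D | z)^(1/T) p(z):
    the log-likelihood is scaled by 1/T while the prior is untouched."""
    return log_lik / T + log_prior

# Higher temperature down-weights the likelihood relative to the prior,
# broadening the posterior and inflating predictive uncertainty.
log_lik, log_prior = -100.0, -2.0
cold = tempered_log_posterior(log_lik, log_prior, T=1.0)   # -102.0
warm = tempered_log_posterior(log_lik, log_prior, T=10.0)  # -12.0
assert warm > cold  # the likelihood penalty shrinks as T grows
```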