Gaussian processes (GPs) (Rasmussen & Williams, 2006) and their deep variants (Damianou & Lawrence, 2013) are powerful nonparametric distributions for both supervised (Williams & Rasmussen, 1996; Bernardo et al., 1998; Williams & Barber, 1998) and unsupervised machine learning (Lawrence, 2005; Damianou et al., 2016). Such processes can generate smooth functions to model complex data and provide principled approaches for uncertainty quantification. Despite this, their application has been limited because Bayesian inference for such models requires inverting a matrix, which is computationally challenging (typically $O(N^3)$ for $N$ data examples).
Many methods have been proposed to tackle this issue, and they mostly rely on finding a small number of inducing points in the input space to reduce the cost of matrix inversion (Quiñonero-Candela & Rasmussen, 2005; Titsias, 2009). Methods that estimate inducing points with variational inference (Titsias, 2009) can scale well to large data via minibatch stochastic-gradient methods (Hensman et al., 2013), but the quality of the resulting posterior approximations can be limited, since the number of inducing points must remain small even when the data are extremely large (Cheng & Boots, 2017). Optimizing the inducing-point locations is another challenging problem: the objective is nonconvex, and stochastic-gradient methods can get stuck (Bauer et al., 2016).
A relatively less-explored approach is to approximate the GP posterior distribution using function approximators, such as deep neural networks (Sun et al., 2019). By introducing randomness in the parameters of such inference networks, it is possible to generate functions similar to those generated by the posterior distribution, as illustrated in Fig. 1. By directly approximating the functions, we can obtain more flexible alternatives to sparse variational approaches. Unfortunately, unlike sparse methods, training with minibatches is challenging in the function space: the difficulty lies in maintaining meaningful correlations over function outputs while only looking at a handful of examples in each iteration. Scalable training of such inference networks is essential for them to be useful for flexible posterior inference in GP models.
In this paper, we propose an algorithm to scalably train the network by tracking an adaptive Bayesian filter defined in the function space. The filter is obtained by using a stochastic, functional mirror-descent algorithm which is guaranteed to converge to the exact posterior process but is computationally intractable. By bootstrapping from approximations given by inference networks, we can use stochastic gradients to train the network with minibatches of data. We demonstrate training of various types of networks, such as those based on random-feature expansions and deep neural networks. Our results show that the problem caused by incorrect minibatching in previous works is fixed by our method. Unlike existing works that use neural networks to parameterize deep kernels in the GP prior (Wilson et al., 2016), our variational treatment prevents overfitting and allows arbitrarily complex networks, which often achieve better performance than inducing-point approaches in regression tasks. Finally, we show that our method is a more flexible alternative for GP inference than existing sparse methods, which allows us to train deep ConvNets for the inference of GPs induced from infinite-width Bayesian ConvNets.
1.1 Related work
For scalable inference in GP models, plenty of work has been done on developing sparse methods (Quiñonero-Candela & Rasmussen, 2005; Titsias, 2009; Hensman et al., 2013). These works have made GPs a viable choice for practical problems by making minibatch training possible. Our work takes a different approach than these methods by using inference networks for posterior approximation. The computational complexity of our method is similar to that of sparse methods, but our posterior approximations are much more flexible. Currently, it is challenging to train such inference networks in the function space while achieving good performance. Our work fills this gap and shows that it is indeed possible to scalably train them and obtain similar, and sometimes better, performance than sparse GP methods.
A few recent approaches have used inference networks for posterior approximations in the function space, although they have not directly applied them to inference in GP models. For example, Neural Processes (NPs) (Garnelo et al., 2018a, b) use inference networks on a model with a learned prior, while Variational Implicit Processes (VIPs) (Ma et al., 2018), in a similar setup, use a GP for approximations. Hafner et al. (2018) design a function-space prior called the noise-contrastive prior. The work of Sun et al. (2019) is perhaps the closest to our approach, but they use a heuristic procedure for minibatch training. Our work proposes a scalable minibatch training method which is useful for all of these existing works.
The idea of tracking a learning process to distill information has been used in many previous approaches: e.g., Bui et al. (2017) use streaming variational Bayes for inference in sparse GPs, while Balan et al. (2015) track stochastic gradient Langevin dynamics (SGLD) (the teacher) to distill information into a neural network (the student). Our method can also be interpreted in a teacher-student framework, where the teacher is obtained from the student network by taking a stochastic mirror descent step, which is much cheaper than simultaneously running another inference algorithm like SGLD.
Some of our inference networks are based on random-feature expansions and are related to existing work on spectrum approximations for GPs (Lázaro-Gredilla et al., 2010; Gal & Turner, 2015; Hensman et al., 2018). Among them, Lázaro-Gredilla et al. (2010) perform function-space inference and are similar to our work, but theirs is a full-batch algorithm.
2 Bayesian Inference in GP Models
We start by discussing the challenges associated with existing methods for inference in GP models, and then discuss the difficulty in scalable training of inference networks.
2.1 Gaussian Processes
A Gaussian Process (GP) is a stochastic process defined using a mean function $m(x)$ and a covariance function $k(x, x')$: $f \sim \mathcal{GP}(m(x), k(x, x'))$.
A remarkable property of GPs is that, for any finite set of inputs $X = \{x_1, \dots, x_N\}$, the marginal distribution of the function values $f_X = [f(x_1), \dots, f(x_N)]^\top$
follows a multivariate Gaussian distribution: $p(f_X) = \mathcal{N}(f_X \,|\, m_X, K_{XX})$, where $i, j \in \{1, \dots, N\}$ are the indices of the data examples, $m_X$ is a vector with entries $m(x_i)$, and $K_{XX}$ is a matrix with $(i,j)$'th entry $k(x_i, x_j)$.
This property can be utilized to obtain function approximations for the modeling of complex data. For example, for regression analysis given input features, the function values can be used to model the output mean: $p(y_n \,|\, f) = \mathcal{N}(y_n \,|\, f(x_n), \sigma^2)$, where $\sigma^2$ is the noise variance. Correlations in the data vector $y = [y_1, \dots, y_N]^\top$ can now be explained through the correlation defined using the GP prior in the function space. This approach is widely used to design nonlinear methods for supervised and unsupervised learning.
Given such a prior distribution, the goal of Bayesian inference is to compute the posterior distribution over the function evaluated at arbitrary test inputs. For Gaussian likelihoods such as in regression, due to conjugacy, the posterior distribution takes a convenient closed form, and the predictive distribution at a test location $x_*$ is available in closed form (assuming a zero-mean prior):
$$p(f(x_*) \,|\, y) = \mathcal{N}\big(K_{*X}(K_{XX} + \sigma^2 I)^{-1} y,\; K_{**} - K_{*X}(K_{XX} + \sigma^2 I)^{-1} K_{X*}\big),$$
where $K_{*X}$ is the vector with entries $k(x_*, x_i)$ and $K_{**} = k(x_*, x_*)$. Equation 2 is difficult to compute in practice when $N$ is large: the computational cost of the matrix inversion is in $O(N^3)$, so the inversion is only possible when $N$ is of moderate size, usually only a few thousand.
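As a concrete reference for the closed-form equations above, GP regression prediction can be sketched in a few lines of NumPy (a minimal illustration with an RBF kernel and zero prior mean; the function names are ours, not from any GP library):

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel: k(x, x') = s^2 exp(-||x - x'||^2 / (2 l^2)).
    sq = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def gp_predict(X, y, Xs, sigma2=1e-3):
    """Exact GP regression posterior at test inputs Xs (zero prior mean).

    The cost is dominated by solving against (K_XX + sigma^2 I): O(N^3).
    """
    A = np.linalg.solve(rbf_kernel(X, X) + sigma2 * np.eye(len(X)),
                        np.eye(len(X)))
    Ksx = rbf_kernel(Xs, X)
    mean = Ksx @ A @ y
    cov = rbf_kernel(Xs, Xs) - Ksx @ A @ Ksx.T
    return mean, cov
```

With small noise the posterior mean interpolates the training targets and the predictive variance shrinks toward the noise level at the observed inputs; the cubic cost of the solve is exactly what motivates the approximations discussed next.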
2.2 Sparse Approximations for GPs
Sparse-approximation methods reduce the computation cost by choosing a small number of points $M \ll N$ at which to perform the matrix inversion. These points are known as the inducing points. A variety of such methods have been proposed (Quiñonero-Candela & Rasmussen, 2005), and they mostly differ in how these points are selected and in the kind of approximation used for the matrix inverse. Among these, methods based on variational inference are perhaps the most popular (Titsias, 2009). Given inducing points $Z$ and their function values $f_Z$, these methods approximate the posterior distribution with a variational distribution $q(f_X, f_Z)$. By restricting the variational approximation to be $q(f_X, f_Z) = p(f_X \,|\, f_Z)\, q(f_Z)$, the variational lower bound is greatly simplified,
For GP regression, the optimal $q(f_Z)$ is a Gaussian whose parameters can be obtained in closed form (Titsias, 2009). Even though the computation of $q(f_Z)$ scales linearly with $N$, its covariance matrix can be inverted in $O(M^3)$, which reduces the prediction cost drastically. The linear dependence on $N$ can be further reduced by the minibatch training proposed in Hensman et al. (2013), which makes the method scale well to large data. Sparse variational methods therefore scale well, and also perform reasonably with a sufficient number of properly chosen inducing points.
In practice, however, the number of inducing points must be kept small to make the matrix inversion feasible. This limits the flexibility of the posterior approximation, whose required complexity might grow with the complexity of the problem. Finding good inducing points is another challenging issue: the landscape of the lower bound with respect to $Z$ presents a difficult optimization problem, and, so far, there are no good methods for it.
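To make the contrast with the exact formula concrete, here is a sketch of the variational sparse predictive mean in the style of Titsias (2009): only an $M \times M$ matrix is inverted. The helper names are our own, and the sketch omits hyperparameter learning and the predictive covariance:

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=0.5):
    sq = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return np.exp(-0.5 * sq / lengthscale**2)

def sgpr_predict_mean(X, y, Z, Xs, sigma2=0.1):
    """Predictive mean of the variational sparse GP with inducing inputs Z.

    The only inverted matrix is M x M (M = len(Z)), so the cost is O(M^3)
    plus an O(N M^2) pass over the data.
    """
    Kzz = rbf_kernel(Z, Z) + 1e-8 * np.eye(len(Z))  # jitter for stability
    Kzx = rbf_kernel(Z, X)
    Sigma = np.linalg.inv(Kzz + Kzx @ Kzx.T / sigma2)
    return rbf_kernel(Xs, Z) @ Sigma @ Kzx @ y / sigma2
```

When Z coincides with the full training inputs, the bound is tight and this mean matches exact GP regression; with $M \ll N$ it is an approximation whose quality depends on where the inducing points sit, which is the optimization difficulty noted above.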
2.3 Function Space View and Inference Networks
The sparse variational method discussed above can be derived by using a function-space view of GPs. As discussed in several recent works (Cheng & Boots, 2016, 2017; Mallasto & Feragen, 2017), a GP has a dual representation in a separable Banach space, which contains the RKHS induced by the covariance kernel. This view motivates applying variational inference directly in the function space. Following Cheng & Boots (2016), if we denote the dual representations of the posterior and variational distributions by $p$ and $q$ respectively, then the variational objective in the function space can be written as follows:
We recover the sparse variational GP problem in (3) when we restrict $q = p(f_{\neq Z} \,|\, f_Z)\, q(f_Z)$, where $f_{\neq Z}$ represents the function outputs not covered by $Z$. The two determinants arise from the change of measure from the function to its outputs.
We can improve the flexibility of the approximation by employing better choices of $q$. Function approximators with random parameters, such as stochastic neural networks, are such alternatives: we can learn them to generate functions that mimic samples from the posterior, as illustrated in Fig. 1. Drawing an analogy to the networks used for inference in deep generative models (Kingma & Welling, 2013), we call them inference networks. When trained well, such networks can yield much more flexible posterior approximations than the sparse variational approximation, which is restricted by the choice of inducing points.
Unfortunately, training inference networks is much more challenging than training sparse methods. The sparse approach simplifies the problem to a parametric form where we only need to deal with a small number of inducing points and data examples. It is challenging to design a similar procedure without any sparsity assumption on $q$.
Existing approaches have mostly relied on heuristic procedures to solve this problem. For example, a recent approach of Sun et al. (2019) proposes to match the marginal distributions of $q$ and the posterior at a finite number of measurement points sampled from a distribution $c(x)$, by minimizing the KL divergence between the two marginal distributions over the function values at those points (we drop the dependence of these marginals on the training data and sampled locations for brevity). This is a reasonable criterion for matching two GPs, since they are completely determined by their first two moments, and the solution is unique under the conditions given in Sun et al. (2019). However, the objective is challenging to optimize due to the dependence of the posterior term on the whole dataset. Sun et al. (2019) propose to pick subsets of data at each iteration, but then the minimization problem no longer corresponds to matching $q$ and the posterior faithfully. Many other existing works on function-space inference with neural networks face similar challenges when it comes to minibatch training (Wang et al., 2019; Ma et al., 2018; Hafner et al., 2018). Scalable training of inference networks, while maintaining meaningful correlations in the function outputs for a large dataset, remains a challenging problem.
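The marginal-matching objectives above reduce to KL divergences between multivariate Gaussians, which have the standard closed form sketched below (a generic NumPy utility, not code from the cited works):

```python
import numpy as np

def gauss_kl(mu_q, cov_q, mu_p, cov_p):
    """KL(N(mu_q, cov_q) || N(mu_p, cov_p)) for d-dimensional Gaussians:

    0.5 * [ tr(cov_p^-1 cov_q) + (mu_p - mu_q)^T cov_p^-1 (mu_p - mu_q)
            - d + log|cov_p| - log|cov_q| ]
    """
    d = len(mu_q)
    P = np.linalg.inv(cov_p)
    diff = mu_p - mu_q
    _, ld_p = np.linalg.slogdet(cov_p)
    _, ld_q = np.linalg.slogdet(cov_q)
    return 0.5 * (np.trace(P @ cov_q) + diff @ P @ diff - d + ld_p - ld_q)
```

The divergence is zero only when the two marginals coincide, which is what makes it a usable matching criterion for two GPs evaluated at the same measurement points.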
3 Scalable Training of Inference Networks
We present a new algorithm to scalably train inference networks by using minibatches. Our main idea is to track a stochastic, functional mirror descent algorithm to enable efficient minibatch training. We start with a brief description of the mirror descent algorithm.
3.1 Stochastic Functional Mirror Descent Algorithm
The functional mirror descent algorithm is an extension of gradient descent where gradient steps are taken in a function space and the length of the steps is measured using a Bregman divergence (e.g., the KL divergence) instead of a Euclidean distance. A stochastic version of this algorithm is analogous to stochastic gradient descent, where a minibatch of data is used to build a stochastic approximation of the functional gradient. The method takes the following form:
where $t$ is the iteration, $\beta_t$ is the learning rate, $q_t$ is the previous approximation, and the stochastic term is an unbiased approximation of the functional gradient at $q_t$, obtained by randomly sampling a data example $n$. An attractive property is that there is a closed-form solution, given as follows:
$$q_{t+1}(f) \propto \big[p(y_n \,|\, f)^N\, p(f)\big]^{\beta_t}\; q_t(f)^{1-\beta_t}.$$
This update can be seen as an adaptive Bayesian filter: the previous posterior approximation is used to modify the prior distribution, and a likelihood of the subsampled data (raised to the power $N$ to account for subsampling) is used to update the posterior approximation. We note that functional mirror descent is equivalent to natural-gradient descent for exponential-family densities. In Appendix A we provide an alternative derivation of (6) by taking a natural-gradient step on the exponential-family representation of the GP.
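To see what the closed-form update does, the sketch below runs it on a conjugate one-dimensional Gaussian model, where each step can be carried out exactly in natural parameters. We use a single observation, so the iteration is deterministic and its fixed point is the exact posterior (the setup and variable names are our own illustration):

```python
import numpy as np

# One observation y with known noise; N(0, 1) prior on f.
y, noise_var = 0.7, 0.25

# Gaussian natural parameters: [mu / s2, -1 / (2 s2)].
lam_prior = np.array([0.0, -0.5])
lam_lik = np.array([y / noise_var, -0.5 / noise_var])
lam_star = lam_prior + lam_lik          # exact posterior, by conjugacy

# Mirror descent: q_{t+1} ∝ [p(y|f) p(f)]^beta * q_t^(1-beta), which in
# natural parameters is a convex combination converging to lam_star.
lam, beta = lam_prior.copy(), 0.3
for _ in range(100):
    lam = (1 - beta) * lam + beta * lam_star

post_var = -0.5 / lam[1]
post_mean = lam[0] * post_var
```

After 100 steps the iterate agrees with the exact posterior (mean 0.56, variance 0.2) to machine precision; with subsampled data the same recursion holds in expectation, which is what the convergence guarantee rests on.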
Each step of this algorithm only requires subsampling a single data point, which makes it attractive for our purposes. The algorithm is also guaranteed to converge to the true posterior, as discussed in Dai et al. (2016), where a particle-based approach is proposed for the update in (6). Khan & Lin (2017) used a parametric version where the update can be performed analytically. Our case is different from these works because the random variable is a function, which is infinite-dimensional and difficult to represent in a compact form. We instead use an inference network to implement this filter: we use a version of (6) to compute stochastic gradients and update the parameters of the inference network, as explained next.
3.2 Minibatch Training of Inference Networks
We propose a tractable approximation to (6) by bootstrapping from the inference network at each iteration. We denote the inference network by $q_\theta$, where $\theta$ is the set of parameters that we wish to estimate. We assume that we can evaluate the network at a finite set of points and that, just like a GP, the evaluated outputs follow a Gaussian distribution, i.e., $q_\theta(f_X) = \mathcal{N}(\mu_\theta(X), \Sigma_\theta(X))$, where $\mu_\theta$ and $\Sigma_\theta$ are the mean and covariance that depend on the parameters $\theta$. In section 4, we will give many examples of inference networks that have this property.
With such a $q_\theta$, we hope to track (6), so that the updated network moves closer to the true posterior process than the current one. For this purpose, an obvious solution is to replace $q_t$ in (6) by $q_{\theta_t}$, i.e., we can bootstrap from the current posterior approximation given by the inference network:
The idea of bootstrapping has long been used in particle filtering (Doucet et al., 2001) to obtain better posterior approximations. An attractive property of (7) for GP regression is that, given a finite set of inputs, all the quantities on the right-hand side follow Gaussian distributions; therefore the target is a Gaussian whose mean and covariance are available in closed form. The last two terms in (7) are Gaussian and can be multiplied to obtain a new GP prior with a new mean and covariance. Using this in (7) and multiplying by the Gaussian likelihood, we obtain a new GP regression problem whose closed-form solution, similar to (2), gives the target distribution.
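The Gaussian algebra behind this bootstrapped target can be sketched in precision (natural-parameter) form. The function below computes the target moments at the observed inputs; with $\beta = 1$ and the full dataset it collapses to the exact GP posterior, which gives a convenient sanity check (the function names and the `scale` argument, which plays the role of the likelihood power, are our own):

```python
import numpy as np

def rbf(X1, X2, l=1.0):
    sq = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return np.exp(-0.5 * sq / l**2)

def bootstrapped_target(mu_q, K_q, K_p, y, scale, beta, sigma2):
    """Moments of the target ∝ [p(y|f)^scale p(f)]^beta q(f)^(1-beta) at the
    observed inputs, for a zero-mean GP prior and a Gaussian likelihood.

    Everything is Gaussian, so natural parameters (precisions) just add.
    """
    n = len(y)
    P_q = np.linalg.inv(K_q + 1e-8 * np.eye(n))
    P_p = np.linalg.inv(K_p + 1e-8 * np.eye(n))
    # Tempered "new prior": geometric mixture of q and the GP prior.
    P = (1 - beta) * P_q + beta * P_p
    eta = (1 - beta) * (P_q @ mu_q)
    # The scaled Gaussian likelihood enters as a diagonal precision term.
    P_post = P + beta * scale / sigma2 * np.eye(n)
    eta_post = eta + beta * scale / sigma2 * y
    K_new = np.linalg.inv(P_post)
    return K_new @ eta_post, K_new
```

Setting $\beta = 1$ discards the bootstrap term and reproduces standard GP regression, which is the limiting case the tracking argument builds on.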
Any finite marginal of the target can be read from this solution directly. However, mapping these marginals to the inference-network parameters is difficult. We use the approach of Sun et al. (2019) to match the marginals of $q_\theta$ and the target at a finite number of measurement points sampled from a distribution $c(x)$, i.e., we update $\theta$ by taking a gradient step on the KL divergence between the two marginals, scaled by a learning rate.
When the likelihood is non-Gaussian, the target in (7) does not have a closed-form expression. We propose to upper bound the KL divergence between the marginals with the KL divergence between the two joint distributions; minimizing this upper bound is equivalent to maximizing a tractable variational objective.
Our method is similar in spirit to RL methods such as temporal-difference learning with function approximators (Sutton & Barto, 1998), which also employ stochastic gradients to bootstrap from an existing value-function approximation. Their success indicates that taking a gradient step with a small step size here might ensure good performance in practice.
We name our algorithm Gaussian Process Inference Networks (GPNet), which is summarized in Algorithm 1. For each iteration of our algorithm, the stochastic mirror descent update is computed by subsampling a datapoint from the training set, then the inference network is trained to track the update at a set of measurement locations sampled from . Though we have described the algorithm using a single data example, it is straightforward to extend it to minibatches.
The computation cost in the case of GP regression is the cost of matrix inversion, which is cubic in the number of measurement points. Note that, unlike the number of inducing points in sparse methods, this number does not have to be large for flexible inference: the inference network can be a complex model containing neural networks, which can be very flexible. Our procedure essentially uses the measurement locations only to compute stochastic gradients for updating the parameters of the network.
Choice of $c(x)$:
Previous works on function-space inference (Sun et al., 2019; Wang et al., 2019; Hafner et al., 2018) have studied ways to sample the measurement points. The general approach is to apply uniform sampling in the input domain for low-dimensional problems, while for high-dimensional input spaces we can sample "near" the training data by adding noise to the inputs. A useful trick for RBF kernels is to set $c(x)$ to the training distribution convolved with the kernel. In applications where the input region of the test points is known, we can set $c(x)$ to include it.
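For the high-dimensional case, the "sample near the training data" heuristic is just a few lines (a sketch; the noise scale is a tunable assumption, not a value from the paper):

```python
import numpy as np

def sample_measurement_points(X_batch, n_points, noise_std=0.1, rng=None):
    """Draw measurement locations by jittering randomly chosen minibatch
    inputs with Gaussian noise -- the "near the training data" heuristic."""
    rng = rng or np.random.default_rng()
    idx = rng.integers(0, len(X_batch), size=n_points)
    noise = noise_std * rng.normal(size=(n_points, X_batch.shape[1]))
    return X_batch[idx] + noise
```

The sampled locations are then where the network marginals are matched against the bootstrapped target at each iteration.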
We set the learning rates $\beta_t$ to decay over iterations so that the original stochastic mirror descent converges. We can update GP hyperparameters, when needed, in an online fashion using a minibatch lower bound of the log marginal likelihood, similar to sparse variational methods. In our experiments the learning rate for GP hyperparameters is the same as that for the network parameters.
4 Examples of Inference Networks for GPs
So far we have used to denote the inference networks, without discussing how to construct them. Below we explore several types of networks that can be used in GPNet.
4.1 Bayesian Neural Networks (BNN)
A well-known fact in the community is that the function defined by a single-layer fully-connected neural network (NN) with infinitely many hidden units and independent weight randomness is equivalent to a GP (Neal, 1995). Recently, this result has been extended to deep NNs (Lee et al., 2018; Matthews et al., 2018; Garriga-Alonso et al., 2019; Novak et al., 2019). This equivalence has motivated the use of BNNs as inference networks to model distributions over functions (Sun et al., 2019; Wang et al., 2019). Given a neural network $f(x; w)$, where $w$ denotes the network weights, a BNN is constructed by introducing weight randomness: $w \sim q(w)$. Typically $q(w)$ is a factorized or matrix-variate Gaussian that factorizes across layers. In this case the inference-network parameters $\theta$ are the parameters of $q(w)$.
However, this approach has several problems. First, the output density of a BNN is intractable. In Flam-Shepherd et al. (2017) and Wang et al. (2019) this difficulty is addressed by approximating the output distribution as a Gaussian and estimating its moments from samples, but drawing samples is costly due to the many forward passes through the NN, and the covariance estimate exhibits large variance for the sample sizes we can typically afford. In Sun et al. (2019) the situation is improved by directly estimating the gradients instead of the moments, with a low-variance but biased gradient estimator (Shi et al., 2018); however, the sample size still needs to be in the hundreds. Moreover, because the stochastic process defined by finite-width BNNs may not be a GP, it is unclear whether matching finite marginal distributions would suffice to match the two processes.
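To illustrate the cost of the sampling-based route, here is the moment-estimation approach for a one-hidden-layer tanh BNN with standard-normal weights (our own minimal example; every sample requires a full forward pass, and the covariance estimate is noisy at affordable sample sizes):

```python
import numpy as np

def bnn_output_moments(X, n_samples=500, hidden=50, rng=None):
    """Monte-Carlo estimate of the output mean and covariance of a BNN
    f(x; w) = w2^T tanh(W1^T x) with factorized standard-normal weights."""
    rng = rng or np.random.default_rng(0)
    fs = []
    for _ in range(n_samples):
        W1 = rng.normal(size=(X.shape[1], hidden)) / np.sqrt(X.shape[1])
        w2 = rng.normal(size=hidden) / np.sqrt(hidden)
        fs.append(np.tanh(X @ W1) @ w2)   # one sampled function, one pass
    fs = np.stack(fs)                      # (n_samples, n_points)
    return fs.mean(axis=0), np.cov(fs.T)
```

Even for this tiny network, hundreds of forward passes go into a single moment estimate, which is the overhead the tractable variants below avoid.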
4.2 Tractable Variants
Given the problems faced with BNNs, we explore two more types of inference networks that are equivalent to GPs. Both approaches arise naturally from the feature-space representation of GPs. It is known that for Bayesian linear regression with input features $\phi(x)$ and Gaussian weights $w \sim \mathcal{N}(0, I)$, the output distribution is equivalent to a GP with the covariance $k(x, x') = \phi(x)^\top \phi(x')$ (Rasmussen & Williams, 2006). In general, any positive-definite kernel can be written as the inner product of two feature maps. As long as we know the feature map $\phi$ that corresponds to the kernel, we can interpret our GP latent function as a parametric model: $f(x) = \phi(x)^\top w$, with $w \sim \mathcal{N}(0, I)$. We can define the variational process in a similar form, with a learnable feature map and weight distribution, where $\theta$ denotes the parameters of the inference network.
Random Feature Expansion (RFE)
For GPs with stationary kernels (i.e., kernels that depend only on the difference between inputs, $k(x, x') = k(x - x')$), Bochner's theorem guarantees that the covariance function can be written as a Fourier transform:
$$k(x - x') = \int s(\omega)\, e^{i \omega^\top (x - x')}\, d\omega,$$
where $s(\omega)$ is a spectral density in one-to-one correspondence with $k$. Random Fourier features (Rahimi & Recht, 2008) are an approximation to kernel methods which gives explicit feature maps. The key observation is that eq. 14 can be approximated by Monte Carlo with frequencies $\omega_1, \dots, \omega_M$ sampled from $s(\omega)$, where the imaginary part is zero. Defining $\phi(x) = \frac{1}{\sqrt{M}}[\cos(\omega_1^\top x), \dots, \cos(\omega_M^\top x), \sin(\omega_1^\top x), \dots, \sin(\omega_M^\top x)]^\top$, we can use it as the approximate feature map: $k(x, x') \approx \phi(x)^\top \phi(x')$. When using random Fourier features, the inference network is a neural network with one hidden layer. The activation functions for the hidden layer are $\cos$ and $\sin$, and the frequencies and weights serve as the input-to-hidden and the hidden-to-output weights, respectively. This architecture is called Random Feature Expansion in Cutajar et al. (2016), where a multi-layer stack is used to mimic a deep GP prior, though inference is still performed in the weight space. As done in their work, we relax the frequencies to be trainable, so that the inference-network parameters include both the frequencies and the parameters of the weight distribution.
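A quick numerical check of this construction, assuming an RBF kernel with unit lengthscale (whose spectral density is a Gaussian over frequencies): the inner product of the cosine/sine features approximates the exact kernel, with error shrinking as the number of features grows (variable names are our own):

```python
import numpy as np

def rfe_features(X, Omega):
    # phi(x) = M^{-1/2} [cos(omega_i^T x), sin(omega_i^T x)]_{i=1..M}
    H = X @ Omega                        # (n, M) matrix of omega^T x values
    M = Omega.shape[1]
    return np.concatenate([np.cos(H), np.sin(H)], axis=1) / np.sqrt(M)

rng = np.random.default_rng(0)
lengthscale, M = 1.0, 2000
# RBF spectral density: omega ~ N(0, lengthscale^{-2}).
Omega = rng.normal(size=(1, M)) / lengthscale

X = np.linspace(-2, 2, 10)[:, None]
Phi = rfe_features(X, Omega)
K_approx = Phi @ Phi.T                   # Monte-Carlo kernel estimate
K_exact = np.exp(-0.5 * (X - X.T)**2 / lengthscale**2)
```

Since $\cos a \cos b + \sin a \sin b = \cos(a - b)$, each entry of `K_approx` is an average of $\cos(\omega^\top (x - x'))$ over sampled frequencies, an unbiased estimate of the Bochner integral; the diagonal is exactly one.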
Deep Neural Networks
It is not always easy to explicitly write the inner-product form of a given kernel outside the stationary case discussed above. We may then need a black-box approach, e.g., to parameterize $\phi(x)$ with a function approximator such as a neural network with general nonlinearities (e.g., ReLU) and fit it during inference. In Snoek et al. (2015) a similar architecture, called adaptive basis regression networks, was proposed for priors in Bayesian optimization. Similar to their observations, we have found that this type of network requires significant effort to tune. Inspired by the Fisher kernel (Jaakkola & Haussler, 1999), we also considered an alternative feature map: the vector measuring how much information is stored in each weight parameter, given by the gradients of the network outputs with respect to the weights, $\phi(x) = \nabla_w f(x; w)$. Plugging this into the inner product, we get a Neural Tangent Kernel (Jacot-Guillarmod et al., 2018): $k(x, x') = \nabla_w f(x; w)^\top \nabla_w f(x'; w)$. The performance of an inference network parameterized in this way is similar to a BNN, because the NTK can be interpreted as introducing weight randomness on a first-order expansion of the neural network: if the weight perturbation is Gaussian, the expansion is equivalent to a GP whose mean function is as flexible as a deep NN, while the kernel utilizes the gradients as features. In some of our experiments we found it faster to converge than the adaptive-basis networks, but the drawback is a much higher computational cost due to backpropagation through Jacobians.
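For a one-hidden-layer network the NTK feature map can be assembled by hand from the chain rule; the resulting Gram matrix is symmetric and positive semi-definite by construction (a sketch with our own names; in practice the Jacobian would come from automatic differentiation, which is where the extra cost arises):

```python
import numpy as np

def ntk_features(x, W, v):
    # Gradient of f(x; w) = v^T tanh(W x) w.r.t. all weights, flattened.
    h = np.tanh(W @ x)
    grad_v = h                               # df/dv
    grad_W = np.outer(v * (1.0 - h**2), x)   # df/dW via the chain rule
    return np.concatenate([grad_v, grad_W.ravel()])

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 2)) / np.sqrt(2)
v = rng.normal(size=8) / np.sqrt(8)

X = rng.normal(size=(6, 2))
G = np.stack([ntk_features(x, W, v) for x in X])  # gradient feature matrix
K_ntk = G @ G.T                                   # k(x, x') = g(x)^T g(x')
```

Because the kernel is an explicit Gram matrix of gradient features, it inherits positive semi-definiteness for free, but each evaluation carries the cost of one Jacobian per input.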
5 Experiments
Throughout all experiments, $M$ denotes both the number of inducing points in SVGP and the number of measurement points in GPNet and FBNN (Sun et al., 2019). Implementations are based on a customized version of GPflow (de G. Matthews et al., 2017; Sun et al., 2018) and ZhuSuan (Shi et al., 2017). Code is available at https://github.com/thjashin/gp-infer-net.
5.1 Synthetic Data
We consider the inference of a GP with an RBF kernel on the synthetic dataset introduced in Snelson & Ghahramani (2006). We analyze the properties of our method and compare with SVGP and FBNN (Sun et al., 2019). We fit these algorithms with minibatch size 20 on 100 data points. We ran for 40K iterations and used learning rate 0.003 for all methods. For fair comparison, for all three methods we pretrain the prior hyperparameters for 100 iterations using the GP marginal likelihood and keep them fixed thereafter. We vary $M$ over several values for all methods. The networks used in GPNet and FBNN are the same RFE with 20 hidden units.
Results are plotted in Fig. 2. We can see that the performance of SVGP improves with more inducing points, and for the largest $M$ both SVGP and GPNet recover the exact GP prediction. GPNet fits the data better when $M$ is small. This is because $M$ does not constrain the capacity of the inference network, though it does affect the convergence speed, i.e., smaller $M$ causes larger variance during training. In Fig. 1(b) we take a closer look at the predictions of GPNet and FBNN near the training data. We can see that FBNN consistently overestimates the uncertainty. This effect is well explained by their heuristic way of minibatching: in each iteration they fit a different objective to match the local effect of the 20 training points in a minibatch, while our stochastic mirror descent maintains a shared global objective that takes all observations into consideration.
Figure 3: Test log likelihood. All results are drawn as violin plots, which show a kernel density of the results on different splits; for PBP we only have the mean and standard error, so an error bar is drawn instead.
We evaluate our method on seven standard regression benchmark datasets. We use RFE networks with 1000 hidden units for this task. Following the settings of Salimbeni & Deisenroth (2017), we compare to strong baselines: SVGP with 100 and 500 inducing points. We also compare to FBNN using the same inference network as in GPNet (though the algorithm of Sun et al. (2019) is designed for BNNs, it also applies to other types of inference networks). To put the comparison into a wider context, we include the results of probabilistic backpropagation (PBP) (Hernández-Lobato & Adams, 2015), which is an effective weight-space inference method for BNNs. Details of the datasets and experiment settings can be found in appendix B. Results are summarized in Fig. 3. We can see that GPNet has comparable or smaller RMSE than SVGP on most datasets, and the performance gap is often large when comparing the two at the same $M$. This demonstrates the effectiveness of inference networks relative to inducing points at a similar computational complexity. The regression results on the small datasets, including Boston, Concrete and Energy, show that overfitting is not observed even with our powerful networks. Note that these three datasets only contain 1-2 minibatches of data, so the performance of FBNN and GPNet is comparable because minibatch training is not an issue; on larger datasets such as Kin8nm, Power and Wine, GPNet consistently outperforms FBNN. We find that on Protein GPNet is slower to converge than the other methods, and increasing the number of training iterations gives far better performance.
To demonstrate the advantage of our minibatch algorithm on large-scale datasets, we conducted experiments on the airline delay dataset, which includes 5.9 million flight records in the USA from Jan to Apr in 2018. Following the protocol in Hensman et al. (2013), we randomly take 700K points for training and 100K for testing. The results are shown in Table 1. Experiment details can be found in appendix B. We can see that GPNet achieves the best RMSE among the three methods and comparable test log likelihoods with SVGP. The RMSE gap between the two settings of $M$ is larger for SVGP than for GPNet, which again demonstrates that the power of our inference network is not limited by $M$. Interestingly, larger $M$ seems to cause underfitting in FBNN and leads to worse RMSE, which may also be due to the minibatch issue.
Finally, we demonstrate the flexibility of GPNet by fitting a deep convolutional inference network for a CNN-GP (Garriga-Alonso et al., 2019), whose covariance kernel is derived from an infinite-width Bayesian ConvNet (see section B.3 for a detailed derivation of the GP prior).
Table 2: Test error rates on MNIST and CIFAR10.
|Method||MNIST error||CIFAR10 error|
|SVGP, RBF-ARD (Krauth et al., 2016)||1.55%||-|
|Conv GP (van der Wilk et al., 2017)||1.22%||35.4%|
|SVGP, CNN-GP (Garriga-Alonso et al., 2019)||2.4%||-|
|NN-GP (Lee et al., 2018)||1.21%||44.34%|
|CNN-GP (Garriga-Alonso et al., 2019)||0.96%||-|
|ResNet-GP (Garriga-Alonso et al., 2019)||0.84%||-|
|CNN-GP (Novak et al., 2019)||0.88%||32.86%|
Previously, inference for GPs with such kernels has only been investigated through exact prediction (Garriga-Alonso et al., 2019; Novak et al., 2019), where the classification problem is treated as regression so that eq. 2 applies. Though this has been done for datasets like MNIST and CIFAR10 in recent works, the $O(N^3)$ complexity is impractical for the method to be widely adopted. A scalable option would be sparse approximations, and we tried SVGP for this prior. However, we found training to be unstable if we update the inducing-point locations: initializing them with data or with K-means centers both result in numerical errors that prevent the method from learning. There are no other reported results using SVGP for such kernels except in Garriga-Alonso et al. (2019), where the inducing-point locations are also fixed (1000 training data points were used). We believe this is due to the difficulty of finding good inducing-point locations in such a high-dimensional input space of images.
We test GPNet on MNIST and CIFAR10 with a CNN-GP prior. The ConvNet that defines this prior has 6 residual blocks (details in appendix B). It would be natural to use the original ConvNet with trainable weight randomness, i.e., a BNN, as the inference network for this CNN-GP. However, as discussed in section 4, using a BNN results in intractable output distributions that require much effort to address. To avoid this, we use a deterministic ConvNet with an NTK on top of it, defined using the fully-connected layers. This enables flexible covariance modeling while still allowing efficient training. With the non-conjugate form of our algorithm, we are free to use a softmax likelihood, which is more suitable for classification tasks.
Results are compared to recent works in table 2. The first half of the table contains approximate inference approaches with classification likelihoods, while the second half does exact prediction by GP regression. By comparing to carefully designed inducing-point approaches such as Conv GP (van der Wilk et al., 2017), we can clearly see the advantage of our method: it easily scales up GP inference to highly structured kernels by using flexible inference networks that match those structures, while achieving superior performance.
6 Conclusion
We propose an algorithm to scalably train a stochastic inference network to approximate the GP posterior distribution. In the algorithm, the inference network is trained by tracking a stochastic functional mirror descent update, which is cheap to compute from the current approximation using a minibatch of data. Experiments show that our algorithm fixes the minibatch issue of previous works on function-space inference. Empirical comparisons to sparse variational GP methods show that our method is a more flexible alternative for GP inference.
Acknowledgements
We thank Ziyu Wang, Shengyang Sun, and Ching-An Cheng for helpful discussions, and Hugh Salimbeni for help with the experiments. JS was supported by a Microsoft Research Asia Fellowship. This work was supported by the National Key Research and Development Program of China (No. 2017YFA0700904), NSFC Projects (Nos. 61620106010, 61621136008, 61571261), Beijing NSF Project (No. L172037), DITD Program JCKY2017204B064, Tiangong Institute for Intelligent Computing, NVIDIA NVAIL Program, and the projects from Siemens and Intel.
- Balan et al. (2015) Balan, A. K., Rathod, V., Murphy, K. P., and Welling, M. Bayesian dark knowledge. In Advances in Neural Information Processing Systems, pp. 3438–3446, 2015.
- Bauer et al. (2016) Bauer, M., van der Wilk, M., and Rasmussen, C. E. Understanding probabilistic sparse Gaussian process approximations. In Advances in Neural Information Processing Systems, pp. 1533–1541, 2016.
- Bernardo et al. (1998) Bernardo, J., Berger, J., Dawid, A., Smith, A., et al. Regression and classification using Gaussian process priors. Bayesian statistics, 6:475, 1998.
- Bui et al. (2017) Bui, T. D., Nguyen, C., and Turner, R. E. Streaming sparse Gaussian process approximations. In Advances in Neural Information Processing Systems, pp. 3299–3307, 2017.
- Cheng & Boots (2016) Cheng, C.-A. and Boots, B. Incremental variational sparse Gaussian process regression. In Advances in Neural Information Processing Systems, pp. 4410–4418, 2016.
- Cheng & Boots (2017) Cheng, C.-A. and Boots, B. Variational inference for Gaussian process models with linear complexity. In Advances in Neural Information Processing Systems, pp. 5184–5194, 2017.
- Cutajar et al. (2016) Cutajar, K., Bonilla, E. V., Michiardi, P., and Filippone, M. Random feature expansions for deep Gaussian processes. arXiv preprint arXiv:1610.04386, 2016.
- Dai et al. (2016) Dai, B., He, N., Dai, H., and Song, L. Provable Bayesian inference via particle mirror descent. In Artificial Intelligence and Statistics, pp. 985–994, 2016.
- Damianou & Lawrence (2013) Damianou, A. and Lawrence, N. Deep Gaussian processes. In Artificial Intelligence and Statistics, pp. 207–215, 2013.
- Damianou et al. (2016) Damianou, A. C., Titsias, M. K., and Lawrence, N. D. Variational inference for latent variables and uncertain inputs in Gaussian processes. Journal of Machine Learning Research, 17(42):1–62, 2016.
- de G. Matthews et al. (2017) de G. Matthews, A. G., van der Wilk, M., Nickson, T., Fujii, K., Boukouvalas, A., León-Villagrá, P., Ghahramani, Z., and Hensman, J. GPflow: A Gaussian process library using TensorFlow. Journal of Machine Learning Research, 18(40):1–6, 2017.
- Doucet et al. (2001) Doucet, A., De Freitas, N., and Gordon, N. An introduction to sequential monte carlo methods. In Sequential Monte Carlo methods in practice, pp. 3–14. Springer, 2001.
- Eldredge (2016) Eldredge, N. Analysis and probability on infinite-dimensional spaces. arXiv preprint arXiv:1607.03591, 2016.
- Flam-Shepherd et al. (2017) Flam-Shepherd, D., Requeima, J., and Duvenaud, D. Mapping Gaussian process priors to Bayesian neural networks. NIPS Bayesian Deep Learning Workshop, 2017.
- Gal & Turner (2015) Gal, Y. and Turner, R. Improving the Gaussian process sparse spectrum approximation by representing uncertainty in frequency inputs. In International Conference on Machine Learning, pp. 655–664, 2015.
- Garnelo et al. (2018a) Garnelo, M., Rosenbaum, D., Maddison, C., Ramalho, T., Saxton, D., Shanahan, M., Teh, Y. W., Rezende, D., and Eslami, S. A. Conditional neural processes. In International Conference on Machine Learning, pp. 1690–1699, 2018a.
- Garnelo et al. (2018b) Garnelo, M., Schwarz, J., Rosenbaum, D., Viola, F., Rezende, D. J., Eslami, S., and Teh, Y. W. Neural processes. arXiv preprint arXiv:1807.01622, 2018b.
- Garriga-Alonso et al. (2019) Garriga-Alonso, A., Rasmussen, C. E., and Aitchison, L. Deep convolutional networks as shallow Gaussian processes. In International Conference on Learning Representations, 2019.
- Hafner et al. (2018) Hafner, D., Tran, D., Irpan, A., Lillicrap, T., and Davidson, J. Reliable uncertainty estimates in deep neural networks using noise contrastive priors. arXiv preprint arXiv:1807.09289, 2018.
- Hensman et al. (2013) Hensman, J., Fusi, N., and Lawrence, N. D. Gaussian processes for big data. arXiv preprint arXiv:1309.6835, 2013.
- Hensman et al. (2018) Hensman, J., Durrande, N., and Solin, A. Variational Fourier features for Gaussian processes. Journal of Machine Learning Research, 18(151):1–52, 2018.
- Hernández-Lobato & Adams (2015) Hernández-Lobato, J. M. and Adams, R. Probabilistic backpropagation for scalable learning of Bayesian neural networks. In International Conference on Machine Learning, pp. 1861–1869, 2015.
- Jaakkola & Haussler (1999) Jaakkola, T. and Haussler, D. Exploiting generative models in discriminative classifiers. In Advances in Neural Information Processing Systems, pp. 487–493, 1999.
- Jacot-Guillarmod et al. (2018) Jacot-Guillarmod, A., Gabriel, F., and Hongler, C. Neural tangent kernel: Convergence and generalization in neural networks. In Advances in Neural Information Processing Systems, pp. 8580–8589. 2018.
- Kanagawa et al. (2018) Kanagawa, M., Hennig, P., Sejdinovic, D., and Sriperumbudur, B. K. Gaussian processes and kernel methods: A review on connections and equivalences. arXiv preprint arXiv:1807.02582, 2018.
- Khan & Lin (2017) Khan, M. and Lin, W. Conjugate-computation variational inference: Converting variational inference in non-conjugate models to inferences in conjugate models. In Artificial Intelligence and Statistics, pp. 878–887, 2017.
- Kingma & Welling (2013) Kingma, D. P. and Welling, M. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
- Krauth et al. (2016) Krauth, K., Bonilla, E. V., Cutajar, K., and Filippone, M. AutoGP: Exploring the capabilities and limitations of Gaussian process models. arXiv preprint arXiv:1610.05392, 2016.
- Lawrence (2005) Lawrence, N. Probabilistic non-linear principal component analysis with Gaussian process latent variable models. Journal of Machine Learning Research, 6(Nov):1783–1816, 2005.
- Lázaro-Gredilla et al. (2010) Lázaro-Gredilla, M., Quiñonero-Candela, J., Rasmussen, C. E., and Figueiras-Vidal, A. R. Sparse spectrum Gaussian process regression. Journal of Machine Learning Research, 11:1865–1881, 2010.
- Lee et al. (2018) Lee, J., Sohl-dickstein, J., Pennington, J., Novak, R., Schoenholz, S., and Bahri, Y. Deep neural networks as Gaussian processes. In International Conference on Learning Representations, 2018.
- Ma et al. (2018) Ma, C., Li, Y., and Hernández-Lobato, J. M. Variational implicit processes. arXiv preprint arXiv:1806.02390, 2018.
- MacKay (1996) MacKay, D. J. Bayesian non-linear modeling for the prediction competition. In Maximum Entropy and Bayesian Methods, pp. 221–234. Springer, 1996.
- Mallasto & Feragen (2017) Mallasto, A. and Feragen, A. Learning from uncertain curves: The 2-wasserstein metric for Gaussian processes. In Advances in Neural Information Processing Systems, pp. 5660–5670, 2017.
- Matthews et al. (2018) Matthews, A. G. d. G., Rowland, M., Hron, J., Turner, R. E., and Ghahramani, Z. Gaussian process behaviour in wide deep neural networks. arXiv preprint arXiv:1804.11271, 2018.
- Neal (1995) Neal, R. M. Bayesian Learning for Neural Networks. PhD thesis, University of Toronto, 1995.
- Novak et al. (2019) Novak, R., Xiao, L., Bahri, Y., Lee, J., Yang, G., Abolafia, D. A., Pennington, J., and Sohl-dickstein, J. Bayesian deep convolutional networks with many channels are Gaussian processes. In International Conference on Learning Representations, 2019.
- Quiñonero-Candela & Rasmussen (2005) Quiñonero-Candela, J. and Rasmussen, C. E. A unifying view of sparse approximate Gaussian process regression. Journal of Machine Learning Research, 6(Dec):1939–1959, 2005.
- Rahimi & Recht (2008) Rahimi, A. and Recht, B. Random features for large-scale kernel machines. In Advances in Neural Information Processing Systems, pp. 1177–1184, 2008.
- Raskutti & Mukherjee (2015) Raskutti, G. and Mukherjee, S. The information geometry of mirror descent. IEEE Transactions on Information Theory, 61(3):1451–1457, 2015.
- Rasmussen & Williams (2006) Rasmussen, C. E. and Williams, C. K. Gaussian processes for machine learning. MIT Press, 2006.
- Salimbeni & Deisenroth (2017) Salimbeni, H. and Deisenroth, M. Doubly stochastic variational inference for deep Gaussian processes. In Advances in Neural Information Processing Systems, pp. 4588–4599, 2017.
- Shi et al. (2017) Shi, J., Chen, J., Zhu, J., Sun, S., Luo, Y., Gu, Y., and Zhou, Y. ZhuSuan: A library for Bayesian deep learning. arXiv preprint arXiv:1709.05870, 2017.
- Shi et al. (2018) Shi, J., Sun, S., and Zhu, J. A spectral approach to gradient estimation for implicit distributions. In International Conference on Machine Learning, pp. 4651–4660, 2018.
- Snelson & Ghahramani (2006) Snelson, E. and Ghahramani, Z. Sparse Gaussian processes using pseudo-inputs. In Advances in Neural Information Processing Systems, pp. 1257–1264, 2006.
- Snoek et al. (2015) Snoek, J., Rippel, O., Swersky, K., Kiros, R., Satish, N., Sundaram, N., Patwary, M., Prabhat, M., and Adams, R. Scalable Bayesian optimization using deep neural networks. In International Conference on Machine Learning, pp. 2171–2180, 2015.
- Sun et al. (2018) Sun, S., Zhang, G., Wang, C., Zeng, W., Li, J., and Grosse, R. Differentiable compositional kernel learning for Gaussian processes. In International Conference on Machine Learning, pp. 4835–4844, 2018.
- Sun et al. (2019) Sun, S., Zhang, G., Shi, J., and Grosse, R. Functional variational Bayesian neural networks. In International Conference on Learning Representations, 2019.
- Sutton & Barto (1998) Sutton, R. S. and Barto, A. G. Reinforcement learning - an introduction. MIT Press, 1998.
- Titsias (2009) Titsias, M. Variational learning of inducing variables in sparse Gaussian processes. In Artificial Intelligence and Statistics, pp. 567–574, 2009.
- van der Wilk et al. (2017) van der Wilk, M., Rasmussen, C. E., and Hensman, J. Convolutional Gaussian processes. In Advances in Neural Information Processing Systems, pp. 2849–2858, 2017.
- Wainwright et al. (2008) Wainwright, M. J., Jordan, M. I., et al. Graphical models, exponential families, and variational inference. Foundations and Trends® in Machine Learning, 1(1–2):1–305, 2008.
- Wang et al. (2019) Wang, Z., Ren, T., Zhu, J., and Zhang, B. Function space particle optimization for Bayesian neural networks. In International Conference on Learning Representations, 2019.
- Williams & Barber (1998) Williams, C. K. and Barber, D. Bayesian classification with Gaussian processes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(12):1342–1351, 1998.
- Williams & Rasmussen (1996) Williams, C. K. and Rasmussen, C. E. Gaussian processes for regression. In Advances in Neural Information Processing Systems, pp. 514–520, 1996.
- Wilson et al. (2016) Wilson, A. G., Hu, Z., Salakhutdinov, R., and Xing, E. P. Deep kernel learning. In Artificial Intelligence and Statistics, pp. 370–378, 2016.
Appendix A Derivation using the Dual Representation of GP and Natural Gradient
We note that the stochastic mirror descent is equivalent to natural gradient updates for exponential family densities (Raskutti & Mukherjee, 2015; Khan & Lin, 2017). To show this, we derive the same adaptive Bayesian filter in eq. 6 using the dual representation of GP as a Gaussian measure and natural gradient.
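For reference, the functional mirror descent step with the KL divergence as the Bregman divergence can be sketched as follows (a sketch in our notation, with $\mathcal{L}$ the variational objective and $\beta_t$ the step size; the paper's exact symbols may differ):

```latex
q_{t+1} = \operatorname*{arg\,min}_{q}\;
  \Big\langle \frac{\delta \mathcal{L}}{\delta q}\Big|_{q_t},\, q \Big\rangle
  + \frac{1}{\beta_t}\,\mathrm{KL}(q \,\|\, q_t)
\quad\Longrightarrow\quad
q_{t+1}(f) \;\propto\; q_t(f)\,
  \exp\!\Big(-\beta_t\, \frac{\delta \mathcal{L}}{\delta q}\Big|_{q_t}(f)\Big).
```

The multiplicative form on the right is what makes the update cheap to track from the current approximation.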
A Gaussian process has a dual representation as a Gaussian measure on a separable Banach space (Cheng & Boots, 2017; Mallasto & Feragen, 2017). There is an RKHS that corresponds to a positive definite kernel densely embedded in . The measure is constructed as follows. First define a canonical Gaussian cylinder set measure on , denoted as , where is the mean function, is a bounded positive semi-definite linear operator. They satisfy
where denotes inner product in the RKHS: . Let be the inclusion map from into . Then the measure is induced by using this map. In the measure theory of infinite-dimensional space, is known as the abstract Wiener measure. The RKHS is sometimes called the Cameron-Martin space.
The intuition for this construction is that the canonical Gaussian cylinder set measure is not a proper measure. In fact, we can show that countable additivity does not hold for this "measure" (Eldredge, 2016).
The inclusion map here radonifies into a true measure .
One way to think about this is that functions drawn from the Gaussian process fall outside of with probability one (Kanagawa et al., 2018), but are contained in . Despite this, as pointed out in Cheng & Boots (2017), we can conveniently work with the canonical form and get correct results as long as the conclusion is independent of the dimension of .
It is easy to check that the canonical Gaussian measure which corresponds to the GP prior is . Assuming the canonical form of the variational distribution is , we have stochastic approximation of the lower bound as:
Now that we can interpret GPs in the form of canonical Gaussian measures, we can then write and in the exponential family form to simplify the derivation:
where denotes the sufficient statistics, is the partition function. The natural parameters of and are and , respectively. Let denote the mean parameters of : . There is a dual relationship between the natural parameter and the mean parameter : . The mapping is one-to-one when the exponential family is minimal (Wainwright et al., 2008). The stochastic natural gradient of the lower bound with respect to is defined as
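As a concrete instance of this duality, for a finite-dimensional Gaussian $q = \mathcal{N}(m, \Sigma)$ written in minimal exponential family form (our notation, added for illustration), the sufficient statistics, natural parameters, and mean parameters are:

```latex
T(f) = \big(f,\; ff^{\top}\big), \qquad
\lambda = \Big(\Sigma^{-1} m,\; -\tfrac{1}{2}\Sigma^{-1}\Big), \qquad
\mu = \nabla_{\lambda} A(\lambda) = \big(m,\; \Sigma + mm^{\top}\big),
```

so the map $\mu = \nabla_{\lambda} A(\lambda)$ is indeed one-to-one here, as the minimality condition requires.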
where is the Fisher information matrix. The natural gradient of the KL divergence term is
where we have used the fact that . We can also derive a simplified form of the natural gradient of the conditional log likelihood term, by writing it as the gradient with respect to the mean parameter :
So the natural gradient update can be written as
Reinterpreting the above equation in the density space, we have
The likelihood is said to be conjugate with the prior if it takes the form . For example, in GP regression, the likelihood is , which has the above form with the natural parameter . By plugging in , we can verify that eq. 15 is equivalent to
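Up to notation, such a natural-gradient step corresponds in density space to an exponentially-weighted Bayesian filter. A sketch (our symbols: step size $\beta_t$, data size $N$, minibatch $\mathcal{B}_t$ of size $M$; the paper's exact equation may differ):

```latex
q_{t+1}(f) \;\propto\;
  q_t(f)^{\,1-\beta_t}\,
  \Big[\, p(f) \prod_{i \in \mathcal{B}_t} p(y_i \mid f)^{N/M} \Big]^{\beta_t},
```

i.e., the new posterior approximation geometrically interpolates between the current one and the minibatch-reweighted posterior.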
which turns out to be the same adaptive Bayesian filter we get in eq. 6. As for the non-conjugate case, we can view the natural gradient update as the projection of the functional mirror descent update onto exponential families, by approximating the likelihood term with the exponential family .
Appendix B Experiment Details and Additional Results
b.1 Synthetic Data
The full figures including FBNN with are shown in Fig. 4. We can see that FBNN's problem of overestimating uncertainty is consistent across different values of . We set in this experiment.
For all regression experiments, we sample the measurement points from the empirical distribution of the training data convolved with the prior RBF kernel if the prior hyperparameters are initialized by optimizing the GP marginal likelihood on a random subset; otherwise we simply set it to be the training data distribution. We fix and do not adapt it together with the prior kernel parameters when the prior hyperparameters are updated during training.
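A hedged sketch of the measurement-point sampler just described: we read "empirical distribution convolved with the prior RBF kernel" as jittering minibatch inputs with Gaussian noise whose per-dimension scale is the prior lengthscale; the paper's exact sampler may differ.

```python
import numpy as np

def sample_measurement_points(x_batch, lengthscales, rng):
    """Jitter a minibatch of inputs with RBF-lengthscale Gaussian noise.

    This is one reading of sampling from the empirical data distribution
    convolved with an ARD RBF kernel: x_m = x + eps, eps ~ N(0, diag(ell^2)).
    """
    noise = rng.normal(size=x_batch.shape) * lengthscales
    return x_batch + noise

rng = np.random.default_rng(0)
x_batch = rng.normal(size=(32, 4))
ell = np.array([0.5, 1.0, 2.0, 0.1])   # hypothetical per-dimension lengthscales
x_m = sample_measurement_points(x_batch, ell, rng)
```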
The GP we use has an RBF kernel with dimension-wise lengthscales, also known as Automatic Relevance Determination (ARD) (MacKay, 1996). The RFE inference network we use has 1000 features (hidden units). We initialize the lengthscales in the network using the lengthscales of the prior kernel, and initialize the frequencies (first-layer weights) with random samples from a standard Gaussian. We then train these frequencies and lengthscales as inference network parameters. For FBNN, we keep all settings (including the inference network) the same as GPNet except the training objective used. We use 20 random splits for each dataset, where we keep 90% of the dataset as the training set and use the remaining 10% for testing. The inputs and outputs of all data points are scaled to have nearly zero mean and unit variance using the mean and standard deviation calculated from the training data. We use minibatch size 500 and learning rate for all experiments. With each random seed we run 10K iterations. For Boston, Concrete, Kin8nm and Protein, we initialize the prior hyperparameters by maximizing the GP marginal likelihood for 1K iterations on a randomly chosen subset of 1000 points. For Power and Wine, we optimize the prior hyperparameters during training using the minibatch lower bound and the same learning rate as , as described in section 3.3. We set and , except for Power and Protein, where we use and , respectively. In Table 3 and Table 4 we list the means and standard errors of all experiments.
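A minimal random-feature-expansion (RFE) feature map in the spirit of Rahimi & Recht (2008), with per-dimension lengthscales and trainable frequencies as described above; the parameterization is our sketch, not necessarily the paper's exact network.

```python
import numpy as np

def rfe_features(x, omega, b, lengthscales):
    """Random Fourier features for an ARD RBF kernel.

    phi(x) = sqrt(2/D) * cos((x / ell) @ omega + b), with omega ~ N(0, I)
    and b ~ U[0, 2*pi]; then phi(x) @ phi(y) approximates
    exp(-0.5 * ||(x - y) / ell||^2) for large D.
    """
    D = omega.shape[1]
    proj = (x / lengthscales) @ omega + b
    return np.sqrt(2.0 / D) * np.cos(proj)

rng = np.random.default_rng(0)
n, d, D = 4, 3, 1000                      # D matches the 1000 features above
x = rng.normal(size=(n, d))
omega = rng.normal(size=(d, D))           # frequencies: standard Gaussian init
b = rng.uniform(0.0, 2.0 * np.pi, size=D)
ell = np.ones(d)                          # lengthscales, trainable in practice
phi = rfe_features(x, omega, b, ell)
```

In the actual experiments, `omega` and `ell` would be optimized as inference-network parameters rather than held fixed.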
We use the same type of GP prior and inference networks as in benchmark datasets above. We set . For all methods we train for 10K iterations with minibatch size 500 and learning rate . For GPNet we optimize the prior hyperparameters during training with the same learning rate as , for which the objective is described in section 3.3, while we found that doing the same with FBNN seriously hurt its performance (leading to RMSE 27.186 for ), therefore we did not update hyperparameters during training for FBNN. We initialize the prior hyperparameters by maximizing GP marginal likelihood for 1K iterations on a randomly chosen subset of 1000 points. We did this for both GPNet and FBNN, though we found that for GPNet this does not improve the performance.
| Data set | N | D | SVGP, 100 | GPNet, 100 | SVGP, 500 | GPNet, 500 | FBNN, 500 |

| Data set | N | D | SVGP, 100 | GPNet, 100 | SVGP, 500 | GPNet, 500 | FBNN, 500 |
The prior is defined as follows. Let denote the pre-activation output of the -th layer of the ConvNet. The shape of is . Each row of it represents the flattened feature map in a channel. A hidden layer in the network makes the transformation:
where is the pseudo weight matrix that corresponds to the convolutional filter . The elements of each row in are zero except where applies. denotes the bias in the -th channel. is the ReLU activation function. Let denote the positions within a filter; independent Gaussian priors are placed over and to form a Bayesian ConvNet:
By carefully taking the limit of hidden-layer widths, one can prove that each row in forms a multivariate Gaussian, and different rows (channels) are independent and identically distributed (i.i.d.), thus showing that the Bayesian ConvNet defines a GP (Garriga-Alonso et al., 2019). It is easy to show that the prior mean function is zero. To determine the prior covariance kernel of the output, we can follow a recursive procedure (for simplicity, we use to denote the covariance between and ):
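For the ReLU activation, the per-layer covariance propagation in such recursions takes the arc-cosine form (in the style of Cho & Saul; the notation is ours, and the paper's exact recursion additionally sums over filter positions):

```latex
\theta = \arccos\!\Bigg(
  \frac{K^{(l)}(x, x')}{\sqrt{K^{(l)}(x, x)\, K^{(l)}(x', x')}} \Bigg),
\qquad
\mathbb{E}\big[\phi(u)\,\phi(v)\big] =
  \frac{\sqrt{K^{(l)}(x, x)\, K^{(l)}(x', x')}}{2\pi}
  \big( \sin\theta + (\pi - \theta)\cos\theta \big),
```

where $(u, v)$ are jointly Gaussian pre-activations with covariance $K^{(l)}$ and $\phi$ is the ReLU; the next-layer covariance is then an affine combination of these expectations with the weight and bias prior variances.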
The prior ConvNet we use is a deep convolutional neural network with 6 residual blocks, with every two of them operating on a different feature-map size; the first two operate on feature maps with the same size as the original image. There are strided convolutions (stride 2) between the three groups.
The inference network we use has the same structure as the prior ConvNet, except that the numbers of convolutional filters are [64, 64, 128, 128, 256, 256]. On top of it we have a fully-connected layer of size 512 and neural tangent kernels defined by an MLP with 100 hidden units for the output of each class.
We use batch size 64 and measurement points in this experiment. We set to be the empirical distribution of the training data. In implementation, this simply means that we use two different shuffles of the training dataset and pick a minibatch from each of them. Then we use one of the two minibatches as training points and the other as measurement points. The learning rate is . We set , and run for 10K iterations. We did not update prior hyperparameters in this experiment.
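The two-shuffle scheme above can be sketched as follows (a minimal stdlib version; real training code would iterate over arrays or data loaders rather than Python lists):

```python
import random

def two_shuffle_batches(dataset, batch_size, seed=0):
    """Yield (training minibatch, measurement minibatch) pairs.

    Keeps two independently shuffled copies of the dataset and draws a
    minibatch from each, so measurement points follow the empirical
    training distribution but are decoupled from the training minibatch.
    """
    rng = random.Random(seed)
    train_copy, measure_copy = list(dataset), list(dataset)
    rng.shuffle(train_copy)
    rng.shuffle(measure_copy)
    for i in range(0, len(dataset) - batch_size + 1, batch_size):
        yield train_copy[i:i + batch_size], measure_copy[i:i + batch_size]

batches = list(two_shuffle_batches(list(range(10)), batch_size=3))
```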