Tractable evidence lower bound for GP-classification
In the previous section we described the svi method. It is based on stochastic optimization of the lower bound (11) on the marginal likelihood; the computation of the bound itself is described in (Hensman et al., 2015). However, the bound depends on the variational parameters, whose number grows quadratically with the number of inducing points, which makes the optimization problem hard to solve when a large number of inducing points is needed.
For GP-regression the situation is similar. The paper (Hensman et al., 2013) describes a method analogous to the svi method for classification; the only difference is that the lower bound becomes tractable in the case of regression. The paper (Titsias, 2009) deals with the large number of parameters in this setting in the following way. In the case of regression, the lower bound (11) can be analytically optimised with respect to the variational parameters. Doing so and substituting the optimal values back into the lower bound, one obtains a new lower bound on the marginal likelihood that depends solely on the kernel hyper-parameters. This simplifies the optimization problem by dramatically reducing the number of optimization parameters. Unfortunately, the new bound no longer has the form of a sum over data points, so stochastic optimization methods are not applicable to it. However, in our experiments we have found that even for fairly large datasets the method of (Titsias, 2009) outperforms that of (Hensman et al., 2013) despite the lack of stochastic optimization.
In the following subsection we devise an approach similar to the method of (Titsias, 2009) for the case of classification. We provide a tractable evidence lower bound and analytically maximize it with respect to the variational parameters. Substituting the optimal values of these parameters back into the lower bound, we obtain a new lower bound that depends only on the kernel hyper-parameters.
Global evidence lower bound
In order to derive a tractable lower bound for (11), we seek a quadratic approximation to the log-logistic function $\log\sigma(t)$, where $\sigma(t) = (1 + e^{-t})^{-1}$. The paper (Jaakkola and Jordan, 1997) provides a global parametric quadratic lower bound for this function:
$$\log\sigma(t) \ge \log\sigma(\xi) + \frac{t - \xi}{2} - \lambda(\xi)\,(t^2 - \xi^2),$$
where $\lambda(\xi) = \tanh(\xi/2) / (4\xi)$ and $\xi$ is a parameter of the bound. The bound is tight when $t = \pm\xi$.
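As a sanity check, the bound and its tightness points can be verified numerically. This is a small sketch of ours, not code from the paper; the function names and the test grid are illustrative.

```python
import numpy as np

def log_sigmoid(t):
    # numerically stable log of the logistic function
    return -np.logaddexp(0.0, -t)

def lam(xi):
    # lambda(xi) = tanh(xi / 2) / (4 xi)
    return np.tanh(xi / 2.0) / (4.0 * xi)

def jj_bound(t, xi):
    # Jaakkola-Jordan quadratic lower bound on log sigma(t) with parameter xi
    return log_sigmoid(xi) + (t - xi) / 2.0 - lam(xi) * (t ** 2 - xi ** 2)

t = np.linspace(-8.0, 8.0, 401)
xi = 2.5
gap = log_sigmoid(t) - jj_bound(t, xi)
print(gap.min() >= -1e-12)                                # holds everywhere
print(abs(jj_bound(xi, xi) - log_sigmoid(xi)) < 1e-12)    # tight at t = xi
print(abs(jj_bound(-xi, xi) - log_sigmoid(-xi)) < 1e-12)  # tight at t = -xi
```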
Substituting this bound into (11), with a separate value $\xi_i$ for every data point, we obtain a tractable lower bound
Differentiating this bound with respect to the variational parameters and setting the derivatives to zero, we obtain
Substituting the optimal values of the variational parameters back into the lower bound and omitting the terms that depend on neither the kernel hyper-parameters nor $\xi$, we obtain a compact lower bound
In the following we consider three different methods for maximizing this lower bound.
Note that, given the values of the other variables, we can maximize the bound with respect to $\xi$ analytically. The optimal values of $\xi$ are given by
The values appearing in these formulas were defined in (10). In our first method we use the analytical formulas (15) to recompute the values of $\xi$ and use gradient-based optimization to maximize the bound with respect to the kernel hyper-parameters. The pseudocode is given in Alg. 1. We will refer to this method as vi-JJ.
JJ stands for Jaakkola and Jordan, the authors of the bound (Jaakkola and Jordan, 1997). Note that the computational complexity of one iteration of this method is the same as for the svi method.
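The idea behind the alternating scheme, closed-form $\xi$ updates interleaved with updates of the Gaussian posterior, can be illustrated in the simplest non-sparse setting: Jaakkola-Jordan variational Bayesian logistic regression with a standard normal prior on the weights. This is a sketch of ours, not the paper's Alg. 1; the closed-form updates for the sparse GP case are those in (15).

```python
import numpy as np

def lam(xi):
    # lambda(xi) = tanh(xi / 2) / (4 xi), as in the quadratic bound
    return np.tanh(xi / 2.0) / (4.0 * xi)

def jj_logistic(X, y, n_iter=30):
    # Alternate closed-form updates of q(w) = N(mu, Sigma) and of xi.
    n, d = X.shape
    xi = np.ones(n)
    for _ in range(n_iter):
        # Gaussian posterior given the current xi (prior N(0, I))
        Sigma = np.linalg.inv(np.eye(d) + 2.0 * (X.T * lam(xi)) @ X)
        mu = Sigma @ (X.T @ (y - 0.5))
        # closed-form xi update: xi_i^2 = x_i^T (Sigma + mu mu^T) x_i
        xi = np.sqrt(np.einsum('ij,jk,ik->i', X, Sigma + np.outer(mu, mu), X))
    return mu, Sigma

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
X = rng.normal(size=(500, 2))
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)
mu, _ = jj_logistic(X, y)
acc = float(np.mean(((X @ mu) > 0) == (y > 0.5)))
print(acc)
```

No gradient steps are needed here because, with no kernel hyper-parameters to fit, every update is analytic; in vi-JJ the same $\xi$ update is interleaved with gradient steps on the hyper-parameters.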
Our second method uses gradient-based optimization to maximize the bound with respect to both the kernel hyper-parameters and $\xi$. Note that in this method we do not have to recompute the variational parameters at each iteration, which empirically makes the iterations faster when the number of inducing points is large. We refer to this method as vi-JJ-full.
vi-JJ-hybrid is a combination of the two methods described above. The general scheme of this method is the same as for vi-JJ. In the vi-JJ-hybrid method we use the analytical formulas to recompute $\xi$ at the first stage, as we do in the vi-JJ method, but at the second stage we use gradient-based optimization with respect to both the kernel hyper-parameters and $\xi$. The virtues of this method will be described in the experiments section.
Tractable local approximation to the evidence lower bound
Another way to obtain a tractable approximation to the lower bound (11) is to use a local quadratic approximation of the log-logistic function $\log\sigma(t)$. In this case we perform a second-order Taylor expansion of this function at points $\xi_i$:
$$\log\sigma(t) \approx \log\sigma(\xi_i) + (1 - \sigma(\xi_i))(t - \xi_i) - \frac{1}{2}\,\sigma(\xi_i)(1 - \sigma(\xi_i))(t - \xi_i)^2.$$
Here is a diagonal matrix
Differentiating the approximate bound with respect to the variational parameters and $\xi$, and setting the derivatives to zero, we obtain the following formulas for the optimal values of these parameters:
and is a vector, composed of
For maximizing the approximate lower bound (19) we consider a method analogous to vi-JJ. In order to specify this method, we simply use the bound (19) instead of the previous one at the second stage of Alg. 1. We will refer to this method as vi-Taylor. The computational complexity of one iteration of this method is once again the same as for the svi method.
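The local expansion underlying vi-Taylor is easy to check numerically. The sketch below (ours; the expansion point and offsets are illustrative) confirms that the second-order Taylor approximation is accurate near the expansion point but, unlike the bound of the previous subsection, it is local and does not lower-bound the log-logistic function everywhere.

```python
import numpy as np

def log_sigmoid(t):
    # numerically stable log of the logistic function
    return -np.logaddexp(0.0, -t)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def taylor2(t, xi):
    # second-order Taylor expansion of log sigma(t) around xi, using
    # d/dt log sigma = 1 - sigma and d^2/dt^2 log sigma = -sigma (1 - sigma)
    s = sigmoid(xi)
    return log_sigmoid(xi) + (1.0 - s) * (t - xi) \
        - 0.5 * s * (1.0 - s) * (t - xi) ** 2

xi = 0.7
err = abs(log_sigmoid(xi + 0.01) - taylor2(xi + 0.01, xi))
print(err < 1e-6)                           # accurate near the expansion point
print(taylor2(0.2, xi) > log_sigmoid(0.2))  # but not a global lower bound
```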
In this section we empirically compare the derived vi-JJ, vi-Taylor, vi-JJ-full, and vi-JJ-hybrid methods with svi. Below we describe the setting of the experiments and discuss their results.
In our experiments we compared the following methods for variational inducing point GP-classification, each described in the corresponding section above: vi-JJ, vi-Taylor, vi-JJ-full, and vi-JJ-hybrid.
We also tried using the deterministic L-BFGS-B optimization method for maximizing the evidence lower bound (11), but it worked substantially worse than all the other methods. Note that all the methods have the same per-epoch computational complexity. Table 1 shows which variables are optimized numerically and which are optimized analytically for each method.
In our experiments we did not optimize the lower bound with respect to the positions of the inducing points. Instead, we ran the k-means clustering procedure with the number of clusters equal to the number of inducing inputs and took the cluster centres as the inducing inputs. We used the squared exponential covariance function (see section Model adaptation) with a Gaussian noise term in all experiments.
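This initialization can be sketched as follows. The sketch is self-contained and ours: a plain k-means loop stands in for whatever clustering implementation was actually used, and the kernel parameters are illustrative.

```python
import numpy as np

def se_kernel(A, B, variance=1.0, lengthscale=1.0):
    # squared exponential covariance: k(a, b) = s2 exp(-||a - b||^2 / (2 l^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def kmeans_centres(X, m, n_iter=20, seed=0):
    # plain k-means; the resulting centres serve as the inducing inputs
    rng = np.random.default_rng(seed)
    Z = X[rng.choice(len(X), size=m, replace=False)]
    for _ in range(n_iter):
        labels = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(m):
            if np.any(labels == j):
                Z[j] = X[labels == j].mean(0)
    return Z

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
Z = kmeans_centres(X, 10)
K_mm = se_kernel(Z, Z) + 1e-6 * np.eye(10)  # jitter keeps K_mm positive definite
print(Z.shape)
```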
The stochastic svi-AdaDelta method requires the user to manually specify the learning rate and the batch size for the optimization method. For the former, we had to run the method with different learning rates and choose the value that resulted in the fastest convergence. We used learning rates from a fixed grid. In every case the method diverged for the largest value in the grid, while for the smallest value it converged more slowly than for some intermediate value, verifying that the optimal learning rate lay inside the range. To choose the batch size we used the following convention: for the small german and svmguide datasets we used a small fixed batch size, and for the other datasets we chose the batch size proportionally to the size of the training set.
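The selection protocol can be illustrated with a toy example: run the optimizer with each candidate rate for a fixed budget and keep the one with the best final objective. Everything here is an illustrative stand-in of ours, not the experimental setup (in particular, plain gradient descent on a quadratic replaces AdaDelta, and the grid values are invented).

```python
import numpy as np

def loss(w):
    with np.errstate(over='ignore'):
        return float(np.sum(w ** 2))

def run_gd(lr, n_steps=400):
    # plain gradient descent on a toy quadratic; grad of sum(w^2) is 2 w
    w = np.ones(3)
    for _ in range(n_steps):
        w = w - lr * 2.0 * w
        if not np.isfinite(loss(w)):
            return np.inf              # diverged, as the largest rates do
    return loss(w)

grid = [1e-3, 1e-2, 1e-1, 2.0]         # illustrative grid of candidate rates
finals = [run_gd(lr) for lr in grid]
best_lr = grid[int(np.argmin(finals))]
print(best_lr)
```

As in the experiments, the largest rate diverges and an intermediate rate wins.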
For the vi-JJ, vi-Taylor, and vi-JJ-hybrid methods, in all of the experiments, on every iteration we recomputed the values of the variational parameters and $\xi$ a fixed number of times (the corresponding number of inner iterations in Alg. 1). To tune the kernel hyper-parameters, on every iteration we ran the L-BFGS-B optimization method, constrained to a limited number of evaluations of the lower bound and its gradient. We found that these parameter values work well for all the datasets we experimented with.
For the svi-AdaDelta method we used optimization with respect to the Cholesky factor of the covariance matrix in order to maintain its positive definiteness, as described in (Hensman et al., 2015). We used the AdaDelta implementation from the climin toolbox, as is done in the original paper.
For every dataset we experimented with the number of inducing points to verify that the results of the methods are close to optimal.
We evaluate the methods by plotting the accuracy of their predictions on the test data against time. All of the plots have titles of the same format.
We also preprocessed all the datasets by normalizing the features, setting the mean of every feature to zero and the variance to one. For datasets without available test data, we held out part of the data as a test set and used the rest for training.
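The preprocessing step amounts to standardizing each feature with training-set statistics; a minimal sketch of ours (the 20% hold-out fraction is illustrative, since the actual split is not specified above):

```python
import numpy as np

def standardize(X_train, X_test):
    # zero mean, unit variance per feature, using training statistics only
    mu = X_train.mean(axis=0)
    sd = X_train.std(axis=0)
    sd[sd == 0.0] = 1.0                # guard against constant features
    return (X_train - mu) / sd, (X_test - mu) / sd

rng = np.random.default_rng(0)
X = rng.normal(3.0, 5.0, size=(100, 4))
n_test = len(X) * 2 // 10              # hypothetical 20% hold-out
Xtr, Xte = standardize(X[n_test:], X[:n_test])
print(np.allclose(Xtr.mean(0), 0.0), np.allclose(Xtr.std(0), 1.0))
```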
Results and discussion
We compared the methods’ performance on seven datasets. Here we discuss the results.
Fig. 4 provides the results for the german and svmguide datasets. As we can see, on the small german dataset the stochastic svi-AdaDelta method struggles, and it takes longer to achieve the optimal quality than all the other methods, which show similar results. On the svmguide dataset it also takes svi-AdaDelta a little longer to converge, while the other three methods show roughly the same performance.
The results on the magic telescope and ijcnn datasets are provided in fig. 5. On the magic telescope dataset the vi-JJ and vi-Taylor methods show poor quality in the first iterations, but still manage to converge faster than svi-AdaDelta. On both datasets the vi-JJ-hybrid method performs similarly to vi-Taylor, but shows better quality in the first iterations on the magic telescope data. vi-JJ-full fails to converge to a reasonable quality on both datasets.
Fig. 6 provides the results on the big cod-rna and skin_nonskin datasets. On these datasets vi-JJ-full once again fails to achieve a reasonable quality, while the other methods perform similarly.
Finally, the results on the a8a data are provided in fig. 7. Here we use a rather large number of inducing inputs. As we can see, the vi-JJ-full and vi-JJ-hybrid methods are the fastest to achieve the optimal quality. The svi-AdaDelta method also converges reasonably fast, while vi-JJ and vi-Taylor struggle a little.
Overall, the proposed vi methods perform similarly to the svi-AdaDelta method. On the big skin_nonskin dataset, which has only a few features, vi-JJ-hybrid is a little slower than the stochastic svi-AdaDelta, but on all the other datasets it is better. vi-JJ and vi-Taylor struggle with a8a, but are otherwise comparable to vi-JJ-hybrid. The stochastic svi-AdaDelta method performs poorly on small datasets, and even on the big skin_nonskin data it does not manage to substantially outperform the other methods, even provided a good value of the learning rate. Finally, vi-JJ-full works well on small data and on a8a, but on all the other datasets it does not manage to achieve a reasonable quality.
In this paper we presented a new approach to training variational inducing input Gaussian process classification. We derived two new tractable evidence lower bounds and described several ways to maximize them. The resulting vi-JJ and vi-Taylor methods are similar to the method of (Titsias, 2009) for GP-regression.
We provided an experimental comparison of our methods with the current state-of-the-art svi-AdaDelta method of (Hensman et al., 2015). In our experimental setting, our approach proved to be more practical: it converges to the optimal quality as fast as the svi-AdaDelta method without requiring the user to manually choose the parameters of the optimization method.
The four described vi methods showed similar performance, and it is hard to distinguish between them. However, note that the vi-Taylor approach is more general and can be applied to likelihood functions that are not logistic. We could also easily derive a method similar to vi-JJ-full for the non-logistic case, but this is out of the scope of this paper.
-  J. Hensman, N. Fusi, and N. D. Lawrence. 2013. Gaussian processes for big data. Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence, pp. 282–290.
-  J. Hensman, G. Matthews, and Z. Ghahramani. 2015. Scalable Variational Gaussian Process Classification. Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics.
-  T. Jaakkola. 2000. Tutorial on Variational Approximation Methods. Advanced Mean Field Methods: Theory and Practice, pp. 129–159.
-  T. Jaakkola and M. Jordan. 1997. A Variational Approach to Bayesian Logistic Regression Models and Their Extensions. Proceedings of the 1997 Conference on Artificial Intelligence and Statistics.
-  C. E. Rasmussen and C. K. I. Williams. 2006. Gaussian Processes for Machine Learning. MIT Press, Cambridge, MA.
-  M. Titsias. 2009. Variational learning of inducing variables in sparse Gaussian processes. Proceedings of the Twelfth International Workshop on Artificial Intelligence and Statistics, vol. 5, pp. 567–574.
-  Climin. URL: http://github.com/BRML/climin.
-  A. Smola and P. Bartlett. 2001. Sparse greedy Gaussian process regression. Neural Information Processing Systems.
-  J. Quinonero-Candela and C. Rasmussen. 2005. A unifying view of sparse approximate Gaussian process regression. Journal of Machine Learning Research, vol. 6, pp. 1939–1959.
-  L. Csato and M. Opper. 2002. Sparse online Gaussian processes. Neural Computation, vol. 14, pp. 641–668.