1 Tractable evidence lower bound for GP classification
In the previous section we described the svi method. It is based on stochastic optimization of the lower bound (11) for the marginal likelihood. However, the bound depends on the variational parameters $\mu$ and $\Sigma$, which makes the optimization problem hard to solve when a large number of inducing points is needed.
For GP regression the situation is similar. Paper [1] describes a method analogous to the svi method for classification; the only difference is that the lower bound becomes tractable in the case of regression. The paper [6] then addresses the problem of the large number of parameters in the algorithm of [1] in the following way. In the case of regression the lower bound (11) can be analytically optimized with respect to the variational parameters $\mu$ and $\Sigma$. Doing so and substituting the optimal values back into the lower bound, one can obtain a new lower bound on the marginal likelihood that depends solely on the kernel hyperparameters $\theta$. This simplifies the optimization problem by dramatically reducing the number of optimization parameters. Unfortunately, this new bound does not have the form of a sum over objects, hence stochastic optimization methods are no longer applicable. However, in our experiments we found that even for fairly big datasets the method from [6] outperforms [1] despite the lack of stochastic optimization.
In the following subsection we devise an approach similar to the method of [6] for the case of classification. We provide a tractable evidence lower bound and analytically maximize it with respect to the variational parameters $\mu$ and $\Sigma$. Substituting the optimal values of these parameters back into the lower bound, we obtain a new lower bound that depends only on the kernel hyperparameters $\theta$.
Global evidence lower bound
In order to derive a tractable lower bound for (11), we seek a quadratic approximation to the log-logistic function $f(x) = \log \sigma(x)$, where $\sigma(x) = 1 / (1 + e^{-x})$. The paper [4] provides a global parametric quadratic lower bound for this function:
$$\log \sigma(x) \ge \log \sigma(\xi) + \frac{x - \xi}{2} - \lambda(\xi)\left(x^2 - \xi^2\right),$$
where $\lambda(\xi) = \frac{\tanh(\xi / 2)}{4 \xi}$ and $\xi$ is a parameter of the bound. This bound is tight when $x = \pm \xi$.
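The Jaakkola–Jordan bound can be checked numerically. The following is a minimal sketch, assuming the standard form $\log \sigma(x) \ge \log \sigma(\xi) + (x - \xi)/2 - \lambda(\xi)(x^2 - \xi^2)$ with $\lambda(\xi) = \tanh(\xi/2)/(4\xi)$ from [4]; it confirms that the quadratic expression never exceeds $\log \sigma(x)$ and that it is tight at $x = \pm \xi$:

```python
import numpy as np

def log_sigmoid(x):
    # numerically stable log sigma(x) = -log(1 + exp(-x))
    return -np.logaddexp(0.0, -x)

def lam(xi):
    # lambda(xi) = tanh(xi / 2) / (4 xi), with the continuous limit 1/8 at xi = 0
    xi = np.asarray(xi, dtype=float)
    out = np.full_like(xi, 0.125)
    nz = np.abs(xi) > 1e-8
    out[nz] = np.tanh(xi[nz] / 2.0) / (4.0 * xi[nz])
    return out

def jj_bound(x, xi):
    # Jaakkola-Jordan global quadratic lower bound on log sigma(x)
    return log_sigmoid(xi) + (x - xi) / 2.0 - lam(xi) * (x ** 2 - xi ** 2)

rng = np.random.default_rng(0)
x = rng.normal(size=1000) * 5
xi = rng.normal(size=1000) * 5
gap = log_sigmoid(x) - jj_bound(x, xi)
print(gap.min() >= -1e-9)                               # the bound holds everywhere
print(np.allclose(log_sigmoid(xi), jj_bound(xi, xi)))   # tight at x = xi
print(np.allclose(log_sigmoid(-xi), jj_bound(-xi, xi))) # tight at x = -xi
```

The tightness at $x = \pm \xi$ is what makes the per-point parameters $\xi_i$ below worth optimizing: each one can be placed where its data point needs the bound to be exact.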
Substituting this bound into (11) with a separate value $\xi_i$ for every data point, we obtain a tractable lower bound
where
Differentiating with respect to $\mu$ and $\Sigma$ and setting the derivatives to zero, we obtain
(12) 
(13) 
Substituting the optimal values of the variational parameters back into the lower bound and omitting the terms that do not depend on $\theta$ and $\xi$, we obtain a compact lower bound
(14) 
where
In the following we consider three different methods for maximizing this lower bound.
Note that, given the values of the other parameters, we can maximize the bound with respect to $\xi$ analytically. The optimal values of $\xi_i$ are given by
(15) 
The quantities appearing here were defined in (10). In our first method we use the analytical formulas (15) to recompute the values of $\xi$ and use gradient-based optimization to maximize the bound with respect to $\theta$. The pseudocode is given in Alg. 1. We will refer to this method as viJJ, where JJ stands for Jaakkola and Jordan, the authors of [4]. Note that the computational complexity of one iteration of this method is the same as for the svi method.
Our second method uses gradient-based optimization to maximize the bound with respect to both $\theta$ and $\xi$. Note that in this method we do not have to recompute the variational parameters at each iteration, which makes the method's iterations empirically faster when the number of inducing points is large. We refer to this method as viJJfull.
Finally, viJJhybrid is a combination of the two methods described above. The general scheme of this method is the same as for viJJ: we use the analytical formulas to recompute $\xi$ at the first stage, as in the viJJ method, but at the second stage we use gradient-based optimization with respect to both $\theta$ and $\xi$. The virtues of this method are described in the experiments section.
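The difference between the alternating scheme of viJJ and the joint optimization of viJJfull can be illustrated on a toy problem. The sketch below is purely illustrative: the quadratic objective, its closed-form update and the iteration counts are hypothetical stand-ins for the bound (14), formula (15) and Alg. 1.

```python
import numpy as np
from scipy.optimize import minimize

# toy stand-in objective: minimize (theta - 1)^2 + (xi - theta)^2,
# where xi has the closed-form optimum xi* = theta (hypothetical analogue of (15))
def neg_bound(theta, xi):
    return (theta - 1.0) ** 2 + (xi - theta) ** 2

def optimal_xi(theta):
    return theta  # closed-form update, analogue of the analytical formulas

# viJJ-style alternation: analytical xi recompute + gradient steps over theta only
theta, xi = 5.0, 0.0
for _ in range(20):
    xi = optimal_xi(theta)                         # stage 1: analytical recompute
    res = minimize(lambda t: neg_bound(t[0], xi), [theta],
                   method="L-BFGS-B", options={"maxfun": 5})
    theta = res.x[0]                               # stage 2: limited gradient steps

print(abs(theta - 1.0) < 1e-2)  # converged to the optimum theta = 1

# viJJfull-style joint optimization: one gradient-based run over (theta, xi)
res = minimize(lambda p: neg_bound(p[0], p[1]), [5.0, 0.0], method="L-BFGS-B")
print(abs(res.x[0] - 1.0) < 1e-6)
```

Both routes reach the same optimum on this toy; the trade-off discussed in the experiments section is which one gets there faster on the actual bound.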
Tractable local approximation to the evidence lower bound
Another way to obtain a tractable approximation to the lower bound (11) is to use a local quadratic approximation of the log-logistic function $\log \sigma(x)$. Namely, we perform a second-order Taylor expansion of this function at points $\xi_i$:
$$\log \sigma(x) \approx \log \sigma(\xi_i) + \sigma(-\xi_i)(x - \xi_i) - \frac{\sigma(\xi_i)\,\sigma(-\xi_i)}{2}(x - \xi_i)^2. \qquad (16)$$
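The expansion can be checked numerically. A small sketch, assuming the standard derivatives $(\log \sigma)'(x) = \sigma(-x)$ and $(\log \sigma)''(x) = -\sigma(x)\sigma(-x)$:

```python
import numpy as np

def log_sigmoid(x):
    return -np.logaddexp(0.0, -x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def taylor2(x, xi):
    # second-order Taylor expansion of log sigma(x) around xi
    return (log_sigmoid(xi) + sigmoid(-xi) * (x - xi)
            - 0.5 * sigmoid(xi) * sigmoid(-xi) * (x - xi) ** 2)

xi = 0.7
errs = [abs(log_sigmoid(xi + h) - taylor2(xi + h, xi)) for h in (0.1, 0.01)]
print(errs[0] > errs[1])  # the error is third order in the step size

# unlike the bound of [4], the expansion is not a global lower bound:
grid = np.linspace(-5.0, 5.0, 1001)
print(bool((taylor2(grid, xi) > log_sigmoid(grid)).any()))
```

The second check foreshadows the remark below: the Taylor approximation can exceed $\log \sigma(x)$ away from $\xi_i$, so the resulting bound is only a local approximation to (11).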
The following derivation is analogous to the derivation in the previous section. Substituting the approximation (16) into the lower bound (11), we obtain
Here the diagonal matrix has entries determined by the curvature coefficients $\sigma(\xi_i)\sigma(-\xi_i)$ of the expansion (16).
Differentiating the approximate bound with respect to $\mu$, $\Sigma$ and $\xi$, and setting the derivatives to zero, we obtain the following formulas for the optimal values of these parameters:
(17) 
(18) 
Here
and is a vector composed of
Substituting the optimal values of $\mu$ and $\Sigma$, given by (17) and (18), back into the approximate bound, and omitting the terms that do not depend on $\theta$ and $\xi$, we obtain the following approximate lower bound:
(19) 
where
Note that the lower bound (19) is not a global lower bound for the log-evidence. However, locally it provides a good approximation to the evidence lower bound (11).
For maximizing the approximate lower bound (19) we consider a method analogous to viJJ. To specify this method we simply substitute the new bound at the second stage of Alg. 1. We will refer to this method as viTaylor. The computational complexity of one iteration of this method is once again the same as for the other methods.
2 Experiments
In this section we empirically compare the derived viJJ, viTaylor, viJJfull and viJJhybrid methods with svi. Below we describe the setting of the experiments and discuss their results.
Experimental setting
In our experiments we compared the following methods for variational inducing point GP classification:

viJJ, viJJfull and viJJhybrid, described in the section "Global evidence lower bound";
viTaylor, described in the section "Tractable local approximation to the evidence lower bound".
We also tried using the deterministic L-BFGS-B optimization method for maximizing the evidence lower bound (11), but it worked substantially worse than all the other methods. Note that all the methods have the same per-epoch computational complexity. Table 1 shows which variables are optimized numerically and which are optimized analytically for each method.

Method | Optimized numerically | Optimized analytically
sviAdaDelta | $\mu$, $\Sigma$, $\theta$ | —
viJJ, viTaylor | $\theta$ | $\mu$, $\Sigma$, $\xi$
viJJhybrid | $\theta$, $\xi$ | $\mu$, $\Sigma$, $\xi$
viJJfull | $\theta$, $\xi$ | $\mu$, $\Sigma$
In our experiments we did not optimize the lower bound with respect to the positions of the inducing points. Instead we used the k-means clustering procedure with k equal to the number of inducing inputs and took the cluster centres as the inducing point positions. We also used the squared exponential covariance function (see section Model adaptation) with a Gaussian noise term in all experiments.
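This initialization can be sketched as follows. The plain Lloyd's-iteration k-means and the kernel hyperparameter values are illustrative stand-ins for the actual experimental configuration:

```python
import numpy as np

def kmeans_centres(X, k, iters=25, seed=0):
    # plain Lloyd's algorithm: returns k cluster centres of X
    rng = np.random.default_rng(seed)
    Z = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                Z[j] = X[labels == j].mean(0)
    return Z

def se_kernel(A, B, variance=1.0, lengthscale=1.0):
    # squared exponential covariance k(a, b) = s2 * exp(-||a - b||^2 / (2 l^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))               # training inputs
m = 20                                      # number of inducing inputs
Z = kmeans_centres(X, m)                    # cluster centres as inducing points
K_mm = se_kernel(Z, Z) + 1e-6 * np.eye(m)   # small jitter keeps the factorization stable
L = np.linalg.cholesky(K_mm)                # succeeds: K_mm is positive definite
print(Z.shape, K_mm.shape)
```

Placing the inducing inputs at cluster centres spreads them over the data density without adding them to the optimization problem.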
The stochastic sviAdaDelta method requires the user to manually specify the learning rate and the batch size for the optimization method. For the former we had to run the method with different learning rates from a fixed grid and choose the value that resulted in the fastest convergence. It always happened that for the largest value from the grid the method diverged, and for the smallest value it converged more slowly than for some intermediate value, verifying that the optimal learning rate was somewhere within the range. To choose the batch size we used the following convention: for the small german and svmguide datasets we set a small fixed batch size, while for the other datasets we chose the batch size depending on the size of the training set.
For viJJ, viTaylor and viJJhybrid, in all of the experiments we recomputed the values of $\mu$, $\Sigma$ and $\xi$ a fixed number of times on every iteration (the corresponding parameter in Algorithm 1). To tune $\theta$, on every iteration we ran the L-BFGS-B optimization method constrained to perform no more than a fixed number of evaluations of the lower bound and its gradient. We found that these parameter values work well for all the datasets we experimented with.
For the sviAdaDelta method we optimized with respect to the Cholesky factor of the matrix $\Sigma$ in order to maintain its positive definiteness, as described in [2]. We used the AdaDelta implementation from the climin toolbox [7], as is done in the original paper.
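The Cholesky parameterization mentioned above can be sketched as follows; the dimension and parameter vector here are arbitrary examples:

```python
import numpy as np

m = 4
tril = np.tril_indices(m)

def to_sigma(params):
    # build Sigma = L L^T from the unconstrained entries of a lower-triangular
    # factor L; the result is positive semi-definite for any parameter vector
    L = np.zeros((m, m))
    L[tril] = params
    return L @ L.T

params = np.random.default_rng(0).normal(size=m * (m + 1) // 2)
Sigma = to_sigma(params)
print(np.allclose(Sigma, Sigma.T))                      # symmetric
print(bool(np.linalg.eigvalsh(Sigma).min() >= -1e-12))  # no negative eigenvalues
```

This lets an unconstrained gradient method such as AdaDelta update $m(m+1)/2$ free parameters while the covariance matrix stays valid by construction.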
For every dataset we experimented with a sufficient number of inducing points to verify that the results of the methods are close to optimal.
We evaluate the methods by plotting the accuracy of their predictions on the test data against training time.
We also preprocessed all the datasets by normalizing the features to zero mean and unit variance. For datasets without an available test set, we held out a fraction of the data as a test set and trained on the rest.
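A minimal sketch of this preprocessing is given below; the split fraction is a placeholder, since the exact ratio is not specified in this excerpt.

```python
import numpy as np

def normalize_and_split(X, y, test_fraction=0.2, seed=0):
    # test_fraction is a hypothetical value, not the one used in the paper
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_test = int(len(X) * test_fraction)
    test, train = idx[:n_test], idx[n_test:]
    mu, sd = X[train].mean(0), X[train].std(0) + 1e-12
    Xn = (X - mu) / sd   # zero mean, unit variance w.r.t. the training set
    return Xn[train], y[train], Xn[test], y[test]

X = np.random.default_rng(1).normal(loc=3.0, scale=2.0, size=(100, 5))
y = (X.sum(1) > 15).astype(int)
Xtr, ytr, Xte, yte = normalize_and_split(X, y)
print(Xtr.shape, Xte.shape)
print(np.allclose(Xtr.mean(0), 0, atol=1e-9), np.allclose(Xtr.std(0), 1, atol=1e-6))
```

Computing the normalization statistics on the training portion only avoids leaking test-set information into the model.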
Results and discussion
We compared the methods’ performance on seven datasets. Here we discuss the results.
Fig. 4 provides the results for the german and svmguide datasets. As we can see, on the small german dataset the stochastic sviAdaDelta method struggles, and it takes longer to achieve the optimal quality than all the other methods, which show similar results. On the svmguide dataset it takes viJJfull and sviAdaDelta a little longer to converge, while the other three methods show roughly the same performance.
The results on the magic telescope and ijcnn datasets are provided in fig. 5. On the magic telescope dataset viJJ and viTaylor show poor quality on the first iterations, but still manage to converge faster than sviAdaDelta. On both datasets the viJJhybrid method performs similarly to viJJ and viTaylor, but shows better quality on the first iterations on the magic telescope data. viJJfull fails to converge to a reasonable quality on both datasets.
Fig. 6 provides the results on the big codrna and skin_nonskin datasets. On these datasets viJJfull once again fails to achieve a reasonable quality, while the other methods perform similarly.
Finally, the results on the a8a data are provided in fig. 7. Here we use a rather large number of inducing inputs. As we can see, viJJfull and viJJhybrid are the fastest to achieve the optimal quality. The sviAdaDelta method also converges reasonably fast, while viJJ and viTaylor struggle a little.
In general, the viJJ, viTaylor and viJJhybrid methods perform similarly to the sviAdaDelta method. On the big skin_nonskin dataset, which has only a few features, viJJhybrid is a little slower than the stochastic sviAdaDelta, but on all the other datasets it is better. viTaylor and viJJ struggle with a8a, but are otherwise comparable to viJJhybrid. The stochastic sviAdaDelta method performs poorly on small datasets, and even on the big skin_nonskin data it does not manage to substantially outperform the other methods, even when provided with a good learning rate. Finally, viJJfull works well on small data and on a8a, but on all the other datasets it does not achieve a reasonable quality.
3 Conclusion
In this paper we presented a new approach to training variational inducing input Gaussian process classification. We derived two new tractable evidence lower bounds and described several ways to maximize them. The resulting methods viJJ, viJJfull, viJJhybrid and viTaylor are similar to the method of [6] for GP regression.
We provided an experimental comparison of our methods with the current state-of-the-art method sviAdaDelta of [2]. In our experimental setting, our approach proved to be more practical, as it converges to the optimal quality as fast as the sviAdaDelta method without requiring the user to manually choose the parameters of the optimization method.
The four described vi methods showed similar performance and it is hard to distinguish between them. However, note that the viTaylor approach is more general and can be applied to likelihood functions other than the logistic one. We could also easily derive methods similar to viJJhybrid and viJJfull for the non-logistic case, but that is out of the scope of this paper.
References

[1] J. Hensman, N. Fusi, and N. D. Lawrence. 2013. Gaussian processes for big data. Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence. P. 282–290.
[2] J. Hensman, G. Matthews, Z. Ghahramani. 2015. Scalable Variational Gaussian Process Classification. Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics.
[3] T. Jaakkola. 2000. Tutorial on Variational Approximation Methods. Advanced Mean Field Methods: Theory and Practice. P. 129–159.
[4] T. Jaakkola, M. Jordan. 1997. A Variational Approach to Bayesian Logistic Regression Models and Their Extensions. Proceedings of the 1997 Conference on Artificial Intelligence and Statistics.
[5] C. E. Rasmussen and C. K. I. Williams. 2006. Gaussian Processes for Machine Learning. MIT Press, Cambridge, MA.
[6] M. Titsias. 2009. Variational learning of inducing variables in sparse Gaussian processes. Proceedings of the Twelfth International Workshop on Artificial Intelligence and Statistics. Vol. 5, P. 567–574.
[7] Climin. URL: http://github.com/BRML/climin.
[8] A. Smola, P. Bartlett. 2001. Sparse greedy Gaussian process regression. Neural Information Processing Systems.
[9] J. Quinonero-Candela, C. Rasmussen. 2005. A unifying view of sparse approximate Gaussian process regression. Journal of Machine Learning Research. Vol. 6, P. 1939–1959.
[10] L. Csato, M. Opper. 2002. Sparse online Gaussian processes. Neural Computation. Vol. 14, P. 641–668.