Kernel Distillation for Gaussian Processes

01/31/2018
by Congzheng Song, et al.
Cornell University

Gaussian processes (GPs) are flexible models that can capture complex structure in large-scale datasets due to their non-parametric nature. However, the use of GPs in real-world applications is limited by their high computational cost at inference time. In this paper, we introduce a new framework, kernel distillation, for kernel matrix approximation. The idea is adapted from knowledge distillation in the deep learning community: we approximate a fully trained teacher kernel matrix of size n × n with a student kernel matrix. We combine the inducing points method with a sparse low-rank approximation in the distillation procedure. The distilled student kernel matrix costs only O(m^2) storage, where m is the number of inducing points and m ≪ n. We also show that one application of kernel distillation is fast GP prediction, where we demonstrate empirically that our approximation provides a better balance between prediction time and predictive performance than the alternatives.


1 Introduction

Gaussian Processes (GPs) [1] are powerful tools for regression and classification problems, as these models are able to learn complex representations of data through expressive covariance kernels. However, the application of GPs in the real world is limited by their poor scalability at inference time. For training data of size n, GPs require O(n^2) computation and O(n^2) storage for inference at a single test point. Previous ways of scaling GP inference are either through inducing point methods [2, 3, 4] or structure exploitation [5, 6]. More recently, the structured kernel interpolation (SKI) framework and KISS-GP [7] further improve the scalability of GPs by unifying inducing point methods and structure exploitation. These methods can suffer from degradation of test performance or require the input data to have a special grid structure.

All of the previous solutions for scaling GPs focus on training GPs from scratch. In this paper, we consider a different setting where we have enough resources to train an exact GP and want to apply the trained model for inference on resource-limited devices such as mobile phones or robots [8]. We investigate the possibility of compressing a large trained exact GP model into a smaller and faster approximate GP model while preserving the predictive power of the exact model. This paper proposes kernel distillation, a general framework for approximating a trained GP model. Kernel distillation extends inducing point methods with insights from the SKI framework and utilizes the knowledge contained in a trained model.

In particular, we approximate the exact kernel matrix with a sparse and low-rank structured matrix. We formulate kernel distillation as a constrained F-norm minimization problem, leading to more accurate kernel approximation than previous approximation approaches. Our method is a general-purpose kernel approximation method: it does not require the kernel function to be separable or stationary, nor the input data to have any special structure. We evaluate our approach on various real-world datasets, and the empirical results show that kernel distillation better preserves the predictive power of a fully trained GP model while improving prediction speed, compared to the alternatives.

2 Kernel Distillation

Background.

We focus on the GP regression problem. Denote the dataset as D = {(x_i, y_i)}_{i=1}^n, which consists of input feature vectors x_i and real-valued targets y_i. A GP models a distribution over functions f ~ GP(μ, k_θ), where any set of function values forms a joint Gaussian distribution characterized by a mean function μ(·) and a kernel function k_θ(·, ·), and θ is the set of hyper-parameters to be trained. Using standard Gaussian identities, we arrive at the posterior predictive distribution for a test point x_* [9]:

μ(x_*) = K_{x_* X} (K_XX + σ² I)^{-1} y,
σ²(x_*) = K_{x_* x_*} − K_{x_* X} (K_XX + σ² I)^{-1} K_{X x_*}.

The matrix K_{x_* X} is the covariance measured between x_* and the training inputs X, and K_XX is the kernel matrix over the training inputs. The mean and variance predictions cost O(n) and O(n^2) in time respectively, and O(n^2) in storage, per test point.
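
To make these equations concrete, the following NumPy sketch implements the standard exact GP predictive equations for an RBF kernel. It is only an illustrative sketch, not the authors' code; the kernel choice, noise level and function names are assumptions.

    import numpy as np

    def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
        # Squared-exponential (RBF) kernel matrix between the rows of A and B.
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return variance * np.exp(-0.5 * d2 / lengthscale**2)

    def gp_predict(X, y, X_star, noise_var=0.1):
        # Exact GP posterior mean and variance at the test inputs X_star.
        K = rbf_kernel(X, X)                                   # n x n teacher kernel
        K_s = rbf_kernel(X_star, X)                            # t x n cross-covariance
        L = np.linalg.cholesky(K + noise_var * np.eye(len(X)))
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))    # (K_XX + sigma^2 I)^{-1} y
        mean = K_s @ alpha                                     # O(n) per test point
        V = np.linalg.solve(L, K_s.T)
        var = rbf_kernel(X_star, X_star).diagonal() - np.sum(V**2, axis=0)  # O(n^2) per test point
        return mean, var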

The computational and storage bottleneck is the exact kernel matrix K_XX. KISS-GP [7] is an inducing point method for approximating the kernel matrix and thus scaling GP training. Given a set of m inducing points U, KISS-GP approximates the kernel matrix as K_XX ≈ W K_UU W^T, where the cross-covariance K_XU is locally interpolated as W K_UU and W is a sparse matrix of interpolation weights.

Formulation.

The goal of kernel distillation is to compress a fully trained GP model into an approximate GP model to be used for inference on resource-limited devices. We assume that during distillation we have access to a trained exact GP with the full kernel matrix K_XX and all the training data. Algorithm 1 in Appendix A outlines our distillation procedure.

We propose to use a student kernel matrix with a sparse and low-rank structure, W K_UU W^T, to approximate the fully trained teacher kernel matrix K_XX. Here W is an n × m sparse matrix and K_UU is the covariance evaluated at a set of m inducing points U. Similar to KISS-GP [7], we approximate the cross-covariance K_XU with W K_UU. In KISS-GP, W is computed by cubic interpolation on grid-structured inducing points, so the number of inducing points grows exponentially with the input dimension, limiting KISS-GP to low-dimensional data. Instead of forcing the inducing points onto a grid, we choose m centroids from K-means clustering of the training inputs as the inducing points U. In addition, we store U in a KD-tree for fast nearest-neighbor search, which is used in the later optimization.
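
As a rough sketch of this step (not the authors' implementation), the snippet below selects the inducing points with scikit-learn's KMeans and indexes them with a SciPy cKDTree; the parameter values are placeholders.

    import numpy as np
    from scipy.spatial import cKDTree
    from sklearn.cluster import KMeans

    def select_inducing_points(X, m=200, seed=0):
        # Use the m K-means centroids of the training inputs as inducing points U.
        km = KMeans(n_clusters=m, random_state=seed, n_init=10).fit(X)
        U = km.cluster_centers_
        # KD-tree over U for fast nearest-neighbor queries in the later optimization.
        tree = cKDTree(U)
        return U, tree

    # Usage: U, tree = select_inducing_points(X, m=200)
    #        dist, idx = tree.query(X, k=30)   # b = 30 nearest inducing points per input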

In kernel distillation, we find the optimal W through a constrained optimization problem. We constrain each row of W to have at most b non-zero entries, and set the objective to the F-norm error between the teacher kernel and the student kernel:

min_W || K_XX − W K_UU W^T ||_F^2
subject to || W_i ||_0 ≤ b,  i = 1, …, n,

where || W_i ||_0 denotes the number of non-zero entries in row i of W.
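
For clarity, a dense NumPy evaluation of this objective and constraint might look as follows; a practical implementation would keep W in a sparse format. The names K_xx, K_uu, W and b mirror the notation above and are assumptions about the code.

    import numpy as np

    def distillation_objective(K_xx, K_uu, W):
        # Squared F-norm error between the teacher kernel and the structured student kernel.
        R = K_xx - W @ K_uu @ W.T
        return np.sum(R**2)

    def satisfies_sparsity(W, b):
        # Constraint check: at most b non-zero entries in every row of W.
        return bool(np.all((W != 0).sum(axis=1) <= b))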

Initializing W.

The initial values of W are crucial for the later optimization. We initialize W with the optimal solution to K_XU ≈ W K_UU under the sparsity constraint. More specifically, for each x_i in X, we find its b nearest points in U by querying the KD-tree; we denote the indices of these neighbors by J_i. We then initialize each row W_i of W by solving the linear least-squares problem

W_{i, J_i} = argmin_w || w K_UU[J_i, :] − K_{x_i U} ||_2^2,

where W_{i, J_i} denotes the entries of row i of W indexed by J_i and K_UU[J_i, :] denotes the rows of K_UU indexed by J_i. The entries of W_i with indices not in J_i are set to zero.
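
A minimal sketch of this initialization is shown below, assuming the KD-tree and kernel function from the previous sketches; each row of W is fitted by ordinary least squares restricted to the b nearest inducing points of x_i.

    import numpy as np

    def init_W(X, U, K_uu, kernel, tree, b=30):
        # Initialize W row by row: W[i, J_i] = argmin_w || w K_uu[J_i, :] - k(x_i, U) ||_2.
        n, m = len(X), len(U)
        W = np.zeros((n, m))
        _, J = tree.query(X, k=b)                    # J[i]: indices of the b nearest inducing points
        for i in range(n):
            k_iU = kernel(X[i:i + 1], U).ravel()     # cross-covariance k(x_i, U), length m
            A = K_uu[J[i], :]                        # b x m rows of K_uu indexed by J_i
            w, *_ = np.linalg.lstsq(A.T, k_iU, rcond=None)   # least-squares fit of the b weights
            W[i, J[i]] = w
        return W, J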

Optimizing W.

After W is initialized, we solve the F-norm minimization problem using standard gradient descent. To satisfy the sparsity constraint, in each iteration we project each row of the gradient onto the b-sparse space given by the indices J_i, and then update W accordingly.
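
A sketch of this projected gradient descent step is given below. It uses the gradient of the squared F-norm objective, -4 (K_XX - W K_UU W^T) W K_UU, and keeps every row of W supported on the index set J_i chosen at initialization; the step size and iteration count are placeholders, and the dense computation is for clarity only.

    import numpy as np

    def optimize_W(K_xx, K_uu, W, J, lr=1e-4, iters=200):
        # Projected gradient descent on f(W) = || K_xx - W K_uu W^T ||_F^2.
        rows = np.arange(len(W))[:, None]
        for _ in range(iters):
            R = K_xx - W @ K_uu @ W.T          # residual: teacher minus student
            G = -4.0 * R @ W @ K_uu            # gradient of the squared F-norm objective
            G_proj = np.zeros_like(G)
            G_proj[rows, J] = G[rows, J]       # project each row onto its b-sparse support
            W = W - lr * G_proj
        return W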

Fast prediction.

One direct application of kernel distillation is fast prediction with the approximated kernel matrix. Given a test point x_*, we follow a similar approximation scheme to the one used in distillation, approximating K_{x_* X} at test time as

K_{x_* X} ≈ w_* K_UU W^T,

where the weight vector w_* is forced to be sparse for efficiency. The mean and variance predictions can then be approximated by

μ(x_*) ≈ w_* K_UU W^T (W K_UU W^T + σ² I)^{-1} y,
σ²(x_*) ≈ K_{x_* x_*} − w_* K_UU W^T (W K_UU W^T + σ² I)^{-1} W K_UU w_*^T,

where both K_UU W^T (W K_UU W^T + σ² I)^{-1} y and K_UU W^T (W K_UU W^T + σ² I)^{-1} W K_UU can be precomputed during distillation.

To compute w_* efficiently, we start by finding the b nearest neighbors of x_* in U (indexed by J_*) and set the entries of w_* whose indices are not in J_* to zero. For the entries with indices in J_*, we solve the least-squares problem

w_{*, J_*} = argmin_w || w K_UU[J_*, :] − K_{x_* U} ||_2^2.

Querying the nearest neighbors, solving this small least-squares problem, and computing the mean and variance predictions all depend only on m and b, so the total prediction time is independent of the training size n. As for storage, we need to keep a precomputed vector for mean prediction and the diagonal of a precomputed matrix for variance prediction, which costs O(m). Table 1 compares the time and storage complexity of different GP approximation approaches.

Methods Mean Prediction Variance Prediction Storage
FITC [10]
KISS-GP [7]
Kernel distillation (this work)
Table 1: Time and storage complexity of prediction for FITC, KISS-GP and kernel distillation. n is the number of training data points, m is the number of inducing points, d is the dimension of the input data and b is the sparsity constraint in kernel distillation.
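
The sketch below illustrates the precomputation and the per-test-point prediction implied by the fast-prediction formulas above. It keeps the full m x m variance matrix rather than only its diagonal, trading the O(m) storage mentioned above for a simpler O(m^2) variant; the noise variance and all names are assumptions, and the earlier kernel and KD-tree sketches are reused.

    import numpy as np

    def precompute(K_uu, W, y, noise_var=0.1):
        # One-off precomputation after distillation (touches n-sized quantities only once).
        K_tilde = W @ K_uu @ W.T + noise_var * np.eye(len(W))
        alpha = np.linalg.solve(K_tilde, y)
        v_mean = K_uu @ W.T @ alpha                              # length-m vector for the mean
        M_var = K_uu @ W.T @ np.linalg.solve(K_tilde, W @ K_uu)  # m x m matrix for the variance
        return v_mean, M_var

    def fast_predict(x_star, U, K_uu, tree, kernel, v_mean, M_var, b=30):
        # Per-test-point prediction whose cost depends only on m and b, not on n.
        _, J = tree.query(x_star, k=b)                           # b nearest inducing points
        k_sU = kernel(x_star[None, :], U).ravel()
        w, *_ = np.linalg.lstsq(K_uu[J, :].T, k_sU, rcond=None)  # sparse weights w_* on support J
        mean = w @ v_mean[J]
        var = kernel(x_star[None, :], x_star[None, :])[0, 0] - w @ M_var[np.ix_(J, J)] @ w
        return mean, var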

3 Experiments

We evaluate kernel distillation on its ability to approximate the exact kernel, its predictive power and its speed at inference time. In particular, we compare our approach against FITC and KISS-GP, as they are the most popular approaches and are the most closely related to kernel distillation. Simulation experiments on kernel reconstruction and predictive power are presented in Appendix B.

Empirical Study.

We evaluate the performance of kernel distillation on several benchmark regression datasets, summarized in Table 2. The detailed experimental setup is given in Appendix C.

We start by evaluating how well kernel distillation preserves the predictive performance of the teacher kernel. The metric we use is the standardized mean squared error (SMSE): the mean squared error between the true labels y and the model predictions, normalized by the variance of the test labels. Table 2 summarizes the results. Exact GPs achieve the lowest errors on all datasets. FITC obtains the second-lowest error on all datasets except Boston Housing. The errors of kernel distillation are very close to those of FITC, while KISS-GP has the largest errors on every dataset. The poor performance of KISS-GP likely results from the loss of information when projecting the input data to low dimension.
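
As a quick reference, the SMSE metric above (mean squared error normalized by the variance of the test targets, which is the usual convention and is assumed here) can be computed as:

    import numpy as np

    def smse(y_true, y_pred):
        # Standardized mean squared error: MSE divided by the variance of the true targets.
        return float(np.mean((y_true - y_pred) ** 2) / np.var(y_true))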

Dataset d # train # test Exact FITC KISS-GP Distill
Boston Housing 13 455 51 0.076 0.103 0.095 0.091
Abalone 8 3,133 1,044 0.434 0.438 0.446 0.439
PUMADYM32N 32 7,168 1,024 0.044 0.044 1.001 0.069
KIN40K 8 10,000 30,000 0.013 0.030 0.386 0.173
Table 2: SMSE comparison. d is the dimension of the input data. The numbers of inducing points (on a 2D grid) for KISS-GP are 4,900, 10K, 90K and 250K, and the numbers of inducing points for FITC and kernel distillation are 70, 200, 1K and 1K for the four datasets respectively. The sparsity b is set to 20 for Boston Housing and 30 for all other datasets.
Figure 1: Test error and variance comparison on Boston Housing (a: mean, b: variance) and Abalone (c: mean, d: variance) under different choices of the sparsity constraint b on W. For the variance comparison, we report the root mean squared error between the variance of the exact GP and that of the approximate GPs (KISS-GP and kernel distillation).
Dataset FITC KISS-GP Distill
Boston Housing 0.0081 0.00061 0.0017
Abalone 0.0631 0.00018 0.0020
PUMADYM32N 1.3414 0.0011 0.0035
KIN40K 1.7606 0.0029 0.0034
Table 3: Average prediction time in seconds for 1K test data points

We further study the effect of sparsity on predictive performance. We vary b over {5, 10, …, 40} and compare the test error and variance predictions of KISS-GP and kernel distillation on the Boston Housing and Abalone datasets. The results are shown in Figure 1. As expected, the error of kernel distillation decreases as the sparsity increases, and b only needs to reach 15 or 20 to outperform KISS-GP. For variance prediction, we plot the error between the outputs of the exact GP and the approximate GPs; kernel distillation provides more reliable variance estimates than KISS-GP at every level of sparsity.

Finally, we evaluate the prediction speed of kernel distillation, again comparing against FITC and KISS-GP. The setup for the approximate models is the same as in the predictive-performance experiment. For each dataset, we run prediction on 1,000 test points and report the average prediction time in seconds. Table 3 summarizes the results: both KISS-GP and kernel distillation are much faster than FITC on all datasets. Although kernel distillation is slightly slower than KISS-GP, the improvement in accuracy and the more reliable uncertainty estimates make the extra prediction time acceptable. Moreover, although KISS-GP claims constant-time prediction in theory [11], the actual implementation is still data-dependent and its speed varies across datasets. In general, kernel distillation provides a better trade-off between predictive power and scalability than its alternatives.

Conclusion.

We proposed a general framework, kernel distillation, for compressing a trained exact GP kernel into a student kernel with a low-rank and sparse structure. Our framework does not assume any special structure in the input data or the kernel function, and thus can be applied "out of the box" to any dataset. Kernel distillation formulates the approximation as a constrained F-norm minimization between the exact teacher kernel and the approximate student kernel.

The distilled kernel matrix reduces the storage cost to O(m^2), compared to O(nm) for other inducing point methods. Moreover, we show that one application of kernel distillation is fast and accurate GP prediction. Kernel distillation produces more accurate results than KISS-GP, and its prediction time is much faster than FITC's. Overall, our method provides a better balance between speed and predictive performance than other approximate GP approaches.

References

  • [1] Carl Edward Rasmussen. Gaussian processes in machine learning. In Advanced Lectures on Machine Learning, pages 63–71. Springer, 2004.
  • [2] Matthias Seeger, Christopher Williams, and Neil Lawrence. Fast forward selection to speed up sparse Gaussian process regression. In Artificial Intelligence and Statistics 9, 2003.
  • [3] Michalis K. Titsias. Variational learning of inducing variables in sparse Gaussian processes. In AISTATS, volume 12, pages 567–574, 2009.
  • [4] Neil Lawrence, Matthias Seeger, and Ralf Herbrich. Fast sparse Gaussian process methods: The informative vector machine. In Proceedings of the 16th Annual Conference on Neural Information Processing Systems, pages 609–616, 2003.
  • [5] Yunus Saatçi. Scalable Inference for Structured Gaussian Process Models. PhD thesis, University of Cambridge, 2012.
  • [6] Andrew Wilson, Elad Gilboa, John P. Cunningham, and Arye Nehorai. Fast kernel learning for multidimensional pattern extrapolation. In Advances in Neural Information Processing Systems, pages 3626–3634, 2014.
  • [7] Andrew Wilson and Hannes Nickisch. Kernel interpolation for scalable structured Gaussian processes (KISS-GP). In Proceedings of the 32nd International Conference on Machine Learning, pages 1775–1784, 2015.
  • [8] Marc Peter Deisenroth, Dieter Fox, and Carl Edward Rasmussen. Gaussian processes for data-efficient learning in robotics and control. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(2):408–423, 2015.
  • [9] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
  • [10] Edward Snelson and Zoubin Ghahramani. Sparse Gaussian processes using pseudo-inputs. In Advances in Neural Information Processing Systems, pages 1257–1264, 2005.
  • [11] Andrew Gordon Wilson, Christoph Dann, and Hannes Nickisch. Thoughts on massively scalable Gaussian processes. arXiv preprint arXiv:1511.01870, 2015.

Appendix A Sparse Low-rank Kernel Approximation

Algorithm 1 outlines our distillation approach.

Input: A well-trained kernel function k_θ, training feature vectors X and targets y, step size η, number of iterations T and sparsity b.
Output: Approximated student kernel W K_UU W^T
    Compute the teacher kernel matrix K_XX = k_θ(X, X)
    Select m inducing points U as the K-means centroids of X
    Compute K_UU = k_θ(U, U)
    Build a KD-tree over U
Step 1: Initialization
    W ← 0
    for each x_i in X do
        J_i ← indices of the b nearest neighbors of x_i in U (KD-tree query)
        W_{i, J_i} ← argmin_w || w K_UU[J_i, :] − k_θ(x_i, U) ||_2
    end for
Step 2: Gradient Descent
    for t = 1 to T do
        R ← K_XX − W K_UU W^T
        G ← gradient of || R ||_F^2 with respect to W
        Project each row of G onto the b-sparse space given by the indices J_i
        Update W ← W − η G
    end for
Algorithm 1 Sparse Low-rank Kernel Approximation

Appendix B Simulation Experiment

Kernel Reconstruction.

Figure 2: Kernel reconstruction experiments. (a)-(c) Absolute error matrices for reconstructing K_XX with kernel distillation, KISS-GP and SoR respectively. (d) F-norm error for reconstructing K_XX with kernel distillation under different settings of the sparsity constraint b.

We first study how well kernel distillation can reconstruct the full teacher kernel matrix. We generate a 1000 × 1000 kernel matrix K_XX from an RBF kernel evaluated at sorted, randomly sampled inputs. We compare kernel distillation against KISS-GP and SoR (FITC is essentially SoR with a diagonal correction). We set the number of grid points for KISS-GP to 400, the number of inducing points for SoR to 200 and for kernel distillation to 100, and the sparsity b for kernel distillation to 6.

Kernel distillation achieves the lowest F-norm error for reconstructing K_XX, compared to SoR and KISS-GP, even though it uses far fewer inducing points. Moreover, from the absolute error matrices (Figure 2 a-c), we can see that the errors are more evenly distributed for kernel distillation, while there is a strong error pattern for the other two methods.

We also show how the sparsity parameter b affects the approximation quality by evaluating the error under different choices of b, as shown in Figure 2 (d). We observe that the error converges once the sparsity exceeds 5 in this example. This shows that our structured student kernel can approximate the full teacher kernel reasonably well even when W is extremely sparse.

Toy 1D Example.

Figure 3: Mean (a) and variance (b) prediction comparison for KISS-GP and kernel distillation on the 1D example.

To evaluate the distilled model's predictive ability, we set up the following experiment. We sample data points uniformly from [-10, 10] and generate the corresponding responses y. We first train an exact GP with an RBF kernel as the teacher, then apply kernel distillation with the number of inducing points set to 100 and the sparsity set to 10. We compare the mean and variance predictions of kernel distillation with those of KISS-GP trained with 400 grid inducing points.
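
A sketch of this toy setup is shown below. The exact response function used in the paper is not reproduced here, so a noisy sinusoid is assumed as a stand-in, and the sample size is a placeholder.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000                                            # placeholder sample size
    X = rng.uniform(-10.0, 10.0, size=(n, 1))           # 1D inputs sampled uniformly from [-10, 10]
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)  # assumed stand-in response with noise
    # Train an exact GP (teacher) on (X, y), then distill it with m = 100 inducing points
    # and sparsity b = 10, and compare mean/variance predictions against KISS-GP with
    # 400 grid inducing points, as described above.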

The results are shown in Figure 3. The mean predictions of kernel distillation are indistinguishable from those of the exact GP and KISS-GP. For the variance, kernel distillation's predictions are much closer to the outputs of the exact GP, while the variances predicted by KISS-GP are far from the exact solution.

This experiment highlights a potential problem with KISS-GP: it sacrifices the ability to provide reliable uncertainty estimates, a crucial property of Bayesian modeling, in exchange for massive scalability. Kernel distillation, on the other hand, provides uncertainty predictions close to those of the exact GP model.

Appendix C Experiment Setup

We compare kernel distillation with the teacher kernel (exact GP), FITC and KISS-GP. We use the same inducing points, selected by K-means, for both FITC and kernel distillation. For KISS-GP, since none of the datasets lie in a low-dimensional space, we project the inputs to 2D and construct a 2D grid as the inducing points. The numbers of inducing points (on the 2D grid) for KISS-GP are 4,900 (70 per grid dimension) for Boston Housing, 10K for Abalone, 90K for PUMADYM32N and 250K for KIN40K. The numbers of inducing points for FITC and kernel distillation are 70 for Boston Housing, 200 for Abalone, and 1K for PUMADYM32N and KIN40K. The sparsity b in kernel distillation is set to 20 for Boston Housing and 30 for the other datasets. For all methods, we use the ARD kernel, defined as

k_θ(x, x') = σ_f^2 exp( -1/2 Σ_{j=1}^d (x_j − x'_j)^2 / ℓ_j^2 ),

where d is the dimension of the input data and the lengthscales ℓ_j (together with the signal variance σ_f^2) are the hyper-parameters to learn.
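
A NumPy sketch of this ARD kernel is given below; the hyper-parameter names are illustrative.

    import numpy as np

    def ard_kernel(A, B, lengthscales, signal_var=1.0):
        # ARD squared-exponential kernel: one lengthscale per input dimension.
        A_s = A / lengthscales                 # scale each dimension by its lengthscale
        B_s = B / lengthscales
        d2 = np.sum(A_s**2, 1)[:, None] + np.sum(B_s**2, 1)[None, :] - 2 * A_s @ B_s.T
        return signal_var * np.exp(-0.5 * d2)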

All experiments were conducted on a laptop with an Intel Core(TM) i7-6700HQ CPU @ 2.6 GHz and 16.0 GB RAM.