Nystrom Method for Approximating the GMM Kernel

07/12/2016
by Ping Li et al., Rutgers University

The GMM (generalized min-max) kernel was recently proposed (Li, 2016) as a measure of data similarity and was demonstrated effective in machine learning tasks. In order to use the GMM kernel for large-scale datasets, the prior work resorted to the (generalized) consistent weighted sampling (GCWS) to convert the GMM kernel to a linear kernel. We call this approach "GMM-GCWS". In the machine learning literature, there is a popular algorithm which we call "RBF-RFF": one can use "random Fourier features" (RFF) to convert the "radial basis function" (RBF) kernel to a linear kernel. It was empirically shown in (Li, 2016) that RBF-RFF typically requires substantially more samples than GMM-GCWS in order to achieve comparable accuracies. The Nystrom method is a general tool for computing nonlinear kernels, which again converts nonlinear kernels into linear kernels. We apply the Nystrom method to approximate the GMM kernel, a strategy which we name "GMM-NYS". In this study, our extensive experiments on a set of fairly large datasets confirm that GMM-NYS is also a strong competitor of RBF-RFF.



1 Introduction

The “generalized min-max” (GMM) kernel was recently proposed in [5] as an effective measure of data similarity. Consider the original ($D$-dim) data vector $u_i$, $i = 1$ to $D$. The first step is to expand the data vector to a vector of $2D$ dimensions:

(1)   $\tilde{u}_{2i-1} = u_i,\ \tilde{u}_{2i} = 0$ if $u_i > 0$; $\qquad \tilde{u}_{2i-1} = 0,\ \tilde{u}_{2i} = -u_i$ if $u_i \leq 0$

For example, if $D = 2$ and $u = [-5, 3]$, then the transformed data vector becomes $\tilde{u} = [0, 5, 3, 0]$. After the transformation, the GMM similarity between two vectors $u$ and $v$ is defined as

(2)   $GMM(u, v) = \dfrac{\sum_{i=1}^{2D} \min(\tilde{u}_i, \tilde{v}_i)}{\sum_{i=1}^{2D} \max(\tilde{u}_i, \tilde{v}_i)}$

It was shown in [5], through extensive experiments on a large collection of publicly available datasets, that using the GMM kernel can often produce excellent results in classification tasks. On the other hand, it is generally nontrivial to scale nonlinear kernels to large data [1]. In a sense, it is not practically meaningful to discuss nonlinear kernels without knowing how to compute them efficiently (e.g., via hashing). The prior work [5] focused on the generalized consistent weighted sampling (GCWS).
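To make the definitions concrete, here is a minimal NumPy sketch of the expansion (1) and the GMM similarity (2). Python is used purely for illustration (the paper's own code, shown later, is in MATLAB), and the function names are ours:

```python
import numpy as np

def expand(u):
    """Eq. (1): map a D-dim vector to a nonnegative 2D-dim vector, with the
    positive parts in the odd slots and negated negative parts in the even slots."""
    u = np.asarray(u, dtype=float)
    out = np.zeros(2 * u.size)
    out[0::2] = np.maximum(u, 0)    # tilde_u_{2i-1} = u_i   if u_i > 0
    out[1::2] = np.maximum(-u, 0)   # tilde_u_{2i}   = -u_i  if u_i <= 0
    return out

def gmm_kernel(u, v):
    """Eq. (2): sum of coordinate-wise minima over sum of coordinate-wise maxima."""
    tu, tv = expand(u), expand(v)
    return np.minimum(tu, tv).sum() / np.maximum(tu, tv).sum()
```

For example, with $u = [-5, 3]$ and $v = [2, 3]$, the expanded vectors are $[0, 5, 3, 0]$ and $[2, 0, 3, 0]$, so $GMM(u, v) = 3/10 = 0.3$.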

1.1 Generalized Consistent Weighted Sampling (GCWS) and 0-bit GCWS

Algorithm 1 summarizes the “(0-bit) generalized consistent weighted sampling” (GCWS). Given two data vectors $u$ and $v$, we transform them into nonnegative vectors $\tilde{u}$ and $\tilde{v}$ as in (1). We then apply the “(0-bit) consistent weighted sampling” (0-bit CWS) [8, 3, 4] to generate $k$ random integers: $i^*_{u,j}$, $j = 1$ to $k$. According to the result in [4], the following approximation

(3)   $\Pr\left\{ i^*_{u,j} = i^*_{v,j} \right\} \approx GMM(u, v)$

is accurate in practical settings and makes the implementation convenient.

Input: Data vector $u = (u_i,\ i = 1$ to $D)$

Generate vector $\tilde{u}$ in $2D$-dim by (1)

For $i$ from 1 to $2D$ (with $\tilde{u}_i > 0$)

  $r_i \sim Gamma(2,1)$,  $c_i \sim Gamma(2,1)$,  $\beta_i \sim Uniform(0,1)$

  $t_i \leftarrow \lfloor \log \tilde{u}_i / r_i + \beta_i \rfloor$,  $a_i \leftarrow \log c_i - r_i (t_i + 1 - \beta_i)$

End For

Output: $i^* \leftarrow \arg\min_i a_i$,  $t^* \leftarrow t_{i^*}$

Algorithm 1 (0-Bit) Generalized Consistent Weighted Sampling (GCWS)
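As an illustration of Algorithm 1 and the collision property (3), here is a self-contained Python sketch of 0-bit GCWS. This is our own port, not the authors' implementation; the Gamma/Uniform construction follows the CWS scheme of [8, 3, 4]:

```python
import numpy as np

def gcws(u, k, seed=0):
    """Return k sampled indices i* from 0-bit GCWS (Algorithm 1).
    Using the same seed for two vectors makes the sampling 'consistent',
    so that the fraction of matching i* approximates GMM(u, v), Eq. (3)."""
    u = np.asarray(u, dtype=float)
    tu = np.zeros(2 * u.size)
    tu[0::2] = np.maximum(u, 0)          # the expansion of Eq. (1)
    tu[1::2] = np.maximum(-u, 0)
    nz = np.flatnonzero(tu)              # only nonzero coordinates compete
    rng = np.random.default_rng(seed)
    istars = np.empty(k, dtype=int)
    for j in range(k):
        r = rng.gamma(2.0, 1.0, tu.size)
        c = rng.gamma(2.0, 1.0, tu.size)
        beta = rng.uniform(0.0, 1.0, tu.size)
        t = np.floor(np.log(tu[nz]) / r[nz] + beta[nz])
        a = np.log(c[nz]) - r[nz] * (t + 1.0 - beta[nz])
        istars[j] = nz[np.argmin(a)]
    return istars
```

With $u = [-5, 3]$ and $v = [2, 3]$ (so $GMM(u,v) = 0.3$), the fraction of matching samples concentrates around 0.3 as $k$ grows.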

For each data vector $u$, we obtain $k$ random samples $i^*_{u,j}$, $j = 1$ to $k$. We store only the lowest $b$ bits of each $i^*$, based on the idea of [7]. We need to view those integers as locations (of the nonzeros) instead of numerical values. For example, when $b = 2$, we should view $i^*$ as a vector of length $2^b = 4$. If $i^* = 0$, then we code it as $[1, 0, 0, 0]$; if $i^* = 3$, we code it as $[0, 0, 0, 1]$, etc. We concatenate all $k$ such vectors into a binary vector of length $2^b \times k$, which contains exactly $k$ 1’s. After we have generated such new data vectors for all data points, we feed them to a linear SVM or logistic regression solver. We can, of course, also use the new data for many other tasks including clustering, regression, and near neighbor search.
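The location coding just described can be sketched as follows (a hypothetical helper of our own, assuming the sampled integers are already available):

```python
import numpy as np

def bbit_onehot(istars, b):
    """Keep the lowest b bits of each sampled index, view each as a one-hot
    vector of length 2^b, and concatenate: the result is a binary vector of
    length 2^b * k containing exactly k ones."""
    istars = np.asarray(istars, dtype=int)
    k, width = istars.size, 1 << b
    out = np.zeros(k * width, dtype=np.int8)
    out[np.arange(k) * width + (istars & (width - 1))] = 1
    return out
```

For example, with $b = 2$, the samples $[0, 3]$ become $[1, 0, 0, 0, 0, 0, 0, 1]$.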

Note that for linear learning methods, the storage and computational cost is largely determined by the number of nonzeros in each data vector, i.e., the $k$ in our case. It is thus crucial not to use a too large $k$. For the other parameter $b$, we recommend using a fairly large value if possible.

1.2 The RBF Kernel and Random Fourier Features (RFF)

The natural competitor of the GMM kernel is the RBF (radial basis function) kernel, whose definition involves a crucial tuning parameter $\gamma$. In this study, for convenience (e.g., parameter tuning), we use the following version of the RBF kernel:

(4)   $RBF(u, v; \gamma) = e^{-\gamma(1-\rho)}$, where $\rho = \rho(u, v) = \dfrac{\sum_{i=1}^{D} u_i v_i}{\sqrt{\sum_{i=1}^{D} u_i^2}\sqrt{\sum_{i=1}^{D} v_i^2}}$

Based on Bochner’s Theorem [11], it is known [10] that, if we sample $w \sim uniform(0, 2\pi)$, $r_i \sim N(0, 1)$ i.i.d., and let $x = \sum_{i=1}^{D} u_i r_i$, $y = \sum_{i=1}^{D} v_i r_i$, where $\|u\| = \|v\| = 1$, then we have

(5)   $E\left( \sqrt{2}\cos(\sqrt{\gamma}\, x + w) \cdot \sqrt{2}\cos(\sqrt{\gamma}\, y + w) \right) = e^{-\gamma(1-\rho)}$

This provides an elegant mechanism for linearizing the RBF kernel, and the so-called RFF method has become popular in machine learning, computer vision, and beyond.
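The RFF construction in (5) takes only a few lines of NumPy. This is an illustrative port of our own, assuming the rows of the data matrix are normalized to unit $l_2$ norm:

```python
import numpy as np

def rff_features(X, k, gamma, seed=0):
    """k random Fourier features per row of X (rows assumed unit-norm):
    sqrt(2)*cos(sqrt(gamma) * X r + w), with r ~ N(0, I) and w ~ unif(0, 2*pi).
    Inner products of feature rows, divided by k, estimate exp(-gamma*(1-rho))."""
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((X.shape[1], k))   # r_i ~ N(0, 1), i.i.d.
    w = rng.uniform(0.0, 2.0 * np.pi, k)       # the random shift w
    return np.sqrt(2.0) * np.cos(np.sqrt(gamma) * (X @ R) + w)
```

For two unit-norm vectors with $\rho = 0.8$ and $\gamma = 1$, the averaged product of features converges to $e^{-0.2}$, though slowly: the high variance of this estimator is precisely the issue discussed below.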

It turns out that, for nonnegative data, one can simplify (5) by removing the random variable $w$, due to the following fact:

(6)   $E\left( \sqrt{2}\cos(\sqrt{\gamma}\, x) \cdot \sqrt{2}\cos(\sqrt{\gamma}\, y) \right) = e^{-\gamma(1-\rho)} + e^{-\gamma(1+\rho)}$,

which is monotonic when $\rho \geq 0$. This creates a new nonlinear kernel called “folded RBF” (fRBF).
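Identity (6) is easy to check by Monte Carlo: drop the shift $w$ from the features and compare against the closed form (again an illustrative sketch; the particular $\rho$ and $\gamma$ below are arbitrary choices of ours):

```python
import numpy as np

def frbf_features(X, k, gamma, seed=0):
    """'Folded RBF' features: identical to RFF but without the random shift w.
    For unit-norm rows, E[z_u z_v] = exp(-gamma(1-rho)) + exp(-gamma(1+rho)),
    matching Eq. (6)."""
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((X.shape[1], k))
    return np.sqrt(2.0) * np.cos(np.sqrt(gamma) * (X @ R))
```

With $\rho = 0.8$ and $\gamma = 1$, the estimate concentrates around $e^{-0.2} + e^{-1.8} \approx 0.984$.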

A major issue with the RFF method is its high variance. Typically a large number of samples (i.e., a large $k$) is needed in order to reach a satisfactory accuracy, as validated in [5]. Usually, “GMM-GCWS” (i.e., the GCWS algorithm for approximating the GMM kernel) requires substantially fewer samples than “RBF-RFF” (i.e., the RFF method for approximating the RBF kernel).

In this paper, we will introduce the Nystrom method [9] for approximating the GMM kernel, which we call “GMM-NYS”. We will show that GMM-NYS is also a strong competitor of RBF-RFF.

2 The Nystrom Method for Kernel Approximation

The Nystrom method [9] is a sampling scheme for kernel approximation [12]. For example, [13] applied the Nystrom method for approximating the RBF kernel, which we call “RBF-NYS”. Analogously, we propose “GMM-NYS”, which is the use of the Nystrom method for approximating the GMM kernel. This paper will show that GMM-NYS is a strong competitor of RBF-RFF.

To help interested readers repeat our experiments, we post here the MATLAB script for generating samples using the Nystrom method. The code contains a few small tricks that make the implementation fairly efficient without hurting its readability.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function Hash = GenNysGmm(k, X1, X2)
% k = number of samples
% X1 = data matrix to be sampled from
% X2 = data matrix to be hashed

Xs = X1(randsample(size(X1,1),k),:);
KernelXs = zeros(size(Xs,1),size(Xs,1));
for i = 1:size(Xs,1)
    U = sparse(ones(size(Xs,1),1))*Xs(i,:);
    Min = min(U, Xs);    Max = max(U, Xs);
    KernelXs(i,:) = sum(Min,2)./(sum(Max,2) + eps);
end

[v,d] = eig(KernelXs); T = inv(d.^0.5)*v';
Kernel = zeros(size(Xs,1),size(X2,1));
for i = 1:size(Xs,1)
    U = sparse(ones(size(X2,1),1))*Xs(i,:);
    Min = min(U, X2);    Max = max(U, X2);
    Kernel(i,:) = sum(Min,2)./(sum(Max,2) + eps);
end
Hash = Kernel'*T';
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

First, we randomly sample $k$ data points from the (training) data matrix. Then we generate a $k \times k$ kernel matrix from the sampled points and compute its eigenvalues and eigenvectors. We then produce a new representation of a given data matrix based on the eigenvalues and eigenvectors of the sampled kernel matrix. The new representation will be of exactly $k$ dimensions (i.e., $k$ nonzeros per data point).

In this paper, we will show that GMM-NYS is a strong competitor of RBF-RFF. This is actually not surprising: random projection based algorithms tend to have (very) high variances and often do not perform as well as sampling based algorithms [6].
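For readers who prefer Python, the MATLAB routine above can be ported roughly as follows. This is our own sketch, not the authors' code: it assumes the input matrices are already nonnegative (e.g., expanded via (1)), and it adds a guard on tiny eigenvalues for numerical stability, which the MATLAB listing does not include:

```python
import numpy as np

def gmm_matrix(A, B):
    """Pairwise GMM kernel between the rows of A and the rows of B."""
    K = np.empty((A.shape[0], B.shape[0]))
    for i, a in enumerate(A):
        K[i] = np.minimum(a, B).sum(axis=1) / (np.maximum(a, B).sum(axis=1) + 1e-12)
    return K

def gen_nys_gmm(k, X1, X2, seed=0):
    """Nystrom features: sample k landmark rows from X1, eigendecompose the
    k x k landmark kernel K_s = V diag(d) V^T, and map each row of X2 to
    K(landmarks, row)^T V diag(d)^{-1/2}, a k-dim dense representation."""
    rng = np.random.default_rng(seed)
    Xs = X1[rng.choice(X1.shape[0], size=k, replace=False)]
    d, V = np.linalg.eigh(gmm_matrix(Xs, Xs))
    d = np.maximum(d, 1e-12)                # guard tiny/negative eigenvalues
    return gmm_matrix(Xs, X2).T @ (V / np.sqrt(d))
```

By construction, inner products of the new features reproduce the standard Nystrom approximation $K(X, S)\, K_S^{-1}\, K(S, X)$ over the landmark set $S$; when $k$ equals the number of training points, this recovers the kernel matrix exactly.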

3 Experiments

We provide an extensive experimental study on a collection of fairly large, publicly available classification datasets. We report the classification results for four methods: 1) GMM-GCWS, 2) GMM-NYS, 3) RBF-NYS, 4) RBF-RFF. Even though we focus on reporting classification results, we should mention that these methods generate new data representations which can also be used for, e.g., regression and clustering. Note that, due to the discrete nature of the hashed values, GMM-GCWS can also be directly used for building hash tables for efficient near neighbor search.

3.1 Datasets

Table 1 summarizes the datasets for our experimental study. Because the Nystrom method is a sampling algorithm, we know that it will recover the original kernel result as the number of samples ($k$) approaches the number of training examples. Thus, it makes sense to compare the algorithms on larger datasets. Nevertheless, we still provide the experimental results on SVMGuide3, a tiny dataset with merely 1,243 training data points, as a sanity check.

Dataset  # train  # test  # dim  linear (%)  RBF (%) (best $\gamma$)  GMM (%)
SVMGuide3 1,243 41 21 36.5 100 (120) 100
Letter 15,000 5,000 16 61.7 97.4 (11) 97.3
Covertype25k 25,000 25,000 54 71.5 84.7 (150) 84.5
SensIT 78,823 19,705 100 80.5 – (0.1)
Webspam 175,000 175,000 254 93.3 – (35)
PAMAP105 185,548 185,548 51 83.4 – (18)
PAMAP101 186,581 186,580 51 79.2 – (1.5)
Covertype 290,506 290,506 54 71.3 – (150)
RCV1 338,699 338,700 47,236 97.7 – (2.0)
Table 1: Public datasets and kernel SVM results. The datasets were downloaded from the UCI repository or the LIBSVM website. For the first 3 datasets (which are small enough), we report the test classification accuracies for the linear kernel, the RBF kernel (with the best $\gamma$ value in parentheses), and the GMM kernel, at the best SVM $l_2$-regularization values. For GMM and RBF, we use the LIBSVM pre-computed kernel functionality. For the other datasets, we only report the linear SVM results and the best $\gamma$ values obtained from a sub-sample of each dataset.

When using modern linear algorithms (especially online learning), the storage and computational cost are mainly determined by the number of nonzero entries per data point. In our study, after hashing (sampling), the storage and computational cost are dominated by $k$, the number of samples. We report the experimental results for a range of $k$ values, as we believe that for most practical applications a very large $k$ would not be desirable (and would take too much time/space to complete the experiments). Nevertheless, for RCV1, we also report the results for larger $k$, for comparison purposes (and for our curiosity). We always use the LIBLINEAR package [2] for training linear SVMs on the original data as well as the hashed data.

3.2 Experimental Results

Figure 1 reports the test classification accuracies on SVMGuide3, for 6 different sample sizes ($k$) and 4 different algorithms: 1) GMM-GCWS, 2) GMM-NYS, 3) RBF-NYS, 4) RBF-RFF. Again, we should emphasize that the storage and computational cost are largely determined by the sample size $k$, which is also the number of nonzero entries per data vector in the transformed space.

Because the Nystrom method is based on sampling, we know that if $k$ is large enough, the performance will approach that of the original kernel method. In particular, for this tiny dataset, since there are only 1,243 training data points, it is expected that as $k$ approaches the training set size, the classification results of GMM-NYS and RBF-NYS should be close to 100%, as validated in the upper left panel of Figure 1. It is thus more meaningful to examine the classification results for smaller $k$ values.

Figure 1: SVMGuide3: Test classification accuracies for 6 different $k$ values and 4 different hashing algorithms: 1) GMM-GCWS, 2) GMM-NYS, 3) RBF-NYS, 4) RBF-RFF. After the hashed data are generated, we use the LIBLINEAR package [2] for training linear SVMs for a wide range of $l_2$-regularization $C$ values (i.e., the x-axis). The classification results are averaged over 100 repetitions (at each $k$ and each $C$). We can see that, at the same sample size $k$ (and when $k$ is not too large), GMM-NYS produces substantially more accurate results than RBF-RFF.

From Figure 1, it is obvious that when $k$ is not too large, GMM-NYS performs substantially better than RBF-RFF. Note that, in order to show reliable (and smooth) curves for this (tiny) dataset, we repeat each experiment (at each $k$ and each $C$) 100 times and report the averaged results. For the other datasets, we report averaged results from 10 repetitions.

Note that for SVMGuide3, the original classification accuracy using linear SVM is low (36.5%, see Table 1). The hashing methods produce substantially better results even with small $k$.

Figure 2 reports the test classification results for Letter, which also confirm that GMM-NYS produces substantially more accurate results than RBF-RFF. Again, while the original test classification accuracy using linear SVM is low (61.7%, see Table 1), GMM-NYS with merely a small number of samples already becomes more accurate.

Figure 2: Letter: Test classification accuracies for 6 different $k$ values and 4 different algorithms.

Figure 3 reports the test classification accuracies for Covertype25k. Once again, the results confirm that GMM-NYS produces substantially more accurate results than RBF-RFF. GMM-NYS becomes more accurate than linear SVM on the original data once $k$ is sufficiently large.

Figures 4, 5, 6, 7, 8 report the test classification accuracies for SensIT, Webspam, PAMAP105, PAMAP101, Covertype, respectively. As these datasets are fairly large, we could not report nonlinear kernel (GMM and RBF) SVM results, although we can still report linear SVM results; see Table 1. Again, these figures confirm that 1) GMM-NYS is substantially more accurate than RBF-RFF; and 2) GMM-NYS becomes more accurate than linear SVM once $k$ is large enough.

Finally, Figures 9, 10, and 11 report the test classification results on the RCV1 dataset. Because the performance of RBF-RFF is much worse than that of the other methods, in Figure 9 we report only the results for GMM-GCWS, GMM-NYS, and RBF-NYS, for better clarity. In addition, we report the results for larger $k$ in Figure 11, to provide a more comprehensive comparison. All these results confirm that GMM-NYS is a strong competitor of RBF-RFF.

Figure 3: Covertype25k: Test classification accuracies for 6 $k$ values and 4 different algorithms.

Figure 4: SensIT: Test classification accuracies for 6 $k$ values and 4 different algorithms.

Figure 5: Webspam: Test classification accuracies for 6 $k$ values and 4 different algorithms.

Figure 6: PAMAP105: Test classification accuracies for 6 $k$ values and 4 different algorithms.

Figure 7: PAMAP101: Test classification accuracies for 6 $k$ values and 4 different algorithms.

Figure 8: Covertype: Test classification accuracies for 6 $k$ values and 4 different algorithms.

Figure 9: RCV1: Test classification accuracies for 6 $k$ values. For better clarity, we do not display the results for RBF-RFF, because they are much worse than the results of the other methods. See Figure 10 for the results of RBF-RFF.

Figure 10: RCV1: Test classification accuracies for 6 $k$ values and 4 different algorithms.

Figure 11: RCV1: Test classification accuracies for larger $k$ values and 4 different algorithms.

4 Conclusion

The recently proposed GMM kernel has proven effective as a measure of data similarity, through extensive experiments in the prior work [5]. For large-scale machine learning, it is crucial to be able to linearize nonlinear kernels. The work [5] demonstrated that the GCWS hashing method for linearizing the GMM kernel (GMM-GCWS) typically produces substantially more accurate results than the well-known random Fourier feature (RFF) approach for linearizing the RBF kernel (RBF-RFF).

In this study, we apply the general and well-known Nystrom method for approximating the GMM kernel (GMM-NYS) and we show, through extensive experiments, that the results produced by GMM-NYS are substantially more accurate than the results obtained using RBF-RFF. This phenomenon is largely expected because random projection based algorithms often have much larger variances than sampling based methods [6].

References

  • [1] L. Bottou, O. Chapelle, D. DeCoste, and J. Weston, editors. Large-Scale Kernel Machines. The MIT Press, Cambridge, MA, 2007.
  • [2] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9:1871–1874, 2008.
  • [3] S. Ioffe. Improved consistent sampling, weighted minhash and L1 sketching. In ICDM, pages 246–255, Sydney, AU, 2010.
  • [4] P. Li. 0-bit consistent weighted sampling. In KDD, Sydney, Australia, 2015.
  • [5] P. Li. Generalized min-max kernel and generalized consistent weighted sampling. Technical report, arXiv:1605.05721, 2016.
  • [6] P. Li, K. W. Church, and T. J. Hastie. Conditional random sampling: A sketch-based sampling technique for sparse data. In NIPS, pages 873–880, Vancouver, BC, Canada, 2006.
  • [7] P. Li, A. Shrivastava, J. Moore, and A. C. König. Hashing algorithms for large-scale learning. In NIPS, Granada, Spain, 2011.
  • [8] M. Manasse, F. McSherry, and K. Talwar. Consistent weighted sampling. Technical Report MSR-TR-2010-73, Microsoft Research, 2010.
  • [9] E. J. Nyström. Über die praktische Auflösung von Integralgleichungen mit Anwendungen auf Randwertaufgaben. Acta Mathematica, 54(1):185–204, 1930.
  • [10] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In NIPS, 2007.
  • [11] W. Rudin. Fourier Analysis on Groups. John Wiley & Sons, New York, NY, 1990.
  • [12] C. K. I. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. In NIPS, pages 682–688, 2001.
  • [13] T. Yang, Y.-f. Li, M. Mahdavi, R. Jin, and Z.-H. Zhou. Nyström method vs random fourier features: A theoretical and empirical comparison. In NIPS, pages 476–484. 2012.