À la Carte – Learning Fast Kernels

12/19/2014 · by Zichao Yang, et al.

Kernel methods have great promise for learning rich statistical representations of large modern datasets. However, compared to neural networks, kernel methods have been perceived as lacking in scalability and flexibility. We introduce a family of fast, flexible, lightly parametrized and general purpose kernel learning methods, derived from Fastfood basis function expansions. We provide mechanisms to learn the properties of groups of spectral frequencies in these expansions, which require only O(m log d) time and O(m) memory, for m basis functions and d input dimensions. We show that the proposed methods can learn a wide class of kernels, outperforming the alternatives in accuracy, speed, and memory consumption.




1 Introduction

The generalisation properties of a kernel method are entirely controlled by its kernel function, which represents an inner product of arbitrarily many basis functions. Kernel methods typically face a tradeoff between speed and flexibility: methods that learn a kernel yield function classes that are slow and expensive to compute, whereas many fast function classes are not adaptive. This problem is compounded by the fact that expressive kernel learning methods are most needed on large modern datasets, which provide unprecedented opportunities to automatically learn rich statistical representations.

For example, the recent spectral kernels proposed by Wilson and Adams (2013) are flexible, but require an arbitrarily large number of basis functions, combined with many free hyperparameters, which can lead to major computational restrictions. Conversely, the recent Random Kitchen Sinks of Rahimi and Recht (2009) and Fastfood (Le et al., 2013) methods offer efficient finite basis function expansions, but only for known kernels, a priori hand chosen by the user. These methods do not address the fundamental issue that it is exceptionally difficult to know a priori which kernel might perform well; indeed, an appropriate kernel might not even be available in closed form.

We introduce a family of kernel learning methods which are expressive, scalable, and general purpose. In particular, we introduce flexible kernels, including a novel piecewise linear radial kernel, and derive Fastfood basis function expansions for these kernels. We observe that the frequencies in these expansions can in fact be adjusted, and provide a mechanism for automatically learning these frequencies via marginal likelihood optimisation. Individually adjusting these frequencies provides the flexibility to learn any translation invariant kernel. However, such a procedure has as many free parameters as basis functions, which can lead to over-fitting, troublesome local optima, and computational limitations. We therefore further introduce algorithms which can control the scales, spread, and locations of groups of frequencies. These methods are computationally efficient, and allow for great flexibility, with a minimal number of free parameters requiring training. By controlling groups of spectral frequencies, we can use arbitrarily many basis functions with no risk of over-fitting. Furthermore, these methods do not require that the input data have any special structure (e.g., regular sampling intervals).

Overall, we introduce four new kernel learning methods with distinct properties, and evaluate each of these methods on a wide range of real datasets. We show major advantages in accuracy, speed, and memory consumption. We begin by describing related work in more detail in section 2. We then provide additional background on kernel methods, including basic properties and Fastfood approximations, in section 3. In section 4 we introduce a number of new tools for kernel learning. Section 5 contains an evaluation of the proposed techniques on many datasets. We conclude with a discussion in section 6.

2 Related Work

Rahimi and Recht (2008) introduced Random Kitchen Sinks, finite Fourier basis function approximations to fixed stationary kernels, using a Monte Carlo sum obtained by sampling from spectral densities. For greater flexibility, one can consider a weighted sum of the Random Kitchen Sinks expansions of Rahimi and Recht (2009). In this case, the expansions are fixed, corresponding to a priori chosen kernels, but the weighting can be learned from the data.

Recently, Lu et al. (2014) have shown how weighted sums of Random Kitchen Sinks can be incorporated into scalable logistic regression models. First, they separately learn the parameters of multiple logistic regression models, each of which uses a separate Random Kitchen Sinks expansion, enabling parallelization. They then jointly learn the weightings of each expansion. Learning proceeds through stochastic gradient descent. Lu et al. (2014) achieve promising performance on acoustic modelling problems, in some instances outperforming deep neural networks.

Alternatively, Lázaro-Gredilla et al. (2010) considered optimizing the locations of all spectral frequencies in Random Kitchen Sinks expansions, as part of a sparse spectrum Gaussian process formalism (SSGPR).

For further gains in scalability, Le et al. (2013) approximate the sampling step in Random Kitchen Sinks by a combination of matrices which enable fast computation. The resulting Fastfood expansions perform similarly to Random Kitchen Sinks expansions (Le et al., 2013), but can be computed more efficiently. In particular, the Fastfood expansion requires only O(m log d) computations and O(m) memory, for m basis functions and d input dimensions.

To allow for highly flexible kernel learning, Wilson and Adams (2013) proposed spectral mixture kernels, derived by modelling a spectral density with a scale-location mixture of Gaussians. These kernels can be computationally expensive, as they require arbitrarily many basis functions combined with many free hyperparameters. Recently, Wilson et al. (2014) modified spectral mixture kernels for Kronecker structure, and generalised scalable Kronecker (tensor product) based learning and inference procedures to incomplete grids. Combining these kernels and inference procedures in a method called GPatt, Wilson et al. (2014) show how to learn rich statistical representations of large datasets with kernels, naturally enabling extrapolation on problems involving images, video, and spatiotemporal statistics. Indeed the flexibility of spectral mixture kernels makes them ideally suited to large datasets. However, GPatt requires that the input domain of the data has at least partial grid structure in order to see efficiency gains.

In our paper, we consider weighted mixtures of Fastfood expansions, where we propose to learn both the weighting of the expansions and the properties of the expansions themselves. We propose several approaches under this framework. We consider learning all of the spectral properties of a Fastfood expansion. We also consider learning the properties of groups of spectral frequencies, for lighter parametrisations and useful inductive biases, while retaining flexibility. For this purpose, we show how to incorporate Gaussian spectral mixtures into our framework, and also introduce novel piecewise linear radial kernels. Overall, we show how to perform simultaneously flexible and scalable kernel learning, with interpretable, lightly parametrised and general purpose models, requiring no special structure in the data. We focus on regression for clarity, but our models extend to classification and non-Gaussian likelihoods without additional methodological innovation.

3 Kernel Methods

3.1 Basic Properties

Denote by 𝒳 the domain of covariates and by 𝒴 the domain of labels. Moreover, denote by X = {x_1, …, x_n} and Y = {y_1, …, y_n} data drawn from a joint distribution p over 𝒳 × 𝒴. Finally, let k : 𝒳 × 𝒳 → R be a Hilbert–Schmidt kernel (Mercer, 1909). Loosely speaking, we require that k be symmetric, satisfying that every matrix K with entries K_ij = k(x_i, x_j) be positive semidefinite, K ⪰ 0.

The key idea in kernel methods is that they allow one to represent inner products in a high-dimensional feature space implicitly using

k(x, x′) = ⟨φ(x), φ(x′)⟩.  (1)

While the existence of such a mapping φ is guaranteed by the theorem of Mercer (1909), manipulation of φ is not generally desirable since it might be infinite dimensional. Instead, one uses the representer theorem (Kimeldorf and Wahba, 1970; Schölkopf et al., 2001) to show that when solving regularized risk minimization problems, the optimal solution can be found as a linear combination of kernel functions:

f(x) = Σ_{i=1}^{n} α_i k(x_i, x).

While this expansion is beneficial for small amounts of data, it creates an unreasonable burden when the number n of datapoints is large. This problem can be overcome by computing approximate expansions.

3.2 Fastfood

The key idea in accelerating kernel machines is to find an explicit feature map φ such that k(x, x′) can be approximated by ⟨φ(x), φ(x′)⟩ in a manner that is both fast and memory efficient. Following the spectral approach proposed by Rahimi and Recht (2009), one exploits that for translation invariant kernels we have

k(x, x′) = ∫ ρ(ω) e^{−i⟨ω, x − x′⟩} dω.  (2)

Here ρ(ω) = ρ(−ω) ≥ 0 to ensure that the imaginary parts of the integral vanish. Without loss of generality we assume that ρ is normalized, e.g. ‖ρ‖₁ = 1. A similar spectral decomposition holds for inner product kernels (Le et al., 2013; Schoenberg, 1942).

Rahimi and Recht (2009) suggested to sample from the spectral distribution ρ(ω) for a Monte Carlo approximation to the integral in (2). For example, the Fourier transform of the popular Gaussian kernel is also Gaussian, and thus samples ω ∼ N(0, σ⁻² I) can be used to approximate a Gaussian (RBF) kernel with length-scale σ.
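As a concrete illustration, this Monte Carlo construction can be sketched in a few lines of NumPy (an illustrative sketch, not the authors' code; `rks_features` is a hypothetical helper):

```python
import numpy as np

def rks_features(X, m, sigma, rng):
    """Random Kitchen Sinks features for the Gaussian (RBF) kernel.

    Frequencies are drawn from the kernel's spectral density, which for
    an RBF kernel with length-scale sigma is N(0, sigma^-2 I).
    """
    n, d = X.shape
    W = rng.normal(scale=1.0 / sigma, size=(m, d))  # spectral frequencies
    b = rng.uniform(0.0, 2.0 * np.pi, size=m)       # random phases
    return np.sqrt(2.0 / m) * np.cos(X @ W.T + b)   # (n, m) feature matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
Phi = rks_features(X, m=2000, sigma=1.0, rng=rng)
K_approx = Phi @ Phi.T                              # Monte Carlo Gram estimate
K_exact = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
```

With m = 2000 basis functions the entrywise error of the approximate Gram matrix is already on the order of 1/√m.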

This procedure was refined by Le et al. (2013) by approximating the sampling step with a combination of matrices that admit fast computation. They show that one may compute Fastfood approximate kernel expansions via the product

V = (1/(σ√d)) S H G Π H B.  (3)

The random matrices S, H, G, Π, B are chosen such as to provide a sufficient degree of randomness while also allowing for efficient computation.

Binary decorrelation B

The entries of this diagonal matrix are drawn uniformly from {±1}. This ensures that the data have zero mean in expectation over all matrices B.

Hadamard matrix H

It is defined recursively via

H₁ = [1]  and  H_{2d} = [[H_d, H_d], [H_d, −H_d]].

The recursion shows that the dense matrix H_d admits fast multiplication in O(d log d) time, i.e. as efficiently as the FFT allows.
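The same recursion yields the standard in-place fast Walsh–Hadamard transform; a minimal sketch (`fwht` is a hypothetical helper, not from the paper):

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform in O(d log d) time.

    Applies the (unnormalized) Hadamard matrix H_d to x, where d = len(x)
    must be a power of two, without materializing the dense matrix.
    """
    x = np.asarray(x, dtype=float).copy()
    d = len(x)
    h = 1
    while h < d:
        for i in range(0, d, 2 * h):            # butterfly over each block
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b
            x[i + h:i + 2 * h] = a - b
        h *= 2
    return x

# Agrees with the dense recursion H_{2d} = [[H_d, H_d], [H_d, -H_d]]:
H4 = np.array([[1, 1, 1, 1],
               [1, -1, 1, -1],
               [1, 1, -1, -1],
               [1, -1, -1, 1]], dtype=float)
x = np.array([1.0, 2.0, 3.0, 4.0])
assert np.allclose(fwht(x), H4 @ x)
```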

Permutation matrix Π

This decorrelates the eigensystems of subsequent Hadamard matrices. Generating such a random permutation (and executing it) can be achieved by reservoir sampling, which amounts to d in-place pairwise swaps. It ensures that the eigensystems of the two Hadamard matrices are effectively uncorrelated.

Gaussian matrix G

It is a diagonal matrix with Gaussian entries drawn iid via G_ii ∼ N(0, 1). The result of using it is that each of the rows of HGΠHB consists of iid Gaussian random variables. Note, though, that the rows of this matrix product are not quite independent.

Scaling matrix S

This diagonal matrix encodes the spectral properties of the associated kernel. Consider the spectral density ρ(ω) of (2). There we draw ω from the spherically symmetric distribution defined by ρ and use its length ‖ω‖ to rescale the rows of HGΠHB via S_ii = ‖ω_i‖ ‖G‖_F⁻¹.

It is straightforward to change kernels, for example, by adjusting S. Moreover, all the computational benefits of decomposing terms via (3) remain even after adjusting S. Therefore we can customize kernels for the problem at hand rather than applying a generic kernel, without incurring additional computational expenses.
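Putting the five matrices together, a Fastfood feature map for the RBF kernel can be sketched as follows (an illustrative reimplementation under the definitions above, not the authors' reference code; a dense Hadamard matrix is used for clarity where production code would use the fast Walsh–Hadamard transform):

```python
import numpy as np

def fastfood_features(X, m, sigma, rng):
    """Sketch of Fastfood RBF features.

    Each block of d frequencies is V = S H G Pi H B / (sigma * sqrt(d));
    stacking m/d independent blocks yields m frequencies while storing
    only diagonal entries and a permutation per block.
    """
    n, d = X.shape
    assert d & (d - 1) == 0 and m % d == 0, "pad d to a power of two"
    # Dense Hadamard for clarity; the FWHT would give O(m log d) instead.
    H = np.array([[1.0]])
    while H.shape[0] < d:
        H = np.block([[H, H], [H, -H]])
    Z = []
    for _ in range(m // d):
        B = rng.choice([-1.0, 1.0], size=d)        # binary decorrelation
        Pi = rng.permutation(d)                    # permutation
        G = rng.normal(size=d)                     # Gaussian diagonal
        S = np.sqrt(rng.chisquare(d, size=d)) / np.linalg.norm(G)  # scaling
        T = (X * B) @ H                            # H B x
        T = T[:, Pi] * G                           # G Pi H B x
        T = (T @ H) * S / (sigma * np.sqrt(d))     # S H G Pi H B x / (sigma sqrt d)
        Z.append(T)
    Z = np.hstack(Z)                               # projections onto m frequencies
    return np.hstack([np.cos(Z), np.sin(Z)]) / np.sqrt(m)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))
Phi = fastfood_features(X, m=2048, sigma=1.0, rng=rng)
K_approx = Phi @ Phi.T
K_exact = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
```

The cosine/sine pairs make the estimator unbiased, and the diagonal of the approximate Gram matrix is exactly one by construction.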

4 À la Carte

In keeping with the culinary metaphor of Fastfood, we now introduce a flexible and efficient approach to kernel learning à la carte. That is, we will adjust the spectrum of a kernel in such a way as to allow for a wide range of translation-invariant kernels. Note that unlike previous approaches, this can be accomplished without any additional cost since these kernels only differ in terms of their choice of scaling.

In Random Kitchen Sinks and Fastfood, the frequencies are sampled from the spectral density . One could instead learn the frequencies using a kernel learning objective function. Moreover, with enough spectral frequencies, such an approach could learn any stationary (translation invariant) kernel. This is because each spectral frequency corresponds to a point mass on the spectral density in (2), and point masses can model any density function.

However, since there are as many spectral frequencies as there are basis functions, individually optimizing over all the frequencies can still be computationally expensive, and susceptible to over-fitting and many undesirable local optima. In particular, we want to enforce smoothness over the spectral distribution. We therefore also propose to learn the scales, spread, and locations of groups of spectral frequencies, in a procedure that modifies the expansion (3) for fast kernel learning. This procedure results in efficient, expressive, and lightly parametrized models.

In sections 4.1 and 4.2 we describe a procedure for learning the free parameters of these models, assuming we already have a Fastfood expansion. Next we introduce four new models under this efficient framework – a Gaussian spectral mixture model in section 4.3, a piecewise linear radial model in section 4.4, and models which learn the scaling (S), Gaussian (G), and binary decorrelation (B) matrices in Fastfood in section 4.5.

4.1 Learning the Kernel

We use a Gaussian process (GP) formalism for kernel learning. For an introduction to Gaussian processes, see Rasmussen and Williams (2006), for example. Here we assume we have an efficient Fastfood basis function expansion for kernels of interest; in the next sections we derive such expansions.

For clarity, we focus on regression, but we note our methods can be used for classification and non-Gaussian likelihoods without additional methodological innovation. When using Gaussian processes for classification, for example, one could use standard approximate Bayesian inference to represent an approximate marginal likelihood (Rasmussen and Williams, 2006). A primary goal of this paper is to demonstrate how Fastfood can be extended to learn a kernel, independently of a specific kernel learning objective. However, the marginal likelihood of a Gaussian process provides a general purpose probabilistic framework for kernel learning, particularly suited to training highly expressive kernels (Wilson, 2014). Note that there are many other choices for kernel learning objectives. For instance, Ong et al. (2003) provide a rather encyclopedic list of alternatives.

We assume that the observations y are generated from a latent function f evaluated at the covariates, corrupted by additive noise:

y(x) = f(x) + ε,  where ε ∼ N(0, σ_n²).  (4)

Here f is drawn from a Gaussian process, f ∼ GP(0, k). Eq. (4) means that any finite dimensional realization f = [f(x_1), …, f(x_n)] is drawn from a normal distribution N(0, K), where K denotes the associated kernel matrix with K_ij = k(x_i, x_j). This also means that y ∼ N(0, K + σ_n² I), since the additive noise is drawn iid for all x. For finite-dimensional feature spaces we can equivalently use the explicit feature space representation of Williams (1998):

f(x) = ⟨w, φ(x)⟩,  with w drawn from a Gaussian prior.

The kernel of the Gaussian process is parametrized by hyperparameters θ. Learning the kernel therefore involves learning θ and the noise variance σ_n² from the data, or equivalently, inferring the structure of the feature map φ.

Our working assumption is that k corresponds to a Q-component mixture model of kernels with associated weights v_q². Moreover we assume that we have access to a Fastfood expansion of each of the Q components into m basis functions. This leads to

k(x, x′) = Σ_{q=1}^{Q} v_q² Σ_{j=1}^{m} φ_{qj}(x) φ_{qj}(x′).

This kernel is parametrized by θ = {v_q, θ_q}_{q=1}^{Q}. Here v_q² are mixture weights and θ_q are parameters of the (non-linear) basis functions φ_{qj}.

4.2 Marginal Likelihood

We can marginalise the Gaussian process governing f by integrating away the variables f above to express the marginal likelihood of the data solely in terms of the kernel hyperparameters θ and noise variance σ_n².

Denote by Φ the design matrix, as parametrized by θ, obtained from evaluating the basis functions φ_{qj} on the training inputs. Moreover, denote by Λ the diagonal scaling matrix obtained from the mixture weights via Λ = diag(v_1², …, v_Q²) ⊗ I_m.

Since f and the noise ε are independent, their covariances are additive. For n training datapoints X with targets y, we therefore obtain the marginal likelihood

y | X, θ, σ_n ∼ N(0, Φ Λ Φ⊤ + σ_n² I),

and hence the negative log marginal likelihood is

−log p(y | X, θ, σ_n) = ½ y⊤ (Φ Λ Φ⊤ + σ_n² I)⁻¹ y + ½ log det(Φ Λ Φ⊤ + σ_n² I) + (n/2) log 2π.
To learn the kernel we minimize the negative log marginal likelihood of (4.2) with respect to θ and σ_n. Similarly, the predictive distribution at a test input x⋆ can be evaluated using

y⋆ | x⋆, X, y ∼ N(k⋆⊤ (K + σ_n² I)⁻¹ y, k⋆⋆ − k⋆⊤ (K + σ_n² I)⁻¹ k⋆ + σ_n²),

where k⋆ denotes the vector of cross covariances between the test point x⋆ and the training points in X, and k⋆⋆ = k(x⋆, x⋆). On closer inspection we note that these expressions can be simplified greatly in terms of Φ and Λ since

K = Φ Λ Φ⊤  and  k⋆ = Φ Λ φ(x⋆),

with an analogous expression for k⋆⋆. More importantly, instead of solving the problem in terms of the n × n kernel matrix K we can perform inference in terms of Φ directly. This has immediate benefits:

  • Storing the solution only requires O(Qm) parameters regardless of the number of training points n, provided that Φ can be stored and computed efficiently, e.g. by Fastfood.

  • Computation of the predictive variance is equally efficient: Φ Λ Φ⊤ has at most rank Qm, hence the evaluation of the predictive distribution can be accomplished via the Sherman–Morrison–Woodbury formula, thus requiring only O(n (Qm)²) computations. Moreover, randomized low-rank approximations of Φ Λ Φ⊤, using e.g. randomized projections, as proposed by Halko et al. (2009), allow for even more efficient computation.

Overall, standard Gaussian process kernel representations (Rasmussen and Williams, 2006) require O(n³) computations and O(n²) memory. Therefore, when using a Gaussian process kernel learning formalism, the expansion in this section, using Φ, is computationally preferable whenever Qm ≪ n.
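The Woodbury-style evaluation of the negative log marginal likelihood can be sketched as follows (an illustrative sketch; `nlml_features` is a hypothetical helper, and the diagonal weighting Λ is assumed to be folded into Φ):

```python
import numpy as np

def nlml_features(Phi, y, noise):
    """Negative log marginal likelihood of y ~ N(0, Phi Phi^T + noise^2 I).

    Computed in O(n m^2) time via the Sherman-Morrison-Woodbury identity
    and the matrix determinant lemma, never forming the n x n kernel matrix.
    """
    n, m = Phi.shape
    A = noise**2 * np.eye(m) + Phi.T @ Phi           # m x m inner matrix
    L = np.linalg.cholesky(A)
    alpha = np.linalg.solve(L, Phi.T @ y)
    # y^T (Phi Phi^T + s^2 I)^-1 y = (y^T y - y^T Phi A^-1 Phi^T y) / s^2
    quad = (y @ y - alpha @ alpha) / noise**2
    # log det(Phi Phi^T + s^2 I_n) = log det(A) + (n - m) log s^2
    logdet = 2.0 * np.log(np.diag(L)).sum() + (n - m) * np.log(noise**2)
    return 0.5 * (quad + logdet + n * np.log(2.0 * np.pi))
```

The design choice is that all heavy linear algebra involves only the m × m matrix A, which is what makes marginal likelihood optimization feasible when m ≪ n.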

4.3 Gaussian Spectral Mixture Models

For the Gaussian Spectral Mixture kernels of Wilson and Adams (2013), translation invariance holds, yet rotation invariance is violated: the kernels satisfy k(x, x′) = k(x + δ, x′ + δ) for all δ; however, in general, rotations R do not leave k invariant, i.e. k(Rx, Rx′) ≠ k(x, x′). These kernels have the following explicit representation in terms of their Fourier transform:

ρ(ω) = Σ_q (v_q² / 2) [N(ω; μ_q, Σ_q) + N(ω; −μ_q, Σ_q)].  (7)

In other words, rather than choosing a spherically symmetric representation ρ(ω) as typical for (2), Wilson and Adams (2013) pick a mixture of Gaussians with mean frequencies μ_q and covariances Σ_q that satisfy the symmetry condition ρ(ω) = ρ(−ω), but not rotation invariance. By the linearity of the Fourier transform, we can apply the inverse transform component-wise to obtain

k(x − x′) = Σ_q v_q² exp(−½ (x − x′)⊤ Σ_q (x − x′)) cos⟨μ_q, x − x′⟩.  (8)

Lemma 1 (Universal Basis)

The expansion (8) can approximate any translation-invariant kernel by approximating its spectral density.


This follows since mixtures of Gaussians are universal approximators for densities (Silverman, 1986), as is well known in the kernel density estimation literature. By the Fourier–Plancherel theorem, approximation in the Fourier domain amounts to approximation in the original domain, hence the result applies to the kernel. ∎

Note that the expression in (8) is not directly amenable to the fast expansions provided by Fastfood since the distributions are shifted. However, a small modification allows us to efficiently compute kernels of the form of (8). The key insight is that shifts in Fourier space by μ_q are accomplished by multiplication by e^{i⟨μ_q, x⟩} in the original domain. Here the inner product ⟨μ_q, x⟩ can be precomputed, which costs only O(d) operations per data point. Moreover, rescaling by the covariance Σ_q in Fourier space induces a corresponding rescaling of the inputs in the original domain, which can be accomplished as preprocessing. For diagonal Σ_q the cost is O(d).

In order to preserve translation invariance we compute a symmetrized set of features. We have the following algorithm (we assume diagonal Σ_q — otherwise simply precompute and rescale the inputs):

  Preprocessing — Input: dimension d, number of groups Q, scales {Σ_q}
  for each group q generate random matrices B_q, Π_q, G_q, S_q
  Combine group scaling: S_q ← S_q Σ_q^{1/2}
  Feature Computation — Input: data point x, shifts {μ_q}
  for q = 1 to Q do
      t_q ← S_q H G_q Π_q H B_q x (Fastfood product)
      Compute features from t_q and the precomputed shift ⟨μ_q, x⟩, using both +μ_q and −μ_q
  end for

To learn the kernel we learn the weights v_q, dispersions Σ_q and locations μ_q of the groups of spectral frequencies via marginal likelihood optimization, as described in section 4.2. This results in a kernel learning approach which is similar in flexibility to individually learning all spectral frequencies, and is less prone to over-fitting and local optima. In practice, this can mean optimizing over a handful of free parameters per group instead of one free parameter per basis function, with improved predictive performance and efficiency. See section 5 for more detail.

4.4 Piecewise Linear Radial Kernel

In some cases the freedom afforded by a mixture of Gaussians in frequency space may be more than what is needed. In particular, there exist many cases where we want to retain invariance under rotations while simultaneously being able to adjust the spectrum according to the data at hand. For this purpose we introduce a novel piecewise linear radial kernel.

Recall that the spectral density ρ(ω) in (2) governs the regularization properties of the kernel. We require ρ(ω) = ρ(‖ω‖) for rotation invariance. For instance, for the Gaussian RBF kernel we have

ρ(‖ω‖) ∝ exp(−σ² ‖ω‖² / 2).

For high dimensional inputs, the RBF kernel suffers from a concentration of measure problem (Le et al., 2013), where samples are tightly concentrated around the mode of the radial density, ‖ω‖ ≈ √d/σ. A fix is relatively easy, since we are at liberty to pick any nonnegative ρ(‖ω‖) in designing kernels. This procedure is flexible but leads to intractable integrals: the Hankel transform of ρ, i.e. the radial part of the Fourier transform, needs to be analytic if we want to compute the kernel in closed form.

Figure 1: Piecewise linear functions. Left: a single basis function. Right: a linear combination of three basis functions.


However, if we remain in the Fourier domain, we can use ρ(‖ω‖) directly and sample from it. This strategy kills two birds with one stone: we do not need to compute the inverse Fourier transform, and we have a readily available sampling distribution at our disposal for the Fastfood expansion coefficients. All that is needed is to find an efficient parametrization of ρ(‖ω‖).

We begin by providing an explicit expression for piecewise linear functions φ_i with kinks only at the locations ω_{i−1}, ω_i and ω_{i+1}. In other words, φ_i is a 'hat' function with its mode at ω_i and support [ω_{i−1}, ω_{i+1}]. It is parametrized as

φ_i(ω) = max(0, min((ω − ω_{i−1}) / (ω_i − ω_{i−1}), (ω_{i+1} − ω) / (ω_{i+1} − ω_i))).

By construction each basis function is piecewise linear with φ_i(ω_i) = 1, and moreover φ_i(ω) ≥ 0 for all ω.
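The hat function itself is a one-liner (`hat` is a hypothetical helper mirroring the parametrization above):

```python
import numpy as np

def hat(omega, lo, mid, hi):
    """Piecewise linear 'hat' basis: zero outside [lo, hi], rising linearly
    to 1 at its mode `mid` and falling linearly back to zero. Nonnegative
    combinations of such hats parametrize the radial spectrum."""
    omega = np.asarray(omega, dtype=float)
    rise = (omega - lo) / (mid - lo)
    fall = (hi - omega) / (hi - mid)
    return np.maximum(0.0, np.minimum(rise, fall))
```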

Lemma 2

Denote by {ω_0, …, ω_k} a sequence of locations with ω_i < ω_{i+1}, and let φ_i be the associated hat functions. Moreover, let ρ(ω) := Σ_i α_i φ_i(ω). Then ρ(ω) ≥ 0 for all ω if and only if α_i ≥ 0 for all i. Moreover, α parametrizes all piecewise linear functions with kinks at the chosen locations.

Now that we have a parametrization we only need to discuss how to draw ω from ρ(‖ω‖). We have several strategies at our disposal:

  • ρ can be normalized explicitly, since the integral of each hat function is available in closed form. Each segment then occurs with probability proportional to its normalized weight, so we first sample a segment and then sample from the corresponding hat density explicitly by inverting the associated cumulative distribution function (it is piecewise quadratic).

  • Note that sampling can induce considerable variance in the choice of locations. An alternative is to invert the cumulative distribution function and pick locations equidistantly in the quantile domain. This approach is commonly used in particle filtering (Doucet et al., 2001). It is equally cheap yet substantially reduces the variance when sampling, hence we choose this strategy.
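The low-variance quantile scheme can be sketched as follows (`equidistant_samples` is a hypothetical helper; the exact hat-function CDF is piecewise quadratic, while this sketch linearly interpolates a trapezoidal CDF for brevity):

```python
import numpy as np

def equidistant_samples(grid, density, m):
    """Draw m low-variance 'samples' from a 1-D density tabulated on `grid`
    by inverting its CDF at equidistant quantiles (k + 0.5) / m, the
    resampling scheme familiar from particle filtering."""
    pdf = np.asarray(density, dtype=float)
    cdf = np.cumsum((pdf[1:] + pdf[:-1]) / 2.0 * np.diff(grid))  # trapezoid rule
    cdf = np.concatenate([[0.0], cdf])
    cdf /= cdf[-1]                                   # normalize to a proper CDF
    q = (np.arange(m) + 0.5) / m                     # equidistant quantile levels
    return np.interp(q, cdf, grid)                   # invert the CDF
```

For a uniform density the returned locations are exactly the midpoints of m equal-probability bins, which illustrates the reduced variance relative to iid sampling.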

The basis functions are computed as follows:

  Generate random matrices B, Π, G
  Update the scaling matrix S (including the rescaling matrix for ARD)
  Sample ‖ω‖ from ρ(‖ω‖) as above
  Feature Computation(x)
The rescaling matrix is introduced to incorporate automatic relevance determination into the model. As with the Gaussian spectral mixture model, we can use a mixture of piecewise linear radial kernels to approximate any radial kernel. If the piecewise linear function has multiple components, we repeat the proposed algorithm once per component to generate all the required basis functions.

4.5 Fastfood Kernels

The efficiency of Fastfood is partly obtained by approximating Gaussian random matrices with a product of matrices described in section 3.2. Here we propose several expressive and efficient kernel learning algorithms obtained by optimizing the marginal likelihood of the data in Eq. (4.2) with respect to these matrices:


FSARD

The scaling matrix S represents the spectral properties of the associated kernel. For the RBF kernel, S is sampled from a chi-squared distribution. We can simply change the kernel by adjusting S; by varying S, we can approximate any radial kernel. We learn the diagonal matrix S via marginal likelihood optimization. We combine this procedure with the Automatic Relevance Determination of Neal (1998) – learning the scale of the input space – to obtain the FSARD kernel.


FSGBARD

We can further generalize FSARD by additionally optimizing the marginal likelihood with respect to the diagonal matrices G and B in Fastfood to represent a wider class of kernels.

In both FSARD and FSGBARD the Hadamard matrix H is retained, preserving all the computational benefits of Fastfood. That is, we only modify the diagonal matrices while keeping the main computational drivers, such as the fast matrix multiplication and the Fourier basis, unchanged.
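The optimization loop behind these methods can be illustrated on a toy problem (a sketch only: plain random frequencies stand in for the Fastfood product, and only per-dimension ARD log-scales are learned; FSARD and FSGBARD learn the S, G and B diagonals in exactly the same way):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, d, m = 60, 2, 32
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)
W = rng.normal(size=(m, d))          # fixed frequencies (stand-in for HGPiHB)
noise = 0.1

def nlml(log_s):
    """Negative log marginal likelihood with frequencies rescaled
    per input dimension by exp(log_s)."""
    Z = X @ (W * np.exp(log_s)).T
    Phi = np.hstack([np.cos(Z), np.sin(Z)]) / np.sqrt(m)
    K = Phi @ Phi.T + noise**2 * np.eye(n)
    return 0.5 * (y @ np.linalg.solve(K, y)
                  + np.linalg.slogdet(K)[1] + n * np.log(2 * np.pi))

# Gradient-free optimization keeps the sketch short; the paper uses LBFGS.
res = minimize(nlml, x0=np.zeros(d), method="Nelder-Mead")
```

Because the frequencies themselves stay fixed, only the d (or m) diagonal entries are optimized, which is what keeps the parametrization light.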

5 Experiments

We evaluate the proposed kernel learning algorithms on many regression problems from the UCI repository. We show that the proposed methods are flexible, scalable, and applicable to a large and diverse collection of data, of varying sizes and properties. In particular, we demonstrate scaling to more than 2 million datapoints (in general, Gaussian processes are intractable beyond roughly 10⁴ datapoints); secondly, the proposed algorithms significantly outperform standard exact kernel methods, and with only a few hyperparameters are even competitive with alternative methods that involve training orders of magnitude more hyperparameters.¹ The results are shown in Table 1. All experiments are performed on an Intel Xeon E5620 PC, operating at 2.4GHz with 32GB RAM.

¹GM, PWL, FSARD, and FSGBARD are novel contributions of this paper, while RBF and ARD are popular alternatives, and SSGPR is a recently proposed state of the art kernel learning approach.

5.1 Experimental Details

To learn the parameters of the kernels, we optimize the marginal likelihood objective function described in section 4.2, using LBFGS.²

²http://www.di.ens.fr/~mschmidt/Software/minFunc.html

The datasets are divided into three groups: SMALL, MEDIUM, and LARGE. All methods – RBF, ARD, FSARD, GM, PWL and SSGPR – are tested on each grouping. For SMALL data, we use exact RBF and ARD kernels. Each dataset is divided into 10 partitions. Every time, we pick one partition as test data and train all methods on the remaining partitions. The reported result is the RMSE averaged over the 10 partitions.



The RBF kernel has the form k(x, x′) = a² exp(−‖x − x′‖² / (2ℓ²)), where a and ℓ are signal standard deviation and length-scale (rescale) hyperparameters. The ARD kernel has the form k(x, x′) = a² exp(−½ Σ_j (x_j − x′_j)² / ℓ_j²). ARD kernels use Automatic Relevance Determination (Neal, 1998) to adjust the scale of each input dimension individually. On the smaller datasets, where exact methods are tractable, we use exact Gaussian RBF and ARD kernels with hyperparameters learned via marginal likelihood optimization. Since these exact methods become intractable on larger datasets, we use Fastfood basis function expansions of these kernels on the medium and large datasets.


For Gaussian Mixtures (GM) we compute a mixture of Gaussians in the frequency domain, as described in section 4.3. As before, optimization is carried out with respect to the marginal likelihood.


For rotation invariance, we use the novel piecewise linear radial kernels (PWL) described in section 4.4. PWL has a simple and scalable parametrization in terms of the radius of the spectrum.


Sparse Spectrum Gaussian Process Regression (SSGPR) is a kitchen sinks (RKS) based model which individually optimizes the locations of all spectral frequencies (Lázaro-Gredilla et al., 2010). We note that SSGPR is heavily parametrized. Also note that SSGPR is a special case of the proposed GM model when the number of GM components equals the number of basis functions, all bandwidths shrink to zero, and all terms are weighted equally.


As described in section 4.5, the FSARD and FSGBARD methods respectively learn the S matrix, and the S, G and B matrices, in the Fastfood representation of section 3.2, through marginal likelihood optimisation (section 4.2).



RBF

We randomly pick pairs of datapoints and compute the distances between these pairs. These distances are sorted, and we pick several quantiles as length-scale (aka rescale) initializations. We run short optimization runs starting from these initializations, pick the one with the minimum negative log marginal likelihood, and then continue to optimize that candidate for further iterations. We initialize the signal and noise standard deviations from the empirical standard deviation of the target vector y.
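The distance-quantile initialization can be sketched as follows (`lengthscale_inits` is a hypothetical helper; the number of pairs and the quantile levels are free choices):

```python
import numpy as np

def lengthscale_inits(X, n_pairs, quantiles, rng):
    """Candidate length-scale initializations taken from quantiles of
    random pairwise distances, mirroring the procedure described above."""
    i = rng.integers(0, len(X), size=n_pairs)
    j = rng.integers(0, len(X), size=n_pairs)
    dist = np.linalg.norm(X[i] - X[j], axis=1)   # distances of random pairs
    return np.quantile(dist, quantiles)          # sorted candidate scales
```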


ARD

For each dimension of the input we initialize the corresponding length-scale parameter from the scale of that dimension, multiplied by d, the total number of input dimensions. We pick the best initialization from several random restarts of a short optimization run.


FSARD

We use the same technique as in ARD to initialize the rescale parameters. We initialize the diagonal of S from the entries of a random vector with standard Gaussian distribution.


FSGBARD

We initialize S as in FSARD. B is drawn uniformly from {±1} and G from draws of a standard Gaussian distribution.


GM

We use Gaussians with diagonal covariance matrices to model the spectral density ρ(ω). The scale of each mixture component is initialized uniformly across components. We reuse the same technique to initialize the rescale matrices as that of ARD. We initialize the shifts μ_q to be close to zero.


PWL

We make use of a special case of a piecewise linear function, the hat function, in the experiments. The hat function is parameterized by a width and a location: the first controls the width of the hat function and the second controls the distance of the hat function from the origin. We also incorporate ARD in the kernel, and use the same initialization techniques. For the RBF-based initialization we compute the distances of random pairs of datapoints and sort these distances, then draw a distance sample at a uniform random quantile. The inverse of this distance plays the role of the bandwidth of an RBF kernel, and we set the location and width of the hat function accordingly, since for an RBF kernel the mode of the radial spectral density and the width of the radial distribution are both governed by the inverse bandwidth.


For the rescale parameters, we follow the same procedure as for the ARD kernel. We initialize the projection matrix as a random Gaussian matrix.

Figure 2: We analyze how the accuracy depends on the number of clusters Q (left) and the number of basis functions m (right). More specifically, when varying Q the number of basis functions per group is fixed, and when varying m the number of clusters is fixed. FRBF and FARD are Fastfood expansions of RBF and ARD kernels, respectively.
Figure 3: We compare all methods in terms of accuracy, training time (left) and memory (right). To make the methods comparable we compute the accuracy score of a given method as the improvement relative to the baseline (FRBF), i.e. the ratio of the baseline RMSE to the method's RMSE. By construction, FRBF has an accuracy score of 1, and larger values of accuracy correspond to better algorithms. For runtime and memory we take the reciprocal of the analogous ratio, so that a lower score corresponds to better performance. Reported values denote averages of the log scores across all datasets.
Figure 4: Test time (left) and training time (right), as a function of the method used. This also includes the average log training time for the exact ARD and RBF kernels across the five smallest medium datasets; these methods are intractable on larger datasets, hence we did not compare them there.
Dataset n d, followed by RMSE ± 1 standard deviation for each of the seven methods compared (see Table 1 caption):
Challenger 23 4 0.63±0.26 0.63±0.26 0.72±0.34 0.61±0.27 0.68±0.28 0.77±0.39 0.75±0.38
Fertility 100 9 0.19±0.04 0.21±0.05 0.21±0.05 0.19±0.05 0.19±0.05 0.20±0.04 0.20±0.06
Slump 103 7 4.49±2.22 4.72±2.42 3.97±2.54 2.99±1.14 3.45±1.66 5.40±2.44 6.25±3.70
Automobile 159 15 0.14±0.04 0.18±0.07 0.18±0.05 0.15±0.03 0.14±0.03 0.22±0.08 0.17±0.06
Servo 167 4 0.29±0.07 0.28±0.09 0.29±0.08 0.27±0.07 0.28±0.09 0.44±0.10 0.38±0.08
Cancer 194 34 32±4 35±4 43±9 31±4 35±6 34±5 33±5
Hardware 209 7 0.44±0.06 0.43±0.04 0.44±0.06 0.44±0.04 0.42±0.04 0.42±0.08 0.44±0.1
Yacht 308 7 0.29±0.14 0.16±0.11 0.13±0.06 0.13±0.08 0.12±0.07 0.12±0.06 0.14±0.10
Auto MPG 392 7 2.91±0.30 2.63±0.38 2.75±0.43 2.55±0.55 2.64±0.52 3.30±0.69 3.19±0.56
Housing 509 13 3.33±0.74 2.91±0.54 3.39±0.74 2.93±0.83 2.90±0.78 4.62±0.85 4.49±0.69
Forest fires 517 12 1.39±0.15 1.39±0.16 1.59±0.12 1.40±0.17 1.41±0.16 2.64±0.25 2.01±0.41
Stock 536 11 0.016±0.002 0.005±0.001 0.005±0.001 0.005±0.001 0.005±0.001 0.006±0.001 0.006±0.001
Pendulum 630 9 2.77±0.59 1.06±0.35 1.76±0.31 1.06±0.27 1.16±0.29 1.22±0.26 1.09±0.33
Energy 768 8 0.47±0.08 0.46±0.07 0.47±0.07 0.31±0.07 0.36±0.08 0.39±0.06 0.42±0.08
Concrete 1,030 8 5.42±0.80 4.95±0.77 5.43±0.76 3.67±0.71 3.76±0.59 4.86±0.97 5.03±1.35
Solar flare 1,066 10 0.78±0.19 0.83±0.20 0.87±0.19 0.82±0.19 0.82±0.18 0.91±0.19 0.89±0.20
Airfoil 1,503 5 4.13±0.79 1.69±0.27 2.00±0.38 1.38±0.21 1.49±0.18 1.93±0.38 1.65±0.20
Wine 1,599 11 0.55±0.03 0.47±0.08 0.50±0.05 0.53±0.11 0.48±0.03 0.57±0.04 0.66±0.06
Gas sensor 2,565 128 0.21±0.07 0.12±0.08 0.13±0.06 0.14±0.08 0.12±0.07 0.14±0.07 0.14±0.08
Skillcraft 3,338 19 1.26±3.14 0.25±0.02 0.25±0.02 0.25±0.02 0.25±0.02 0.29±0.02 0.28±0.01
SML 4,137 26 6.94±0.51 0.33±0.11 0.26±0.04 0.27±0.03 0.31±0.06 0.31±0.06 0.34±0.05
Parkinsons 5,875 20 3.94±1.31 0.01±0.00 0.02±0.01 0.00±0.00 0.04±0.03 0.02±0.00 0.08±0.19
Pumadyn 8,192 32 1.00±0.00 0.20±0.00 0.22±0.03 0.21±0.00 0.20±0.00 0.21±0.00 0.21±0.00
Pole Tele 15,000 26 12.6±0.3 7.0±0.3 6.1±0.3 5.4±0.7 6.6±0.3 4.7±0.2 4.3±0.2
Elevators 16,599 18 0.12±0.00 0.090±0.001 0.089±0.002 0.089±0.002 0.089±0.002 0.086±0.002 0.088±0.002
Kin40k 40,000 8 0.34±0.01 0.28±0.01 0.23±0.01 0.19±0.02 0.23±0.00 0.08±0.00 0.06±0.00
Protein 45,730 9 1.64±1.66 0.53±0.01 0.52±0.01 0.50±0.02 0.52±0.01 0.48±0.01 0.47±0.01
KEGG 48,827 22 0.33±0.17 0.12±0.01 0.12±0.01 0.12±0.01 0.12±0.01 0.12±0.01 0.12±0.01
CT slice 53,500 385 7.13±0.11 4.00±0.12 3.60±0.09 2.21±0.06 3.35±0.08 2.56±0.12 0.59±0.07
KEGGU 63,608 27 0.29±0.12 0.12±0.00 0.12±0.00 0.12±0.00 0.12±0.00 0.12±0.00 0.12±0.00
3D road 434,874 3 12.86±0.09 10.91±0.05 10.29±0.12 10.34±0.19 11.26±0.22 9.90±0.10 10.12±0.28
Song 515,345 90 0.55±0.00 0.49±0.00 0.47±0.00 0.46±0.00 0.47±0.00 0.46±0.00 0.45±0.00
Buzz 583,250 77 0.88±0.01 0.59±0.02 0.54±0.01 0.51±0.01 0.54±0.01 0.52±0.01 0.54±0.01
Electric 2,049,280 11 0.23±0.00 0.12±0.12 0.06±0.01 0.05±0.00 0.07±0.04 0.05±0.00 0.05±0.00
Table 1: Comparative RMSE performance on all datasets, with n training points and d input dimensions. Results are averaged over 10 equal partitions of the data, ± 1 standard deviation. We use exact RBF and ARD kernels on the small datasets, and Fastfood expansions of these kernels on the medium and large datasets. For GM and PWL we fix the number of components and the number of sample points per component; on the large datasets (3D Road network, Buzz, Song and Household electric) GM and PWL use larger numbers of components, as described in the text. All other methods use the same total number of basis functions. The terms ‘small’, ‘medium’ and ‘large’ are meant to be taken in the context of Gaussian process regression, which is typically intractable beyond modest dataset sizes when kernel learning is taking place.
Figure 5: To assess the relative performance of the algorithms with respect to another, we compare their relative error rank as a function of training time (left) and memory footprint (right).
Figure 6: Accuracy of the algorithms as a function of training time (left) and memory (right).

We use the same total number of basis functions for all methods. For GM and PWL, these basis functions are divided among a number of components (clusters), with a fixed number of basis functions per component; all other methods use a single undivided expansion. For the largest datasets in Table 1 we favoured larger numbers of components, as the flexibility of having more components in GM and PWL becomes more valuable when there are many datapoints; although we attempted to choose sensible combinations of components and basis functions for a particular model and dataset size, these parameters were not fine tuned. We choose the number of basis functions to be as large as is practical given computational constraints, and SSGPR is allowed a significantly larger parametrization.

Indeed, SSGPR is allowed many more free parameters to learn. This setup gives SSGPR a significant advantage over our proposed models. However, we wish to emphasize that the GM, PWL, and FSGBARD models are competitive with SSGPR, even in this adversarial situation, where SSGPR has many orders of magnitude more free parameters than GM or PWL. By comparison, the RBF, ARD, PWL, GM, FSARD and FSGBARD methods require far fewer hyperparameters.

Gaussian processes are most commonly implemented with exact RBF and ARD kernels, which we run on the smaller datasets in Table 1, where the proposed GM and PWL approaches generally perform better than all alternatives. On the larger datasets, exact ARD and RBF kernels are entirely intractable, so we compare to Fastfood expansions. That is, GM and PWL are both more expressive and profoundly more scalable than exact ARD and RBF kernels, which are far and away the most popular alternative approaches.
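As a rough illustration of what such a Fastfood expansion looks like, the sketch below builds RBF random features from structured Hadamard products rather than a dense Gaussian matrix. This is our own minimal reconstruction: the function names are ours, and the scaling matrix S follows a simplified convention (chi-distributed row norms); consult Le et al. (2013) for the precise construction.

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform along the last axis.

    The last-axis length must be a power of two; the O(d log d)
    cost of this transform is what makes Fastfood fast.
    """
    n = x.shape[-1]
    if n == 1:
        return x
    a, b = x[..., : n // 2], x[..., n // 2 :]
    return np.concatenate([fwht(a + b), fwht(a - b)], axis=-1)

def fastfood_features(X, n_blocks, lengthscale=1.0, seed=0):
    """Sketch of a Fastfood RBF feature expansion.

    Each block generates d spectral frequencies via the structured
    product S.H.G.Pi.H.B applied to x, instead of multiplying by a
    dense Gaussian matrix.
    """
    rng = np.random.default_rng(seed)
    n, d0 = X.shape
    d = 1 << (d0 - 1).bit_length()          # pad dimension to a power of two
    Xp = np.zeros((n, d))
    Xp[:, :d0] = X
    blocks = []
    for _ in range(n_blocks):
        B = rng.choice([-1.0, 1.0], size=d)  # random sign flips
        Pi = rng.permutation(d)              # random permutation
        G = rng.standard_normal(d)           # Gaussian diagonal
        # S matches each row norm to that of a dense Gaussian draw
        # (chi-distributed with d degrees of freedom) -- an assumption.
        S = np.sqrt(rng.chisquare(d, size=d)) / np.linalg.norm(G)
        V = fwht(Xp * B)[:, Pi] * G          # G . Pi . H . B . x
        V = fwht(V) * S / (lengthscale * np.sqrt(d))
        blocks.append(V)
    V = np.concatenate(blocks, axis=1)       # n x (n_blocks * d) frequencies
    m = V.shape[1]
    # cos/sin features: phi(x) . phi(y) approximates the RBF kernel
    return np.concatenate([np.cos(V), np.sin(V)], axis=1) / np.sqrt(m)
```

With enough blocks, the inner products of these features approximate the RBF kernel, while each block costs only O(d log d) time and O(d) memory for the stored diagonals.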

In Figure 2 we investigate how RMSE performance changes as we vary the number of clusters and the number of basis functions. The GM and PWL models continue to increase in performance as more basis functions are used. This trend is not present with SSGPR or FSGBARD, which, unlike GM and PWL, become more susceptible to over-fitting as we increase the number of basis functions. Indeed, in SSGPR, and in FSGBARD and FSARD to a lesser extent, more basis functions mean more parameters to optimize, which is not true of the GM and PWL models.

To further investigate the performance of all methods, we compare each of the seven tested methods over all experiments, in terms of average normalised log predictive accuracy, training time, testing time, and memory consumption, shown in Figures 3 and 4 (higher accuracy scores, and lower training time, test time, and memory scores, correspond to better performance). Despite their reduced parametrization, GM and PWL outperform all alternatives in accuracy, yet require similar memory and runtime to the much less expressive FARD model, a Fastfood expansion of the ARD kernel. Although SSGPR performs third best in accuracy, it requires more memory, training time, and testing runtime (as shown in Figure 4) than all other models. FSGBARD performs similarly in accuracy to SSGPR, but is significantly more time and memory efficient, because it leverages a Fastfood representation. For clarity, we have so far considered log plots. If we view the results without a log transformation, as in Figure 6 (supplement), we see that GM and SSGPR are outliers: on average GM greatly outperforms all other methods in predictive accuracy, and SSGPR requires profoundly more memory than all other methods.
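The normalisation used in these comparisons can be sketched as follows. This is our own reading of the caption of Figure 3: the function name, the data layout, and the exact log-ratio form are assumptions, not the paper's implementation.

```python
import math

def relative_scores(results, baseline="FRBF"):
    """Average log scores of each method relative to a baseline.

    results maps method -> {dataset: (rmse, train_time, memory)}.
    The accuracy score log(rmse_baseline / rmse_method) is positive
    when a method beats the baseline; for time and memory we use the
    reciprocal ratio, so that lower scores are better.
    """
    base = results[baseline]
    scores = {}
    for method, per_dataset in results.items():
        acc, time, mem = [], [], []
        for name, (rmse, t, m) in per_dataset.items():
            b_rmse, b_t, b_m = base[name]
            acc.append(math.log(b_rmse / rmse))  # higher is better
            time.append(math.log(t / b_t))       # lower is better
            mem.append(math.log(m / b_m))        # lower is better
        scores[method] = tuple(sum(v) / len(v) for v in (acc, time, mem))
    return scores
```

Under this convention the baseline scores zero on every axis, matching its role as the reference point in the figures.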

6 Discussion

Kernel learning methods are typically intractable on large datasets, even though their flexibility is most valuable on large scale problems. We have introduced a family of flexible, scalable, general purpose and lightly parametrized kernel methods, which learn the properties of groups of spectral frequencies in Fastfood basis function expansions. We find, with a minimal parametrization, that the proposed methods have impressive performance on a large and diverse collection of problems – in terms of predictive accuracy, training and test runtime, and memory consumption. In the future, we expect additional performance and efficiency gains by automatically learning the relative numbers of spectral frequencies to assign to each group.

In short, we have shown that we can have kernel methods which are simultaneously scalable and expressive. Indeed, we hope that this work will help unify efforts in enhancing scalability and flexibility for kernel methods. In a sense, flexibility and scalability are one and the same problem: we want the most expressive methods for the biggest datasets.


  • Doucet et al. [2001] A. Doucet, N. de Freitas, and N. Gordon. Sequential Monte Carlo Methods in Practice. Springer-Verlag, 2001.
  • Halko et al. [2009] N. Halko, P. G. Martinsson, and J. A. Tropp. Finding structure with randomness: Stochastic algorithms for constructing approximate matrix decompositions, 2009. URL http://arxiv.org/abs/0909.4061.
  • Kimeldorf and Wahba [1970] G. S. Kimeldorf and G. Wahba. A correspondence between Bayesian estimation on stochastic processes and smoothing by splines. Annals of Mathematical Statistics, 41:495–502, 1970.
  • Lázaro-Gredilla et al. [2010] M. Lázaro-Gredilla, J. Quiñonero-Candela, C. E. Rasmussen, and A. R. Figueiras-Vidal. Sparse spectrum Gaussian process regression. The Journal of Machine Learning Research, 99:1865–1881, 2010.
  • Le et al. [2013] Q. V. Le, T. Sarlos, and A. J. Smola. Fastfood — computing Hilbert space expansions in loglinear time. In International Conference on Machine Learning, 2013.
  • Lu et al. [2014] Z. Lu, M. May, K. Liu, A. B. Garakani, D. Guo, A. Bellet, L. Fan, M. Collins, B. Kingsbury, M. Picheny, and F. Sha. How to scale up kernel methods to be as good as deep neural nets. Technical Report 1411.4000, arXiv, November 2014. http://arxiv.org/abs/1411.4000.
  • Mercer [1909] J. Mercer. Functions of positive and negative type and their connection with the theory of integral equations. Philos. Trans. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci., A 209:415–446, 1909.
  • Neal [1998] R. M. Neal. Assessing relevance determination methods using delve. In C. M. Bishop, editor, Neural Networks and Machine Learning, pages 97–129. Springer, 1998.
  • Ong et al. [2003] C. S. Ong, A. J. Smola, and R. C. Williamson. Hyperkernels. In S. Thrun S. Becker and K. Obermayer, editors, Advances in Neural Information Processing Systems 15, pages 478–485. MIT Press, Cambridge, MA, 2003.
  • Rahimi and Recht [2008] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In J.C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20. MIT Press, Cambridge, MA, 2008.
  • Rahimi and Recht [2009] A. Rahimi and B. Recht. Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning. In Neural Information Processing Systems, 2009.
  • Rasmussen and Williams [2006] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, Cambridge, MA, 2006.
  • Schoenberg [1942] I. Schoenberg. Positive definite functions on spheres. Duke Math. J., 9:96–108, 1942.
  • Schölkopf et al. [2001] B. Schölkopf, R. Herbrich, and A. J. Smola. A generalized representer theorem. In D. P. Helmbold and B. Williamson, editors, Proc. Annual Conf. Computational Learning Theory, number 2111 in Lecture Notes in Comput. Sci., pages 416–426, London, UK, 2001. Springer-Verlag.
  • Silverman [1986] B. W. Silverman. Density Estimation for Statistical and Data Analysis. Monographs on statistics and applied probability. Chapman and Hall, London, 1986.
  • Williams [1998] C. K. I. Williams. Prediction with Gaussian processes: From linear regression to linear prediction and beyond. In M. I. Jordan, editor, Learning and Inference in Graphical Models, pages 599–621. Kluwer Academic, 1998.
  • Wilson and Adams [2013] A. G. Wilson and R. P. Adams. Gaussian process kernels for pattern discovery and extrapolation. In Proceedings of the 30th International Conference on Machine Learning, 2013.
  • Wilson [2014] A.G. Wilson. Covariance Kernels for Fast Automatic Pattern Discovery and Extrapolation with Gaussian Processes. PhD thesis, University of Cambridge, 2014.
  • Wilson et al. [2014] A.G. Wilson, E. Gilboa, A. Nehorai, and J.P. Cunningham. Fast kernel learning for multidimensional pattern extrapolation. In Advances in Neural Information Processing Systems, 2014.