Just Interpolate: Kernel "Ridgeless" Regression Can Generalize

08/01/2018 ∙ by Tengyuan Liang, et al. ∙ The University of Chicago Booth School of Business ∙ MIT

In the absence of explicit regularization, Kernel "Ridgeless" Regression with nonlinear kernels has the potential to fit the training data perfectly. It has been observed empirically, however, that such interpolated solutions can still generalize well on test data. We isolate a phenomenon of implicit regularization for minimum-norm interpolated solutions which is due to a combination of high dimensionality of the input data, curvature of the kernel function, and favorable geometric properties of the data such as an eigenvalue decay of the empirical covariance and kernel matrices. In addition to deriving a data-dependent upper bound on the out-of-sample error, we present experimental evidence suggesting that the phenomenon occurs in the MNIST dataset.


1 Introduction

According to conventional wisdom, explicit regularization should be added to the least-squares objective when the Hilbert space is high- or infinite-dimensional (Golub et al., 1979; Wahba, 1990; Smola and Schölkopf, 1998; Shawe-Taylor and Cristianini, 2004; Evgeniou et al., 2000; De Vito et al., 2005; Alvarez et al., 2012):

(1.1)    \hat{f} := \arg\min_{f \in \mathcal{H}} \left\{ \frac{1}{n} \sum_{i=1}^{n} \big(f(x_i) - y_i\big)^2 + \lambda \|f\|_{\mathcal{H}}^2 \right\}

The regularization term is introduced to avoid “overfitting”, since kernels provide enough flexibility to fit the training data exactly (i.e., to interpolate it). From the theoretical point of view, the regularization parameter λ is a knob for balancing bias and variance, and should be chosen judiciously. Yet, as noted by a number of researchers in the last few years,¹ the best out-of-sample performance is often attained, empirically, by setting the regularization parameter to zero and finding the minimum-norm solution among those that interpolate the training data. The mechanism behind the good out-of-sample performance of this interpolation method has remained largely unclear (Zhang et al., 2016; Belkin et al., 2018b).

¹In particular, we thank M. Belkin, B. Recht, L. Rosasco, and N. Srebro for highlighting this phenomenon.

As a concrete motivating example, consider the prediction performance of Kernel Ridge Regression for various values of the regularization parameter λ on subsets of the MNIST dataset. For virtually all pairs of digits, the best out-of-sample mean squared error is achieved at λ = 0. Contrary to the standard bias-variance trade-off picture we have in mind, the test error is monotonically decreasing as we decrease λ (see Figure 1 and further details in Section 6).

Figure 1: Test performance of Kernel Ridge Regression on pairs of MNIST digits for various values of the regularization parameter λ, normalized by the variance of the response in the test set (for visualization purposes).
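To make this motivating experiment concrete, the following is a minimal sketch of such a λ-sweep. It is not the authors' setup from Section 6: the small scikit-learn load_digits dataset stands in for MNIST, and the RBF kernel and the grid of λ values are illustrative assumptions.

```python
# Minimal sketch of a lambda-sweep for kernel ridge regression on a pair of
# digit classes. Assumptions (not from the paper): load_digits as a stand-in
# for MNIST, an RBF kernel, and this particular grid of regularization values.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
mask = (y == 3) | (y == 8)                       # one pair of digits
X, y = X[mask] / 16.0, np.where(y[mask] == 3, 1.0, -1.0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, random_state=0)

gamma = 1.0 / X.shape[1]
Ktr = rbf_kernel(Xtr, Xtr, gamma=gamma)          # kernel matrix on training points
Kte = rbf_kernel(Xte, Xtr, gamma=gamma)          # cross-kernel, test vs. train

for lam in [10.0, 1.0, 0.1, 0.01, 0.0]:
    if lam > 0:                                  # kernel ridge regression
        alpha = np.linalg.solve(Ktr + lam * np.eye(len(ytr)), ytr)
    else:                                        # "ridgeless": minimum-norm interpolant
        alpha = np.linalg.pinv(Ktr) @ ytr
    mse = np.mean((Kte @ alpha - yte) ** 2) / np.var(yte)
    print(f"lambda = {lam:5.2f}   normalized test MSE = {mse:.4f}")
```

As in Figure 1, the test error here is normalized by the variance of the response in the test set.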

We isolate what appears to be a new phenomenon of implicit regularization for interpolated minimum-norm solutions in Kernel “Ridgeless” Regression. This regularization is due to the curvature of the kernel function and “kicks in” only for high-dimensional data and for “favorable” data geometry. We provide out-of-sample statistical guarantees in terms of the spectral decay of the empirical kernel matrix and the empirical covariance matrix, under additional technical assumptions.

Our analysis rests on recent work in random matrix theory. In particular, we use a suitable adaptation of the argument of El Karoui (2010), who showed that high-dimensional random kernel matrices can be approximated in spectral norm by linear kernel matrices plus a scaled identity. While the message of El Karoui (2010) is often taken to be that “kernels do not help in high dimensions,” we show that such a random matrix analysis helps in explaining the good performance of interpolation in Kernel “Ridgeless” Regression.
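The following small numerical check (ours, not from the paper) illustrates this approximation in the simplest isotropic case: for x_i drawn i.i.d. from N(0, I_d) with n proportional to d, a Taylor expansion suggests the surrogate h(0)·11^T + h'(0)·XX^T/d + (h(1) − h(0) − h'(0))·I, whose last coefficient plays the role of the implicit ridge. The non-linearity h(t) = exp(t) and the sizes of n and d below are illustrative assumptions.

```python
# Spectral-norm comparison of a non-linear inner-product kernel matrix with a
# "linear + rank-one + scaled identity" surrogate, for isotropic Gaussian data.
# This is an illustrative check, not the constants of El Karoui (2010) in full
# generality (those also involve the covariance and a second-order correction).
import numpy as np

rng = np.random.default_rng(0)
n, d = 400, 800
X = rng.standard_normal((n, d))
G = X @ X.T / d                                   # normalized Gram matrix <x_i, x_j>/d

h = np.exp                                        # illustrative non-linearity h(t) = exp(t)
K = h(G)                                          # kernel matrix K_ij = h(<x_i, x_j>/d)

h0, dh0, h1 = 1.0, 1.0, np.e                      # h(0), h'(0), h(1) for h = exp
gamma = h1 - h0 - dh0                             # implicit "ridge" constant in this setting
M = h0 * np.ones((n, n)) + dh0 * G + gamma * np.eye(n)

spec = lambda A: np.linalg.norm(A, 2)             # spectral (operator) norm
print(f"relative error ||K - M|| / ||K|| = {spec(K - M) / spec(K):.4f}")
print(f"implicit regularization gamma    = {gamma:.4f}")
```

In this proportional regime the relative spectral-norm error is small and shrinks as d grows, which is the approximation our analysis builds on.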

1.1 Literature Review

Grace Wahba (Wahba, 1990) pioneered the study of nonparametric regression in reproducing kernel Hilbert spaces (RKHS) from the computational and statistical perspectives. One of the key aspects of that work is the role of the decay of eigenvalues of the kernel (at the population level) in rates of convergence. The analysis relies on explicit regularization (the ridge parameter λ) for the bias-variance trade-off. The parameter λ is either chosen to reflect knowledge of the spectral decay at the population level (De Vito et al., 2005), typically unknown to the statistician, or by means of cross-validation (Golub et al., 1979). Interestingly, the explicit formula of Kernel Ridge Regression had been introduced earlier as “kriging” and was widely used in Bayesian statistics (Cressie, 1990; Wahba, 1990).

In the learning theory community, Kernel Ridge Regression is known as a special case of Support Vector Regression (Vapnik, 1998; Shawe-Taylor and Cristianini, 2004; Vovk, 2013). Notions like metric entropy (Cucker and Smale, 2002) or “effective dimension” (Caponnetto and De Vito, 2007) were employed to analyze guarantees on the excess loss of Kernel Ridge Regression, even when the model is misspecified. We refer the reader to Györfi et al. (2006) for more details. Again, the analysis leans crucially on explicit regularization, through a careful choice of λ, to trade off model complexity against approximation error, and it mostly focuses on the setting of fixed dimension and large sample size. To the best of our knowledge, however, the literature is relatively quiet about what happens to the minimum-norm interpolation rule, i.e., the limit λ → 0. As pointed out by Belkin et al. (2018b, a), the existing bounds in nonparametric statistics and learning theory do not apply to interpolated solutions, in either the regression or the classification setting. In this paper, we aim to answer when and why interpolation in an RKHS works, as a starting point for explaining the good empirical performance of interpolation with kernels in practice (Zhang et al., 2016; Belkin et al., 2018b).

2 Preliminaries

2.1 Problem Formulation

Suppose we observe n i.i.d. pairs (x_i, y_i), i = 1, …, n, where the covariates x_i take values in a compact domain Ω ⊂ R^d and the responses (or labels) y_i take values in R. Suppose the pairs are drawn from an unknown probability distribution. We are interested in estimating the conditional expectation function f_*(x) = E[y | x], which is assumed to lie in a Reproducing Kernel Hilbert Space (RKHS) H. Suppose the RKHS is endowed with the norm ‖·‖_H and corresponding positive definite kernel K(·, ·). The interpolation estimator studied in this paper is defined as

(2.1)    \hat{f} := \arg\min_{f \in \mathcal{H}} \|f\|_{\mathcal{H}} \quad \text{subject to} \quad f(x_i) = y_i, \; i = 1, \ldots, n.

Let X ∈ R^{n×d} be the matrix with rows x_1^T, …, x_n^T, and let Y ∈ R^n be the vector of values y_1, …, y_n. Slightly abusing notation, we let K(X, X) ∈ R^{n×n} be the kernel matrix with entries K(x_i, x_j). Extending this definition, for x ∈ Ω we denote by K(x, X) the row vector of values K(x, x_i), i = 1, …, n. When K(X, X) is invertible, the solution to (2.1) can be written in the closed form:

(2.2)    \hat{f}(x) = K(x, X)\, K(X, X)^{-1}\, Y.

In this paper we study the case when K(X, X) is full rank, taking (2.2) as the starting point. For this interpolating estimator, we provide high-probability (with respect to a draw of the design X) upper bounds on the integrated squared risk of the form

(2.3)

Here the expectation is over the test point x and over Y conditionally on X, and the right-hand side of (2.3) is a data-dependent upper bound. We remark that upper bounds of the form (2.3) also imply bounds on the excess square loss with respect to the class H, since f_* ∈ H.
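For concreteness, here is a minimal numerical sketch (our own synthetic setup, not the paper's experiments) of the estimator (2.1)-(2.2): it verifies that the closed form interpolates the training data exactly and recalls why it is the minimum-norm interpolant.

```python
# Sketch of the minimum-norm interpolant (2.2) on synthetic data. The data
# distribution and the kernel below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n, d = 50, 100
X = rng.standard_normal((n, d))
Y = np.sign(X[:, 0]) + 0.1 * rng.standard_normal(n)

def kernel(A, B):
    # inner-product kernel K(x, x') = exp(<x, x'>/d), an illustrative choice
    return np.exp(A @ B.T / d)

K = kernel(X, X)
alpha = np.linalg.solve(K, Y)        # coefficients of f_hat(x) = K(x, X) K(X, X)^{-1} Y

print("max |f_hat(x_i) - y_i| :", np.max(np.abs(K @ alpha - Y)))   # ~ 0: exact interpolation
print("squared RKHS norm      :", alpha @ K @ alpha)

# Why minimum norm: any f in H splits as sum_i a_i K(., x_i) + g with g orthogonal
# to the span of the K(., x_i). The constraints f(x_j) = y_j only involve the a_i
# (since g(x_j) = <g, K(., x_j)> = 0), and ||f||^2 = a^T K a + ||g||^2, so the norm
# is minimized by g = 0 and, when K is invertible, a = K^{-1} Y as in (2.2).
```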

2.2 Notation and Background on RKHS

For an operator A, its adjoint is denoted by A^*. For real matrices, the adjoint is the transpose. For any x ∈ Ω, let K_x := K(x, ·) ∈ H be such that

(2.4)    f(x) = \langle f, K_x \rangle_{\mathcal{H}} \quad \text{for all } f \in \mathcal{H}.

It follows that for any x, z ∈ Ω,

(2.5)    K(x, z) = \langle K_x, K_z \rangle_{\mathcal{H}}.

Let us introduce the integral operator with respect to the marginal measure μ of x:

(2.6)    (\mathcal{T} f)(x) = \int_{\Omega} K(x, z)\, f(z)\, d\mu(z),

and denote the set of eigenfunctions of this integral operator by {e_1, e_2, …, e_p}, where p could be infinite. We have that

(2.7)    \mathcal{T} e_j = t_j e_j, \quad j = 1, \ldots, p.

Denote by {t_j} the collection of non-negative eigenvalues. Adopting the spectral notation, we write K(x, z) = \sum_{j} t_j\, e_j(x)\, e_j(z).

Via this spectral characterization, the interpolation estimator (2.1) takes the following form

(2.8)

Extending the definition of K_x, it is natural to define the corresponding operator built from the sample. Denote the sample version of the kernel operator by

(2.9)    \widehat{\mathcal{T}} f := \frac{1}{n} \sum_{i=1}^{n} \langle f, K_{x_i} \rangle_{\mathcal{H}}\, K_{x_i},

and the associated eigenvalues by {\hat{t}_j}, indexed by j. These eigenvalues coincide with the non-zero eigenvalues of the normalized kernel matrix K(X, X)/n. It is sometimes convenient to express \widehat{\mathcal{T}} as a linear operator under the basis of eigenfunctions, in a suitable matrix sense.

We write E_x to denote the expectation with respect to the marginal μ. Furthermore, we denote by ‖f‖_μ^2 := E_x f(x)^2 the squared norm with respect to the marginal distribution. The expectation E_{Y|X} denotes the expectation over Y conditionally on X.
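As a quick sanity check of the last identification (ours, not from the paper), one can verify numerically, for the linear kernel K(x, x') = ⟨x, x'⟩/d, that the non-zero eigenvalues of K(X, X)/n coincide with those of the sample operator, which in this case acts as the matrix X^T X/(nd).

```python
# Numerical check: non-zero eigenvalues of the normalized kernel matrix
# K(X, X)/n equal those of the sample kernel operator. For the linear kernel
# K(x, x') = <x, x'>/d the sample operator acts as the matrix X^T X / (n d).
import numpy as np

rng = np.random.default_rng(2)
n, d = 100, 40
X = rng.standard_normal((n, d))

eig_kernel_matrix = np.linalg.eigvalsh(X @ X.T / (n * d))   # spectrum of K(X, X)/n
eig_sample_op = np.linalg.eigvalsh(X.T @ X / (n * d))       # spectrum of the sample operator

top = min(n, d)                                              # number of non-zero eigenvalues
print(np.allclose(np.sort(eig_kernel_matrix)[-top:],
                  np.sort(eig_sample_op)[-top:]))            # True
```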

3 Main Result

We impose the following assumptions:

  1. High dimensionality: there exist universal constants c, C > 0 such that c ≤ d/n ≤ C. Denote by Σ the covariance matrix of x, and assume that its operator norm is bounded by a constant.

  2. Moments: x_i = Σ^{1/2} z_i, i = 1, …, n, are i.i.d. random vectors. Furthermore, the entries of z_i are i.i.d. from a distribution with mean zero, unit variance, and a finite moment of sufficiently high order.

  3. Noise condition: there exists σ > 0 such that E[(y − f_*(x))^2 | x] ≤ σ^2 for all x ∈ Ω.

  4. Non-linear kernel: we consider inner-product kernels of the form

    (3.1)    K(x, x') = h\!\left( \frac{\langle x, x' \rangle}{d} \right),

    for a non-linear function h(·) that is smooth in a neighborhood of the points at which it is evaluated.

While we state the main theorem for inner-product kernels, the results follow, under suitable modifications,³ for Radial Basis Function (RBF) kernels of the form

(3.2)    K(x, x') = h\!\left( \frac{\|x - x'\|^2}{d} \right).

³We refer the reader to El Karoui (2010) for explicit extensions to RBF kernels.

We postpone the discussion of the assumptions until after the statement of the main theorem.

Let us first define the following quantities related to the curvature of h:

(3.3)
Theorem 1.

Define

(3.4)

Under assumptions (A.1)–(A.4) and for n and d large enough, with high probability (with respect to a draw of the design matrix X), the interpolation estimator (2.2) satisfies

(3.5)

Here the remainder term vanishes as n and d grow; see the discussion of the assumptions below and Section 5.

A few remarks are in order. First, the upper bound is data-dependent and can serve as a certificate (assuming that an upper bound on the unknown quantities entering it can be guessed) that interpolation will succeed. The bound also suggests the regimes in which the interpolation method should work. The two terms in the estimate of Theorem 1 are upper bounds on the variance and the bias of the interpolation estimator, respectively. Unlike in analyses based on explicit regularization (e.g., Caponnetto and De Vito (2007)), the two terms are not controlled by a tunable parameter λ. Rather, the choice of the non-linear kernel itself leads to an implicit control of the two terms, through the curvature of the kernel function, favorable properties of the data, and high dimensionality. We remark that for the linear kernel (h linear), the implicit regularization term vanishes, and the bound on the variance term can become very large in the presence of small eigenvalues. In contrast, the curvature of h introduces regularization through a non-zero implicit regularization term. We also remark that the bound “kicks in” in the high-dimensional regime: the error term decays with both n and d.

We refer to favorable structure in the eigenvalues of the data covariance matrix as favorable geometric properties of the data. The first term (variance) is small when the data matrix enjoys a certain decay of its eigenvalues, thanks to the implicit regularization. The second term (bias) is small when the eigenvalues of the kernel matrix decay fast, or when the kernel matrix is effectively low rank. Note that the curvature quantities in (3.3) enter the bound as constants. We provide a detailed discussion of the trade-off between the bias and variance terms for concrete examples in Section 4.

We leave the upper bound of Theorem 1 in a data-dependent form for two reasons. First, an explicit dependence on the data tells us whether interpolation can be statistically sound on the given dataset. Second, for general spectral decay, current random matrix theory falls short of characterizing the spectral density non-asymptotically, except in special cases (Bose et al., 2003; El Karoui, 2010).

Discussion of the assumptions

  • The assumption in (A.1) that c ≤ d/n ≤ C emphasizes that we work in a high-dimensional regime where d scales on the order of n. This assumption is used in the proof of El Karoui (2010), and the particular dependence on these constants can be traced in that work if desired. Rather than doing so, we have “folded” these constants into a mild additional polynomial factor. The same goes for the assumption on the scaling of the trace of the population covariance matrix.

  • The assumption in (A.2) that the coordinates of the (whitened) covariates are i.i.d. is a strong assumption that is required to ensure the favorable high-dimensional effect. Relaxing this assumption is left for future work.

  • The moment condition in (A.2) is enough to ensure that the relevant empirical operator norms remain bounded almost surely (see Lemma 2.2 in Yin et al. (1988)). Remark that this moment assumption is relatively weak. In particular, it is satisfied by bounded or subgaussian variables, and the error term decays polynomially, up to log factors. See Lemma B.1 for an explicit calculation in the Gaussian case.

  • Finally, as already mentioned, the main result is stated for the inner-product kernel, but it can be extended to the RBF kernel using an adaptation of the analysis in El Karoui (2010).

4 Behavior of the Data-dependent Bound

In this section, we estimate, both numerically and theoretically, the non-asymptotic data-dependent upper bound of Theorem 1 in several regimes. To illustrate the various trade-offs, we divide the discussion into two main regimes, treated in turn below. Without loss of generality, we take as an illustration a fixed non-linearity h, with the corresponding implicit regularization. In our discussion, we take both n and d large enough so that the residual terms in Theorem 1 are negligible. The main theoretical results in this section, Corollaries 4.1 and 4.2, are direct consequences of the data-dependent bound in Theorem 1.

Case

In this regime we can further bound the variance and the bias, with a suitable choice of the parameters in Theorem 1, as

(4.1)
(4.2)

We first illustrate numerically the bias-variance trade-off by varying the geometric properties of the data, in terms of the population spectral decay of Σ. We parametrize the eigenvalues of the covariance, for j = 1, …, d, by a single decay parameter. The decay parameter controls the approximate “low-rankness” of the data: at one extreme of the parametrization, the spectrum of the data decays very fast and the covariance is nearly low rank. This is illustrated in the top row of Figure 6. We focus on three settings of the decay parameter, with n and d fixed. The data-dependent upper bounds on the variance and the bias are summarized in Table 1. More detailed plots are postponed to Figure 6 (in that figure, we plot the ordered eigenvalues and the spectral density for both the population and empirical covariances). Table 1 shows that as the decay parameter increases (a slower spectral decay), the implicit regularization parameter becomes larger, resulting in a decreasing variance and an increasing bias.

We also perform simulations to demonstrate the trade-off between bias and variance in the generalization error. The result is shown in Figure 2. For each choice of the (n, d) pair, we vary the spectral decay of the kernel by gradually changing the decay parameter, and plot the generalization error on the log scale (a minimal simulation sketch in this spirit is given after Figure 2). We postpone the experimental details to Section 6.2, but point out the important phenomena observed in Figures 2-3: (1) an extremely fast spectral decay (a small value of the decay parameter) generates insufficient implicit regularization, which hurts the generalization performance through a large variance term; (2) a very slow spectral decay (a large value of the decay parameter) results in a large bias, which can also hurt the generalization performance; (3) a certain intermediate, favorable spectral decay achieves the best trade-off, resulting in the best generalization error.

Case : variance bound (4.1), bias bound (4.2)
Spectral Decay Implicit Reg
0.005418 14.2864 0.07898 9.4980 0.07891
0.2525 0.4496 0.7535 0.1748 0.7538
0.7501 0.1868 1.6167 0.05835 1.6165
Table 1:
Figure 2: Generalization error as a function of varying spectral decay.
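The following minimal simulation (our own setup, not the one of Section 6.2) is in the spirit of Figures 2-3: we vary the eigenvalue decay of the population covariance, fit the ridgeless interpolant with an inner-product kernel, and record the test error against the noiseless target. The eigenvalue parametrization λ_j = j^(−a), the linear target, and the choices of n, d, and noise level are illustrative assumptions.

```python
# Simulation sketch: generalization error of kernel "ridgeless" regression as
# the spectral decay of the covariance varies. All modelling choices below
# (power-law eigenvalues, linear target, exp inner-product kernel) are ours.
import numpy as np

rng = np.random.default_rng(0)
n, d, n_test, noise = 300, 600, 1000, 0.5

def kernel(A, B):
    return np.exp(A @ B.T / d)                       # K(x, x') = exp(<x, x'>/d)

def test_error(decay):
    lam = np.arange(1, d + 1, dtype=float) ** (-decay)    # eigenvalues of Sigma
    w = rng.standard_normal(d) / np.sqrt(d)                # target f_*(x) = <w, x>
    def sample(m):
        Z = rng.standard_normal((m, d))
        X = Z * np.sqrt(lam)                               # rows ~ N(0, diag(lam))
        return X, X @ w + noise * rng.standard_normal(m)
    Xtr, ytr = sample(n)
    Xte, _ = sample(n_test)
    alpha = np.linalg.solve(kernel(Xtr, Xtr), ytr)         # ridgeless interpolation
    return np.mean((kernel(Xte, Xtr) @ alpha - Xte @ w) ** 2)

for a in [0.25, 0.5, 1.0, 2.0, 4.0]:
    print(f"decay exponent {a:4.2f}:  test error vs f_* = {test_error(a):.4f}")
```

The qualitative trade-off described above can be explored by varying the decay exponent and the noise level in this sketch.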

We now theoretically demonstrate scalings in this regime under which both the variance and the bias bounds vanish. For simplicity, we consider Gaussian x.

Corollary 4.1 (General spectral decay: ).

Consider general eigenvalue decay with . Then with high probability,

To illustrate the behavior of the estimates in Corollary 4.1, consider the following assumptions on the population covariance matrix:

Example 4.1 (Low rank).

Let with ones, . In this case , and with high probability by standard results in random matrix theory. Then

Therefore, as , both terms vanish for .

Example 4.2 (Approx. low rank).

Let for small . In this case, and with high probability. Then

For instance, for , both terms vanish for .

Example 4.3 (Nonparametric slow decay).

Consider for . Then . One can bound w.h.p. (see (B.4))

Balancing the two terms, one obtains a nonparametric upper bound. A similar analysis can be carried out for related settings.

Case

In this case, we can further bound the variance and the bias, with a suitable choice of the parameters in Theorem 1, as

(4.3)
(4.4)

We first numerically illustrate the trade-off between the variance and the bias upper bounds. We again consider three settings of the decay parameter, with n and d fixed. As before, we find a trade-off between the variance and the bias as the decay varies; the results are summarized in Table 2. Additionally, Figure 7 provides a plot of the ordered eigenvalues, as well as the spectral density, for both the population and empirical covariances. As one can see, for a general eigenvalue decay, the spectral densities of the population and the empirical covariance can be quite distinct. We again plot the generalization error in Figure 3 as a function of the decay parameter.

Case : variance bound (4.3), bias bound (4.4)
Spectral Decay Implicit Reg
0.005028 3.9801 0.07603 0.7073 0.07591
0.2503 0.1746 0.7513 0.04438 0.7502
0.7466 0.06329 1.6106 0.01646 1.6102
Table 2:
Figure 3: Generalization error as a function of varying spectral decay.

We now theoretically showcase an example in this regime where both the variance and the bias bounds vanish. Again, consider Gaussian x for simplicity.

Corollary 4.2 (General spectral decay: ).

With high probability, it holds that

The variance bound follows from the fact that for all .

Example 4.4 (Favorable spectral decay for ).

Recall . With the choice , both terms vanish for as

In this case, the spectrum satisfies .

5 Proofs

To prove Theorem 1, we decompose the mean squared error into bias and variance terms (Lemma 5.1), and provide a data-dependent bound for each (Sections 5.2 and 5.3).

5.1 Bias-Variance Decomposition

The following is a standard bias-variance decomposition for an estimator. We remark that it is an equality, and both terms have to be small to ensure the desired convergence.

Lemma 5.1.

The following decomposition for the interpolation estimator (2.2) holds

(5.1)

where

(5.2)
(5.3)
Proof of Lemma 5.1.

Recall the closed form solution of the interpolation estimator:

Define . Since , we have

Using Fubini’s Theorem,

5.2 Variance

In this section, we provide upper estimates on the variance part in (5.2).

Theorem 2 (Variance).

Under assumptions (A.1)–(A.4), with high probability with respect to a draw of X, the following holds for n and d large enough:

(5.4)

Remark 5.1.

Let us discuss the first term in Eq. (5.4) and its role in the implicit regularization induced by the curvature of the kernel, the eigenvalue decay, and high dimensionality. In practice, the data matrix is typically centered, in which case the contribution of the mean component drops out, and the first term is effectively a function of the eigenvalues of XX^T/d and the implicit regularization. This formula explains the effect of implicit regularization and captures the “effective rank” of the training data X. We would like to emphasize that this measure of complexity is distinct from the classical notion of effective rank for regularized kernel regression (Caponnetto and De Vito, 2007), where the “effective rank” takes the form Σ_j t_j/(t_j + λ) with λ > 0 the explicit regularization parameter and t_j the j-th eigenvalue of the population integral operator.
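As a point of comparison (our own illustration, not a computation from the paper), the classical effective dimension just recalled behaves as follows for a power-law eigenvalue profile: it is large for small explicit regularization λ and shrinks as λ grows, which is the role played here, implicitly, by the curvature-induced regularization.

```python
# Classical effective dimension N(lambda) = sum_j t_j / (t_j + lambda) for an
# illustrative power-law spectrum t_j = j^(-2) of the population integral
# operator; an illustration of the comparison made in Remark 5.1, not a
# quantity appearing in the bound (5.4).
import numpy as np

t = np.arange(1, 1001, dtype=float) ** (-2.0)    # assumed eigenvalues t_j
for lam in [1e-4, 1e-3, 1e-2, 1e-1, 1.0]:
    eff_dim = np.sum(t / (t + lam))
    print(f"lambda = {lam:8.4f}   effective dimension = {eff_dim:7.2f}")
```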

Proof of Theorem 2.

From the definition of and ,

Due to the fact that for , and , we have that and thus

Let us introduce two quantities for the ease of derivation. For defined in (3), let

(5.5)
(5.6)

and being the transpose of . By Proposition A.2, with probability at least , for the following holds

As a direct consequence, one can see that

(5.7)
(5.8)

provided is large enough, in the sense that

By Lemma B.2 (for Gaussian case, Lemma B.1),

(5.9)

Let us proceed with the bound

where the third inequality relies on (5.9) and (5.7), and the fourth inequality follows from (5.8).

One can further show that

We conclude that with probability at least ,

(5.10)
(5.11)

for large enough. ∎

5.3 Bias

Theorem 3 (Bias).

The bias, under only the assumptions that the x_i, i = 1, …, n, are i.i.d. random vectors and that f_* ∈ H, is upper bounded as

(5.12)

with high probability.

Proof of Theorem 3.

In this proof, when there is no confusion, we identify f_* with the (possibly infinite) vector of its coefficients in the basis of eigenfunctions {e_j}. For the bias, it is easier to work in the frequency domain using the spectral decomposition. Recalling the spectral characterization from the preliminary section,