## Fitting very flexible models: Linear regression with large numbers of parameters

*It is a pleasure to thank Jed Brown (CU Boulder), Dan Foreman-Mackey (Flatiron), Alessandro Gentilini, Teresa Huang (JHU), Sam Roweis (deceased), Adrian Price-Whelan (Flatiron), Bernhard Schölkopf (MPI-IS), Kate Storey-Fisher (NYU), Rachel Ward (UT Austin), and Lily Zhao (Yale) for valuable conversations and input. SV is partially funded by NSF DMS 2044349, EOARD FA9550-18-1-7007, and the NSF–Simons Research Collaboration on the Mathematical and Scientific Foundations of Deep Learning (MoDL) (NSF DMS-2031985).*

David W. Hogg

Flatiron Institute, a division of the Simons Foundation

Center for Cosmology and Particle Physics, Department of Physics, New York University

Center for Data Science, New York University

Max-Planck-Institut für Astronomie, Heidelberg

Soledad Villar

Department of Applied Mathematics & Statistics, Johns Hopkins University

Mathematical Institute for Data Science, Johns Hopkins University

### Abstract:

There are many uses for linear fitting; the context here is interpolation and denoising of data, as when you have calibration data and you want to fit a smooth, flexible function to those data.
Or you want to fit a flexible function to de-trend a time series or normalize a spectrum.
In these contexts, investigators often choose a polynomial basis, or a Fourier basis, or wavelets, or something equally general.
They also choose an order, or number of basis functions to fit, and (often) some kind of regularization.
We discuss how this basis-function fitting is done, with ordinary least squares and extensions thereof.
We emphasize that it is often valuable to choose *far more parameters than data points*, despite folk rules to the contrary:
Suitably regularized models with enormous numbers of parameters generalize well and make good predictions for held-out data; over-fitting is not (mainly) a problem of having too many parameters.
It is even possible to take the limit of infinite parameters, at which, if the basis and regularization are chosen correctly, the least-squares fit becomes the mean of a Gaussian process.
We recommend cross-validation as a good empirical method for model selection (for example, setting the number of parameters and the form of the regularization),
and jackknife resampling as a good empirical method for estimating the uncertainties of the predictions made by the model.
We also give advice for building stable computational implementations.

## Section 1 Introduction

In contexts in which we want to fit a flexible function to data, for interpolation or denoising, we often perform linear fitting in a generic basis, such as polynomials, Fourier modes, wavelets, or spherical harmonics. This kind of linear fitting arises in astronomy when, for example, we want to calibrate the relationship between wavelength and position on the detector in a spectrograph: We have noisy measurements of calibration data and we want to fit a smooth function of position that denoises and interpolates the calibration data. It also arises when we want to make a data-driven interpolation, extrapolation, or local averaging of data, as with light-curve de-trending, continuum estimation, and interpolation or extrapolation of instrument housekeeping (or other) data.

When faced with problems of this kind, investigators have three general kinds of choices that they have to make: They have to choose the basis in which they are working (Fourier, polynomial, wavelet, etc.). They have to choose to what order they extend the basis—that is, how many components to use in the fit. And they have to decide how (or whether) to regularize the fit, or discourage fit coefficients from getting out of line when the data are noisy or the basis functions are close to (or strictly) degenerate.

If you have encountered and solved problems like these, you have made these three kinds of choices (sometimes implicitly). The second choice—about the number of coefficients to fit—is usually made heuristically, and often subject to the strongly held opinion that you *must have fewer parameters than data points*. Here we are going to show that this folk rule is not valid; you can go to extremely large numbers of coefficients without trouble. But like most folk rules, it has a strong basis in reality: There are extremely bad choices possible for the number of coefficients, especially when the number of parameters is close to or comparable with the number of data points. As we will discuss below, these choices ought to be made with care.

In many cases the third kind of choice—about regularization—is made implicitly, not explicitly. Here we are going to emphasize this choice and its importance, and its value in improving your results.

Alternatively, if you are unhappy with the three choices of basis, order, and regularization, you might decide to avoid such decisions and go fully *non-parametric*:
Instead of fitting a basis expansion, you can use a strict interpolator (like a cubic spline), or you can fit a Gaussian process to your data.
Here we will show that the choice of any Gaussian process kernel function is equivalent to choosing a basis and a regularization and letting the number of fit components go to infinity.
That is, going non-parametric doesn’t really get you out of making these choices.
It just makes these choices more implicit.
And it is a pleasure to note that any time you have gone non-parametric you have implicitly chosen to use a basis with way more fit parameters than data points!
The fact that non-parametrics work so well is strong evidence against the folk rule about the number of parameters needing to be less than the number of data points.

An important assumption or setting for the problems we are addressing in this Note will be that you care about predicting new data or interpolating the data, but you explicitly *don’t care* about the parameters of the fit or the weights of the basis functions per se.
In this setting there are no important meanings to the components of the model.
That is—for us—only the data exist.
The details of the model are just choices that permit high-quality interpolations and predictions in the space of the data.

In what follows, we will look at applications that look like interpolation; there are many other contexts in which regressions have gone to very large numbers of parameters.
For example, there are contexts in which there are enormous numbers of possible natural features, like for instance when the data are images or videos: Every pixel of every frame of the video—or any linear (or nonlinear) combination of pixels—can become a feature for the regression.
Also, there are contexts in which features are generated randomly from, say, a space of functions that can act on the natural features (rahimi2007random).
These settings are not what we are addressing here, but they are relevant and related
(many good books exist, for example, bishop, esl, agresti, gelman).
In some ways, the most flexible of models currently in use are deep networks, where it is both the case that the input data often have enormous numbers of natural features, *and* the deep network is capable of generating (effectively) far more, internally.

In some contexts you have strong beliefs about the noise affecting your measurements. In other cases you don’t. In some cases you have strong reasons to use a particular basis. In other cases you don’t. The differences in these beliefs and the differences in your objectives will change what methods you choose and how you use and analyze them. We’ll try to be useful to you no matter where you’re at. This document does not deliver new research results on the mathematics or statistics of regression. It is novel only in that it makes very specific the connection between regularized linear regression and Gaussian processes.

Ordinary least squares is reviewed in Section 2. The extensions of weighted least squares and ridge regression are shown in Section 3. The over-parameterized case (more parameters than data) is discussed in Section 4. The concept of feature weighting for controlling regularization in over-parameterized fits is introduced in Section 5. Cross-validation is explained and used to choose the number of parameters in Section 6, and the double-descent phenomenon is shown. The Gaussian process appears as the limit of infinite parameters in Section 7. Jackknife resampling is explained and used to estimate uncertainties in Section 8. Numerical implementation considerations are discussed in Section 9, and some final remarks are made in Section 10.

## Section 2 Standard linear fitting: Ordinary least squares with a feature embedding

Our setup will be that there are $n$ scalar data points $y_i$. Each of these data points has an associated coordinate or location $t_i$. In the machine-learning lexicon, these pairs $(t_i, y_i)$ will be our "training data". The location $t_i$ could be thought of as a time at which the data point was taken, or a position, or it can be a higher dimensional vector or blob of housekeeping data associated with the data point. Critically, we are going to imagine that the locations $t_i$ are known very well (not very uncertain or noisy), while the $y_i$ are possibly very uncertain or noisy measurements. We'll return to these assumptions at the end.

We are going to fit these data with a linear sum of $p$ basis functions $g_j(t)$. These basis functions are functions of the coordinates $t$. That is, our implicit generative model is

$$y_i = \sum_{j=1}^{p} \beta_j \, g_j(t_i) + \text{noise} \qquad (1)$$

where the values $\beta_j$ are parameters or coefficients of the linear fit. We can assemble the evaluations of the $p$ functions $g_j$ at the $n$ data coordinates $t_i$ into an $n \times p$ design matrix or feature matrix $X$ such that

$$[X]_{ij} = g_j(t_i) \quad\text{so that}\quad Y \approx X\,\beta \qquad (2)$$

where $Y$ is the $n$-vector of the data $y_i$ and $\beta$ is the $p$-vector of the parameters $\beta_j$.

The transformation of the locations $t_i$ into the matrix $X$ is the opposite of a dimensionality reduction.
It is called variously a *feature map* or a *feature embedding*.
We like "embedding" because it (almost always) raises the dimensionality of the locations into the $p$-dimensional space of the rows of $X$.

For a concrete example, one common choice is to make the feature embedding functions $g_j(t)$ the terms of a Fourier series

$$g_j(t) = \cos\!\left(\frac{2\pi t}{L}\left\lfloor\frac{j}{2}\right\rfloor\right) \quad \text{for odd } j \qquad (3)$$

$$g_j(t) = \sin\!\left(\frac{2\pi t}{L}\,\frac{j}{2}\right) \quad \text{for even } j \qquad (4)$$

where $L$ is a (large) length-scale in the coordinate space ($t$ space) and $\lfloor j/2 \rfloor$ indicates the floor of $j/2$ (integer division). Example functions from this basis are shown in Figure 1. Alternative common choices would be to make the embedding functions polynomials or other kinds of ordered basis functions, such as wavelets or spherical harmonics (the latter if, say, the $t_i$ are positions on the sphere). Another choice that isn't common in the natural sciences, but studied in machine learning (for example, rahimi2007random), is to choose the features randomly from a distribution (rather than on a regular grid in frequency, as we do here). That is beyond our scope, but we come back to it in Section 10.
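As a concrete sketch (in NumPy, with function and variable names of our own choosing), the Fourier feature embedding of equations (3) and (4) can be implemented as:

```python
import numpy as np

def fourier_features(t, p, L=3.0):
    """Embed 1-D locations t into p Fourier features, equations (3)-(4).

    Feature j = 1 is the constant (cosine of zero frequency); thereafter
    cosines (odd j) and sines (even j) of increasing frequency
    2 * pi * floor(j / 2) / L.  The length scale L is a choice.
    """
    t = np.asarray(t, dtype=float)
    j = np.arange(1, p + 1)
    k = j // 2                                   # floor(j / 2)
    phase = 2.0 * np.pi * np.outer(t, k) / L
    return np.where(j % 2 == 1, np.cos(phase), np.sin(phase))
```

Each column of the returned matrix is one feature evaluated at all the input locations; passing test locations through the same function gives the test feature matrix.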

The idea of least-squares fitting is that the "best" values of the parameters are the values $\hat\beta$ that minimize the sum of squares of the differences between the data and the linear combination of features:

$$\hat\beta = \arg\min_\beta \, \|Y - X\beta\|_2^2 \qquad (5)$$

where $\hat\beta$ is the $p$-vector (column vector) of the best-fit values of the parameters $\beta_j$, and $\|u\|_2^2$ denotes the squared L2-norm, or the *sum of squares* of the components of a vector $u$:

$$\|u\|_2^2 = \sum_i u_i^2 \qquad (6)$$

This optimization objective (5) is convex and the optimization problem has a solution in closed form^2 as long as the number $p$ of parameters (and features) is less than the number $n$ of training points (and the matrix $X^\top X$ is invertible):

$$\hat\beta = (X^\top X)^{-1}\, X^\top\, Y \qquad (7)$$

^2: Note that $\nabla_\beta \|Y - X\beta\|_2^2 = 2\,X^\top(X\beta - Y)$, so critical points are such that $X^\top X\,\beta = X^\top Y$. Since the objective is convex, all these critical points are global minima. Further analysis is given in Appendix A.

We will treat the case in which the matrix $X^\top X$ is not invertible below. When the investigator knows uncertainties on the training data $y_i$, this expression will change a bit; we begin to discuss that in the next Section.

But recall our setting: We are using the linear fit to interpolate the data, or de-noise the data, or predict new data. In these contexts, we don't care about the parameter vector $\hat\beta$ itself. We care only about the predictions at a set of new "test" locations $t^*_m$, which will usually be different from the training locations $t_i$. From the new test locations we create the test feature matrix $X_*$ (by the same feature embedding functions $g_j$). The prediction $\hat Y_*$ for the values at the test locations becomes

$$\hat Y_* = X_*\,\hat\beta = X_*\,(X^\top X)^{-1}\, X^\top\, Y \qquad (8)$$

Examples of OLS applied to some toy data^3 are shown in Figure 2.

^3: The algorithm by which the toy data were made—and indeed all of the code used to make the figures for this Note—is available online at https://github.com/davidwhogg/FlexibleLinearModels.
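To make the recipe concrete, here is a minimal NumPy sketch of the fit (7) and prediction (8) on synthetic toy data (the data-generating function, seed, and parameter choices here are illustrative; they are not the ones used for the figures):

```python
import numpy as np

def fourier_features(t, p, L=3.0):
    """Fourier embedding, equations (3)-(4); L is the length scale."""
    t = np.asarray(t, dtype=float)
    j = np.arange(1, p + 1)
    phase = 2.0 * np.pi * np.outer(t, j // 2) / L
    return np.where(j % 2 == 1, np.cos(phase), np.sin(phase))

# toy training data: noisy samples of a smooth function
rng = np.random.default_rng(42)
t_train = np.sort(rng.uniform(0.0, 1.5, 23))
y_train = np.sin(4.0 * t_train) + 0.1 * rng.normal(size=t_train.size)

p = 7                               # p < n: the traditional regime
X = fourier_features(t_train, p)

# equation (7); solve the normal equations rather than forming an inverse
beta_hat = np.linalg.solve(X.T @ X, X.T @ y_train)

# equation (8): predict at new test locations with the same embedding
t_test = np.linspace(0.0, 1.5, 100)
y_pred = fourier_features(t_test, p) @ beta_hat
```

Note the use of `np.linalg.solve` instead of an explicit matrix inverse; this is one of the implementation points we return to in Section 9.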
This form (8) of the prediction of new test data is called ordinary least squares (OLS). It has many good properties, some of which are encoded in the Gauss–Markov theorem. In particular, if the noise contributions in the model given in equation (1) are uncorrelated, have zero mean, and have equal variances, then the Gauss–Markov theorem states that the OLS estimator has the lowest variance within the class of unbiased estimators that are linear in $Y$ (see, for example, esl, Ch. 3).

## Section 3 Discussion and extensions of OLS

The prediction (8) when $p < n$ (the under-parameterized or traditional regime) is *affine invariant* in that $p$-dimensional rotations or rescalings of the rectangular feature matrix $X$ do not affect predictions.
That is, if $R$ is an invertible $p \times p$ matrix, the prediction using $X R$ (and $X_* R$) will be identical to the prediction using the original $X$ (and $X_*$).
This affine invariance will be modified in the over-parameterized regime, below.

Although the OLS prediction (8) is affine invariant with respect to $p$-dimensional transformations, it is *not* affine invariant with respect to $n$-dimensional transformations, such as a re-weighting of the input data. Indeed, if you know weights or inverse variances for your data points, conceptually you can put them into a weight matrix $C^{-1}$ (written this way to emphasize that reweighting is usually inverse-variance weighting) and write

$$\hat Y_* = X_*\,(X^\top C^{-1} X)^{-1}\, X^\top C^{-1}\, Y \qquad (9)$$

where the weight matrix $C^{-1}$ is $n \times n$ (and often diagonal in standard applications). The weight matrix $C^{-1}$ is sometimes called the information tensor (or information matrix) and its inverse $C$ is often called the covariance matrix or the noise variance tensor.

This form (9) of least squares is called *weighted least squares* (WLS) because of the data weighting (not to be confused with *feature* weighting, to appear below).
It is also called *chi-squared fitting* because it optimizes the scalar objective commonly called chi-squared:

$$\chi^2 = (Y - X\beta)^\top\, C^{-1}\, (Y - X\beta) \qquad (10)$$

(If that isn't obviously chi-squared to you, recall that $C^{-1}$ is often diagonal and has the inverses of the squares of the data uncertainties on that diagonal.) In Figure 3 a comparison of OLS and WLS is shown, for a case of non-trivial data weights, where the data weights are set to be the inverse squares of individual data-point uncertainties.
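For the common case of diagonal weights (independent per-point inverse variances), the WLS prediction (9) is a few lines of NumPy; the helper below is a sketch with names of our own choosing:

```python
import numpy as np

def wls_predict(X, y, cinv_diag, X_star):
    """Weighted least squares, equation (9), for a diagonal weight
    matrix C^{-1} whose diagonal cinv_diag holds the per-point
    inverse variances."""
    A = X.T @ (cinv_diag[:, None] * X)        # X^T C^{-1} X
    b = X.T @ (cinv_diag * y)                 # X^T C^{-1} Y
    return X_star @ np.linalg.solve(A, b)
```

With all weights equal, the common weight cancels and this reduces to the OLS prediction (8), which is one quick sanity check on an implementation.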

We will say more about noisy data and the propagation of uncertainties below in Section 8.
It might be crossing your mind that there are uncertainties not just in the data points $y_i$, but often in the *locations* $t_i$ of the data as well. It turns out that taking the latter into account is a much harder problem; we will discuss this briefly in Section 8.

It is common to include a regularization that discourages the fit from making use of large amplitudes $\beta_j$. There are many options, but the simplest is ridge regression (or Tikhonov regularization or L2 regularization), which (in the form that doesn't have data weights) looks like

$$\hat\beta = \arg\min_\beta \, \|Y - X\beta\|_2^2 + \lambda\,\|\beta\|_2^2 \qquad (11)$$

where $\lambda$ is a regularization parameter that penalizes large values for elements of the parameter vector. This optimization is also convex. The ridge-regularized prediction for new data looks like

$$\hat Y_* = X_*\,(X^\top X + \lambda\, I_p)^{-1}\, X^\top\, Y \qquad (12)$$

where $I_p$ is the $p \times p$ identity (see Appendix A). In the language of Bayesian inference, $\lambda$ can be seen as the inverse of a prior variance for the parameters $\beta_j$. The salutary effect of the ridge regularization is shown in Figure 4.

The ridge brings with it a choice: How to set the hyper-parameter $\lambda$? We generally recommend cross-validation, to be discussed below in Section 6. Ridge regression is not affine invariant in the sense that the standard OLS prediction (8) is; that is, the effect of the regularization depends on the amplitudes and linear combinations of features placed in the feature matrix. This will become important later when we consider feature weights below. It is also the case that the regularization need not be proportional to the identity matrix: In principle any positive definite $p \times p$ matrix $\Lambda^{-1}$ could be used in place of $\lambda\, I_p$; this makes sense to consider when you have detailed prior beliefs about all the parameters or if those parameters are measured with different units (say).

You can combine both the point weighting from WLS and the generalized ridge regression into a weighted ridge that looks like

$$\hat Y_* = X_*\,(X^\top C^{-1} X + \Lambda^{-1})^{-1}\, X^\top C^{-1}\, Y \qquad (13)$$

This form has good properties for many real-world physics applications, where data-point error bars are often known, and functions are often expected to be smooth.
We'll discuss this form (13) more below in Section 5,
but briefly we can say here that it appears in Bayesian inference contexts where the data points are treated as having Gaussian noise associated with them (with known variance tensor or covariance matrix $C$) and there is a Gaussian prior on the parameter vector (with known variance $\Lambda$).
We have discussed that model elsewhere (products).^4
It is also useful sometimes to think of the "units" or dimensions of the quantities in (13): In the Bayesian setting, the units of $\Lambda$ would be the square of the units of $\beta$, and the units of $\lambda$ would be the square of the ratio of the units of $Y$ to the units of $\beta$ (the square of the units of $X$).

^4: In our previous discussion of this product of Gaussians, the notation differs from the notation used here.
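A sketch of the weighted ridge prediction (13), under the simplifying assumption that both $C^{-1}$ and $\Lambda$ are diagonal (the names here are our own, not from any library):

```python
import numpy as np

def weighted_ridge_predict(X, y, cinv_diag, lam_diag, X_star):
    """Weighted ridge, equation (13):
    Y_* = X_* (X^T C^{-1} X + Lambda^{-1})^{-1} X^T C^{-1} Y,
    with C^{-1} = diag(cinv_diag) and Lambda = diag(lam_diag)."""
    A = X.T @ (cinv_diag[:, None] * X) + np.diag(1.0 / lam_diag)
    b = X.T @ (cinv_diag * y)
    return X_star @ np.linalg.solve(A, b)
```

As $\Lambda$ grows (a weak prior), this tends to WLS; as $\Lambda$ shrinks (a strong prior), the prediction is pulled towards zero.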

## Section 4 Over-parameterization

We are taught folklore, at a young age, that we can never fit for more parameters than we have data. That is, we can never work at $p > n$.
This isn't true!
Not only is it the case that we *can* work at $p > n$; in many cases we *should* work at $p \gg n$, and many real-world regressions do.
But it is true that the $p > n$ regime is indeed strange!
In the over-parameterized case, there are typically many settings of the parameters $\beta$ that will literally zero out the differences between the data $Y$ and the linear prediction $X\beta$.
The OLS solution, in this case, is defined (somewhat arbitrarily) to be the minimum-norm parameter vector that interpolates the data:

$$\hat\beta = \arg\min_\beta\;\|\beta\|_2^2 \quad\text{subject to}\quad Y = X\beta \qquad (14)$$

Technically this formulation depends on an additional assumption: that the feature matrix $X$ is full (row) rank, or at least that the data vector $Y$ lies in the subspace spanned by the columns of $X$. This is true almost always when $p > n$. This optimization is again convex and has a unique solution, although that solution will depend on the feature weights that we discuss in a moment. When the investigator knows uncertainties on the training data points $y_i$, this expression will change a bit; we return to that in the next Section. The under-parameterized and over-parameterized optimization statements (5) and (14) can be unified into one form by considering the limit of light L2 regularization:

$$\hat\beta = \lim_{\lambda\to 0^+}\,\arg\min_\beta\;\|Y - X\beta\|_2^2 + \lambda\,\|\beta\|_2^2 \qquad (15)$$

In the $\lambda\to 0^+$ limit, this delivers the OLS solution in either case, and it doesn't require the constraint in (14) to be satisfiable.

In the over-parameterized case ($p > n$), the prediction looks like

$$\hat Y_* = X_*\, X^\top\, (X\, X^\top)^{-1}\, Y \qquad (16)$$

provided that $X X^\top$ is invertible (which will usually be the case).
Like (8), this prediction is *also* called ordinary least squares (OLS), or sometimes "min-norm least squares" to emphasize the point that it is making the minimum-norm choice of $\hat\beta$ among many degenerate solutions.
Examples of OLS fits in the over-parameterized regime are shown in Figure 5.
Note that the fits with different numbers of parameters lead to very different predictions, but they all go through the data exactly.
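The min-norm property of (16) is easy to verify numerically. The sketch below uses a random wide feature matrix purely for illustration (any full-row-rank $X$ with $p > n$ behaves the same way):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 8, 40                         # p > n: over-parameterized
X = rng.normal(size=(n, p))          # a full-row-rank wide matrix
y = rng.normal(size=n)

# equation (16)'s parameter vector: beta = X^T (X X^T)^{-1} Y
beta_hat = X.T @ np.linalg.solve(X @ X.T, y)

# it threads the data exactly...
assert np.allclose(X @ beta_hat, y)

# ...and it agrees with the pseudo-inverse solution discussed below
assert np.allclose(beta_hat, np.linalg.pinv(X) @ y)
```

Adding any null-space component of $X$ to `beta_hat` leaves the fit through the data perfect but increases the norm of the parameter vector, which is the sense in which this choice is "min-norm".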

It is slightly off-topic to notice that in Figure 5, as $p$ gets very large, the OLS solution approaches zero everywhere that it can. This behavior is a direct consequence of Plancherel's theorem, which states that the Fourier transform is unitary (see, for instance, folland2016course). This means that the integral over frequency of the square of the Fourier transform is equal to the integral over location of the square of the original function. Thus the min-norm solution delivered by OLS, which chooses the interpolating function that minimizes the sum of squares of the component amplitudes in the Fourier basis used to make Figure 5, will choose the interpolating function that minimizes the mean square of the value of the function in the location space. It will try to stay as close to zero as possible. That's probably not desirable in most applications! We will fix that problem in the next Section.

The two equations for OLS—(8) and (16)—can be unified into one equation (and also generalized to handle non-invertible matrices $X^\top X$ and $X X^\top$) if we define the pseudo-inverse $X^+$:

$$\hat Y_* = X_*\, X^+\, Y \qquad (17)$$

The pseudo-inverse of a diagonal matrix is defined as the diagonal matrix made by inverting the non-zero diagonal entries (and leaving the zero entries at zero). And for any non-diagonal (or any non-square) matrix $X$, the pseudo-inverse is defined by taking the singular-value decomposition (SVD) of $X$, $X = U\,S\,V^\top$, with $S$ diagonal and $U, V$ orthogonal; then $X^+ = V\,S^+\,U^\top$.

## Section 5 Feature weighting

The OLS prediction for $p > n$ is *not* affine invariant with respect to $p$-dimensional rotations or rescalings.
That is, rotations and scalings in the feature space *will* affect predictions.
It behooves us to re-scale the features (the columns of the feature matrix $X$) in a sensible way, for instance to encourage the fit to use low-frequency features more than high-frequency features (xie2020weighted, bah2016sample, rauhut2016interpolation).
We can encode these feature weights in a $p \times p$ diagonal weight matrix $\Lambda$, and the prediction becomes

$$\hat Y_* = X_*\,\Lambda\,X^\top\,(X\,\Lambda\,X^\top)^{-1}\,Y \qquad (18)$$

This can be seen as the prediction for new data resulting from the following optimization (again, assuming the data can be interpolated):

$$\hat\beta = \arg\min_\beta\;\beta^\top\Lambda^{-1}\beta \quad\text{subject to}\quad Y = X\beta \qquad (19)$$

which—in analogy to the optimizations (14) and (15)—can also be written as

$$\hat\beta = \lim_{\lambda\to 0^+}\,\arg\min_\beta\;\|Y - X\beta\|_2^2 + \lambda\,\beta^\top\Lambda^{-1}\beta \qquad (20)$$

This form doesn't require the constraint in (19) to be satisfiable.
This optimization^5 penalizes more strongly the parameters $\beta_j$ corresponding to features with larger values of $[\Lambda^{-1}]_{jj}$ (that is, smaller feature weights).

^5: If you are a physicist and you don't like the mathematical notation—if you prefer to look at quantities that are obvious scalar forms—then recall that $\|\beta\|_2^2 = \beta^\top\beta$, so the penalty in (20) is the scalar $\beta^\top\Lambda^{-1}\beta$.
That right-hand-side object is a gauge-invariant object so long as $\beta$ and $\Lambda^{-1}$ are also gauge-invariant objects themselves.
Yes, data analysis can have this kind of geometric structure!

In the Fourier case this re-weighting can be very straightforward: Each of the embedding functions $g_j$ has an associated frequency $\omega_j$; we can control the fit by weighting the features by a function $w(\omega)$. For demonstration purposes, we can choose

$$w(\omega) = \exp\!\left(-\tfrac{1}{2}\,s^2\,\omega^2\right) \qquad (21)$$

where $s$ is a hyper-parameter controlling the (inverse) width of the weighting function in frequency space, and the frequency input will be $\omega_j$ for each feature $g_j$. That is,

$$\omega_j = \frac{2\pi}{L}\left\lfloor\frac{j}{2}\right\rfloor \qquad (22)$$

and the diagonal entries of the weight matrix can be taken to be $[\Lambda]_{jj} = w(\omega_j)^2$. We have chosen this form (21) for $w(\omega)$ for specific reasons that will become obvious below. The introduction of the feature weights dramatically changes the predictions; this is shown in Figure 6. High frequencies are suppressed and the prediction becomes smooth.
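Equation (18) in NumPy, with the feature weights passed in as the diagonal of $\Lambda$ (a sketch with our own names; the weight values would come from whatever $w(\omega)$ you choose):

```python
import numpy as np

def feature_weighted_predict(X, y, lam_diag, X_star):
    """Feature-weighted min-norm prediction, equation (18):
    Y_* = X_* Lambda X^T (X Lambda X^T)^{-1} Y,
    with Lambda = diag(lam_diag)."""
    K = (X * lam_diag) @ X.T                  # X Lambda X^T (n x n)
    return (X_star * lam_diag) @ X.T @ np.linalg.solve(K, y)
```

With equal weights this reduces to the unweighted min-norm OLS of (16); down-weighting the high-frequency columns is what smooths the fit in Figure 6.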

This all illustrates that, while OLS is affine invariant in the under-parameterized setting, the affine non-invariance in the over-parameterized setting is a property that can be exploited. When the different features are weighted or normalized differently, the amplitudes of the components of the parameter vector $\beta$ have to change in response, which in turn changes its norm $\|\beta\|_2$. That is, the details of the min-norm data-fitting parameter vector depend on the details of how the features are normalized or weighted. In feature-weighted OLS, this property can be exploited to make the fits smooth, or to meet other desiderata.

If we think of the OLS choice $\hat\beta$—the min-norm vector among all vectors that thread the data (satisfy $Y = X\beta$)—as the result of a kind of light regularization, then the feature weighting is an adjustment of the form of that regularization: It asks the optimization to pull some components of $\beta$ towards zero harder than others. In our view, feature weighting should be considered part of the investigator's choice of regularization. The feature weighting can be thought of as altering the details of that choice, or it can be thought of as making the standard min-norm choice but in a carefully chosen, rescaled basis.

In the most general case, you have not just a set of feature weights (the diagonal of $\Lambda$), you also have a set of data-point weights (the diagonal of $C^{-1}$ which, in standard settings, would be the inverses of the variances of the noise affecting the data points $y_i$). When you put these all together, the feature-weighted, data-weighted least-squares predictions are given by either of these two equivalent expressions:

$$\hat Y_* = X_*\,(X^\top C^{-1} X + \Lambda^{-1})^{-1}\,X^\top C^{-1}\,Y \qquad (23)$$

$$\hat Y_* = X_*\,\Lambda\,X^\top\,(X\,\Lambda\,X^\top + C)^{-1}\,Y \qquad (24)$$

The equivalence of these is due to the Woodbury matrix identity (henderson1981deriving). These expressions appear in Bayesian-inference contexts (see, for example, products) when $\Lambda$ is the variance of the prior on the parameter vector $\beta$ and $C$ is the variance of the noise on the data vector $Y$. They are solutions to the optimization

$$\hat\beta = \arg\min_\beta\;(Y - X\beta)^\top C^{-1}(Y - X\beta) + \beta^\top\Lambda^{-1}\beta \qquad (25)$$

The units of $\Lambda$ must be the units of $\beta$ squared, and the units of $C$ must be the units of $Y$ squared. These weighted-feature, weighted-data least-squares forms (23) and (24) are good because (if the weightings are chosen appropriately) they lead to smooth solutions that are not required to pass precisely through every data point. This is appropriate in common, real situations in which data are noisy and the world is smooth. Which form you choose, between (23) and (24), depends on a few things, but primarily on the relative sizes of $n$ and $p$: If $p < n$, it's both faster and more stable to use (23), which inverts a $p \times p$ matrix; if $n < p$, it's faster and more stable to use (24), which inverts an $n \times n$ matrix. We show a toy example of a fit with both feature weights and data weights in Figure 7.
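The equivalence of the two forms is easy to check numerically (random matrices and names of our own choosing):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 9, 30
X = rng.normal(size=(n, p))
y = rng.normal(size=n)
Cinv = np.diag(rng.uniform(0.5, 2.0, n))       # data weights C^{-1}
Lam = np.diag(rng.uniform(0.5, 2.0, p))        # feature weights Lambda
X_star = rng.normal(size=(4, p))

# equation (23): invert (solve with) a p x p matrix
pred23 = X_star @ np.linalg.solve(X.T @ Cinv @ X + np.linalg.inv(Lam),
                                  X.T @ Cinv @ y)

# equation (24): invert (solve with) an n x n matrix
C = np.linalg.inv(Cinv)
pred24 = X_star @ Lam @ X.T @ np.linalg.solve(X @ Lam @ X.T + C, y)

assert np.allclose(pred23, pred24)             # the Woodbury identity
```

Here $n < p$, so form (24) is the one you would actually use; the check confirms they agree to numerical precision.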

## Section 6 How to set the number of parameters (and other hyper-parameters)

There is a lot of literature analyzing the performance of linear regressions as a function of the sizes $n$ and $p$, regularization strengths and forms, and so on (for example, bartlett2020benign, hastie2019surprises). This literature often refers to the "risk", which is a statistics term for the expected squared error (mistake) made when predicting new data not in your training set. In order to deliver values or bounds on the risk, this literature depends on knowing how the data were generated, or the family of distributions from which the data (the $t_i$ and $y_i$ in our nomenclature) were drawn. In the Real World (tm), you don't get this luxury. The data are given to you without documentation! Indeed, understanding the generating process of the world is the goal of investigations in the natural sciences (such as astronomy); the investigator does not know the generating process at the outset.

Given a data set (location–data pairs $(t_i, y_i)$), what is the best way to empirically estimate, from those data, the out-of-sample prediction error? That is, how do you estimate how well you are likely to predict new data? A reasonable answer to this is cross-validation: In cross-validation, we leave out a part of the data (in the most extreme form, leaving out one single data point at a time), train the model using all but the left-out part, and predict the left-out part. Then this process is iterated over all choices of what part (or point) to leave out. This process is illustrated in Figure 8, where we show the fits, each of which is trained on the 22 points remaining when we leave one of the points out. Also shown is the prediction, in each leave-one-out fit, for the left-out point.

The prediction for the mean-squared error (MSE) for new data is the mean of the square of the differences between the leave-one-out predictions and the left-out data. This leave-one-out cross-validation MSE (CVMSE) will be different for different choices of the basis, size ($p$), and regularization (including feature-weight functions) of the fits you do. It is a fairly reliable and well-studied method for assessing predictive accuracy (see, for example, cv), although it does depend on some assumptions (for example, that data points are not duplicated or strongly correlated, and that the predictive information in the data is distributed among multiple individual data points). An example of the CVMSE for two models (the feature-weighted OLS and the ridge regression) is shown for our toy data, as a function of the number of features $p$, in Figure 9. The predictive accuracy is indeed a very strong function of $p$.
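Leave-one-out cross-validation is just a short loop around whatever fitting routine you are using; the `predict` callback signature below is an assumption of this sketch, not a standard API:

```python
import numpy as np

def loo_cvmse(t, y, predict):
    """Leave-one-out cross-validation MSE.

    `predict(t_train, y_train, t_test)` is any fit-and-predict routine;
    the loop leaves each point out in turn, fits to the rest, and
    scores the prediction for the held-out point."""
    errs = []
    for i in range(len(t)):
        keep = np.arange(len(t)) != i
        y_hat = predict(t[keep], y[keep], t[i:i + 1])
        errs.append((y_hat[0] - y[i]) ** 2)
    return np.mean(errs)
```

Running this over a grid of $p$ (and any regularization hyper-parameters) and keeping the settings with the smallest CVMSE is the model-selection recipe we are recommending.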

In particular, the CVMSE for the feature-weighted OLS fits is bad when $p \approx n$, and much better at $p \ll n$ and $p \gg n$.
This phenomenology is not particular to this problem.
It is extremely general.
There is an effect known in many kinds of regression called "double descent" or the "peaking phenomenon" or "jamming" in which predictive accuracy becomes very poor when the number of free parameters comes close to the number of data points (jain198239, spigler2018jamming, geiger2019jamming). For linear models, the "risk"—the out-of-sample prediction error—blows up when the model capacity just becomes excessive at $p \approx n$ (hastie2019surprises).
The fundamental reason for this phenomenon is that the ordinary least-squares estimates (8) and (16) require computation of the inverses of $X^\top X$ and $X X^\top$ respectively, and these matrices are very badly conditioned around $p \approx n$.
This translates to a large variance for the estimator, which implies a large risk.
One way to think about this is that when the condition number of the matrix $X^\top X$ or $X X^\top$ is large, some directions in the data space—some linear combinations of the elements of the data vector $Y$—are very strongly amplified.^6
This makes the regression unreasonably sensitive to noise in the data.
The high-risk behavior at $p \approx n$ disappears with certain kinds of regularization.
These include the ridge regularization shown in Figure 9 (also known as Tikhonov regularization), but also early stopping (hastie2019surprises) and dimensionality reduction (huang2020dimensionality).

^6: The condition number comes back up below in Section 9. It turns out that when the condition number is large, not only do the output predictions become very sensitive to the input training data, but the numerical (computational) stability of the linear algebra can also be badly affected. But we emphasize here that the risk goes bad when the condition number is large *even if* it is possible to perform the linear algebra correctly at high precision.
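The conditioning story can be seen directly: at $p = n$ the matrix that OLS must invert is typically terribly conditioned, and even a small ridge term tames it. A sketch (with illustrative sizes and seed of our own choosing):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 23
t = np.sort(rng.uniform(0.0, 1.5, n))

def fourier_features(t, p, L=3.0):
    """Fourier embedding, equations (3)-(4)."""
    j = np.arange(1, p + 1)
    phase = 2.0 * np.pi * np.outer(t, j // 2) / L
    return np.where(j % 2 == 1, np.cos(phase), np.sin(phase))

X = fourier_features(t, n)           # p == n: the danger zone
G = X.T @ X                          # the matrix OLS must invert

# the ridge term bounds the smallest eigenvalue away from zero, so the
# condition number (and hence the variance of the estimator) drops
lam = 1e-3 * np.trace(G) / n
assert np.linalg.cond(G + lam * np.eye(n)) < np.linalg.cond(G)
```

The same comparison run across a range of $p$ reproduces the qualitative shape of the double-descent curve in Figure 9, with the conditioning (and the risk) worst near $p \approx n$.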

Sometimes the word "over-fitting" is used to describe models that are too flexible. In our view, a model is properly called "over-fit" when the prediction of the regression on held-out data—what the statisticians call the "risk"—is bad. So over-fitting happens not universally when the number of parameters is large, but instead when the number of parameters is comparable to the number of data points and (also) the regularization is inappropriate. Figure 9 shows that models with both small and large numbers of parameters can make good predictions for held-out data, and that regularization can protect a model from over-fitting even when $p \approx n$. We don't mean to imply that over-fitting is not a problem in regression; it can be! Our recommendation is just that the problem of over-fitting be analyzed empirically through cross-validation, not through intuitive ideas about the number of parameters.

The CVMSE gets better and worse at different values of $p$. That's not surprising; there are places where the fit basis and size and feature weighting are all more appropriate for your specific problem. These good-CVMSE places are the best places to work, if you can afford to search for them and find them. In general, if you care about predictive accuracy, it is worth doing a search for the values of $p$ (and other hyper-parameters, like regularization strengths and feature-weighting function parameters) that minimize the CVMSE. The minimum-CVMSE choices for $p$ and the hyper-parameters will generally be very close to the choices that lead to the best predictions for new data.

That said, there is one more point to make about over-fitting, which is that, when your data set is small, it is possible to over-fit even your cross-validation. This problem is beyond the scope of this Note; all we will say here is that it is not possible to make precise settings of many hyper-parameters through cross-validation. Because in cross-validation you are making use of your data to set the properties of your regression, there are dangers of over-adapting your method to your data. It is safer to have a fully independent validation data set, but this is rarely practical (and it doesn't completely protect you either; see cifar10).

By the way, you might have felt uncomfortable with dropping individual points in this problem, since the points are sort-of regularly spaced in the location space (the $t$ space) to begin with, and become more irregularly spaced when you leave one out.
How much does this matter?
It probably *does* matter in detail, but you don’t have much choice in problems like this.
All empirical measures of predictive accuracy have these kinds of problems.

## Section 7 The Gaussian process: The limit of infinite features

The Gaussian process (GP; see gpml for a complete introduction) is a non-parametric regression that takes training data $Y$ at coordinates $t$, plus a kernel function, and makes predictions for new data $Y^{*}$ at new positions $t^{*}$. The Gaussian process mean prediction looks like

$$\hat{Y}^{*} \;=\; K^{*}\,K^{-1}\,Y \qquad (26)$$

where $K$ is the square kernel matrix between the training locations $t$ and themselves

$$[K]_{ij} \;=\; k(t_i, t_j) \qquad (27)$$

and $K^{*}$ is the same except that it is the rectangular kernel matrix between the test locations $t^{*}$ and the training locations $t$; the function $k(\cdot,\cdot)$ is a positive semi-definite kernel function. Stated this simply, the GP looks like magic; our goal here is to connect this to the feature-weighted OLS from Section 5.
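In code, equations (26) and (27) amount to a few lines. This sketch is ours, not the Note's own code: it assumes the Matérn 3/2 kernel (which appears later, in equation (34)) and invents toy data for illustration.

```python
import numpy as np

def matern32(ta, tb, s=3.0):
    # kernel matrix [k(t_i, t_j)] as in (27), using the Matern 3/2 form
    dt = np.abs(ta[:, None] - tb[None, :])
    return (1.0 + s * dt) * np.exp(-s * dt)

def gp_mean(t_train, y_train, t_test, s=3.0):
    K = matern32(t_train, t_train, s)       # square train-train kernel matrix
    K_star = matern32(t_test, t_train, s)   # rectangular test-train kernel matrix
    # eq (26): K* K^{-1} Y, computed with solve() rather than an explicit inverse
    return K_star @ np.linalg.solve(K, y_train)

rng = np.random.default_rng(42)
t = np.linspace(0.0, 3.0, 23)                          # made-up training locations
y = np.sin(4.0 * t) + 0.1 * rng.normal(size=t.size)   # made-up training data
t_new = np.linspace(0.1, 2.9, 50)
y_new = gp_mean(t, y, t_new)
```

Because there is no noise term in (26), this mean prediction interpolates the training data exactly; the condition-number warnings of Section 9 apply to the solve of $K$.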

First, what does it mean for a kernel function to be positive semi-definite? It means that, no matter what set of locations you choose, the kernel matrix made from the function according to (27) has only non-negative, real eigenvalues. The kernel function must be positive semi-definite because, among other things, it is describing the variance of a process, and variances are always non-negative. It turns out that you can guarantee that a kernel function is positive semi-definite if you can show that it is, itself, the Fourier transform of a non-negative function (Bochner’s Theorem; see for example folland2016course).^{7}

^{7}7It is *not* required that the function itself be non-negative; non-negative definiteness is a different condition entirely from non-negativity.

Although the kernel matrix $K$ is, by construction, always positive semi-definite, it can have an extremely bad or even infinite condition number. That is, it can lead your code or implementation into very unstable linear-algebra operations. We discuss how to handle these issues below in Section 9.

Above, we called the GP prediction the “mean”. This is because the Gaussian process is a model for a mean *and variance* in function space.
In addition to the mean predicted^{8}^{8}8There is an additional point worthy of mention here, which is that the expression (26) is implicitly for a GP with a zero prior mean. There is a more general expression that subtracts a prior mean function from the $Y$ values and adds the prior mean back in to the $\hat{Y}^{*}$ values. See gpml for all the math. in (26), the GP also predicts a *variance* in the $Y^{*}$ space around that mean.
In this Note, we are going to treat the GP as producing only a mean prediction, where the shape of the kernel function matters, but the amplitude of the kernel function does not (the prediction is a ratio of kernel matrices, so the kernel amplitudes cancel).
However, in Bayesian-inference contexts the amplitude of the GP kernel is important, and the predicted variances are important.
When we consider only the mean prediction of the GP, as we do here, then the GP is a kind of linear filter that operates on data $Y$ and predicts or interpolates to new data $\hat{Y}^{*}$.
Classically, this linear filter is sometimes called a Wiener filter (more on this in a moment) or *kriging* (possibly because of krige).

Importantly for us, there is a strong connection between the feature-weighted OLS and the GP. In particular, when we take the limit of infinite features ($p \to \infty$; provided the limit exists) we get kernel matrices in place of the matrix products:

$$X\,\Lambda\,X^{\top} \;\longrightarrow\; K \qquad (28)$$

$$X^{*}\,\Lambda\,X^{\top} \;\longrightarrow\; K^{*} \qquad (29)$$

where $\Lambda$ is the diagonal matrix of weights, and element $[K]_{ij}$ of $K$ is obtained by evaluating a kernel function $k(t_i, t_j)$. Equivalently, the limit is

$$[K]_{ij} \;=\; \lim_{p\to\infty} \sum_{q=1}^{p} [\Lambda]_{qq}\,[X]_{iq}\,[X]_{jq} \qquad (30)$$

where we have used the diagonality of $\Lambda$ to make a single sum over $q$. The specific form of the kernel function depends on the basis (the features) we choose, and the weighting of the basis functions in the OLS.

The connection between the infinite basis chosen (the form and weighting of the features) and the kernel function is governed by Mercer’s theorem (see minh2006mercer). However, if the basis is Fourier, as we chose above in (3), and the spacing $\Delta\omega$ between modes is small enough, the kernel approximates the Fourier transform of the square of the weighting function we use to weight the features.

That is, in the case of the specific example of weight function $w(\omega)$ given in equation (21), we can connect our feature-weighted OLS to an equivalent GP if we know the Fourier transform of the square of $w(\omega)$. We chose that specific form for $w(\omega)$ because it has a square that is a member of a Fourier-transform pair:

$$w(\omega)^{2} \;=\; \frac{1}{\left(s^{2} + \omega^{2}\right)^{2}} \qquad (31)$$

$$k(\Delta t) \;\propto\; \left(1 + s\,|\Delta t|\right)\,e^{-s\,|\Delta t|} \qquad (32)$$

This latter function is also known as the Matérn 3/2 kernel function; it will become the kernel function for the Gaussian process when we take the $p \to \infty$ limit. In the limit, the specific example of the feature-weighted OLS given here becomes a GP with kernel function $k$ (under the mild assumptions that guarantee that the discrete Fourier transform converges to the continuous Fourier transform; see ft):

$$k(t_i, t_j) \;=\; \lim_{p\to\infty} \sum_{q=1}^{p} w(\omega_q)^{2}\,\cos\!\left(\omega_q\,[t_i - t_j]\right) \qquad (33)$$

$$\phantom{k(t_i, t_j)} \;\propto\; \left(1 + s\,|t_i - t_j|\right)\,e^{-s\,|t_i - t_j|} \qquad (34)$$

Technically, the kernel function converges to the Fourier transform of the square of the weighting function in the $\Delta\omega \to 0$ limit (the definition of $k$ already assumes $p \to \infty$). However, provided that the spacing of the Fourier modes in frequency space is $\Delta\omega$ and the maximum frequency is $\omega_{\max}$, where

$$\Delta\omega \;\ll\; \frac{2\pi}{\max_{i,j} |t_i - t_j|} \qquad (35)$$

$$\omega_{\max} \;=\; \frac{p}{2}\,\Delta\omega \;\gg\; s \qquad (36)$$

it will be true that the OLS with feature weights $w(\omega_q)^{2}$ will closely approximate the mean of a GP with kernel $k$ (ft). The comparison of the OLS and GP is shown in Figure 10.
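The convergence in (33)–(34) is easy to check numerically: sum the weighted cosine terms on a fine frequency grid and compare the shape of the resulting kernel to the Matérn 3/2 form. The grid spacing, cutoff, and value of $s$ below are arbitrary illustrative choices of ours.

```python
import numpy as np

s = 3.0
domega, p_half = 0.01, 20000            # mode spacing and number of cos/sin pairs
omega = domega * np.arange(1, p_half + 1)
w2 = 1.0 / (s**2 + omega**2) ** 2       # squared weighting function, eq (31)

def k_features(dt):
    # sum over q of w(w_q)^2 [cos(w_q t_i) cos(w_q t_j) + sin(w_q t_i) sin(w_q t_j)]
    #   = sum over q of w(w_q)^2 cos(w_q [t_i - t_j]), as in eq (33)
    return np.sum(w2 * np.cos(omega * dt))

def k_matern(dt):
    # eq (34), up to an overall amplitude
    return (1.0 + s * abs(dt)) * np.exp(-s * abs(dt))

# amplitudes differ, so compare kernel *shapes* (ratios to the zero-lag value)
for dt in [0.1, 0.5, 1.0]:
    print(dt, k_features(dt) / k_features(0.0), k_matern(dt) / k_matern(0.0))
```

The two ratio columns agree to a fraction of a percent with these grid settings, and agree better as the frequency spacing shrinks and the cutoff grows, as (35) and (36) suggest.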

In our toy examples, we use the feature weighting that generates the Matérn 3/2 kernel. It is important to emphasize that there are literally infinitely many alternative choices that can be made here, and hundreds even if you restrict to kernels with known closed forms. Indeed, any function that has a finite, all-positive Fourier transform can be substituted for the Matérn kernel and associated frequency weighting function we use here.

The particular kernel we obtain in equation (34) is a stationary kernel, meaning it depends only on the absolute values of time differences $|t_i - t_j|$. Not all kernels will be stationary. The limit of $X^{*}\,\Lambda\,X^{\top}$ in (29) leads to a stationary kernel in this case because the feature embedding in $X$ is a set of sines and cosines. Sines and cosines form a basis for translation-invariant function spaces. If we had made a different choice for our basis, such as polynomials or wavelets, we would have obtained a non-stationary kernel in the limit.

At the end of Section 5, we discussed including not just feature weights in a matrix $\Lambda$ but also data weights in a matrix $C^{-1}$. The GP also permits this, and it is often a very good idea. The generalization of equations (23) and (24) to the GP case is

$$\hat{Y}^{*} \;=\; K^{*}\left(K + C\right)^{-1} Y \qquad (37)$$

As before, in contexts where you have independent uncertainties on each element $y_i$ of your training data $Y$, $C^{-1}$ would naturally be set to the diagonal matrix containing the inverses of the variances of those uncertainties (so $C$ would contain the variances). This form (37) is used a lot in astronomy and cosmology (for example, zaroubi, aigrain, celerite) and it is also the standard form given for the Wiener filter, where the kernel function generating $K$ is the Fourier transform of the “signal power” and the kernel function generating $C$ is the Fourier transform of the “noise power”. In this form, the amplitude of the kernel function matters, because it is pitting, in some sense, the variance of the GP against the variance of the noise. It is not sufficient to get the shape of the kernel function right to make a stable prediction of the mean, when using form (37).
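A sketch of the form (37), with a diagonal $C$ built from per-point variances; the kernel, its amplitude, and the toy data are illustrative assumptions of ours.

```python
import numpy as np

def matern32(ta, tb, s=3.0, amp=1.0):
    dt = np.abs(ta[:, None] - tb[None, :])
    return amp * (1.0 + s * dt) * np.exp(-s * dt)

def gp_mean_noisy(t, y, yvar, t_new, s=3.0, amp=1.0):
    # eq (37): K* (K + C)^{-1} Y, with C the diagonal matrix of data variances;
    # the kernel amplitude amp now matters, because it competes with C
    K = matern32(t, t, s, amp)
    K_star = matern32(t_new, t, s, amp)
    return K_star @ np.linalg.solve(K + np.diag(yvar), y)

rng = np.random.default_rng(7)
t = np.linspace(0.0, 3.0, 23)                  # made-up training locations
yvar = np.full(t.size, 0.05 ** 2)              # per-point noise variances
y = np.sin(4.0 * t) + np.sqrt(yvar) * rng.normal(size=t.size)
y_denoised = gp_mean_noisy(t, y, yvar, t)      # smoothed values at the data locations
```

As the variances shrink to zero the prediction returns to the interpolating mean of (26); as they grow, the prediction shrinks toward the (zero) prior mean.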

## Section 8 Uncertainties on the predictions

Often you need to compute not just a prediction, but also an uncertainty on that prediction. How faithfully you must compute that uncertainty will depend strongly on the context in which you are doing the fitting or interpolation. However, it is often the case in the physical sciences that predictions are required to come with good or conservative estimates of uncertainty. There are four-ish sources of uncertainty in the predictions you are making in problems like these: (1) The data points you have are individually noisy. (2) There are finitely many of those data points (there are $n$ of them) and there are gaps between them in the location space $t$. (3) The data points might have uncertain locations or location measurements $t_i$. (4) And the predictions you make depend on hyper-parameter choices, such as the form of the basis, the number of parameters $p$, and any regularization or feature weighting. These four different sources of uncertainty propagate differently and are differently “simple” to deal with. In particular it turns out that the easiest sources of uncertainty to understand and propagate are those coming from (1) the noise in the $y_i$ and (2) the number and locations of the data. The uncertainties coming from (3) uncertainties in the locations $t_i$, and (4) model choices, are both much harder to model and propagate.

In the best-case scenario, you might have accurate estimates of the variances of the noise contributions affecting your training data values $y_i$, the noise might be Gaussian with zero mean, you might have locations $t_i$ that are extremely accurately known, and you might have justifiable prior variances on your parameters $\beta$. In this case, you can assume that the linear model with parameters $\beta$ is a good model for your data, and write down a likelihood function and a prior. In this case, you can turn the Bayesian crank and deliver a posterior density for the predicted values $Y^{*}$ at the test locations $t^{*}$. The second derivative of the natural logarithm of the posterior density can then be processed into a standard error on the predictions (or you can deliver full posterior densities somehow). The details of this are way beyond the scope of this Note, but this (or a closely related) problem is discussed in some detail by us elsewhere (products).

That scenario is best-case, but it does involve a lot of assumptions. That is, if you don’t trust your estimates of the noise variances, or if you think the noise might be non-Gaussian, then the likelihood approaches will give poor uncertainty estimates. In this sense, it is more conservative to make use of more empirical methods for uncertainty estimation. The leading candidates are jackknife and bootstrap resamplings (for example, bootjack). Here we will focus on jackknife.

The idea of jackknife resampling is that we make $K$ subsamples of the data, in each of which we have dropped (or held out) a unique fraction $1/K$ of the data. The variance of the results (in this case the prediction $\hat{Y}^{*}$) across the jackknife samples can be transformed into an estimate of the variance of the estimator acting on the full data set. In the most extreme form—leave-one-out jackknife—we set $K = n$. In this case, the jackknife estimate of the variance of the best-fit predictions is given by

$$\sigma^{2}_{\hat{Y}^{*}} \;=\; \frac{n-1}{n} \sum_{i=1}^{n} \left[\hat{Y}^{*}_{(i)} - \hat{Y}^{*}\right] \left[\hat{Y}^{*}_{(i)} - \hat{Y}^{*}\right]^{\top} \qquad (38)$$

where $\hat{Y}^{*}_{(i)}$ is the estimate of $Y^{*}$ made after *leaving out* the one training point $(t_i, y_i)$, and $\hat{Y}^{*}$ is the estimate made using *all* the data.
The pre-factor $(n-1)/n$ in (38) is not anything like the $1/(n-1)$ that you use when you estimate a standard variance; this difference comes from the fact that the jackknife subsamples are extremely correlated; the pre-factor is computed to amplify the jackknife variance into an unbiased estimate of the variance of the estimator $\hat{Y}^{*}$.
The square-root of the trace of this uncertainty variance (that is, what you might call the jackknife estimate of the standard error) is shown in Figure 11.
Note the similarity between jackknife and cross-validation; it is relevant and important: These empirical estimates of uncertainty and prediction error are closely related.
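A minimal leave-one-out jackknife for the predictions, following (38); the inner fit is a plain unweighted Fourier-feature least squares, and the basis, data, and names here are all illustrative assumptions of ours.

```python
import numpy as np

def features(t, p=11, T=3.0):
    # unweighted Fourier basis: a constant plus cos/sin pairs (illustrative choice)
    cols = [np.ones_like(t)]
    for j in range(1, p // 2 + 1):
        w = 2.0 * np.pi * j / T
        cols += [np.cos(w * t), np.sin(w * t)]
    return np.stack(cols, axis=1)[:, :p]

def fit_predict(t, y, t_new):
    beta, *_ = np.linalg.lstsq(features(t), y, rcond=1e-15)
    return features(t_new) @ beta

def jackknife_var(t, y, t_new):
    # eq (38): (n-1)/n times the summed squared deviations of the n
    # leave-one-out predictions from the full-data prediction
    n = len(t)
    full = fit_predict(t, y, t_new)
    devs = [fit_predict(t[np.arange(n) != i], y[np.arange(n) != i], t_new) - full
            for i in range(n)]
    return (n - 1) / n * np.sum(np.square(devs), axis=0)

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0.0, 3.0, 23))
y = np.sin(4.0 * t) + 0.1 * rng.normal(size=t.size)
t_new = np.linspace(0.2, 2.8, 30)
sigma = np.sqrt(jackknife_var(t, y, t_new))   # jackknife standard error per test point
```

The same leave-one-out loop computes the CVMSE of Section 6; only what is done with the held-out prediction differs.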

These methods—full likelihood analysis and jackknife—take into account (1) the noise in the data and (2) the number and spacing of the training data points.
They do not, by themselves, take into account (3) the uncertainties on the locations $t_i$.
There are no simple methods for uncertainty propagation from the locations $t_i$ into the predictions $\hat{Y}^{*}$.
One option is to resample the $t_i$ according to your uncertainty estimates, and re-do the fits.
That’s potentially expensive, and overly conservative (because it ignores the information about the $t_i$ coming from the $y_i$).
Another is to take the first derivative of the best-fit curve and propagate the location uncertainties linearly using those derivatives.
This is not conservative, because it only works if the location uncertainties are small relative to the scales over which the slope of the prediction changes with $t$.
The most extreme option would be to *simultaneously fit* for the prediction and *all* of the noisily measured locations $t_i$.
That’s a great idea, but that fit would be extremely computationally expensive, and no reasonable fit objective would be convex.
All of these ideas are out of scope here.

Finally, in order to take account of (4) the uncertainty coming from your choices of basis, number of parameters $p$, and regularization, you might have to know (or learn) the distribution over *these* things.
Since these choices are hyper-parameters, any inference that propagates uncertainties coming from these choices would have to be hierarchical in structure.
That is way, way out of scope.

## Section 9 Implementation notes

All of the code used to make the figures for this Note is available publicly.^{9}^{9}9https://github.com/davidwhogg/FlexibleLinearModels
Although the examples are toys, the implementation of everything can be generalized for real-data situations.
In our code, there are some aspects of the linear-algebra implementation that might seem odd.
Here are some comments.

Mostly, linear algebra stability comes down to the *condition number* of the matrix in question.
The condition number, for our purposes, is the ratio of the largest eigenvalue to the smallest *non-zero* eigenvalue (for non-negative definite matrices). For rectangular feature matrices $X$ it is the ratio of the largest singular value (that is, what you get out of a singular value decomposition) to the smallest non-zero singular value.

When the condition number is large, different linear-algebra approaches will differ.
For example, the function call solve(A, b) should give you the best estimate it can for the product $A^{-1}\,b$, whereas the function call dot(inv(A), b), which is the same on paper, will give you the dot product between b and the best estimate that inv() can find for the inverse of A.
When the condition number of $A$ is large, these two values can be very different, and the solve() will be better.^{10}^{10}10This claim contradicts what is stated in the abstract of solve, but this claim is based on our real numerical experiments on real matrices, so we stand by it. What is uncontroversial is that solve(A, b) will always perform no worse than dot(inv(A), b); sometimes it will perform far better.
Use solve(A, b) and never dot(inv(A), b) unless there are compelling code-structure reasons to use the latter, such as repeated calls (but even then, a Cholesky decomposition followed by repeated Cholesky solves is probably a more stable solution).

If your condition number gets very large, even solve() can give bad results, because very small eigenvalues of the matrix are being poorly estimated and then inverted. That’s unstable. It is better to zero out those small eigenvalues—it is better to destroy bad information than to use it—and perform a pseudo-inverse in those cases. So if you really want to be safe (and you do, here) you should really use lstsq(A, b, rcond=tiny) instead of solve(A, b). The lstsq() function requires an rcond input, which says at what (dimensionless) precision to zero out small eigenvalues. It usually makes sense to set this dimensionless rcond input to something like machine precision, which is about 1e-16 for our current hardware–software setups.
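To see the difference in practice, compare the three calls on a deliberately ill-conditioned system. The Hilbert matrix below is a standard stress test; this setup is our own illustration, not code from this Note.

```python
import numpy as np

n = 12
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)  # Hilbert matrix
x_true = np.ones(n)
b = A @ x_true
print(np.linalg.cond(A))   # enormous: comparable to the inverse of machine precision

x_inv = np.linalg.inv(A) @ b                       # the form to avoid
x_solve = np.linalg.solve(A, b)                    # better
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=1e-15)   # safest: zeroes tiny eigenvalues

for name, x in [("inv", x_inv), ("solve", x_solve), ("lstsq", x_lstsq)]:
    print(name, np.linalg.norm(A @ x - b))
```

The recovered solution vectors can differ wildly from x_true here even when residuals look small; with lstsq() and an rcond near machine precision, the tiny, poorly measured eigenvalue directions are zeroed rather than inverted and amplified.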

You might think that all of this is academic, but it really isn’t when the number of features $p$ is close to the number of data points $n$. For example, in our toy-data OLS experiments in this paper, the matrix $X^{\top} X$ has a condition number (ratio of largest eigenvalue to smallest non-zero eigenvalue) that saturates machine precision for the entire range of $p$ comparable to $n$. And things generally get worse as $n$ gets larger.

Related to this, if you have a choice between formulations that involve an inverse of a square, like $(X^{\top} X)^{-1}$, and formulations that involve a pseudo-inverse, like $X^{+}$, you should do the latter, because $X^{\top} X$ has the *square* of the condition number of $X$.
When you want to execute $X^{+}\,Y$, as we do in (17), you should again use lstsq(), which was literally designed for these applications; use lstsq(X, Y, rcond=tiny).
This function returns the best estimate it can for $X^{+}\,Y$, so it should be better than computing a pseudo-inverse pinv(X) and then matrix-multiplying pinv(X) with Y.
Again here it usually makes sense to set this rcond input to something like machine precision.

Once $p$ is large enough, if you are using a feature weighting $\Lambda$ with diagonal entries that decrease to zero with increasing $q$, at some point, at machine precision, the additional columns you are adding to the weighted feature matrix are effectively all zeros. They will literally underflow the linear-algebra representation. That’s not a problem if you implement your linear algebra well (that is, use lstsq() appropriately), but it does mean that your predictions and cross-validations will saturate at some $p$ (as we see them do in Figure 9).

In multiple places, but especially (23) and (24), you are performing operations on matrices that are diagonal ($\Lambda$ and $C^{-1}$ and their inverses are all diagonal). You should avoid ever constructing diagonal matrices. You can multiply a matrix by a diagonal matrix by just multiplying the rows (or columns) by the diagonal entries. And you can invert a diagonal matrix by just inverting the diagonal entries. Avoid constructing and operating on operators that are almost entirely zeros, unless you have a very efficient sparse linear-algebra implementation and you are a power user.
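In numpy, for example, these diagonal operations are one-line broadcasts; the matrix shapes and diagonal values below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 23, 41
X = rng.normal(size=(n, p))
lam = 1.0 / (1.0 + np.arange(p, dtype=float)) ** 2   # diagonal of Lambda (made up)
ivar = rng.uniform(0.5, 2.0, size=n)                 # diagonal of C^{-1} (made up)

XL = X * lam[None, :]          # X @ diag(lam): scale the columns of X
CX = ivar[:, None] * X         # diag(ivar) @ X: scale the rows of X
lam_inv = 1.0 / lam            # inverse of a diagonal matrix: invert the entries

# the broadcasts agree with the wasteful dense constructions:
assert np.allclose(XL, X @ np.diag(lam))
assert np.allclose(CX, np.diag(ivar) @ X)
assert np.allclose(np.diag(lam_inv), np.linalg.inv(np.diag(lam)))
```

The broadcast forms never materialize the $p \times p$ or $n \times n$ arrays of zeros, which matters exactly in the large-$p$ regime this Note advocates.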

And finally, it is most numerically stable to operate on the smallest matrices you can.
For example, when you have the choice between the formulation in equation (23) and that in equation (24), you should choose the former when $p < n$ and the latter when $p > n$.
That way you are always doing the heavy linear algebra (solve() and lstsq() function calls) at a size $\min(n, p)$, which is both faster and more stable than linear algebra at size $\max(n, p)$. And it is much faster when you make these choices correctly:
Linear algebra scales naively as the dimension *cubed*.
(In practice—with excellent packages—it actually scales with a power more like 2.6 than 3, but still!)

## Section 10 Discussion

This Note was about linear fitting with very flexible models, for interpolation, prediction, and de-noising of data. We encouraged you to consider using very big models, but being intentional about regularization. These settings (over-parameterized, but carefully regularized) are adaptive and useful, and they connect, as we showed, to Gaussian processes in the limit, which are well-studied workhorses of machine learning and data science.

All the rage these days is instead *nonlinear fitting*, with *deep learning* and the like (deep, for example).
We didn’t discuss any of that.
However, many of the high-level lessons in this Note carry over:
All of these tools work well when the function space is flexible but also carefully regularized.
In these nonlinear settings, regularization takes many additional new forms, like early stopping (earlystop), dropout (dropout), and restricted network structure (such as convolutional; see for example bishop).

We only considered the setting in which you care about predicting new data and never the setting in which you care about the internal parameters $\beta$ of the linear fit. However, if you are, say, measuring the power spectrum of a process (as we are, often, in cosmology), then you care about these amplitudes themselves. In this case, any regularization you apply takes the role of a prior or prior information. That prior must be chosen (in those cases) with great care, because the answers you get will be influenced by that choice; when it doesn’t properly conform to your true beliefs, your answers will be distorted in bad ways. And, technically, when you take this view (that the parameters matter), you also have to get serious about the noise model on the data. That is, the beliefs encoded in the data noise variance tensor must also represent your true beliefs if you don’t want your answers distorted in bad ways. All of that is out of scope here, but we say things about it all elsewhere (for example, fitting).

In many cases, when the goal is just to interpolate or de-noise data, investigators use running means (or medians), low-pass filters, or explicit interpolators (like cubic spline interpolation). These methods all have close relationships with what is written here. Indeed, a running mean, a low-pass filter, and a cubic-spline interpolator can all be written as a linear operator (the details of which depend on the locations of the data) acting on the input data vector $Y$, just like our linear fits produce linear operators. This means that in many cases, these methods can be translated into versions of the methods we have presented in this Note. A full translation is beyond our scope, though. And our view is that making explicit investigator choices about the basis and the regularization makes the linear-fitting approach better in general.

Regularization was perhaps the biggest theme of this Note.
But we only really considered variants of L2 regularization (or ridge or Tikhonov).
There are other regularizations, even other convex regularizations.
A valuable and useful option is L1 or the lasso (lasso), which encourages sparsity—it encourages parameters to take the value zero exactly, where possible.
This kind of regularization makes sense when we have prior beliefs that the functional forms we seek will in fact be sparse in our chosen basis.
Usually this kind of consideration is less important in interpolation, prediction, and de-noising settings, but it is not unheard of.
A nice recent result is that the feature weighting that we employ here can be used with L1 regularization to obtain simultaneously smoothness *and* sparsity (rauhut2016interpolation).

We discussed deterministic, ordered expansions like Fourier series and polynomials, but there is another class of *random features* methods (see, for example, rahimi2007random) that we did not discuss.
Briefly, the idea with random features is that instead of weighting features that are regularly spaced in frequency space with a weighting function $w(\omega)$, we could have generated unweighted features with frequencies drawn randomly from a probability distribution proportional to $w(\omega)^{2}$. The same Gaussian-process limit appears as $p \to \infty$ (provided we choose those features with appropriately random phases too). These random-feature approaches have rarely been used in the natural sciences, but they are potentially of interest in many applications.

In the toy examples used throughout this Note, we considered data with locations $t_i$ in a one-dimensional location space or ambient space. This was a choice for simplicity of visualization; in principle the locations could be higher dimensional, or live in a different kind of space. The most important consideration about the dimensionality and range of the locations is that the basis functions have to sensibly take the locations as input. In astronomy and cosmology contexts, it is common for the locations to be positions on the sphere, and it is common for the natural basis functions to be spherical harmonics, for example.

The methods in this Note are all *discriminative*, in the sense that we took the locations $t_i$ to be prior or primary;
the goal was to find a function of the locations that predicts the data $y_i$.
This asymmetry between the locations and the data led to concerning statements in Section 8 when we considered the possibility that the locations themselves might be noisy or uncertain.
An alternative formulation to these discriminative models is *generative* models, in which the model attempts not just to predict the $y_i$ from the $t_i$ but instead predicts both the $t_i$ and the $y_i$.
A model that generates both can be used to make predictions at new locations by executing an inference or inverse problem on the generative model.
Generative models are generally non-convex—and harder to execute—if you want the relationship between the $t_i$ and the $y_i$ to be nonlinear (for an example of this in astronomy, see, for example, thecannon).
In these cases you don’t get closed-form solutions, and there aren’t known guarantees (like the Gauss–Markov theorem) about performance.
But they are more general, and often more appropriate, especially when both the locations and the data are comparably noisy measurements.

In our toy examples, we used a Fourier series.
That is just one choice among many.
However, *it is often a great choice*.
When you choose the Fourier basis (and—importantly—for every cosine term you include the corresponding sine term), the matrix $X\,\Lambda\,X^{\top}$ (and the limiting kernel matrix $K$ at $p \to \infty$) has the property that every matrix element depends on (or can be calculated from) just the absolute difference $|t_i - t_j|$.
That is, in this basis, all fitting methods become technically *stationary*.
This all relates to the translation-independence properties of the Fourier basis.
So although the Fourier basis is just one choice among many, it is the *right choice* when you think your problem has (or might have) certain kinds of translation invariances.


## References

## Appendix A Optimization arguments

As mentioned in the main text, the optimal solutions of the unconstrained objectives (5) and (11) can be obtained by computing the first-order critical points. The first-order critical points are global minima because all the objectives considered are convex in the regression coefficients $\beta$.

To derive the optimal solutions of the constrained optimization (14) and similar, we use a classical result in quadratic optimization (see for instance nocedal2006numerical, Ch. 16): Consider the quadratic optimization problem

$$\min_{x}\; \tfrac{1}{2}\,x^{\top} G\,x + x^{\top} c \quad \text{subject to} \quad A\,x = b \qquad (39)$$

The first-order necessary conditions for $x^{\star}$ to be a solution of (39) are that there is a vector $\lambda^{\star}$ (known as Lagrange multipliers) such that the following system of equations is satisfied:

$$\begin{bmatrix} G & A^{\top} \\ A & 0 \end{bmatrix} \begin{bmatrix} x^{\star} \\ -\lambda^{\star} \end{bmatrix} \;=\; \begin{bmatrix} -c \\ b \end{bmatrix} \qquad (40)$$

Equation (40) is typically known as the Karush–Kuhn–Tucker (KKT) conditions. If the objective function in (39) is convex, the KKT conditions are also sufficient for optimality.^{11}^{11}11The KKT conditions can also characterize optimality in more general cases. For instance, if $A$ has full row rank and $Z$ is a basis for the nullspace of $A$, and $Z^{\top} G\, Z$ is positive semidefinite, the KKT conditions are sufficient for optimality. If $Z^{\top} G\, Z$ is positive definite we also have that the solution of (39) is unique (which is typically the case in the underparameterized linear regression but not in the overparameterized).

Using this formulation it is easy to check that the solution of (14) leads to the predictions in (12), and the solution of (19) produces (18). For instance, in order to show the latter we consider

$$\min_{\beta}\; \left\|\Lambda^{-1/2}\,\beta\right\|_{2}^{2} \quad \text{subject to} \quad X\,\beta = Y \qquad (41)$$

or equivalently

$$\min_{\beta}\; \tfrac{1}{2}\,\beta^{\top}\,\Lambda^{-1}\,\beta \quad \text{subject to} \quad X\,\beta = Y \qquad (42)$$

obtaining the formulation in (39) for $G = \Lambda^{-1}$, $c = 0$, $A = X$, and $b = Y$. Using the KKT conditions we obtain

$$\begin{bmatrix} \hat{\beta} \\ -\lambda^{\star} \end{bmatrix} \;=\; \begin{bmatrix} \Lambda^{-1} & X^{\top} \\ X & 0 \end{bmatrix}^{-1} \begin{bmatrix} 0 \\ Y \end{bmatrix} \qquad (43)$$

(assuming the KKT matrix is invertible). Luckily there exists a complete characterization for the inverse of block matrices (see, for example, lu2002inverses). In particular if and are invertible matrices we have

$$\begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} \;=\; \begin{bmatrix} A^{-1} + A^{-1} B\, S^{-1} C\, A^{-1} & -A^{-1} B\, S^{-1} \\ -S^{-1} C\, A^{-1} & S^{-1} \end{bmatrix}, \quad S = D - C\,A^{-1} B \qquad (44)$$

In the particular case of (43) it suffices to compute the top-right block of the inverse. If both $\Lambda$ and $X\,\Lambda\,X^{\top}$ are invertible we obtain

$$\hat{\beta} \;=\; \Lambda\,X^{\top}\left(X\,\Lambda\,X^{\top}\right)^{-1} Y \qquad (45)$$
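This algebra is easy to verify numerically: solve the KKT system (43) directly and compare with the closed form (45). The random test problem below is our own construction; with $\Lambda$ equal to the identity, (45) reduces to the minimum-norm solution pinv(X) Y.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 8, 20                                  # overparameterized: p > n
X = rng.normal(size=(n, p))
Y = rng.normal(size=n)
lam = rng.uniform(0.5, 2.0, size=p)           # diagonal of Lambda (arbitrary)

# closed form (45): beta = Lambda X^T (X Lambda X^T)^{-1} Y
beta_45 = lam * (X.T @ np.linalg.solve((X * lam) @ X.T, Y))

# direct solve of the KKT system (43)
kkt = np.block([[np.diag(1.0 / lam), X.T],
                [X, np.zeros((n, n))]])
beta_kkt = np.linalg.solve(kkt, np.concatenate([np.zeros(p), Y]))[:p]

print(np.allclose(beta_45, beta_kkt))     # the two routes agree
print(np.allclose(X @ beta_45, Y))        # and the constraint X beta = Y holds

# with Lambda = identity, (45) is the minimum-norm solution pinv(X) Y
beta_mn = X.T @ np.linalg.solve(X @ X.T, Y)
print(np.allclose(beta_mn, np.linalg.pinv(X) @ Y))
```

(Per Section 9, the dense KKT solve and the explicit diag() constructions here are for checking only; in production code one would use the broadcasted diagonal operations and the smaller $n \times n$ solve.)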
