The Loss Rank Principle for Model Selection

02/27/2007 ∙ by Marcus Hutter, et al.

We introduce a new principle for model selection in regression and classification. Many regression models are controlled by some smoothness or flexibility or complexity parameter c, e.g. the number of neighbors to be averaged over in k nearest neighbor (kNN) regression or the polynomial degree in regression with polynomials. Let f_D^c be the (best) regressor of complexity c on data D. A more flexible regressor can fit more data D' well than a more rigid one. If something (here small loss) is easy to achieve it's typically worth less. We define the loss rank of f_D^c as the number of other (fictitious) data D' that are fitted better by f_D'^c than D is fitted by f_D^c. We suggest selecting the model complexity c that has minimal loss rank (LoRP). Unlike most penalized maximum likelihood variants (AIC,BIC,MDL), LoRP only depends on the regression function and loss function. It works without a stochastic noise model, and is directly applicable to any non-parametric regressor, like kNN. In this paper we formalize, discuss, and motivate LoRP, study it for specific regression problems, in particular linear ones, and compare it to other model selection schemes.


1 Introduction

Regression. Consider a regression or classification problem in which we want to determine the functional relationship y ≈ f(x) from data D = {(x_1, y_1), ..., (x_n, y_n)}, i.e. we seek a function f such that f(x) is close to the true y for all x. One may define the regressor f̂_D directly, e.g. 'average the y values of the k nearest neighbors (kNN) of x in D', or select the f from a class of functions F that has smallest (training) error on D. If the class F is not too large, e.g. the polynomials of fixed reasonable degree d, this often works well.

Model selection. What remains is to select the right model complexity c, like the polynomial degree d or the number of neighbors k. This selection cannot be based on the training error, since the more complex the model (large d, small k), the better the fit on D (perfect for d ≥ n − 1 and k = 1). This problem is called overfitting, for which various remedies have been suggested:

We will not discuss empirical test set methods like cross-validation, but only training set based methods. See e.g. [Mac92] for a comparison of cross-validation with Bayesian model selection. Training set based model selection methods allow using all data for regression. The most popular ones can be regarded as penalized versions of Maximum Likelihood (ML). In addition to the function class F = {f_w}, one has to specify a sampling model, e.g. that the y_i have independent Gaussian distribution with mean f_w(x_i). ML chooses the w of maximum likelihood; Penalized ML (PML) then chooses the w minimizing the negative log-likelihood plus a Penalty, where the penalty depends on the used approach (MDL [Ris78], BIC [Sch78], AIC [Aka73]). In particular, modern MDL [Grü04] has sound exact foundations and works very well in practice. All PML variants rely on a proper sampling model (which may be difficult to establish), ignore (or at least do not tell how to incorporate) a potentially given loss function, and are typically limited to (semi)parametric models.

Main idea. The main goal of the paper is to establish a criterion for selecting the "best" model complexity c based on regressors f̂_D^c given as a black box without insight into their origin or inner structure, that does not depend on things often not given (like a stochastic noise model), and that exploits what is given (like the loss function). The key observation we exploit is that large classes or more flexible regressors can fit more data D' well than more rigid ones; e.g. many data sets can be fitted well with high-order polynomials. We define the loss rank of f̂_D^c as the number of other (fictitious) data D' that are fitted better by f̂_{D'}^c than D is fitted by f̂_D^c, as measured by some loss function. The loss rank is large for regressors that fit D poorly and for regressors that are too flexible (in both cases the regressor fits many other D' better). The loss rank has a minimum for not too flexible regressors that fit D not too badly. We claim that minimizing the loss rank is a suitable model selection criterion, since it trades off the quality of fit against the flexibility of the model. Unlike PML, our new Loss Rank Principle (LoRP) works without a noise (stochastic sampling) model and is directly applicable to any non-parametric regressor, like kNN.

Contents. In Section 2, after giving a brief introduction to regression, we formally state LoRP for model selection. To make it applicable to real problems, we have to generalize it to continuous spaces and regularize infinite loss ranks. In Section 3 we derive explicit expressions for the loss rank for the important class of linear regressors, which includes kNN, polynomial, linear basis function (LBFR), kernel, and projective regression. In Section 4 we compare linear LoRP to Bayesian model selection for linear regression with Gaussian noise and prior, and in Section 5 to PML, in particular MDL, BIC, AIC, and MacKay's [Mac92] and Hastie et al.'s [HTF01] trace formulas for the effective dimension. In this paper we only scratch the surface of LoRP; Section 6 contains further considerations, to be elaborated on in the future.

2 The Loss Rank Principle (LoRP)

After giving a brief introduction to regression, classification, model selection, overfitting, and some reoccurring examples (polynomial regression Example 1 and kNN Example 2), we state our novel Loss Rank Principle for model selection. We first state it for classification (Principle 3 for discrete values), and then generalize it for regression (Principle 5 for continuous values), and exemplify it on two (over-simplistic) artificial Examples 4 and 6. Thereafter we show how to regularize LoRP for realistic regression problems.

Setup. We assume that data D = (x, y) := {(x_1, y_1), ..., (x_n, y_n)} have been observed. We think of the y_i as having an approximate functional dependence on the x_i, i.e. y_i ≈ f_true(x_i), where ≈ means that the y_i are distorted by noise or otherwise from the unknown "true" values f_true(x_i).

Regression and classification. In regression problems, Y is typically (a subset of) the real numbers R or some more general measurable space like R^m. In classification, Y is a finite set or at least discrete. We impose no restrictions on X. Indeed, x will essentially be fixed and plays only a spectator role, so we will often notationally suppress dependencies on x. The goal of regression is to find a function f̂_D "close" to f_true based on the past observations D. Phrased another way: we are interested in a regression function f̂_D such that f̂_D(x) ≈ f_true(x) for all x.

Notation. We will write or

for generic data points, use vector notation

and , and for generic (fictitious) data of size .

Example 1 (polynomial regression)

For X = Y = R, consider the set F_d of polynomials of degree d. Fitting the polynomial f_w(x) = w_d x^d + ... + w_1 x + w_0 to data D, e.g. by least squares regression, we estimate the coefficients w and obtain the regressor f̂_D := f_ŵ. The regression function f̂_D can be written down in closed form (see Example 9).

Example 2 (k nearest neighbors, kNN)

Let Y be a vector space like R and X be a metric space like R^m with some (e.g. Euclidean) metric d(·,·). kNN estimates y by averaging the y values of the k nearest neighbors of x in D, i.e. f̂_D(x) := (1/k) Σ_{i ∈ N_k(x)} y_i, where N_k(x) ⊆ {1, ..., n} is such that |N_k(x)| = k and d(x_i, x) ≤ d(x_j, x) for all i ∈ N_k(x) and j ∉ N_k(x).

Parametric versus non-parametric regression. Polynomial regression is an example of parametric regression in the sense that f̂_D is the optimal function from a family of functions indexed by finitely many (here d + 1) real parameters w. In contrast, the kNN regressor is directly given and is not based on a finite-dimensional family of functions. In general, f̂_D may be given either directly or be the result of an optimization process.

Loss function. The quality of fit to the data is usually measured by a loss function Loss(y, ŷ), where ŷ = (ŷ_1, ..., ŷ_n)ᵀ is an estimate of y. Often the loss is additive: Loss(y, ŷ) = Σ_{i=1}^n Loss(y_i, ŷ_i). If the class F is not too large, good regressors f̂_D can be found by minimizing the loss w.r.t. all f ∈ F. For instance, Loss(y_i, ŷ_i) = (y_i − ŷ_i)² and ŷ_i = f_w(x_i) in Example 1.
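
A concrete illustration of the additive quadratic loss and loss minimization over a polynomial class as in Example 1 (a minimal Python/NumPy sketch, not from the paper; the data and degrees are purely illustrative):

```python
import numpy as np

def poly_fit(x, y, d):
    """Least squares polynomial regressor of degree d (cf. Example 1)."""
    B = np.vander(x, d + 1)                      # columns x^d, ..., x, 1
    w, *_ = np.linalg.lstsq(B, y, rcond=None)    # minimises ||B w - y||^2
    return lambda x_new: np.vander(np.atleast_1d(x_new), d + 1) @ w

def quadratic_loss(y, y_hat):
    """Additive quadratic loss: sum_i (y_i - y_hat_i)^2."""
    return float(np.sum((y - y_hat) ** 2))

# Illustrative data: the training loss shrinks as d grows (overfitting ahead).
x = np.linspace(0.0, 1.0, 8)
y = np.sin(2 * np.pi * x) + 0.1 * np.random.randn(8)
for d in range(8):
    f = poly_fit(x, y, d)
    print(d, quadratic_loss(y, f(x)))
```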

Regression class and loss. In the following we assume a (typically countable) class of regressors {f̂_D^c : c ∈ C} (whatever their origin), e.g. the kNN regressors indexed by k or the least squares polynomial regressors indexed by d. Note that unlike f ∈ F, regressors f̂_D are not functions of x alone but depend on all observations D, in particular on y. As for functions f, we can compute the loss of each regressor:

Loss_c(D) := Loss_c(x, y) := Loss(y, ŷ) = Σ_{i=1}^n Loss(y_i, ŷ_i),   where ŷ_i := f̂_D^c(x_i),

and the last expression holds in case of additive loss.

Overfitting. Unfortunately, minimizing Loss_c w.r.t. c will typically not select the "best" overall regressor. This is the well-known overfitting problem. In case of polynomials, the classes F_d are nested, hence Loss_d is monotone decreasing in d, with Loss_d = 0 for d ≥ n − 1 perfectly fitting the data. In case of kNN, Loss_k is more or less an increasing function of k, with perfect regression on D for k = 1, since no averaging takes place. In general, f̂_D^c is often indexed by a "flexibility" or smoothness or complexity parameter c, which has to be properly determined. More flexible f̂_D^c can fit the data more closely and hence have smaller loss, but are not necessarily better, since they have higher variance. Clearly, too inflexible f̂_D^c also lead to a bad fit ("high bias").

Main goal. The main goal of the paper is to establish a selection criterion for the "best" regressor f̂_D^c in a class {f̂_D^c : c ∈ C},

  • based on f̂_D^c given as a black box, that does not require insight into the origin or inner structure of f̂_D^c,

  • that does not depend on things often not given (like a stochastic noise model),

  • that exploits what is given (like the loss function).

While for parametric (e.g. polynomial) regression, MDL and Bayesian methods work well (effectively the number of parameters serves as complexity penalty), their use is seriously limited for non-parametric black boxes like kNN, or if a stochastic/coding model is hard to establish (see Section 4 for a detailed comparison).

Main idea: loss rank. The key observation we exploit is that a more flexible regressor f̂^c can fit more data D well than a more rigid one. For instance, degree-d polynomials can perfectly fit all D for d ≥ n − 1, all D that lie on a parabola for d = 2, but only linear D for d = 1. We consider discrete Y, i.e. classification, first, and fix x. D = (x, y) is the observed data and D' = (x, y') are fictitious others.

Instead of minimizing the unsuitable Loss_c(D) w.r.t. c, we could ask how many y' lead to smaller Loss_c than y. Many y' have small loss for flexible f̂^c, and so smallness of Loss_c(D) is less significant than if D is among very few other D' with small loss. We claim that the loss rank of D among all D' is a suitable measure of fit. We define the rank of D under f̂^c as the number of y' with smaller or equal loss than y:

Rank_c(D) := Rank_c(x, y) := #{y' ∈ Y^n : Loss_c(x, y') ≤ Loss_c(x, y)}     (1)

For this to make sense, we have to assume (and will later assure) that Rank_c(D) is finite, i.e. there are only finitely many y' having loss smaller than or equal to Loss_c(x, y). In a sense, the rank measures how compatible f̂^c is with D; D is the Rank_c(D)-th most compatible data set with f̂^c.

Since the logarithm is a strictly monotone increasing function, we can also consider the logarithmic rank LR_c(D) := ln Rank_c(D), which will be more convenient.

Principle 3 (loss rank principle (LoRP) for classification)

For discrete Y, the best classifier/regressor f̂^c in some class C for data D = (x, y) is the one of smallest loss rank:

c^best := arg min_c Rank_c(D)     (2)

where Rank_c(D) is defined in (1).

We give a simple example for which we can compute all ranks by hand, to help better grasp how the principle works, but the example is too simplistic to allow any conclusion on whether the principle is appropriate.

Example 4 (simple discrete)

Consider Y = {0, 1, 2} and the two points (x_1, y_1) = (1, 1) and (x_2, y_2) = (2, 2) lying on the diagonal y = x, with polynomial (zero, constant, linear) least squares regression (see Ex.1). The zero regressor is simply f̂_D^1 ≡ 0, the constant regressor f̂_D^2 is the y-average (y_1 + y_2)/2, and the linear regressor f̂_D^3 is the line through the points (x_1, y_1) and (x_2, y_2). The quadratic Loss for generic y' and observed y = (1, 2) (and fixed x) is then Loss_1 = y_1'² + y_2'² for the zero regressor, Loss_2 = (y_1' − y_2')²/2 for the constant regressor, and Loss_3 = 0 for the linear regressor.

From these losses we can easily compute the Rank for all nine y' ∈ Y². Equal loss gives equal rank; whole equality groups are assigned the rank of their right-most member, e.g. the four y' with Loss_2 = 1/2 all get rank 7 (and not 4, 5, 6, 7). The observed y = (1, 2) has rank 8 under the zero regressor, rank 7 under the constant regressor, and rank 9 under the linear regressor.

So LoRP selects the constant regressor f̂^2 as best, since it has minimal rank on D. The zero regressor f̂^1 fits D too badly, and the linear regressor f̂^3 is too flexible (it fits all D' perfectly).
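
The ranks in this example can be verified by brute force. The sketch below assumes, consistently with the numbers quoted above, Y = {0, 1, 2} and the observed points (1, 1) and (2, 2); it enumerates all nine fictitious y' for each of the three regressors:

```python
import numpy as np
from itertools import product

Y = [0, 1, 2]                    # assumed discrete value space
x = np.array([1.0, 2.0])         # assumed observed inputs
y = np.array([1.0, 2.0])         # assumed observed outputs (on the diagonal)

def regressors():
    """The three regressors of Example 4: zero, constant (y-average), line."""
    zero  = lambda xv, yv: np.zeros_like(yv)
    const = lambda xv, yv: np.full_like(yv, yv.mean())
    def line(xv, yv):
        return np.polyval(np.polyfit(xv, yv, 1), xv)   # degree-1 least squares
    return {"zero": zero, "const": const, "line": line}

def loss(yv, yhat):
    return float(np.sum((yv - yhat) ** 2))

for name, f in regressors().items():
    observed = loss(y, f(x, y))
    rank = sum(loss(yp, f(x, yp)) <= observed
               for yp in (np.array(p, dtype=float) for p in product(Y, repeat=2)))
    print(name, rank)            # expected: zero 8, const 7, line 9, so const wins
```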

LoRP for continuous Y. We now consider the case of continuous or measurable spaces Y, i.e. normal regression problems. We assume Y = R in the following exposition, but the idea and resulting principle hold for more general measurable spaces like R^m. We simply reduce the model selection problem to the discrete case by considering the discretized space εZ for small ε > 0 and discretizing y accordingly. Then Rank_c(D|ε) counts the number of ε-grid points in the set

V_c(D) := {y' ∈ R^n : Loss_c(x, y') ≤ Loss_c(x, y)}     (3)

which we assume (and later assure) to have finite volume, analogous to the discrete case. Hence ε^n · Rank_c(D|ε) is an approximation of the loss volume Vol(V_c(D)) of the set V_c(D), and typically ε^n · Rank_c(D|ε) → Vol(V_c(D)) for ε → 0. Taking the logarithm we get ln Rank_c(D|ε) ≈ ln Vol(V_c(D)) − n ln ε. Since n ln ε is independent of c, we can drop it in comparisons like (2). So for continuous Y we can define the log-loss "rank" simply as the log-volume

LR_c(D) := ln Vol(V_c(D))     (4)
Principle 5 (loss rank principle for regression)

For measurable Y, the best regressor f̂^c in some class C for data D = (x, y) is the one of smallest loss volume:

c^best := arg min_c LR_c(D) = arg min_c ln Vol{y' : Loss_c(x, y') ≤ Loss_c(x, y)}

where V_c and LR_c are defined in (3) and (4), and Vol(V_c(D)) is the volume of V_c(D).

For discrete Y with the counting measure we recover the discrete Loss Rank Principle 3.
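
A small numerical sanity check of the discretization argument behind Principle 5 (a sketch with illustrative numbers, not from the paper): for a two-point data set, the zero regressor and quadratic loss, the set V is a disc of area π‖y‖², and ε²·Rank(D|ε) approaches this volume as ε → 0.

```python
import numpy as np

def grid_rank(y_obs, eps, box=5.0):
    """Count eps-grid points y' in [-box, box]^2 with |y'|^2 <= |y_obs|^2
    (zero regressor with quadratic loss; the sublevel set is a disc)."""
    bound = float(np.sum(y_obs ** 2))
    grid = np.arange(-box, box + eps, eps)
    g1, g2 = np.meshgrid(grid, grid)
    return int(np.sum(g1 ** 2 + g2 ** 2 <= bound))

y = np.array([1.0, 2.0])                      # illustrative observed outputs
for eps in (0.5, 0.1, 0.02):
    print(eps, eps ** 2 * grid_rank(y, eps))  # -> pi * |y|^2 ~ 15.71 as eps -> 0
```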

Example 6 (simple continuous)

Consider Example 4, but with Y now an interval of the reals, so that counts are replaced by volumes. The loss expressions remain unchanged, while the ranks become loss volumes.

So LoRP again selects the constant regressor as best, since it has the smallest loss volume on D.

Infinite rank or volume. Often the loss rank/volume will be infinite, e.g. if we had chosen an unbounded Y in Ex.4 or Ex.6. We will encounter such infinities in Section 3. There are various potential remedies. We could modify (a) the regressor f̂ or (b) the Loss to make LR finite, (c) the Loss Rank Principle itself, or (d) find problem-specific solutions. Regressors with infinite rank might be rejected for philosophical or pragmatic reasons. We will briefly consider (a) for linear regression later, but fiddling around with f̂ in a generic (black-box) way seems difficult. We have no good idea how to tinker with LoRP itself (c), and a patched LoRP may also be less attractive. For kNN on a grid we later use remedy (d). While in (decision) theory the application's goal determines the loss, in practice the loss is often determined more by convenience or rules of thumb. So the Loss (b) seems the most inviting place to tinker. A very simple modification is to add a small penalty term to the loss:

Loss_α(y, ŷ) := Loss(y, ŷ) + α‖y‖²,   α > 0 small     (5)

The Euclidean norm is the default, but other (non-)norm regularizers are possible. The regularized LR_α based on Loss_α is always finite, since {y' : ‖y'‖² ≤ const} has finite volume. An alternative penalty α‖ŷ‖², quadratic in the regression estimates ŷ, is possible if ŷ is unbounded in every direction of y'.

A scheme trying to determine a single (flexibility) parameter c (like d and k in the above examples) would be of no use if it depended on one (or more) other unknown parameters (α), since varying the unknown parameter can lead to any (un)desired result. Since LoRP seeks the c of smallest rank, it is natural to also determine α by minimizing LR w.r.t. α. The good news is that this leads to meaningful results.

3 LoRP for Linear Models

In this section we consider the important class of linear regressors with quadratic loss function. Since linearity is only assumed in y and the dependence on x can be arbitrary, this class is richer than it may appear. It includes kNN (Example 7), kernel (Example 8), and many other regressors. For linear regression and quadratic loss, the loss rank is the volume of an n-dimensional ellipsoid, which can be computed efficiently in time O(n³) (Theorem 10). For the special case of projective regression, e.g. linear basis function regression (Example 9), we can even determine the regularization parameter α analytically (Theorem 11).

Linear regression. We assume Y = R in this section; the generalization to R^m is straightforward. A linear regressor f̂_D can be written in the form

f̂_D(x) = Σ_{i=1}^n m_i(x, x_1, ..., x_n) · y_i     (6)

i.e. f̂_D is linear in y, while the dependence on x and x_1, ..., x_n can be arbitrary. Particularly interesting is f̂_D on the training inputs x_j for j = 1, ..., n:

ŷ_j := f̂_D(x_j) = Σ_{i=1}^n M_{ji} y_i,   i.e.   ŷ = M y     (7)

where the n×n matrix M_{ji} := m_i(x_j, x_1, ..., x_n) depends on x only. Since LoRP needs f̂_D only on the training data x, we only need M.

Example 7 (kNN ctd.)

For kNN of Ex.2 we have m_i(x, x_1, ..., x_n) = 1/k if x_i is among the k nearest neighbors of x, and 0 else; hence M_{ji} = 1/k if i ∈ N_k(x_j), and 0 else.
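
A sketch (not from the paper) of this kNN regression matrix, with M_ji = 1/k whenever x_i is among the k nearest neighbors of x_j:

```python
import numpy as np

def knn_matrix(x, k):
    """n x n kNN regression matrix M with M[j, i] = 1/k if x_i is among the
    k nearest neighbors of x_j (Euclidean distance), else 0."""
    x = np.asarray(x, dtype=float)
    if x.ndim == 1:
        x = x[:, None]                      # treat scalars as 1-d feature vectors
    n = len(x)
    dist = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    M = np.zeros((n, n))
    for j in range(n):
        nn = np.argsort(dist[j])[:k]        # a point counts as its own neighbor
        M[j, nn] = 1.0 / k
    return M
```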

Example 8 (kernel regression)

Kernel regression takes a weighted average over the y_i, where the weight of y_i is proportional to the similarity of x_i to x, measured by a kernel K(x, x_i), i.e. M_{ji} = K(x_j, x_i) / Σ_{l=1}^n K(x_j, x_l). An example for X = R^m is the Gaussian kernel K(x, x') = exp(−‖x − x'‖² / (2σ²)).
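
The corresponding kernel regression matrix, sketched with a Gaussian kernel (the bandwidth h is an illustrative parameter):

```python
import numpy as np

def gaussian_kernel_matrix(x, h):
    """n x n kernel regression matrix with M[j, i] proportional to
    exp(-|x_j - x_i|^2 / (2 h^2)); rows are normalized to sum to 1."""
    x = np.asarray(x, dtype=float)
    if x.ndim == 1:
        x = x[:, None]
    sq = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2.0 * h ** 2))
    return K / K.sum(axis=1, keepdims=True)
```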

Example 9 (linear basis function regression, LBFR)

Let φ = (φ_1, ..., φ_d) be a set or vector of “basis” functions, often called “features”. We place no restrictions on X or φ. Consider the class of functions linear in φ:

F_d := { f_w(x) = Σ_{j=1}^d w_j φ_j(x) = wᵀφ(x) : w ∈ R^d }

For instance, choosing the monomials φ_j(x) = x^{j−1} recovers the polynomial regression Example 1. For the quadratic loss function we have

Loss(w) := Σ_{i=1}^n (f_w(x_i) − y_i)² = ‖Bw − y‖²

where the n×d matrix B is defined by B_{ij} := φ_j(x_i), and BᵀB is a symmetric d×d matrix with (BᵀB)_{jk} = Σ_{i=1}^n φ_j(x_i)φ_k(x_i). The loss is quadratic in w with minimum at ŵ = (BᵀB)⁻¹Bᵀy. So the least squares regressor is f̂_D(x) = ŵᵀφ(x), hence ŷ = Bŵ and M = B(BᵀB)⁻¹Bᵀ.
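
A sketch of the LBFR regression matrix M = B(BᵀB)⁻¹Bᵀ, here with the monomial basis that recovers polynomial regression (helper names are hypothetical):

```python
import numpy as np

def lbfr_matrix(x, basis_funcs):
    """Projection matrix M = B (B^T B)^{-1} B^T with B[i, j] = phi_j(x_i)."""
    B = np.column_stack([phi(x) for phi in basis_funcs])
    return B @ np.linalg.solve(B.T @ B, B.T)

def monomial_basis(d):
    """phi_j(x) = x^j for j = 0, ..., d (polynomial regression of degree d)."""
    return [lambda x, j=j: np.asarray(x, dtype=float) ** j for j in range(d + 1)]
```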

Consider now a general linear regressor ŷ = My with quadratic loss and quadratic penalty

Loss_α(y) := ‖My − y‖² + α‖y‖² = yᵀ S_α y,   S_α := (M − I)ᵀ(M − I) + αI     (8)

(I is the n×n identity matrix). S_α is a symmetric matrix. For α > 0 it is positive definite, and for α = 0 it is positive semidefinite. If λ_1, ..., λ_n ≥ 0 are the eigenvalues of (M − I)ᵀ(M − I), then λ_i + α are the eigenvalues of S_α. The set

V_α := {y' ∈ R^n : y'ᵀ S_α y' ≤ yᵀ S_α y}

is an ellipsoid with the eigenvectors of S_α as main axes and (yᵀ S_α y / (λ_i + α))^{1/2} as their lengths. Hence the volume is

Vol(V_α) = v_n · Π_{i=1}^n (yᵀ S_α y / (λ_i + α))^{1/2} = v_n (yᵀ S_α y)^{n/2} (det S_α)^{−1/2}

where v_n := π^{n/2} / Γ(n/2 + 1) is the volume of the n-dimensional unit sphere and det is the determinant. Taking the logarithm we get

LR_α(D) := ln Vol(V_α) = (n/2) ln(yᵀ S_α y) − (1/2) ln det S_α + ln v_n     (9)

Consider now a class of linear regressors {M^c : c ∈ C}, e.g. the kNN regressors indexed by k or the d-dimensional linear basis function regressors indexed by d.

Theorem 10 (LoRP for linear regression)

For Y = R, the best linear regressor M^c in some class {M^c : c ∈ C} for data D = (x, y) is

c^best := arg min_c min_{α>0} { (n/2) ln(yᵀ S_α^c y) − (1/2) ln det S_α^c }     (10)

where S_α^c := (M^c − I)ᵀ(M^c − I) + αI is defined in (8).

Since v_n is independent of c and α, it was possible to drop ln v_n. Expression (10) shows that linear LoRP minimizes the Loss yᵀ S_α y times the geometric average of the squared axes lengths of the unit-loss ellipsoid {y' : y'ᵀ S_α y' ≤ 1}. Note that the Loss depends on y, unlike the eigenvalues λ_i.
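
A minimal numerical sketch of this selection rule as reconstructed above (the α grid, the dictionary-of-matrices interface, and the helper names are illustrative assumptions, not the paper's code):

```python
import numpy as np

def linear_lorp_score(M, y, alpha):
    """LR up to an additive constant: n/2 * ln(y^T S_a y) - 1/2 * ln det S_a,
    with S_a = (M - I)^T (M - I) + alpha * I (regularized quadratic loss)."""
    n = len(y)
    A = M - np.eye(n)
    S = A.T @ A + alpha * np.eye(n)
    loss = float(y @ S @ y)
    _, logdet = np.linalg.slogdet(S)
    return 0.5 * n * np.log(loss) - 0.5 * logdet

def select_complexity(models, y, alphas=np.logspace(-6, 1, 30)):
    """models: dict mapping a complexity label c to its n x n matrix M^c.
    Returns the c minimizing min over alpha of the criterion above."""
    return min(models, key=lambda c: min(linear_lorp_score(models[c], y, a)
                                         for a in alphas))
```

For instance, passing models = {k: knn_matrix(x, k) for k in range(1, 10)} built with the earlier kNN sketch selects the number of neighbors k.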

Nullspace of M − I. If M has an eigenvalue 1, then M − I has a zero eigenvalue and α > 0 is necessary, since det S_0 = 0. Actually this is true for most practical M. Nearly all linear regressors are invariant under a constant shift of y (adding b to every y_i adds b to every ŷ_i), which implies that M has the eigenvector (1, ..., 1)ᵀ with eigenvalue 1. This can easily be checked for kNN (Ex.2), kernel (Ex.8), and LBFR (Ex.9). Such a generic 1-eigenvector, affecting all c, could easily and maybe should be filtered out by considering only the orthogonal space, or by dropping the corresponding zero eigenvalue when computing det S_0. The 1-eigenvectors that depend on c are the ones for which we really need a regularizer. For instance, M in LBFR has d eigenvalues 1, and kNN's M has as many eigenvalues 1 as there are disjoint components in the graph determined by the nearest-neighbor relation. In general we need to find the optimal α numerically. If M is a projection we can find α analytically.

Projective regression. Consider a projection matrix M = P with d eigenvalues 1 and n − d zero eigenvalues. For instance, M = B(BᵀB)⁻¹Bᵀ of LBFR Ex.9 is such a matrix, since MB = B and Mv = 0 for all v orthogonal to the columns of B. This implies that S_α = (P − I)ᵀ(P − I) + αI has d eigenvalues α and n − d eigenvalues 1 + α. Hence

LR_α(D) − ln v_n = (n/2) ln(yᵀ S_α y) − (d/2) ln α − ((n − d)/2) ln(1 + α)     (11)

The term ln v_n is independent of c and α. Consider α ≪ 1, the reasonable region in practice. Solving dLR_α/dα = 0 w.r.t. α, we get a minimum at some α̂ > 0 available in closed form. After some algebra we get

(12)

where KL denotes the relative entropy or Kullback-Leibler divergence appearing in (12). Minimizing LR w.r.t. c is equivalent to maximizing this KL term. This is an unusual task, since one mostly encounters KL minimizations. For fixed model dimension d, maximizing the KL amounts to minimizing the Loss, i.e. LoRP suggests to minimize the Loss for fixed d; for fixed Loss, it amounts to minimizing the model dimension d. Normally there is a tradeoff between minimizing Loss and d, and LoRP suggests that the optimal choice is the one that maximizes the KL.

Theorem 11 (LoRP for projective regression)

The best projective regressor P^c, with d_c := tr(P^c) eigenvalues 1, in some projective class {P^c : c ∈ C} for data D = (x, y) is the one that maximizes the KL expression in (12).

4 Comparison to Gaussian Bayesian Linear Regression

We now consider linear basis function regression (LBFR) from a Bayesian perspective with Gaussian noise and prior, and compare it to LoRP. In addition to the noise model, as in PML, one also has to specify a prior. Bayesian model selection (BMS) proceeds by selecting the model that has the largest evidence. In the special case of LBFR with Gaussian noise and prior and an ML-II estimate for the noise variance, the expression for the evidence has a structure similar to that of the loss rank.

Gaussian Bayesian LBFR / MAP. Recall from Sec.3 Ex.9 that F_d is the class of functions f_w(x) = wᵀφ(x) (w ∈ R^d) that are linear in the feature vector φ(x) = (φ_1(x), ..., φ_d(x))ᵀ. Let

N(z | μ, Σ) := ((2π)^d det Σ)^{−1/2} exp(−(1/2)(z − μ)ᵀ Σ⁻¹ (z − μ))     (13)

denote a general d-dimensional Gaussian density with mean μ and covariance matrix Σ. We assume that the observations y are perturbed from f_w(x) by independent additive Gaussian noise with variance σ² and zero mean, i.e. the likelihood of y under model w is P(y|w) = N(y | Bw, σ²I), where B_{ij} := φ_j(x_i) as in Ex.9. A Bayesian assumes a prior (before seeing y) distribution on w. We assume a centered Gaussian with covariance matrix C, i.e. P(w) = N(w | 0, C). From the prior and the likelihood one can compute the evidence and the posterior

P(y) = ∫ P(y|w) P(w) dw = N(y | 0, σ²I + B C Bᵀ)     (14)
P(w|y) = P(y|w) P(w) / P(y) = N(w | w̄, A⁻¹),   A := σ⁻²BᵀB + C⁻¹,   w̄ := σ⁻²A⁻¹Bᵀy     (15)

A standard Bayesian point estimate of w for fixed d is the one that maximizes the posterior (MAP), which in the Gaussian case coincides with the posterior mean: w^MAP = w̄ = (BᵀB + σ²C⁻¹)⁻¹Bᵀy. For C⁻¹ → 0, MAP reduces to Maximum Likelihood (ML), which in the Gaussian case coincides with the least squares regression of Ex.9. For C⁻¹ ≠ 0, the regression matrix M = B(BᵀB + σ²C⁻¹)⁻¹Bᵀ is not a projection anymore.

Bayesian model selection. Consider now a family of models . Here the are the linear regressors with basis functions, but in general they could be completely different model classes. All quantities in the previous paragraph implicitly depend on the choice of , which we now explicate with an index. In particular, the evidence for model class is . Bayesian Model Selection (BMS) chooses the model class (here ) of highest evidence:

Once the model class is determined, the MAP (or other) regression function or are chosen. The data variance may be known or estimated from the data, is often chosen , and has to be chosen somehow. Note that while leads to a reasonable MAP=ML regressor for fixed , this limit cannot be used for BMS.
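
For concreteness, a sketch (not from the paper) of the evidence that BMS maximizes, using the standard Gaussian marginal likelihood of (14); the choices of C and σ² in the comment are illustrative:

```python
import numpy as np

def log_evidence(B, y, sigma2, C):
    """Log marginal likelihood of y under y = B w + noise,
    noise ~ N(0, sigma2 I), prior w ~ N(0, C): y ~ N(0, sigma2 I + B C B^T)."""
    n = len(y)
    Sigma = sigma2 * np.eye(n) + B @ C @ B.T
    _, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * (y @ np.linalg.solve(Sigma, y) + logdet + n * np.log(2 * np.pi))

# BMS: choose the number of basis functions d with the largest evidence, e.g.
#   log_evidence(np.vander(x, d + 1), y, sigma2=0.1, C=np.eye(d + 1))
```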

Comparison to LoRP. Inserting (13) into (14) and taking the logarithm, we see that BMS minimizes

−ln E_d = (1/2) yᵀ(σ²I + B C Bᵀ)⁻¹ y + (1/2) ln det(σ²I + B C Bᵀ) + (n/2) ln 2π     (16)

w.r.t. d. Let us estimate σ² by ML: we assume a broad prior C so that the dependence of M := B(BᵀB + σ²C⁻¹)⁻¹Bᵀ on σ² can be neglected. Then σ̂² = yᵀ(I − M)y / n, and inserting this into (16) (using (σ²I + BCBᵀ)⁻¹ = σ⁻²(I − M)) we get

−ln E_d ≈ (n/2) ln(yᵀ(I − M)y) − (1/2) ln det(I − M) + (n/2)(ln(2π/n) + 1)     (17)

Taking an improper prior for σ² and integrating σ² out leads (for small σ²) to a similar result. The last term in (17) is a constant independent of d and can be ignored. The first two terms have the same structure as in linear LoRP (10), but the matrix is different. In both cases, σ², C and α act as regularizers, so we may minimize over them in BMS like over α in LoRP. For C⁻¹ = 0 (which makes sense neither in BMS nor in LoRP), M in BMS coincides with M of Ex.9, but still the matrix (M − I)ᵀ(M − I) appearing in LoRP is the square of the matrix I − M appearing in BMS. For C⁻¹ ≠ 0, the M of BMS may be regarded as a regularized regressor as suggested in Sec.2 (a), rather than a regularized loss function (b) as used in LoRP. Note also that BMS is limited to (semi)parametric regression, i.e. it does not cover the non-parametric kNN Ex.2 and kernel Ex.8, unlike LoRP.

Since the evidence depends on B and C only through BCBᵀ (and not on B and C separately), and all probabilities are implicitly conditioned on x, one could choose C ∝ (BᵀB)⁻¹. In this case M = γP, with γ < 1 depending on the proportionality constant, is a simple multiplicative regularization of the projection P, and (17) coincides with (11) for suitable α, apart from an irrelevant additive constant; hence minimizing (17) over the regularizer also leads to (12).

5 Comparison to other Model Selection Schemes

In this section we give a brief introduction to Penalized Maximum Likelihood (PML) for (semi)parametric regression and its major instantiations, the Akaike and the Bayesian Information Criterion (AIC and BIC) and the Minimum Description Length (MDL) principle, whose penalty terms are all proportional to the number of parameters d. The effective number of parameters is often much smaller than d, e.g. if there are soft constraints like in ridge regression. We compare MacKay's [Mac92] trace formula for Gaussian Bayesian LBFR and Hastie et al.'s [HTF01] trace formula for general linear regression with LoRP.

Penalized ML (AIC, BIC, MDL). Consider a d-dimensional stochastic model class like the Gaussian Bayesian linear regression example of Section 4. Let P(y|w) be the data likelihood under the d-dimensional model w ∈ R^d. The maximum likelihood (ML) estimator for fixed d is

ŵ_d := arg max_w P(y|w)

Since −ln P(y|ŵ_d) decreases with d, we cannot find the model dimension by simply minimizing it over d (overfitting). Penalized ML adds a complexity term to get reasonable results:

d^best := arg min_d { −ln P(y|ŵ_d) + Penalty(d) }

The penalty introduces a tradeoff between the first and second term, with a minimum at some intermediate d^best. Various penalties have been suggested: the Akaike Information Criterion (AIC) [Aka73] uses Penalty(d) = d, while the Bayesian Information Criterion (BIC) [Sch78] and the (crude) Minimum Description Length (MDL) principle [Ris78, Grü04] use Penalty(d) = (d/2) ln n. There are at least three important conceptual differences to LoRP:

  • In order to apply PML one needs to specify not only a class of regression functions, but a full probabilistic model P(y|w),

  • PML ignores, or at least does not tell how to incorporate, a potentially given loss function,

  • PML (AIC, BIC, MDL) is mostly limited to (semi)parametric models (with d “true” parameters).

We discuss two approaches to the last item in the remainder of this section: AIC, BIC, and MDL are not directly applicable (a) for non-parametric models like kNN or kernel regression, or (b) if d does not reflect the “true” complexity of the model. For instance, ridge regression can work even for d larger than n, because a penalty pulls most parameters towards (but not exactly to) zero. MacKay [Mac92] suggests an expression for the effective number of parameters as a substitute for d in case of (b), and Hastie et al. [HTF01] more generally also for (a).

The trace penalty for parametric Gaussian LBFR. We continue with the Gaussian Bayesian linear regression example (see Section 4 for details and notation), with prior covariance C = α⁻¹I. Performing the integration in (14), MacKay [Mac92, Eq.(21)] derives the following expression for the Bayesian evidence:

(18)

(the two brackets in (18) correspond to the data-fit and the complexity terms of (16)). Minimizing (18) w.r.t. α leads to the following relation:

α‖w^MAP‖² = d − α tr(A⁻¹) =: γ

where A is the posterior precision matrix of (15). He argues that γ corresponds to the effective number of parameters, hence

d_eff := γ = d − α tr(A⁻¹)     (19)

The trace penalty for general linear models. We now return to general linear regression (7). LBFR is a special case in which M is a projection matrix whose rank d equals the number of basis functions: M leaves d directions untouched and projects all other directions to zero. For general M, Hastie et al. [HTF01, Sec.5.4.1] argue to regard a direction that is only somewhat shrunken, say by a factor ρ, as a fractional parameter contributing ρ degrees of freedom. If ρ_1, ..., ρ_n are the shrinkages, i.e. the eigenvalues of M, the effective number of parameters could be defined as [HTF01, Sec.7.6]

df(M) := tr(M) = Σ_{i=1}^n ρ_i

which generalizes the relation d = tr(M) beyond projections. For MacKay's (15), γ = tr(M), i.e. df is consistent with and generalizes γ.
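
The trace penalty is easy to evaluate for any linear regressor given as a matrix; a sketch (reusing the hypothetical knn_matrix helper from the earlier kNN example):

```python
import numpy as np

def effective_dof(M):
    """Effective number of parameters of a linear regressor y_hat = M y,
    following the trace formula df = tr(M)."""
    return float(np.trace(M))

# For a projection (LBFR), tr(M) equals the number of basis functions;
# for kNN, the diagonal of M is 1/k, so tr(M) = n/k.
# Excluding the closest neighbor ("kNN'") zeroes the diagonal and gives
# tr(M) = 0 for every k, the pathology discussed in the next paragraph.
```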

Problems. Though nicely motivated, the trace formula is not without problems. First, since for projections tr(MᵀM) = tr(M) = d, one could equally well have argued for df = tr(MᵀM). Second, for kNN we have tr(M) = n/k (since 1/k is on the diagonal of M), which does not look unreasonable. Consider now kNN', where we average over the k nearest neighbors excluding the closest neighbor. For sufficiently smooth functions, kNN' for suitable k is still a reasonable regressor, but tr(M) = 0 (since the diagonal of M is zero). So df = 0 for kNN', which makes no sense and would lead one to always select the most flexible (smallest-k) model.

Relation to LoRP. In the case of kNN', tr(MᵀM) = n/k would be a better estimate for the effective dimension. In linear LoRP, −(1/2) ln det S_α serves as the complexity penalty. Ignoring the nullspace of (M − I)ᵀ(M − I) in (8), we can Taylor expand the penalty in M:

−(1/2) ln det((M − I)ᵀ(M − I)) = −tr ln(I − M) = tr(M) + (1/2) tr(M²) + (1/3) tr(M³) + ...

For BMS (17) we get half of this value. So the trace penalty may be regarded as a leading-order approximation to LoRP. The higher-order terms prevent peculiarities like the df = 0 of kNN'.

6 Outlook

So far we have only scratched the surface of the Loss Rank Principle. LoRP seems to be a promising principle with a lot of potential, leading to a rich field. In the following we briefly summarize miscellaneous considerations, which may be elaborated on in the future: experiments, Monte Carlo estimates for non-linear LoRP, numerical approximation of the loss rank, LoRP for classification, self-consistent regression, explicit expressions for kNN on a grid, loss function selection, and others.

Experiments. Preliminary experiments on selecting k in kNN regression confirm that LoRP selects a “good” k. (Even on artificial data we cannot determine whether the “right” k is selected, since kNN is not a generative model.) LoRP for LBFR seems to be consistent, with rapid convergence.

Monte Carlo estimates for non-linear LoRP. For non-linear regression we did not present an efficient algorithm for the loss rank/volume. The high-dimensional volume (3) may be computed by Monte Carlo algorithms. Normally V_c(D) constitutes only a small part of R^n, and uniform sampling over R^n is not feasible. Instead one should consider two competing regressors f̂^1 and f̂^2 and compute Vol(V_1 ∩ V_2)/Vol(V_1) and Vol(V_1 ∩ V_2)/Vol(V_2) by uniformly sampling from V_1 and V_2 respectively, e.g. with a Metropolis-type algorithm. Taking the ratio we get Vol(V_1)/Vol(V_2) and hence the loss rank difference LR_1 − LR_2, which is sufficient for LoRP. The usual tricks and problems with sampling apply here too.
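
A rough sketch of this sampling scheme (all names hypothetical; a Gaussian random-walk proposal and black-box loss functions are assumed): it draws approximately uniform samples from V_1 and V_2 and estimates LR_1 − LR_2 from the two overlap fractions.

```python
import numpy as np

def sample_in_V(loss_fn, y_obs, n_samples, step=0.5, rng=None):
    """Approximately uniform samples from V = {y': loss_fn(y') <= loss_fn(y_obs)}
    via a Metropolis random walk (accept a proposal iff it stays inside V)."""
    rng = rng or np.random.default_rng(0)
    bound, y = loss_fn(y_obs), y_obs.copy()
    samples = []
    for _ in range(n_samples):
        prop = y + step * rng.standard_normal(len(y))
        if loss_fn(prop) <= bound:          # uniform target on V
            y = prop
        samples.append(y.copy())            # repeat current state on rejection
    return np.array(samples)

def loss_rank_difference(loss1, loss2, y_obs, n_samples=20000):
    """Estimate LR_1 - LR_2 = ln Vol(V_1) - ln Vol(V_2) from overlap fractions."""
    s1 = sample_in_V(loss1, y_obs, n_samples)
    s2 = sample_in_V(loss2, y_obs, n_samples)
    p1 = np.mean([loss2(s) <= loss2(y_obs) for s in s1])   # Vol(V1 n V2)/Vol(V1)
    p2 = np.mean([loss1(s) <= loss1(y_obs) for s in s2])   # Vol(V1 n V2)/Vol(V2)
    return np.log(p2 / p1)                                 # undefined if no overlap
```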

Numerical approximation of the loss rank.