The Loss Rank Principle for Model Selection

02/27/2007
by Marcus Hutter et al.

We introduce a new principle for model selection in regression and classification. Many regression models are controlled by some smoothness, flexibility, or complexity parameter c, e.g. the number of neighbors to be averaged over in k-nearest-neighbor (kNN) regression, or the polynomial degree in polynomial regression. Let f_D^c be the (best) regressor of complexity c on data D. A more flexible regressor can fit more data sets D' well than a more rigid one. If something (here, a small loss) is easy to achieve, it is typically worth less. We define the loss rank of f_D^c as the number of other (fictitious) data sets D' that are fitted better by f_D'^c than D is fitted by f_D^c. We suggest selecting the model complexity c that has minimal loss rank (LoRP). Unlike most penalized maximum likelihood variants (AIC, BIC, MDL), LoRP depends only on the regression function and the loss function. It works without a stochastic noise model and is directly applicable to any non-parametric regressor, such as kNN. In this paper we formalize, discuss, and motivate LoRP, study it for specific regression problems, in particular linear ones, and compare it to other model selection schemes.
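To make the selection rule concrete, here is a minimal sketch in Python of how the abstract's definition can be applied to a linear smoother y_hat = M y under quadratic loss: the fictitious data sets D' that are fitted at least as well form an ellipsoid, so the log loss rank is, up to a constant, (n/2) ln Loss(y) - (1/2) ln det K with K = (I - M)^T (I - M). The ridge term alpha, the kNN construction, the toy data, and all names below are illustrative assumptions, not the paper's code.

```python
import numpy as np

def knn_smoother(x, k):
    """n x n matrix M with (M y)_i = mean of y over the k nearest
    neighbours of x_i (including x_i itself), so fitted values are M @ y."""
    n = len(x)
    M = np.zeros((n, n))
    for i in range(n):
        nearest = np.argsort(np.abs(x - x[i]))[:k]
        M[i, nearest] = 1.0 / k
    return M

def log_loss_rank(M, y, alpha=1e-3):
    """Log loss rank (up to an additive constant) of a linear smoother
    y_hat = M y under quadratic loss. The set {y': Loss(y') <= Loss(y)}
    is the ellipsoid y'^T K y' <= Loss(y) with K = (I-M)^T (I-M), whose
    log volume is (n/2) ln Loss(y) - (1/2) ln det K. A small ridge alpha
    (an illustrative regularizer) keeps K non-singular, e.g. for k = 1,
    where M = I fits every data set perfectly and K would be zero."""
    n = len(y)
    K = (np.eye(n) - M).T @ (np.eye(n) - M) + alpha * np.eye(n)
    loss = float(y @ K @ y)
    _, logdet = np.linalg.slogdet(K)
    return 0.5 * n * np.log(loss) - 0.5 * logdet

# Toy demo: pick the number of neighbours k by minimal loss rank.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 40))
y = np.sin(2 * np.pi * x) + 0.2 * rng.normal(size=40)

ranks = {k: log_loss_rank(knn_smoother(x, k), y) for k in range(1, 21)}
best_k = min(ranks, key=ranks.get)
print("selected k:", best_k)
```

This illustrates the intuition in the abstract: the most flexible smoother (k = 1) fits every fictitious data set perfectly, so under the regularized formula its rank stays large, as does the rank of the over-rigid global mean, and an intermediate k is selected.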


Related research

11/05/2010  Model Selection by Loss Rank for Classification and Unsupervised Learning
Hutter (2007) recently introduced the loss rank principle (LoRP) as a ge...

06/04/2020  Model selection criteria for regression models with splines and the automatic localization of knots
In this paper we propose a model selection approach to fit a regression ...

04/18/2021  Non-asymptotic model selection in block-diagonal mixture of polynomial experts models
Model selection, via penalized likelihood type criteria, is a standard t...

10/03/2013  Multivariate regression and fit function uncertainty
This article describes a multivariate polynomial regression method where...

06/08/2022  A simple data-driven method to optimise the penalty strengths of penalised models and its application to non-parametric smoothing
Information of interest can often only be extracted from data by model f...

02/10/2022  Loss-guided Stability Selection
In modern data analysis, sparse model selection becomes inevitable once ...

08/09/2012  Algorithmic Simplicity and Relevance
The human mind is known to be sensitive to complexity. For instance, the...
