The Column Measure and Gradient-Free Gradient Boosting
Sparse model selection by structural risk minimization leads to a set of a few predictors, ideally a subset of the true predictors. This selection clearly depends on the underlying loss function L̃. For linear regression with squared loss, the particular (functional) Gradient Boosting variant L_2-Boosting excels due to its computational efficiency even for very large predictor sets, while still providing suitable estimation consistency. For more general loss functions, functional gradients are not always easily accessible or, as in the case of continuous ranking, need not even exist. To close this gap, starting from the column selection frequencies obtained from L_2-Boosting, we introduce a loss-dependent "column measure" ν^(L̃) which mathematically describes variable selection. The fact that certain variables relevant for a particular loss L̃ never get selected by L_2-Boosting is reflected by a corresponding singular part of ν^(L̃) w.r.t. ν^(L_2). With this concept at hand, making L_2-Boosting select variables according to a different loss L̃ amounts to a suitable change of measure (accounting for singular parts). This opens the door to simulation techniques, such as resampling or rejection sampling, that achieve this change of measure in an algorithmic way.