The cost-free nature of optimally tuning Tikhonov regularizers and other ordered smoothers

05/29/2019
by   Pierre C. Bellec, et al.

We consider the problem of selecting the best estimator among a family of Tikhonov regularized estimators, or, alternatively, of selecting a linear combination of these regularizers that is as good as the best regularizer in the family. Our theory reveals that if the Tikhonov regularizers share the same penalty matrix with different tuning parameters, a convex procedure based on Q-aggregation achieves the mean square error of the best estimator, up to a small error term no larger than Cσ^2, where σ^2 is the noise level and C>0 is an absolute constant. Remarkably, the error term does not depend on the penalty matrix or the number of estimators as long as they share the same penalty matrix; that is, it applies to any grid of tuning parameters, no matter how large the cardinality of the grid is. This reveals the surprising "cost-free" nature of optimally tuning Tikhonov regularizers, in striking contrast with the existing literature on aggregation of estimators, where one typically has to pay a cost of σ^2 log(M), with M the number of estimators in the family. The result holds, more generally, for any family of ordered linear smoothers; this encompasses Ridge regression as well as Principal Component Regression. The result is extended to the problem of tuning Tikhonov regularizers with different penalty matrices.
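To make the setting concrete, here is a minimal sketch (not the paper's Q-aggregation procedure) of a family of Tikhonov regularized estimators sharing one penalty matrix, indexed by a grid of tuning parameters, together with the oracle benchmark that the aggregation result is compared against. The design, noise level, penalty matrix, and grid below are hypothetical choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma = 100, 20, 1.0

# Hypothetical Gaussian design and true signal
X = rng.standard_normal((n, p))
beta_true = rng.standard_normal(p)
y = X @ beta_true + sigma * rng.standard_normal(n)

P = np.eye(p)                      # shared penalty matrix (identity -> Ridge regression)
lambdas = np.logspace(-3, 3, 50)   # grid of tuning parameters (any cardinality)

def tikhonov(X, y, lam, P):
    """Tikhonov regularized estimator: argmin_b ||y - Xb||^2 + lam * b' P b."""
    return np.linalg.solve(X.T @ X + lam * P, X.T @ y)

# Mean square prediction error of each estimator in the family against the true mean
mu = X @ beta_true
mse = np.array([np.mean((X @ tikhonov(X, y, lam, P) - mu) ** 2)
                for lam in lambdas])

# Oracle benchmark: the best estimator in the family. The paper's claim is that a
# Q-aggregation based procedure matches this MSE up to an additive C*sigma^2,
# no matter how fine the grid of tuning parameters is.
print("oracle lambda:", lambdas[mse.argmin()], "oracle MSE:", mse.min())
```

The oracle above uses the unknown true signal and is therefore not a practical estimator; the point of the result is that a data-driven convex combination can match its performance up to a constant times σ^2, with no log(M) dependence on the grid size.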
