Muddling Labels for Regularization, a novel approach to generalization

02/17/2021
by Karim Lounici et al.

Generalization is a central problem in Machine Learning: most prediction methods require careful calibration of hyperparameters, usually carried out on a hold-out validation dataset, to generalize well. The main goal of this paper is to introduce a novel approach that achieves generalization without any data splitting, based on a new risk measure that directly quantifies a model's tendency to overfit. To convey the intuition and advantages of this approach, we illustrate it in the simple linear regression model Y = Xβ + ξ, where we develop a new criterion and show that it is a good proxy for the true generalization risk. We then derive procedures that handle several structures simultaneously (correlation, sparsity, ...). Notably, these procedures train the model and calibrate the hyperparameters concomitantly, and they can be implemented with classical gradient descent methods whenever the criterion is differentiable with respect to the hyperparameters. Our numerical experiments show that these procedures are computationally feasible and compare favorably to the popular baselines (Ridge, LASSO, and Elastic-Net combined with grid-search cross-validation) in terms of generalization. They also outperform these baselines on two additional tasks: estimation and support recovery of β. Moreover, our procedures require no expertise to calibrate their initial parameters, which remain the same across all the datasets we experimented on.
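The abstract highlights two mechanics: a criterion that quantifies overfitting directly on the training set, and joint gradient-based training of the model and its hyperparameters when that criterion is differentiable. The sketch below is a minimal, hypothetical illustration of this idea in the ridge setting: it uses a permuted-label ("muddled") overfitting proxy and the closed-form ridge solution so that the regularization strength λ can be calibrated by plain gradient descent, with no hold-out split. The function overfit_criterion and its exact form are our assumptions for illustration, not the paper's actual risk measure.

```python
import torch

def ridge_beta(X, y, lam):
    # Closed-form ridge solution (X^T X + lam I)^{-1} X^T y.
    # Because it is differentiable w.r.t. lam, the hyperparameter can be
    # trained by gradient descent like any other parameter.
    d = X.shape[1]
    A = X.T @ X + lam * torch.eye(d)
    return torch.linalg.solve(A, X.T @ y)

def overfit_criterion(X, y, lam, n_perm=10):
    # Hypothetical proxy for the generalization risk: reward a good fit on
    # the true labels while penalizing the model's ability to also fit
    # randomly permuted ("muddled") labels, which is pure overfitting.
    # This illustrates the idea only; it is not the paper's criterion.
    beta = ridge_beta(X, y, lam)
    fit_true = torch.norm(y - X @ beta)
    fit_perm = 0.0
    for _ in range(n_perm):
        y_pi = y[torch.randperm(len(y))]
        fit_perm = fit_perm + torch.norm(y_pi - X @ ridge_beta(X, y_pi, lam))
    return fit_true - fit_perm / n_perm

# Synthetic sparse linear model Y = X beta + xi.
torch.manual_seed(0)
X = torch.randn(100, 20)
beta_star = torch.zeros(20)
beta_star[:5] = 1.0
y = X @ beta_star + 0.5 * torch.randn(100)

# No data splitting: lam is calibrated on the full training set by
# minimizing the overfitting proxy with gradient descent.
log_lam = torch.zeros(1, requires_grad=True)  # optimize log(lam) so lam > 0
opt = torch.optim.Adam([log_lam], lr=0.05)
for step in range(200):
    opt.zero_grad()
    loss = overfit_criterion(X, y, torch.exp(log_lam))
    loss.backward()
    opt.step()
print("calibrated lambda:", torch.exp(log_lam).item())
```

The paper's procedures go further, handling several structures simultaneously (correlation, sparsity, ...) and keeping the same initialization across datasets; the sketch above covers only a single ridge hyperparameter for concreteness.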

Related research

06/11/2020 - Optimizing generalization on the train set: a novel gradient-based framework to train parameters and hyperparameters simultaneously
Generalization is a central problem in Machine Learning. Most prediction...

05/20/2020 - On the use of cross-validation for the calibration of the tuning parameter in the adaptive lasso
The adaptive lasso is a popular extension of the lasso, which was shown ...

02/04/2022 - Elastic Gradient Descent and Elastic Gradient Flow: LARS Like Algorithms Approximating the Solution Paths of the Elastic Net
The elastic net combines lasso and ridge regression to fuse the sparsity...

06/28/2013 - Simple one-pass algorithm for penalized linear regression with cross-validation on MapReduce
In this paper, we propose a one-pass algorithm on MapReduce for penalize...

04/29/2021 - Generalization Guarantees for Neural Architecture Search with Train-Validation Split
Neural Architecture Search (NAS) is a popular method for automatically d...

10/21/2009 - Sparsification and feature selection by compressive linear regression
The Minimum Description Length (MDL) principle states that the optimal m...
