Implicit differentiation of Lasso-type models for hyperparameter optimization

02/20/2020
by Quentin Bertrand et al.

Setting regularization parameters for Lasso-type estimators is notoriously difficult, though crucial in practice. The most popular hyperparameter optimization approach is grid-search on held-out validation data. Grid-search, however, requires choosing a predefined grid for each parameter, and the number of grid points scales exponentially with the number of parameters. Another approach is to cast hyperparameter optimization as a bi-level optimization problem, which can be solved by gradient descent. The key challenge for these methods is estimating the gradient with respect to the hyperparameters. Computing this gradient via forward or backward automatic differentiation is possible but usually suffers from high memory consumption. Alternatively, implicit differentiation typically involves solving a linear system, which can be prohibitive and numerically unstable in high dimension. In addition, implicit differentiation usually assumes smooth loss functions, which is not the case for Lasso-type problems. This work introduces an efficient implicit differentiation algorithm, without matrix inversion, tailored for Lasso-type problems. Our approach scales to high-dimensional data by leveraging the sparsity of the solutions. Experiments demonstrate that the proposed method outperforms a large number of standard methods for optimizing the error on held-out data or the Stein Unbiased Risk Estimator (SURE).
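To illustrate the idea, here is a minimal NumPy sketch of implicit differentiation for the plain Lasso. It is not the authors' implementation: the function name, the use of scikit-learn's Lasso solver, the held-out mean-squared-error criterion, and the toy data are assumptions made for this example. What it shows is the point the abstract makes: once the sparse solution's support is known, the hypergradient only requires solving a linear system whose size is the support size, so no full matrix inversion is needed.

```python
import numpy as np
from sklearn.linear_model import Lasso  # solves 1/(2n)||y - Xb||^2 + alpha*||b||_1


def lasso_hypergradient(X_tr, y_tr, X_val, y_val, alpha):
    """Gradient of the held-out MSE w.r.t. the Lasso penalty alpha,
    via implicit differentiation restricted to the support (a sketch,
    not the paper's exact algorithm)."""
    n_tr = X_tr.shape[0]
    beta = Lasso(alpha=alpha, fit_intercept=False, tol=1e-8).fit(X_tr, y_tr).coef_
    support = np.flatnonzero(beta)      # active set of the sparse solution
    if support.size == 0:               # all-zero solution: flat hypergradient
        return beta, 0.0
    Xs = X_tr[:, support]
    # Differentiating the optimality conditions on the support (assuming the
    # support and signs are locally constant in alpha, which holds generically):
    #   d beta_S / d alpha = -n_tr * (Xs^T Xs)^{-1} sign(beta_S).
    # Only a |S| x |S| system is solved; sparsity keeps it small.
    jac_S = -n_tr * np.linalg.solve(Xs.T @ Xs, np.sign(beta[support]))
    # Chain rule through the criterion C = 1/(2 n_val) ||y_val - X_val beta||^2.
    grad_S = X_val[:, support].T @ (X_val[:, support] @ beta[support] - y_val) / len(y_val)
    return beta, float(jac_S @ grad_S)


# Hypothetical usage: the bi-level outer problem, solved by gradient
# descent on log(alpha).
rng = np.random.default_rng(0)
X, y = rng.standard_normal((100, 200)), rng.standard_normal(100)
X_val, y_val = rng.standard_normal((50, 200)), rng.standard_normal(50)
log_alpha, step = np.log(0.1), 0.1
for _ in range(20):
    _, g = lasso_hypergradient(X, y, X_val, y_val, np.exp(log_alpha))
    log_alpha -= step * np.exp(log_alpha) * g  # dC/dlog(alpha) = alpha * dC/dalpha
```

Optimizing over log(alpha) rather than alpha keeps the penalty positive and matches the logarithmic scale on which Lasso regularization paths are usually explored.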


Related research

07/05/2023  Implicit Differentiation for Hyperparameter Tuning the Weighted Graphical Lasso
We provide a framework and algorithm for tuning the hyperparameters of t...

05/04/2021  Implicit differentiation for fast hyperparameter selection in non-smooth convex learning
Finding the optimal hyperparameters of a model can be cast as a bilevel ...

04/06/2020  Online Hyperparameter Search Interleaved with Proximal Parameter Updates
There is a clear need for efficient algorithms to tune hyperparameters f...

06/08/2021  Nonsmooth Implicit Differentiation for Machine Learning and Optimization
In view of training increasingly complex learning architectures, we esta...

11/06/2019  Optimizing Millions of Hyperparameters by Implicit Differentiation
We propose an algorithm for inexpensive gradient-based hyperparameter op...

01/11/2023  Analyzing Inexact Hypergradients for Bilevel Learning
Estimating hyperparameters has been a long-standing problem in machine l...

12/06/2022  DiffTune^+: Hyperparameter-Free Auto-Tuning using Auto-Differentiation
Controller tuning is a vital step to ensure the controller delivers its ...
