Approximate Leave-One-Out for High-Dimensional Non-Differentiable Learning Problems

10/04/2018
by Shuaiwen Wang, et al.

Consider the following class of learning schemes:

β̂ := argmin_{β ∈ C} ∑_{j=1}^n ℓ(x_j^⊤ β; y_j) + λ R(β),    (1)

where x_j ∈ R^p and y_j ∈ R denote the j-th feature vector and response, respectively. Here ℓ is a convex loss function, R is a convex regularizer, β denotes the unknown vector of weights, λ is the regularization parameter, and C ⊂ R^p is a closed convex set. Finding the optimal choice of λ is a challenging problem in high-dimensional regimes where both n and p are large. We propose three frameworks for obtaining a computationally efficient approximation of the leave-one-out cross validation (LOOCV) risk for nonsmooth losses and regularizers. The three frameworks are based on the primal, dual, and proximal formulations of (1), and each shows its strength on certain classes of problems. We prove the equivalence of the three approaches under smoothness conditions, which enables us to justify their accuracy in that setting. We use our approaches to obtain risk estimates for several standard problems, including the generalized LASSO, nuclear-norm regularization, and support vector machines, and we empirically demonstrate their effectiveness in the non-differentiable cases.
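To give a concrete sense of such single-fit LOOCV shortcuts, the sketch below works out the classical smooth special case: for ridge regression (squared loss, ℓ2 regularizer, C = R^p), the leave-one-out residuals follow in closed form from a single fit via the hat matrix H = X(X^⊤X + λI)^{-1}X^⊤. This is only an illustration of the idea, not the paper's primal/dual/proximal construction for nonsmooth problems; the function name ridge_loo_risk and the simulated data are illustrative assumptions.

import numpy as np

def ridge_loo_risk(X, y, lam):
    """Exact LOOCV squared-error risk for ridge regression from one fit,
    via the identity y_i - x_i' beta_{-i} = (y_i - x_i' beta) / (1 - H_ii)."""
    n, p = X.shape
    G = X.T @ X + lam * np.eye(p)        # regularized Gram matrix
    beta = np.linalg.solve(G, X.T @ y)   # full-data ridge estimate
    H = X @ np.linalg.solve(G, X.T)      # hat matrix X G^{-1} X'
    loo_resid = (y - X @ beta) / (1.0 - np.diag(H))  # leave-one-out residuals
    return np.mean(loo_resid ** 2)

# Usage: scan a grid of lambdas and keep the LOOCV risk minimizer.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
y = X @ rng.standard_normal(50) + rng.standard_normal(200)
lams = np.logspace(-2, 2, 20)
best_lam = min(lams, key=lambda lam: ridge_loo_risk(X, y, lam))

For nonsmooth pairs such as an ℓ1 penalty with a non-differentiable loss, no such exact identity is available, which is the gap the paper's approximate leave-one-out formulas target.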


Related research

07/07/2018 · Approximate Leave-One-Out for Fast Parameter Tuning in High Dimensions
Consider the following class of learning schemes: β̂ := argmin_β ∑_{j=1}^n ℓ(x_j...

02/05/2019 · Consistent Risk Estimation in High-Dimensional Linear Regression
Risk estimation is at the core of many learning systems. The importance ...

11/14/2017 · On Optimal Generalizability in Parametric Learning
We consider the parametric learning problem, where the objective of the ...

01/30/2018 · A scalable estimate of the extra-sample prediction error via approximate leave-one-out
We propose a scalable closed-form formula (ALO_λ) to estimate the extra-...

03/25/2013 · On Sparsity Inducing Regularization Methods for Machine Learning
During the past years there has been an explosion of interest in learnin...

03/25/2019 · Fundamental Barriers to High-Dimensional Regression with Convex Penalties
In high-dimensional regression, we attempt to estimate a parameter vecto...

12/18/2022 · Support Vector Regression: Risk Quadrangle Framework
This paper investigates Support Vector Regression (SVR) in the context o...
