Persistent Reductions in Regularized Loss Minimization for Variable Selection

11/30/2020
by Amin Jalali, et al.

In the context of regularized loss minimization with polyhedral gauges, we show that, for a broad class of loss functions (possibly non-smooth and non-convex) and under a simple geometric condition on the input data, it is possible to efficiently identify a subset of features that is guaranteed to have zero coefficients in all optimal solutions of all problems with loss functions from said class, before any iterative optimization has been performed for the original problem. This procedure is standalone, takes only the data as input, and does not require any calls to the loss function. We therefore term it a persistent reduction for the aforementioned class of regularized loss minimization problems. The reduction can be implemented efficiently via an extreme-ray identification subroutine applied to a polyhedral cone formed from the data points. We employ an existing output-sensitive algorithm for extreme-ray identification, which makes our guarantee and algorithm applicable to ultra-high-dimensional problems.
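The screening idea can be illustrated with a minimal sketch. Assuming the relevant cone is the conic hull of the feature columns of the data matrix (the paper's exact cone construction and guarantee conditions may differ), a column that is a nonnegative combination of the other columns is not an extreme ray and would be a candidate for elimination. The helper names below (`is_extreme_ray`, `persistent_screen`) are hypothetical; membership is tested with a small linear feasibility program rather than the output-sensitive algorithm the paper uses.

```python
import numpy as np
from scipy.optimize import linprog

def is_extreme_ray(X, j):
    """Check whether column j of X is an extreme ray of the conic hull
    of the columns of X, by testing whether it can be written as a
    nonnegative combination of the remaining columns."""
    others = np.delete(X, j, axis=1)
    # Feasibility LP with zero objective: find c >= 0 with others @ c = X[:, j].
    res = linprog(c=np.zeros(others.shape[1]),
                  A_eq=others, b_eq=X[:, j],
                  bounds=(0, None), method="highs")
    return not res.success  # infeasible => column j is extreme

def persistent_screen(X):
    """Return indices of feature columns that survive the screening,
    i.e. the extreme rays of the cone spanned by the columns."""
    return [j for j in range(X.shape[1]) if is_extreme_ray(X, j)]

# Toy example: the third column [1, 1] equals [1, 0] + [0, 1],
# so it is not an extreme ray and gets screened out.
X = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
print(persistent_screen(X))  # -> [0, 1]
```

This per-column LP costs one feasibility solve per feature, so it is only a didactic stand-in; an output-sensitive extreme-ray algorithm, as referenced in the abstract, is what makes the approach viable in ultra-high dimensions.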
