Regularized ERM on random subspaces

06/17/2020
by Andrea Della Vecchia, et al.

We study a natural extension of classical empirical risk minimization, where the hypothesis space is a random subspace of a given space. In particular, we consider possibly data-dependent subspaces spanned by a random subset of the data. This approach naturally leads to computational savings, but the question is whether the corresponding learning accuracy is degraded. These statistical-computational tradeoffs have recently been explored for the least squares loss and for self-concordant loss functions, such as the logistic loss. Here, we work to extend these results to convex Lipschitz loss functions that might not be smooth, such as the hinge loss used in support vector machines. Our main results show the existence of different regimes, depending on how hard the learning problem is, in which computational efficiency can be improved with no loss in performance. Theoretical results are complemented with numerical experiments on large-scale benchmark data sets.
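To make the subsampling idea concrete, here is a minimal sketch, not the paper's implementation: a Nystrom feature map built from randomly chosen landmark points stands in for the random subspace spanned by a subset of the data, and a linear hinge-loss solver performs the regularized ERM step. The dataset, kernel, number of landmarks, and regularization constant are all illustrative assumptions, and scikit-learn's Nystroem and LinearSVC are used as generic stand-ins for the construction analyzed in the paper.

# Minimal sketch: regularized hinge-loss ERM on a random (Nystrom) subspace.
# Assumptions: synthetic data, RBF kernel, m = 300 landmarks, C = 1.0.
from sklearn.datasets import make_classification
from sklearn.kernel_approximation import Nystroem
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=20000, n_features=50, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# m = 300 randomly sampled training points span the random subspace; the
# hinge-loss ERM problem is then solved in that m-dimensional feature space,
# avoiding the O(n^2) memory cost of the full kernel SVM.
model = make_pipeline(
    Nystroem(kernel="rbf", gamma=0.1, n_components=300, random_state=0),
    LinearSVC(loss="hinge", C=1.0, dual=True, max_iter=10000),
)
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))

Varying n_components trades computation for statistical accuracy; the regimes identified in the paper describe when a small number of landmarks already matches the performance of the full hypothesis space.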


Related research

09/14/2020: Risk Bounds for Robust Deep Learning
It has been observed that certain loss functions can render deep-learnin...

03/09/2020: Risk Analysis of Divide-and-Conquer ERM
Theoretical analysis of the divide-and-conquer based distributed learnin...

03/09/2020: Theoretical Analysis of Divide-and-Conquer ERM: Beyond Square Loss and RKHS
Theoretical analysis of the divide-and-conquer based distributed learnin...

03/27/2023: On the Connection between L_p and Risk Consistency and its Implications on Regularized Kernel Methods
As a predictor's quality is often assessed by means of its risk, it is n...

07/15/2022: Support Vector Machines with the Hard-Margin Loss: Optimal Training via Combinatorial Benders' Cuts
The classical hinge-loss support vector machines (SVMs) model is sensiti...

05/09/2019: Two-stage Best-scored Random Forest for Large-scale Regression
We propose a novel method designed for large-scale regression problems, ...

11/30/2020: Persistent Reductions in Regularized Loss Minimization for Variable Selection
In the context of regularized loss minimization with polyhedral gauges, ...
