Localization, Convexity, and Star Aggregation

05/19/2021
by Suhas Vijaykumar, et al.

Offset Rademacher complexities have been shown to imply sharp, data-dependent upper bounds for the square loss in a broad class of problems, including improper statistical learning and online learning. We show that in the statistical setting, the offset complexity upper bound can be generalized to any loss satisfying a certain uniform convexity condition. Notably, this condition is shown to also capture exponential concavity and self-concordance, unifying several apparently disparate results. By a unified geometric argument, these bounds translate directly to improper learning in a non-convex class using Audibert's "star algorithm." As applications, we recover the optimal rates for proper and improper learning with the p-loss, 1 < p < ∞, closing the gap for p > 2, and show that improper variants of empirical risk minimization can attain fast rates for logistic regression and other generalized linear models.
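For context, the offset Rademacher complexity at the center of the abstract is, in the standard formulation of Liang, Rakhlin, and Sridharan (the "Learning with Square Loss" paper listed under related research below), a Rademacher average penalized by a negative quadratic term; the offset parameter c > 0 is what produces localization. A sketch of the standard definition:

```latex
% Offset Rademacher complexity of a class F with offset parameter c > 0,
% following Liang, Rakhlin, and Sridharan; eps_1, ..., eps_n are i.i.d.
% Rademacher signs and x_1, ..., x_n are the sample points.
\mathcal{R}^{\mathrm{off}}_n(\mathcal{F}; c)
  = \mathbb{E}_{\epsilon} \sup_{f \in \mathcal{F}}
    \frac{1}{n} \sum_{i=1}^{n}
    \Bigl[ \epsilon_i f(x_i) - c\, f(x_i)^2 \Bigr].
```

Audibert's star algorithm, invoked in the abstract for improper learning over a non-convex class, is a two-step procedure: compute the empirical risk minimizer ĝ over the class, then minimize empirical risk again over the star hull of the class around ĝ. Below is a minimal sketch for the square loss over a finite class represented by its predictions on the sample; the function name star_algorithm and the grid search over the mixing weight lam are illustrative choices, not taken from the paper.

```python
import numpy as np

def star_algorithm(preds, y, n_lambdas=101):
    """Two-step star aggregation for the square loss over a finite class.

    preds : (M, n) array of predictions of the M class members on the sample
    y     : (n,) array of observed responses
    Returns the predictions of the aggregated (possibly improper) estimator.
    """
    # Step 1: ordinary empirical risk minimization over the class.
    risks = np.mean((preds - y) ** 2, axis=1)
    g_hat = preds[np.argmin(risks)]

    # Step 2: minimize empirical risk over the star hull of the class
    # around g_hat, i.e. over {lam * f + (1 - lam) * g_hat : lam in [0, 1]}.
    best_risk, best_pred = np.inf, g_hat
    for f in preds:
        for lam in np.linspace(0.0, 1.0, n_lambdas):
            h = lam * f + (1.0 - lam) * g_hat
            risk = np.mean((h - y) ** 2)
            if risk < best_risk:
                best_risk, best_pred = risk, h
    return best_pred
```

The output may fall outside the original class (the procedure is improper), but the star hull is star-shaped around ĝ by construction, which is the geometric property that allows the offset complexity bounds to carry over to non-convex classes.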


Related research

Exponential Tail Local Rademacher Complexity Risk Bounds Without the Bernstein Condition (02/23/2022)
The local Rademacher complexity framework is one of the most successful ...

Differentially Private Empirical Risk Minimization Revisited: Faster and More General (02/14/2018)
In this paper we study the differentially private Empirical Risk Minimiz...

Learning with Square Loss: Localization through Offset Rademacher Complexity (02/21/2015)
We consider regression with square loss and general classes of functions...

Non-Asymptotic Analysis of Excess Risk via Empirical Risk Landscape (12/04/2020)
In this paper, we provide a unified analysis of the excess risk of the m...

Agnostic Learnability of Halfspaces via Logistic Loss (01/31/2022)
We investigate approximation guarantees provided by logistic regression ...

Statistical optimality conditions for compressive ensembles (06/02/2021)
We present a framework for the theoretical analysis of ensembles of low-...

Orthogonal Statistical Learning with Self-Concordant Loss (04/30/2022)
Orthogonal statistical learning and double machine learning have emerged...
