Exploring Local Norms in Exp-concave Statistical Learning

02/21/2023
by Nikita Puchkin, et al.

We consider the problem of stochastic convex optimization with exp-concave losses using Empirical Risk Minimization (ERM) in a convex class. Answering a question raised in several prior works, we provide an O(d/n + log(1/δ)/n) excess risk bound valid for a wide class of bounded exp-concave losses, where d is the dimension of the convex reference set, n is the sample size, and δ is the confidence level. Our result is based on a unified geometric assumption on the gradient of the losses and the notion of local norms.
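To make the setup concrete, here is a minimal sketch of ERM over a bounded convex class with an exp-concave loss. The squared loss on a bounded domain is a standard example of an exp-concave loss; projected gradient descent is used here as a generic stand-in ERM solver, and all names and constants (the ball radius, step size, and step count) are illustrative choices, not the paper's method.

```python
import numpy as np

def project_ball(theta, radius=1.0):
    # Euclidean projection onto the centered ball of the given radius:
    # this is the convex reference set over which ERM is performed.
    norm = np.linalg.norm(theta)
    return theta if norm <= radius else theta * (radius / norm)

def erm_squared_loss(X, y, radius=1.0, steps=500, lr=0.1):
    # Projected gradient descent on the empirical risk
    # R_n(theta) = (1/n) * sum_i (x_i . theta - y_i)^2,
    # an exp-concave loss when the domain and data are bounded.
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(steps):
        grad = (2.0 / n) * X.T @ (X @ theta - y)
        theta = project_ball(theta - lr * grad, radius)
    return theta

# Synthetic well-specified data: y = x . theta* + noise.
rng = np.random.default_rng(0)
n, d = 2000, 5
theta_star = project_ball(rng.normal(size=d), 0.5)
X = rng.normal(size=(n, d))
y = X @ theta_star + 0.1 * rng.normal(size=n)

theta_hat = erm_squared_loss(X, y, radius=1.0)
emp_risk = lambda t: np.mean((X @ t - y) ** 2)
```

In this regime the ERM solution `theta_hat` attains empirical risk no larger than that of `theta_star` (which lies in the feasible ball), and its excess risk shrinks at the fast d/n rate the abstract describes, rather than the slower sqrt(d/n) rate typical of generic Lipschitz convex losses.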


