High-Probability Risk Bounds via Sequential Predictors

08/15/2023
by Dirk van der Hoeven, et al.

Online learning methods yield sequential regret bounds under minimal assumptions and provide in-expectation risk bounds for statistical learning. However, despite the apparent advantage of online guarantees over their statistical counterparts, recent findings indicate that in many important cases, regret bounds may not guarantee tight high-probability risk bounds in the statistical setting. In this work we show that online-to-batch conversions applied to general online learning algorithms can bypass this limitation. Via a general second-order correction to the loss function defining the regret, we obtain nearly optimal high-probability risk bounds for several classical statistical estimation problems, such as discrete distribution estimation, linear regression, logistic regression, and conditional density estimation. Our analysis relies on the fact that many online learning algorithms are improper, as they are not restricted to use predictors from a given reference class. The improper nature of our estimators enables significant improvements in the dependencies on various problem parameters. Finally, we discuss some computational advantages of our sequential algorithms over their existing batch counterparts.
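The abstract's central mechanism is an online-to-batch conversion: run an online learner over the sample, with a second-order correction added to the per-round loss, and aggregate the resulting sequence of predictors. The sketch below is only a hedged illustration of that general shape, not the paper's algorithm: the learner (online gradient descent on squared loss), the averaging of iterates, and the placeholder correction term `base_loss + lam * base_loss**2` are all assumptions made for the example.

```python
import numpy as np

def online_to_batch_ols(X, y, eta=0.01, lam=0.1):
    """Illustrative sketch: online gradient descent on a second-order
    corrected squared loss, followed by online-to-batch averaging.

    The correction  l_t(w) + lam * l_t(w)**2  is a placeholder meant to
    convey the idea of penalising large per-round losses; it is NOT the
    correction analysed in the paper.
    """
    n, d = X.shape
    w = np.zeros(d)          # current online iterate
    avg = np.zeros(d)        # running average of iterates (batch output)
    for t in range(n):
        resid = X[t] @ w - y[t]
        base_loss = resid ** 2
        grad_base = 2.0 * resid * X[t]
        # Gradient of the corrected loss  base_loss + lam * base_loss**2.
        grad = (1.0 + 2.0 * lam * base_loss) * grad_base
        w = w - eta * grad
        avg += (w - avg) / (t + 1)
    return avg               # averaged predictor returned by the conversion


# Usage on synthetic data (purely illustrative):
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=500)
print(online_to_batch_ols(X, y))
```

Averaging the iterates is one standard way to turn an online regret guarantee into a statistical risk guarantee; the paper's point is that a suitable second-order correction makes such conversions yield high-probability, not just in-expectation, risk bounds.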


