Stochastic Online Learning with Feedback Graphs: Finite-Time and Asymptotic Optimality

06/20/2022
by   Teodor V. Marinov, et al.

We revisit the problem of stochastic online learning with feedback graphs, with the goal of devising algorithms that are optimal, up to constants, both asymptotically and in finite time. We show that, surprisingly, the notion of optimal finite-time regret is not a uniquely defined property in this context and that, in general, it is decoupled from the asymptotic rate. We discuss alternative choices and propose a notion of finite-time optimality that we argue is meaningful. For that notion, we give an algorithm that admits quasi-optimal regret both in finite time and asymptotically.
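The abstract refers to the stochastic feedback-graph setting, in which playing an arm also reveals reward samples for its out-neighbours in a known graph. As a minimal illustrative sketch of that setting (not the paper's algorithm), the following Python simulation runs a plain UCB rule that pools all side observations; the graph, the reward means, and the function name are assumptions introduced here for illustration.

```python
import numpy as np

def ucb_with_feedback_graph(means, graph, horizon, seed=0):
    """Simulate stochastic online learning with a feedback graph.

    Playing arm i reveals a Bernoulli reward sample for every arm j with
    graph[i][j] == 1 (self-loops assumed). The learner is a plain UCB rule
    applied to all observed samples: an illustrative baseline only.
    """
    rng = np.random.default_rng(seed)
    n_arms = len(means)
    counts = np.zeros(n_arms)   # number of observations per arm
    sums = np.zeros(n_arms)     # sum of observed rewards per arm
    regret = 0.0
    best_mean = max(means)

    for t in range(1, horizon + 1):
        # Optimistic index over all observations; unobserved arms get priority.
        with np.errstate(divide="ignore", invalid="ignore"):
            ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
        ucb[counts == 0] = np.inf
        arm = int(np.argmax(ucb))
        regret += best_mean - means[arm]

        # Side observations: every out-neighbour of the played arm is revealed.
        for j in range(n_arms):
            if graph[arm][j]:
                counts[j] += 1
                sums[j] += rng.binomial(1, means[j])
    return regret

# Hypothetical example: 3 arms where playing arm 0 also reveals arm 1.
graph = [[1, 1, 0],
         [0, 1, 0],
         [0, 0, 1]]
print(ucb_with_feedback_graph([0.6, 0.5, 0.4], graph, horizon=10_000))
```

Because side observations are pooled, arms that are frequently revealed by others need to be played far less often, which is the basic mechanism behind the graph-dependent regret rates studied in the paper.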


Related research

On Computable Online Learning (02/08/2023)
We initiate a study of computable online (c-online) learning, which we a...

Thompson Sampling: An Asymptotically Optimal Finite Time Analysis (05/18/2012)
The question of the optimality of Thompson Sampling for solving the stoc...

Asymptotically Efficient Online Learning for Censored Regression Models Under Non-I.I.D Data (09/18/2023)
The asymptotically efficient online learning problem is investigated for...

Experimental Evidence for Asymptotic Non-Optimality of Comb Adversary Strategy (12/03/2019)
For the problem of prediction with expert advice in the adversarial sett...

Asymptotic Convergence of Thompson Sampling (11/08/2020)
Thompson sampling has been shown to be an effective policy across a vari...

Learning in games from a stochastic approximation viewpoint (06/08/2022)
We develop a unified stochastic approximation framework for analyzing th...

Online Learning under Delayed Feedback (06/04/2013)
Online learning with delayed feedback has received increasing attention ...
