Oracle-Efficient Online Learning for Beyond Worst-Case Adversaries

02/17/2022
by Nika Haghtalab, et al.

In this paper, we study oracle-efficient algorithms for beyond-worst-case analysis of online learning. We focus on two settings. First, the smoothed analysis setting of [RST11, HRS21], where the adversary is constrained to generating samples from distributions whose density is upper bounded by 1/σ times the uniform density. Second, the setting of K-hint transductive learning, where the learner is given access to K hints per time step that are guaranteed to include the true instance. We give the first known oracle-efficient algorithms for both settings whose regret bounds depend only on the VC dimension d of the class and on the parameters σ and K that capture the power of the adversary. In particular, we achieve oracle-efficient regret bounds of O(√(T d σ^{-1/2})) and O(√(T d K)) for these settings, respectively. For the smoothed analysis setting, our results give the first oracle-efficient algorithm for online learning with smoothed adversaries [HRS21]. This contrasts with the computational separation between online learning with worst-case adversaries and offline learning established by [HK16]. Our algorithms also achieve improved bounds for the worst-case setting with small domains. In particular, we give an oracle-efficient algorithm with regret O(√(T (d|𝒳|)^{1/2})), which refines the earlier O(√(T|𝒳|)) bound of [DS16].
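To make the oracle-efficient template concrete, the sketch below shows one common way such algorithms are structured: the learner never enumerates the hypothesis class directly, but instead perturbs the observed history with samples hallucinated from the uniform base measure and makes a single call to an offline ERM oracle each round. This is a minimal illustration under assumed choices (a toy threshold class on [0, 1], a brute-force oracle, and a fixed number of hallucinated samples); it is not the paper's exact algorithm or analysis.

```python
# Illustrative sketch (not the paper's exact algorithm) of oracle-efficient
# online learning against a smoothed adversary: each round, the observed
# history is augmented with "hallucinated" examples drawn from the uniform
# base measure with random labels, and a single ERM-oracle call is made.
import random

def erm_oracle(data, thresholds):
    """Brute-force ERM oracle over 1-D threshold classifiers h_b(x) = 1[x >= b]."""
    best_b, best_err = thresholds[0], float("inf")
    for b in thresholds:
        err = sum(int(x >= b) != y for x, y in data)
        if err < best_err:
            best_b, best_err = b, err
    return best_b

def oracle_efficient_learner(stream, n_fake=20, seed=0):
    """Predict labels for the (x_t, y_t) pairs in `stream` using only ERM calls.

    In a full treatment, the amount of hallucinated data would be tuned to the
    smoothness parameter sigma, the horizon T, and the VC dimension; here it is
    a fixed illustrative constant.
    """
    rng = random.Random(seed)
    thresholds = [i / 100 for i in range(101)]  # toy class: thresholds on [0, 1]
    history, mistakes = [], 0
    for x_t, y_t in stream:
        # Hallucinated uniform samples with coin-flip labels play the role of
        # the perturbation in follow-the-perturbed-leader style algorithms.
        fake = [(rng.random(), rng.randint(0, 1)) for _ in range(n_fake)]
        b_t = erm_oracle(history + fake, thresholds)
        mistakes += int(int(x_t >= b_t) != y_t)
        history.append((x_t, y_t))
    return mistakes

if __name__ == "__main__":
    # A sigma-smooth adversary: instances land uniformly in an interval of
    # mass sigma = 0.1, labeled by the fixed threshold 0.5.
    rng = random.Random(1)
    T, sigma = 500, 0.1
    xs = [0.45 + sigma * rng.random() for _ in range(T)]
    stream = [(x, int(x >= 0.5)) for x in xs]
    print("mistakes over", T, "rounds:", oracle_efficient_learner(stream))
```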


Related research:

- 10/17/2022: Adaptive Oracle-Efficient Online Learning
- 08/18/2012: Online Learning with Predictable Sequences
- 04/27/2011: Online Learning: Stochastic and Constrained Adversaries
- 07/18/2023: Oracle Efficient Online Multicalibration and Omniprediction
- 02/16/2021: Smoothed Analysis with Adaptive Adversaries
- 04/04/2023: Online Learning with Adversaries: A Differential Inclusion Analysis
- 02/09/2023: The Sample Complexity of Approximate Rejection Sampling with Applications to Smoothed Online Learning
