Smoothed Analysis with Adaptive Adversaries

02/16/2021
by Nika Haghtalab, et al.

We prove novel algorithmic guarantees for several online problems in the smoothed analysis model. In this model, at each time step an adversary chooses an input distribution with density function bounded above by 1/σ times that of the uniform distribution; nature then samples an input from this distribution. Crucially, our results hold for adaptive adversaries that can choose an input distribution based on the decisions of the algorithm and the realizations of the inputs in the previous time steps. This paper presents a general technique for proving smoothed algorithmic guarantees against adaptive adversaries, in effect reducing the setting of adaptive adversaries to the simpler case of oblivious adversaries. We apply this technique to prove strong smoothed guarantees for three problems:

- Online learning: We consider the online prediction problem, where instances are generated from an adaptive sequence of σ-smooth distributions and the hypothesis class has VC dimension d. We bound the regret by Õ(√(Td ln(1/σ)) + d√(ln(T/σ))). This answers open questions of [RST11, Hag18].
- Online discrepancy minimization: We consider the online Komlós problem, where the input is generated from an adaptive sequence of σ-smooth and isotropic distributions on the ℓ_2 unit ball. We bound the ℓ_∞ norm of the discrepancy vector by Õ(ln^2(nT/σ)).
- Dispersion in online optimization: We consider online optimization of piecewise Lipschitz functions, where functions with ℓ discontinuities are chosen by a smoothed adaptive adversary, and show that the resulting sequence is (σ/√(Tℓ), Õ(√(Tℓ)))-dispersed. This matches the parameters of [BDV18] for oblivious adversaries, up to log factors.
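To make the smoothed model concrete, here is a minimal illustrative sketch in Python. It assumes the input domain is [0, 1], where the extremal σ-smooth distribution is the uniform distribution on a sub-interval of length σ (its density is exactly 1/σ times the uniform density). The adaptive rule used by the adversary below (centering its interval on the previous sample) is purely hypothetical and not from the paper; it only demonstrates that the adversary's choice may depend on the realized history.

```python
import random

def smooth_adversary_round(history, sigma):
    """One round of the sigma-smoothed model on the domain [0, 1] (sketch).

    The adversary picks a sub-interval of length sigma; the uniform
    distribution on that interval has density 1/sigma relative to the
    uniform distribution on [0, 1], so it is sigma-smooth. An adaptive
    adversary may base the choice on previously realized inputs.
    """
    # Hypothetical adaptive rule: center the interval on the last sample.
    center = history[-1] if history else 0.5
    lo = min(max(center - sigma / 2, 0.0), 1.0 - sigma)
    # Nature then samples from the adversary's chosen distribution.
    return random.uniform(lo, lo + sigma)

def run(T, sigma, seed=0):
    """Generate T inputs from an adaptive sequence of sigma-smooth distributions."""
    random.seed(seed)
    history = []
    for _ in range(T):
        history.append(smooth_adversary_round(history, sigma))
    return history

samples = run(T=1000, sigma=0.1)
assert all(0.0 <= x <= 1.0 for x in samples)
```

As σ → 1 the adversary is forced toward the uniform distribution, and as σ → 0 it regains nearly worst-case power; the paper's bounds interpolate between these regimes via the ln(1/σ) and ln(T/σ) factors.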


