On the power of adaptivity in statistical adversaries

11/19/2021
by Guy Blanc, et al.

We study a fundamental question concerning adversarial noise models in statistical problems where the algorithm receives i.i.d. draws from a distribution 𝒟. The definitions of these adversaries specify the type of allowable corruptions (noise model) as well as when these corruptions can be made (adaptivity); the latter differentiates between oblivious adversaries that can only corrupt the distribution 𝒟 and adaptive adversaries that can have their corruptions depend on the specific sample S that is drawn from 𝒟. In this work, we investigate whether oblivious adversaries are effectively equivalent to adaptive adversaries, across all noise models studied in the literature. Specifically, can the behavior of an algorithm 𝒜 in the presence of oblivious adversaries always be well-approximated by that of an algorithm 𝒜' in the presence of adaptive adversaries? Our first result shows that this is indeed the case for the broad class of statistical query algorithms, under all reasonable noise models. We then show that in the specific case of additive noise, this equivalence holds for all algorithms. Finally, we map out an approach towards proving this statement in its fullest generality, for all algorithms and under all reasonable noise models.
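To make the oblivious/adaptive distinction concrete, here is a small illustrative sketch (not from the paper; the corruption strategies and parameters are hypothetical choices for a toy mean-estimation setting). An oblivious adversary commits to a corrupted distribution before any sample is drawn, while an adaptive adversary inspects the realized sample S and chooses its corruptions accordingly:

```python
import random
import statistics

def sample(dist, n, rng):
    """Draw n i.i.d. points from dist."""
    return [dist(rng) for _ in range(n)]

# Oblivious adversary: corrupts the distribution D itself, before any
# sample is drawn. Here: each draw is independently replaced by a fixed
# outlier value with probability eps, regardless of the realized sample.
def oblivious_corrupt(dist, eps, outlier):
    def corrupted(rng):
        if rng.random() < eps:
            return outlier
        return dist(rng)
    return corrupted

# Adaptive adversary: sees the realized sample S and then corrupts an
# eps-fraction of it, choosing WHICH points to replace based on S.
# This toy strategy drops the k smallest points and substitutes
# outliers, biasing a mean estimate more than the oblivious strategy can.
def adaptive_corrupt(sample_s, eps, outlier):
    k = int(eps * len(sample_s))
    s = sorted(sample_s)          # inspect the drawn sample ...
    return s[k:] + [outlier] * k  # ... replace its k smallest points

rng = random.Random(0)
clean = lambda r: r.gauss(0.0, 1.0)  # D = N(0, 1)

s_obl = sample(oblivious_corrupt(clean, 0.1, 100.0), 1000, rng)
s_ada = adaptive_corrupt(sample(clean, 1000, rng), 0.1, 100.0)

print(statistics.mean(s_obl))  # pulled far from the true mean 0
print(statistics.mean(s_ada))  # pulled even further, using knowledge of S
```

The paper's question, in these terms, is whether an algorithm's behavior against the first kind of corruption can always be matched, by a possibly different algorithm, against the second.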

