Bayesian Robustness: A Nonasymptotic Viewpoint
We study the problem of robustly estimating the posterior distribution for the setting where observed data can be contaminated with potentially adversarial outliers. We propose Rob-ULA, a robust variant of the Unadjusted Langevin Algorithm (ULA), and provide a finite-sample analysis of its sampling distribution. In particular, we show that after T = Õ(d/ε_acc) iterations, we can sample from p_T such that dist(p_T, p^*) ≤ ε_acc + Õ(ϵ), where ϵ is the fraction of corruptions. We corroborate our theoretical analysis with experiments on both synthetic and real-world data sets for mean estimation, regression, and binary classification.
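The abstract does not spell out the update rule, so the following is only a rough, hypothetical sketch of what a Langevin sampler with a robustified gradient step could look like; it is not the paper's Rob-ULA. The robust aggregation shown (a coordinate-wise trimmed mean of per-sample gradients) and all names and parameters (trimmed_mean, robust_langevin, step_size, trim_frac, grad_log_prior, grad_log_lik) are assumptions made for illustration.

```python
# Illustrative sketch only: vanilla ULA with the gradient sum replaced by a
# robust aggregate of per-sample gradients. The actual Rob-ULA update and its
# robust estimator are defined in the paper; this code is an assumption.

import numpy as np

def trimmed_mean(grads, trim_frac):
    """Coordinate-wise trimmed mean of per-sample gradients, shape (n, d)."""
    n = grads.shape[0]
    k = int(np.floor(trim_frac * n))
    sorted_g = np.sort(grads, axis=0)                  # sort each coordinate
    kept = sorted_g[k:n - k] if n - 2 * k > 0 else sorted_g
    return kept.mean(axis=0)

def robust_langevin(theta0, data, grad_log_prior, grad_log_lik,
                    step_size=1e-3, trim_frac=0.1, n_iters=1000, rng=None):
    """Unadjusted Langevin updates using a robust per-sample gradient aggregate."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float)
    n, d = len(data), theta.shape[0]
    for _ in range(n_iters):
        # Per-sample gradients of the log-likelihood; corrupted rows may be arbitrary.
        per_sample = np.stack([grad_log_lik(theta, x) for x in data])  # (n, d)
        # Robust aggregate in place of the plain sum used by vanilla ULA.
        g = grad_log_prior(theta) + n * trimmed_mean(per_sample, trim_frac)
        # Langevin step: gradient ascent on the log-posterior plus Gaussian noise.
        theta = theta + step_size * g + np.sqrt(2 * step_size) * rng.standard_normal(d)
    return theta
```

Under this sketch, the trimming fraction would be chosen in line with the corruption level ϵ; the paper's analysis governs the actual choice of robust estimator, step size, and iteration count.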