Adversarially Robust Learning with Tolerance
We study the problem of tolerant adversarial PAC learning with respect to metric perturbation sets. In adversarial PAC learning, an adversary is allowed to replace a test point x with an arbitrary point in a closed ball of radius r centered at x. In the tolerant version, the error of the learner is compared with the best achievable error with respect to a slightly larger perturbation radius (1+γ)r. For perturbation sets with doubling dimension d, we show that a variant of the natural “perturb-and-smooth” algorithm PAC learns any hypothesis class ℋ with VC dimension v in the γ-tolerant adversarial setting with O(v(1+1/γ)^O(d)/ε) samples. This is the first such general guarantee with linear dependence on v, even for the special case where the domain is the real line and the perturbation sets are closed balls (intervals) of radius r. However, the proposed guarantees for the perturb-and-smooth algorithm currently hold only in the tolerant robust realizable setting and exhibit exponential dependence on d. We additionally propose an alternative learning method, based on sample compression, which yields sample complexity bounds with only linear dependence on the doubling dimension, even in the more general agnostic case.
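To make the high-level idea concrete, here is a minimal sketch of a perturb-and-smooth style procedure for the special case mentioned above: the domain is the real line and perturbation sets are intervals of radius r. The base learner (a simple threshold classifier), the smoothing radius γ·r, and the number of votes are all illustrative assumptions, not the paper's exact construction.

```python
import random
from collections import Counter

def learn_threshold(xs, ys):
    """Toy base (non-robust) PAC learner: picks the threshold t minimizing
    empirical error, predicting label 1 for x >= t. Labels are 0/1."""
    best_t, best_err = None, float("inf")
    for t in sorted(xs) + [max(xs) + 1.0]:
        err = sum(int(x >= t) != y for x, y in zip(xs, ys))
        if err < best_err:
            best_t, best_err = t, err
    return lambda x: int(x >= best_t)

def perturb_and_smooth(xs, ys, r, gamma, n_votes=101, seed=0):
    """Sketch of a perturb-and-smooth learner for 1-D interval perturbations.

    Training: each sample is replaced by a random point in its radius-r ball
    before running the base learner. Prediction: a majority vote of the base
    hypothesis over random points near the test point (the gamma*r smoothing
    radius here is an assumption for illustration)."""
    rng = random.Random(seed)
    xs_perturbed = [x + rng.uniform(-r, r) for x in xs]
    h = learn_threshold(xs_perturbed, ys)

    def smoothed(x):
        votes = Counter(h(x + rng.uniform(-gamma * r, gamma * r))
                        for _ in range(n_votes))
        return votes.most_common(1)[0][0]

    return smoothed

# Usage on well-separated 1-D data:
xs = [0.0, 1.0, 2.0, 8.0, 9.0, 10.0]
ys = [0, 0, 0, 1, 1, 1]
h = perturb_and_smooth(xs, ys, r=0.5, gamma=0.1)
print(h(0.0), h(10.0))
```

The smoothing step is what buys tolerance: by voting over a neighborhood rather than trusting the base hypothesis pointwise, small adversarial shifts of the test point are unlikely to flip the prediction.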