On Norm-Agnostic Robustness of Adversarial Training

05/15/2019
by Bai Li, et al.

Adversarial examples are carefully perturbed inputs designed to fool machine learning models. A widely acknowledged defense against such examples is adversarial training, in which adversarial examples are injected into the training data to increase robustness. In this paper, we propose a new attack that unveils an undesired property of state-of-the-art adversarial training: it fails to achieve robustness against perturbations in the ℓ_2 and ℓ_∞ norms simultaneously. We also discuss a possible solution to this issue and its limitations.
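For intuition about why the two threat models differ, below is a minimal PGD-style sketch contrasting the perturbation update and projection under the ℓ_∞ and ℓ_2 norms. This is an illustration under our own assumptions, not the paper's exact attack: the function name pgd_perturb, the PyTorch framing, and the 4-D image-batch shape are all hypothetical choices for the example.

import torch

def pgd_perturb(model, loss_fn, x, y, eps, step, iters, norm="linf"):
    # Illustrative PGD attack supporting two threat models.
    # Assumes x is a 4-D image batch (N, C, H, W).
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        loss = loss_fn(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            g = delta.grad
            if norm == "linf":
                # l_inf: step along the sign of the gradient,
                # then clip the perturbation back into the eps-ball.
                delta += step * g.sign()
                delta.clamp_(-eps, eps)
            else:
                # l_2: step along the per-example normalized gradient,
                # then project back onto the l_2 eps-ball.
                g_norm = g.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
                delta += step * g / g_norm
                d_norm = delta.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
                delta *= (eps / d_norm).clamp(max=1.0)
        delta.grad.zero_()
    return (x + delta).detach()

In adversarial training, the clean batch x would be replaced by the output of such an attack before the usual gradient step. The paper's observation is that training against one of these threat models does not, by itself, confer robustness against the other.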
