On Norm-Agnostic Robustness of Adversarial Training

05/15/2019
by Bai Li et al.

Adversarial examples are carefully perturbed inputs designed to fool machine learning models. A well-acknowledged defense against such examples is adversarial training, in which adversarial examples are injected into the training data to increase robustness. In this paper, we propose a new attack that unveils an undesired property of state-of-the-art adversarial training: it fails to achieve robustness against perturbations in the ℓ_2 and ℓ_∞ norms simultaneously. We also discuss a possible solution to this issue and its limitations.
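To make the two threat models in the abstract concrete, the sketch below implements a generic PGD-style attack under either an ℓ_∞ or an ℓ_2 perturbation budget, the standard way adversarial examples are generated for adversarial training. This is a minimal illustration in PyTorch, not the paper's exact attack: the model, loss function, and all hyperparameters (eps, step_size, steps) are hypothetical placeholders.

import torch


def pgd_attack(model, x, y, loss_fn, eps, step_size, steps, norm="linf"):
    # Generic projected-gradient-descent attack sketch; all arguments are
    # hypothetical placeholders, not the setup used in the paper.
    # x is assumed to be a batch of images of shape (B, C, H, W) in [0, 1].
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            if norm == "linf":
                # l_inf: signed-gradient step, then clip into the eps-box.
                x_adv = x_adv + step_size * grad.sign()
                x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
            else:
                # l_2: normalized-gradient step, then project onto the eps-ball.
                g_norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12)
                x_adv = x_adv + step_size * grad / g_norm.view(-1, 1, 1, 1)
                delta = x_adv - x
                d_norm = delta.flatten(1).norm(dim=1).clamp_min(1e-12)
                delta = delta * (eps / d_norm.view(-1, 1, 1, 1)).clamp(max=1.0)
                x_adv = x + delta
            x_adv = x_adv.clamp(0.0, 1.0)  # stay in the valid pixel range
    return x_adv.detach()

During adversarial training, each clean minibatch would typically be replaced (or augmented) with the output of pgd_attack before the usual gradient update. The paper's point is that training against one norm's attack in this way need not confer robustness against the other norm.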

Related research

12/27/2019  Efficient Adversarial Training with Transferable Adversarial Examples
Adversarial training is an effective defense method to protect classific...

09/23/2020  Semantics-Preserving Adversarial Training
Adversarial training is a defense technique that improves adversarial ro...

02/02/2021  Recent Advances in Adversarial Training for Adversarial Robustness
Adversarial training is one of the most effective approaches defending a...

07/25/2022  SegPGD: An Effective and Efficient Adversarial Attack for Evaluating and Boosting Segmentation Robustness
Deep neural network-based image classifications are vulnerable to advers...

05/26/2021  Deep Repulsive Prototypes for Adversarial Robustness
While many defences against adversarial examples have been proposed, fin...

02/16/2023  On the Effect of Adversarial Training Against Invariance-based Adversarial Examples
Adversarial examples are carefully crafted attack points that are suppos...

10/14/2019  Confidence-Calibrated Adversarial Training: Towards Robust Models Generalizing Beyond the Attack Used During Training
Adversarial training is the standard to train models robust against adve...
