On the Effect of Adversarial Training Against Invariance-based Adversarial Examples

02/16/2023
by Roland Rauter, et al.

Adversarial examples are carefully crafted attack inputs designed to fool machine learning classifiers. In recent years, the field of adversarial machine learning has been studied extensively, especially perturbation-based adversarial examples, in which a perturbation imperceptible to humans is added to an image. Adversarial training can be used to achieve robustness against such inputs. Another type are invariance-based adversarial examples, where an image is semantically modified so that the model's predicted class does not change, but the class assigned by humans does. How to ensure robustness against this type of adversarial example has not yet been explored. This work addresses the impact of adversarial training with invariance-based adversarial examples on a convolutional neural network (CNN). We show that when adversarial training with both invariance-based and perturbation-based adversarial examples is applied, it should be conducted simultaneously rather than consecutively; this procedure achieves relatively high robustness against both types of adversarial examples. Additionally, we find that the algorithm used in prior work to generate invariance-based adversarial examples does not determine labels correctly, and we therefore use human-determined labels.
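To make the perturbation-based case concrete, the following is a minimal sketch of an FGSM-style attack on a toy logistic classifier: the input is shifted by a small step in the direction of the sign of the loss gradient, which is enough to flip the prediction while barely changing the input. The weights, inputs, and step size here are illustrative assumptions, not values from the paper.

```python
import math

# Toy logistic "classifier": p(y=1 | x) = sigmoid(w.x + b).
# FGSM-style perturbation: x_adv = x + eps * sign(dL/dx), which
# increases the loss for the true label. All numbers are made up
# for illustration; the paper works with a CNN on images instead.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y_true, eps):
    # For binary cross-entropy loss, dL/dx_i = (p - y) * w_i.
    p = predict(w, b, x)
    grad = [(p - y_true) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

w, b = [2.0, -1.0], 0.0
x, y = [0.3, 0.1], 1                 # clean input, true class 1
x_adv = fgsm(w, b, x, y, eps=0.5)

print(predict(w, b, x) > 0.5)        # True: clean input classified correctly
print(predict(w, b, x_adv) > 0.5)    # False: perturbed input misclassified
```

An invariance-based adversarial example is the opposite failure mode: the input is changed enough that a human would assign a different label, yet the model's prediction stays the same, which is why the paper argues both example types must enter adversarial training together.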

