Symmetry Subgroup Defense Against Adversarial Attacks

10/08/2022
by Blerta Lindqvist, et al.

Adversarial attacks and defenses disregard the lack of invariance of convolutional neural networks (CNNs), that is, the inability of CNNs to classify a sample and its symmetric transformations identically. This lack of invariance with respect to symmetry transformations is detrimental when classifying transformed original samples, but not necessarily detrimental when classifying transformed adversarial samples. For original images, the lack of invariance means that symmetrically transformed original samples are classified differently from their correct labels. For adversarial images, however, it means that symmetrically transformed adversarial images are classified differently from their incorrect adversarial labels. Might the lack of CNN invariance therefore revert symmetrically transformed adversarial samples to the correct classification? This paper answers the question affirmatively for a threat model that ranges from zero-knowledge adversaries to perfect-knowledge adversaries. Against perfect-knowledge adversaries, we base our defense on a Klein four symmetry subgroup that incorporates an additional artificial symmetry: pixel intensity inversion. The closure property of the subgroup not only provides a framework for the accuracy evaluation but also confines the transformations that an adaptive, perfect-knowledge adversary can apply. We find that by using only symmetry, no adversarial samples, and no changes to the model architecture or parameters, we can defend against white-box PGD adversarial attacks, surpassing the PGD adversarial training defense by up to 50% on ImageNet. The proposed defense also maintains and surpasses the classification accuracy for non-adversarial samples.
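A minimal NumPy sketch of a Klein four subgroup of the kind the abstract describes, assuming 8-bit images with intensities in [0, 255] and horizontal flip as the natural symmetry; the function names and the flip choice are illustrative assumptions, not taken from the paper's code:

    import numpy as np

    # Klein four subgroup: identity, horizontal flip, artificial pixel
    # intensity inversion, and their composition. Every element is its
    # own inverse, and the set is closed under composition.

    def identity(x):
        return x

    def hflip(x):
        return x[:, ::-1, ...]   # flip the width axis of an (H, W, C) image

    def invert(x):
        return 255 - x           # artificial pixel-intensity inversion

    def hflip_invert(x):
        return invert(hflip(x))

    GROUP = {"e": identity, "h": hflip, "i": invert, "hi": hflip_invert}

    # Closure check: composing any two elements lands back in the subgroup.
    img = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.int32)
    for a_name, a in GROUP.items():
        for b_name, b in GROUP.items():
            composed = a(b(img))
            assert any(np.array_equal(composed, g(img)) for g in GROUP.values()), \
                f"{a_name} after {b_name} escaped the subgroup"
    print("Klein four closure verified: every composition stays in the subgroup.")

Closure is what confines an adaptive adversary: transforming an input by any subgroup element, or any composition of subgroup elements, still yields a sample that the subgroup's transformations cover, which is the confinement argument made in the abstract.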

