Symmetry Defense Against XGBoost Adversarial Perturbation Attacks

08/10/2023
by Blerta Lindqvist, et al.

We examine whether symmetry can be used to defend tree-based ensemble classifiers such as gradient-boosting decision trees (GBDTs) against adversarial perturbation attacks. The idea is based on a recent symmetry defense for convolutional neural network classifiers (CNNs) that utilizes CNNs' lack of invariance with respect to symmetries. CNNs lack invariance because they can classify a symmetric sample, such as a horizontally flipped image, differently from the original sample. CNNs' lack of invariance also means that CNNs can classify symmetric adversarial samples differently from the incorrect classification of the adversarial samples themselves. Using this lack of invariance, the recent CNN symmetry defense has shown that the classification of symmetric adversarial samples reverts to the correct sample classification. In order to apply the same symmetry defense to GBDTs, we examine GBDT invariance and are the first to show that GBDTs also lack invariance with respect to symmetries. We apply and evaluate the GBDT symmetry defense for nine datasets against six perturbation attacks with a threat model that ranges from zero-knowledge to perfect-knowledge adversaries. Using the feature inversion symmetry against zero-knowledge adversaries, we achieve up to 100% accuracy on adversarial samples even when default and robust classifiers have 0% accuracy. Using the feature inversion and horizontal flip symmetries against perfect-knowledge adversaries, we achieve up to over 95% accuracy on adversarial samples for the GBDT classifier of the F-MNIST dataset even when default and robust classifiers have 0% accuracy.
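The following is a minimal sketch of the core idea for an XGBoost classifier, assuming features scaled to [0, 1]; the inversion map x -> 1 - x, the synthetic dataset, and the equal-weight probability averaging are illustrative assumptions, not necessarily the paper's exact construction. It shows the two ingredients the abstract describes: measuring lack of invariance (the classifier can change its prediction on symmetric inputs) and using a symmetric view of each sample to recover a classification.

```python
# Sketch of a feature-inversion symmetry defense for XGBoost (illustrative only).
import numpy as np
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

# Hypothetical tabular data; features scaled to [0, 1] so inversion is well defined.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X = MinMaxScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def invert(X):
    """Feature-inversion symmetry for [0, 1]-scaled features."""
    return 1.0 - X

# One GBDT trained on the original features, one on the inverted (symmetric) view.
clf_orig = xgb.XGBClassifier(n_estimators=200, max_depth=4).fit(X_tr, y_tr)
clf_inv = xgb.XGBClassifier(n_estimators=200, max_depth=4).fit(invert(X_tr), y_tr)

# Lack of invariance: the original classifier can disagree with itself when the
# same samples are presented through the symmetry, so a perturbation crafted
# against it need not carry over to the symmetric view.
disagree = np.mean(clf_orig.predict(X_te) != clf_orig.predict(invert(X_te)))
print(f"prediction change under feature inversion: {disagree:.2%}")

# Defense sketch: combine the original and symmetric predictions (here by
# averaging class probabilities) and classify with the combined score.
proba = 0.5 * (clf_orig.predict_proba(X_te) + clf_inv.predict_proba(invert(X_te)))
print("defended accuracy:", np.mean(proba.argmax(axis=1) == y_te))
```

In this sketch the defense would be applied the same way to adversarially perturbed test samples: each sample is also classified through its symmetric version, so a perturbation tuned to the default classifier alone loses its effect on the symmetric view.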

