Resisting Adversarial Attacks in Deep Neural Networks using Diverse Decision Boundaries

08/18/2022
by Manaar Alam, et al.

The security of deep learning (DL) systems is an extremely important field of study, as they are being deployed in numerous applications due to their ever-improving performance on challenging tasks. Despite their overwhelming promise, deep learning systems are vulnerable to crafted adversarial examples, which may be imperceptible to the human eye but can lead the model to misclassify. Ensemble-based protections against adversarial perturbations have either been shown to be vulnerable to stronger adversaries or to lack an end-to-end evaluation. In this paper, we develop a new ensemble-based solution that constructs defender models with decision boundaries diverse from those of the original model. The ensemble of classifiers, constructed by (1) transforming the input with a method called Split-and-Shuffle, and (2) restricting the significant features with a method called Contrast-Significant-Features, is shown to produce diverse gradients with respect to adversarial attacks, which reduces the chance of an adversarial example transferring from the original model to a defender model while targeting the same class. We present extensive experiments on standard image classification datasets, namely MNIST, CIFAR-10, and CIFAR-100, against state-of-the-art adversarial attacks to demonstrate the robustness of the proposed ensemble-based defense. We also evaluate robustness in the presence of a stronger adversary targeting all models within the ensemble simultaneously, and report overall false-positive and false-negative rates to estimate the overall performance of the proposed methodology.
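The abstract describes the Split-and-Shuffle transformation only at a high level, so the following is a minimal sketch rather than the paper's implementation: it assumes a square 2x2 block split and a fixed, seeded random permutation of the blocks (both are assumptions, and the function name and parameters are illustrative). A defender model trained on inputs transformed this way would develop a decision boundary diverse from the original model's.

import numpy as np

def split_and_shuffle(image, blocks=2, seed=0):
    # Hypothetical reading of "Split-and-Shuffle": cut an HxWxC image into a
    # blocks x blocks grid of tiles and reassemble them in a permuted order.
    # A fixed seed makes the permutation deterministic, so the defender model
    # sees consistently transformed inputs at training and inference time.
    h, w = image.shape[:2]
    bh, bw = h // blocks, w // blocks  # assumes blocks divides h and w evenly
    tiles = [image[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
             for i in range(blocks) for j in range(blocks)]
    perm = np.random.default_rng(seed).permutation(len(tiles))
    rows = [np.concatenate([tiles[perm[i * blocks + j]] for j in range(blocks)], axis=1)
            for i in range(blocks)]
    return np.concatenate(rows, axis=0)

# Example: transform a CIFAR-10 style 32x32x3 input before feeding the defender.
x = np.random.rand(32, 32, 3).astype(np.float32)
assert split_and_shuffle(x).shape == x.shape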


