Imperceptible Adversarial Attack on Deep Neural Networks from Image Boundary

08/29/2023
by Fahad Alrasheedi, et al.

Although Deep Neural Networks (DNNs), such as Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs), have been successfully applied in the field of computer vision, they have been shown to be vulnerable to carefully crafted Adversarial Examples (AEs) that can easily fool them. Research on AEs has been active, and many adversarial attacks and explanations have been proposed since AEs were discovered in 2014. Why AEs exist is still an open question, and many studies suggest that DNN training algorithms have blind spots. Salient objects usually do not overlap with image boundaries; hence, the boundaries typically receive little of a DNN model's attention. Nevertheless, recent studies show that the boundaries can dominate the behavior of DNN models. This study therefore looks at AEs from a different perspective and proposes an imperceptible adversarial attack that systematically perturbs the input image boundary to find AEs. The experimental results show that the proposed boundary attack effectively fools six CNN models and the ViT using only 32 boundaries, with an average success rate (SR) of 95.2% and an average signal-to-noise ratio of 41.37 dB. Correlation analyses are also conducted, including the relation between the adversarial boundary's width and the SR, and how the adversarial boundary changes the DNN model's attention. The discoveries in this paper can potentially advance the understanding of AEs and provide a different perspective on how AEs can be constructed.
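The core recipe described above, confining the perturbation to a thin frame of boundary pixels and measuring imperceptibility as a signal-to-noise ratio in dB, can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's actual search procedure: the bounded random noise, the function names, and all parameter values (the boundary width and the per-pixel budget `eps`) are assumptions made for the example.

```python
# Minimal sketch of a boundary-only perturbation (illustrative, not the
# paper's actual attack): noise is confined to a frame of `width` pixels,
# and imperceptibility is reported as a signal-to-noise ratio in dB.
import numpy as np

def boundary_mask(h, w, width):
    """Boolean mask selecting a `width`-pixel frame around the image."""
    mask = np.zeros((h, w), dtype=bool)
    mask[:width, :] = True   # top rows
    mask[-width:, :] = True  # bottom rows
    mask[:, :width] = True   # left columns
    mask[:, -width:] = True  # right columns
    return mask

def perturb_boundary(image, width=1, eps=8.0, rng=None):
    """Add bounded random noise to boundary pixels only (interior untouched).
    `eps` is a hypothetical per-pixel budget in 8-bit intensity units."""
    if rng is None:
        rng = np.random.default_rng(0)
    mask = boundary_mask(image.shape[0], image.shape[1], width)
    adv = image.astype(np.float64).copy()
    adv[mask] += rng.uniform(-eps, eps, size=adv[mask].shape)
    return np.clip(adv, 0.0, 255.0)

def snr_db(clean, adv):
    """Signal-to-noise ratio of the perturbation, in decibels."""
    clean = clean.astype(np.float64)
    signal = np.sum(clean ** 2)
    noise = np.sum((clean - adv) ** 2)
    return 10.0 * np.log10(signal / noise)

# Toy usage on a random 224x224 RGB "image".
image = np.random.default_rng(1).integers(0, 256, (224, 224, 3)).astype(np.float64)
adv = perturb_boundary(image, width=2, eps=8.0)
print(f"SNR: {snr_db(image, adv):.2f} dB")
```

Intuitively, a wider frame gives the attack more pixels to work with at the cost of a lower SNR, which is the kind of trade-off the paper's correlation analysis between boundary width and SR examines.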


Related research

07/16/2019 · Latent Adversarial Defence with Boundary-guided Generation
Deep Neural Networks (DNNs) have recently achieved great success in many...

02/26/2020 · Defending against Backdoor Attack on Deep Neural Networks
Although deep neural networks (DNNs) have achieved a great success in va...

11/18/2022 · Adversarial Detection by Approximation of Ensemble Boundary
A spectral approximation of a Boolean function is proposed for approxima...

01/04/2017 · Dense Associative Memory is Robust to Adversarial Inputs
Deep neural networks (DNN) trained in a supervised way suffer from two k...

05/20/2020 · An Adversarial Approach for Explaining the Predictions of Deep Neural Networks
Machine learning models have been successfully applied to a wide range o...

02/20/2023 · Efficient Algorithms for Boundary Defense with Heterogeneous Defenders
This paper studies the problem of defending (1D and 2D) boundaries again...

02/23/2018 · DeepDefense: Training Deep Neural Networks with Improved Robustness
Despite the efficacy on a variety of computer vision tasks, deep neural ...
