Adversarial Example Defense via Perturbation Grading Strategy

12/16/2022
by Shaowei Zhu, et al.

Deep Neural Networks (DNNs) have been widely used in many fields. However, studies have shown that DNNs are easily fooled by adversarial examples: inputs carrying tiny perturbations that greatly mislead a DNN's predictions. Moreover, even attackers who cannot obtain the underlying model parameters can use adversarial examples to attack various DNN-based task systems. Researchers have proposed a variety of defenses, such as reducing the aggressiveness of adversarial examples through input preprocessing or improving model robustness by adding modules. However, some defenses are effective only for small-scale examples or small perturbations and offer limited protection against adversarial examples with large perturbations. This paper grades the perturbation on each input example and assigns a different defense strategy to each perturbation strength. Experimental results show that the proposed method effectively improves defense performance. In addition, the method does not modify the task model, so it can be deployed as a preprocessing module, which significantly reduces deployment cost in practical applications.
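The abstract does not specify the paper's grading criterion or the individual defenses, but the overall routing idea can be sketched as follows. This is a minimal illustration, not the authors' method: the strength estimator (distance of the input from a smoothed copy of itself), the threshold, and the mean-filter "defenses" are all hypothetical stand-ins.

```python
import numpy as np

def mean_filter(x, k=3):
    """Simple mean filter, a stand-in for a lightweight preprocessing defense."""
    pad = k // 2
    padded = np.pad(x, pad, mode="edge")
    out = np.empty_like(x, dtype=float)
    h, w = x.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def estimate_perturbation_strength(x, smoothed):
    """Hypothetical proxy for perturbation strength: mean distance between
    the input and a smoothed copy of itself (not the paper's estimator)."""
    return float(np.abs(x - smoothed).mean())

def graded_defense(x, weak_threshold=0.05):
    """Route the input to a defense whose strength matches the estimated
    perturbation grade (thresholds and defenses are illustrative only)."""
    smoothed = mean_filter(x)
    strength = estimate_perturbation_strength(x, smoothed)
    if strength < weak_threshold:
        return smoothed             # mild perturbation: light smoothing suffices
    return mean_filter(smoothed)    # strong perturbation: apply heavier smoothing
```

Because the routing happens entirely before the task model is called, a module like this can sit in front of any classifier without retraining it, matching the deployment property the abstract claims.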


Related research

05/08/2023  Adversarial Examples Detection with Enhanced Image Difference Features based on Local Histogram Equalization
Deep Neural Networks (DNNs) have recently made significant progress in m...

03/08/2021  Enhancing Transformation-based Defenses against Adversarial Examples with First-Order Perturbations
Studies show that neural networks are susceptible to adversarial attacks...

11/30/2018  ComDefend: An Efficient Image Compression Model to Defend Adversarial Examples
Deep neural networks (DNNs) have been demonstrated to be vulnerable to a...

09/11/2022  Scattering Model Guided Adversarial Examples for SAR Target Recognition: Attack and Defense
Deep Neural Networks (DNNs) based Synthetic Aperture Radar (SAR) Automat...

07/18/2018  Motivating the Rules of the Game for Adversarial Example Research
Advances in machine learning have led to broad deployment of systems wit...

01/08/2018  Spatially transformed adversarial examples
Recent studies show that widely used deep neural networks (DNNs) are vul...

05/29/2021  Detecting Backdoor in Deep Neural Networks via Intentional Adversarial Perturbations
Recent researches show that deep learning model is susceptible to backdo...
