
Adversarial Example Defense via Perturbation Grading Strategy

by   Shaowei Zhu, et al.
Shenzhen University
NetEase, Inc

Deep Neural Networks (DNNs) are widely used across many fields. However, studies have shown that DNNs are easily fooled by adversarial examples: inputs carrying tiny perturbations that severely mislead the model's predictions. Moreover, even when malicious attackers cannot access the underlying model parameters, they can still use adversarial examples to attack a variety of DNN-based task systems. Researchers have proposed various defenses to protect DNNs, such as weakening the adversarial perturbation through preprocessing or improving model robustness by adding modules. However, some of these defenses are effective only for small-scale examples or small perturbations and offer limited protection against adversarial examples with large perturbations. This paper grades the perturbation on the input example and assigns a different defense strategy to each perturbation strength. Experimental results show that the proposed method effectively improves defense performance. In addition, the proposed method does not modify the task model and can be deployed as a standalone preprocessing module, which significantly reduces deployment cost in practical applications.
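The grading idea described above, in which an estimated perturbation strength selects a defense of matching aggressiveness, might be sketched as follows. This is an illustrative sketch only: the score function (mean high-frequency residual), the thresholds, and the box-blur defenses are placeholder assumptions, not the authors' actual strategy.

```python
import numpy as np

def box_blur(img, k=3):
    # Simple k-by-k box filter with edge padding (placeholder defense).
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def estimate_perturbation_score(img):
    # Hypothetical proxy for perturbation strength: mean absolute
    # high-frequency residual (image minus its 3x3 box blur).
    return float(np.mean(np.abs(img - box_blur(img, k=3))))

def graded_preprocess(img, low_thresh=0.02, high_thresh=0.08):
    # Grade the input, then route it to a defense of matching strength.
    # Thresholds are illustrative and would be tuned in practice.
    score = estimate_perturbation_score(img)
    if score < low_thresh:
        return img                 # likely clean: pass through unchanged
    elif score < high_thresh:
        return box_blur(img, k=3)  # small perturbation: mild smoothing
    else:
        return box_blur(img, k=5)  # large perturbation: stronger smoothing
```

Because the module only transforms the input image, it can sit in front of any task model without retraining it, matching the deployment property the abstract emphasizes.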



