Adversarial Examples Detection with Enhanced Image Difference Features based on Local Histogram Equalization

05/08/2023
by Zhaoxia Yin, et al.

Deep Neural Networks (DNNs) have recently made significant progress in many fields. However, studies have shown that DNNs are vulnerable to adversarial examples: imperceptible perturbations can greatly mislead DNNs even when the underlying model parameters are not accessible. Various defense methods have been proposed, such as feature compression and gradient masking, but numerous studies have shown that such methods are tailored to detect or defend against specific attacks, rendering them ineffective against the latest unknown attack methods. The invisibility of adversarial perturbations is one of the evaluation criteria for adversarial attacks, which also means that the difference in the local correlation of high-frequency information between adversarial examples and normal examples can serve as an effective feature for distinguishing the two. Therefore, we propose an adversarial example detection framework based on a high-frequency information enhancement strategy, which can effectively extract and amplify the feature differences between adversarial examples and normal examples. Experimental results show that, under this framework, the feature augmentation module can be combined with existing detection models in a modular way, improving the detector's performance and reducing deployment cost without modifying the existing detection model.
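The abstract does not specify the exact feature-extraction pipeline, but the core idea — amplifying high-frequency differences via local histogram equalization and using the image-vs-equalized difference as a detector input — can be sketched as follows. This is a minimal illustration under the assumption that equalization is applied independently per tile and that the absolute difference is taken as the enhanced feature; the function names (`local_hist_eq`, `difference_feature`) and the tile size are hypothetical, not taken from the paper.

```python
import numpy as np

def _equalize(tile):
    # Classic histogram equalization on a single uint8 tile:
    # remap intensities through the normalized cumulative histogram.
    hist = np.bincount(tile.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]  # normalize CDF to [0, 1]
    return (cdf[tile] * 255).astype(np.uint8)

def local_hist_eq(img, tile=8):
    # Apply histogram equalization independently to each tile,
    # which exaggerates local high-frequency content.
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            out[y:y+tile, x:x+tile] = _equalize(img[y:y+tile, x:x+tile])
    return out

def difference_feature(img, tile=8):
    # The absolute difference between the image and its locally
    # equalized version amplifies high-frequency perturbations;
    # this map would be fed to a downstream detection model.
    diff = img.astype(np.int16) - local_hist_eq(img, tile).astype(np.int16)
    return np.abs(diff).astype(np.uint8)
```

A detector built this way stays modular, as the abstract claims: the difference map is just a preprocessing step, so any existing detection model can consume it unchanged.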


