Latent Feature Relation Consistency for Adversarial Robustness

03/29/2023
by Xingbin Liu, et al.

Deep neural networks (DNNs) have been applied to many computer vision tasks and achieve state-of-the-art performance. However, DNNs misclassify adversarial examples, which are natural examples perturbed by human-imperceptible adversarial noise, and this limits their application in security-critical fields. To alleviate this problem, we first conduct an empirical analysis of the latent features of adversarial and natural examples and find that the similarity matrix of natural examples is more compact than that of adversarial examples. Motivated by this observation, we propose Latent Feature Relation Consistency (LFRC), which constrains the relations among adversarial examples in latent space to be consistent with those among natural examples. Importantly, LFRC is orthogonal to previous methods and can easily be combined with them for further improvement. To demonstrate its effectiveness, we conduct extensive experiments with different neural networks on benchmark datasets. For instance, LFRC yields a further 0.78% improvement over AT and a 1.09% improvement over TRADES against AutoAttack on CIFAR10. Code is available at https://github.com/liuxingbin/LFRC.
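To make the idea concrete, below is a minimal PyTorch sketch of what a relation-consistency term could look like, assuming the "relation" is a pairwise cosine-similarity matrix over a batch of latent features and the consistency penalty is a mean-squared error. The function names (`relation_matrix`, `lfrc_loss`), the cosine-similarity choice, and the weighting `lam` are illustrative assumptions, not the authors' implementation; see the linked repository for the actual method.

```python
import torch
import torch.nn.functional as F

def relation_matrix(features: torch.Tensor) -> torch.Tensor:
    # Pairwise cosine similarities over a batch: (batch, dim) -> (batch, batch).
    z = F.normalize(features, dim=1)  # unit-norm latent vectors
    return z @ z.t()

def lfrc_loss(nat_features: torch.Tensor, adv_features: torch.Tensor) -> torch.Tensor:
    # Constrain the adversarial relation matrix to match the natural one.
    r_nat = relation_matrix(nat_features).detach()  # natural relations as the target
    r_adv = relation_matrix(adv_features)
    return F.mse_loss(r_adv, r_nat)

# Hypothetical usage inside an adversarial-training step, combined with a
# standard robust loss (e.g., the AT cross-entropy on adversarial logits):
#   loss = F.cross_entropy(logits_adv, labels) + lam * lfrc_loss(z_nat, z_adv)
```

Because the term only adds a penalty on latent features, a sketch like this slots into existing adversarial-training objectives (AT, TRADES) without altering the attack generation step, which is consistent with the paper's claim that LFRC is orthogonal to prior methods.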


Related research

01/01/2019 · A Noise-Sensitivity-Analysis-Based Test Prioritization Technique for Deep Neural Networks
Deep neural networks (DNNs) have been widely used in the fields such as ...

04/04/2017 · Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks
Although deep neural networks (DNNs) have achieved great success in many...

12/01/2021 · Push Stricter to Decide Better: A Class-Conditional Feature Adaptive Framework for Improving Adversarial Robustness
In response to the threat of adversarial examples, adversarial training ...

03/25/2023 · AdvCheck: Characterizing Adversarial Examples via Local Gradient Checking
Deep neural networks (DNNs) are vulnerable to adversarial examples, whic...

05/22/2023 · Latent Magic: An Investigation into Adversarial Examples Crafted in the Semantic Latent Space
Adversarial attacks against deep neural networks (DNNs) have been a crucia...

02/09/2019 · When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks
Discovering and exploiting the causality in deep neural networks (DNNs) ...

03/24/2023 · Generalist: Decoupling Natural and Robust Generalization
Deep neural networks obtained by standard training have been constantly ...
