TEAM: We Need More Powerful Adversarial Examples for DNNs

07/31/2020
by   Yaguan Qian, et al.

Although deep neural networks (DNNs) have achieved success in many application fields, they are still vulnerable to imperceptible adversarial examples that can easily lead to misclassification. To overcome this challenge, many defensive methods have been proposed, and a powerful adversarial example is a key benchmark for measuring these defensive mechanisms. In this paper, we propose a novel method (TEAM, Taylor Expansion-Based Adversarial Methods) to generate more powerful adversarial examples than previous methods. The main idea is to craft adversarial examples by minimizing the confidence of the ground-truth class under untargeted attacks, or maximizing the confidence of the target class under targeted attacks. Specifically, we define new objective functions that approximate the DNN by its second-order Taylor expansion within a tiny neighborhood of the input. The Lagrangian multiplier method is then used to obtain the optimal perturbations for these objective functions. To reduce the amount of computation, we further introduce the Gauss-Newton (GN) method to speed it up. Finally, experimental results show that our method can reliably produce adversarial examples with a 100% success rate. In addition, adversarial examples generated with our method can defeat defensive distillation based on gradient masking.
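To make the idea concrete, the sketch below shows one untargeted attack step in the spirit the abstract describes: the classifier's confidence in the ground-truth class is replaced by its second-order Taylor model around the input, the Hessian is approximated Gauss-Newton style as an outer product of the gradient, and the Lagrangian stationarity condition is solved for the perturbation inside a small norm ball. Everything here is an illustrative assumption, not the paper's actual method: the "model" is a toy sigmoid of a linear score, and the rank-one Hessian admits a closed-form solve via the Sherman-Morrison identity.

```python
import math

def confidence(x, w):
    """Toy ground-truth-class confidence: sigmoid of a linear score w.x.
    (A stand-in for a DNN output; the paper attacks real networks.)"""
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-s))

def taylor_attack(x, w, eps=0.5, lam=1.0):
    """One untargeted step minimizing a second-order Taylor model of the
    confidence within an eps-ball around x.

    Gradient of sigmoid(w.x) w.r.t. x:  g = p(1-p) w.
    Gauss-Newton Hessian approximation: H = g g^T (outer product).
    Lagrangian stationarity (H + lam I) delta = -g then has the closed
    form delta = -g / (lam + ||g||^2) because H is rank one."""
    p = confidence(x, w)
    g = [p * (1.0 - p) * wi for wi in w]
    gg = sum(gi * gi for gi in g)
    delta = [-gi / (lam + gg) for gi in g]
    norm = math.sqrt(sum(d * d for d in delta))
    if norm > eps:  # project back onto the eps-ball constraint
        delta = [d * eps / norm for d in delta]
    return [xi + di for xi, di in zip(x, delta)]

w = [1.0, -2.0, 0.5]          # toy "model" parameters (assumed)
x = [0.3, -0.4, 0.8]          # clean input
x_adv = taylor_attack(x, w)
print(confidence(x_adv, w) < confidence(x, w))  # True: confidence drops
```

For a full-rank Gauss-Newton matrix (a multi-class network rather than this toy scalar score), the closed form would be replaced by a linear solve of (H + lam*I) delta = -g, which is exactly where the GN approximation saves computation over forming the exact Hessian.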


Related research

- 01/23/2020: Towards Robust DNNs: An Taylor Expansion-Based Method for Generating Powerful Adversarial Examples. Although deep neural networks (DNNs) have achieved successful applicatio...
- 05/08/2023: Adversarial Examples Detection with Enhanced Image Difference Features based on Local Histogram Equalization. Deep Neural Networks (DNNs) have recently made significant progress in m...
- 02/23/2021: Adversarial Examples Detection beyond Image Space. Deep neural networks have been proved that they are vulnerable to advers...
- 05/25/2019: Adversarial Distillation for Ordered Top-k Attacks. Deep Neural Networks (DNNs) are vulnerable to adversarial attacks, espec...
- 12/02/2020: Towards Imperceptible Adversarial Image Patches Based on Network Explanations. The vulnerability of deep neural networks (DNNs) for adversarial example...
- 11/18/2022: Diagnostics for Deep Neural Networks with Automated Copy/Paste Attacks. Deep neural networks (DNNs) are powerful, but they can make mistakes tha...
- 07/22/2021: Unsupervised Detection of Adversarial Examples with Model Explanations. Deep Neural Networks (DNNs) have shown remarkable performance in a diver...
