Patch-wise++ Perturbation for Adversarial Targeted Attacks

12/31/2020
by Lianli Gao, et al.

Although great progress has been made on adversarial attacks against deep neural networks (DNNs), their transferability remains unsatisfactory, especially for targeted attacks. Two long-overlooked problems lie behind this: 1) the conventional setting of T iterations with a step size of ϵ/T to comply with the ϵ-constraint, under which most pixels are allowed to receive only very small noise, much less than ϵ; and 2) the practice of manipulating noise pixel-wise, even though the features a DNN extracts for a pixel are influenced by its surrounding region, and different DNNs generally focus on different discriminative regions during recognition. To tackle these issues, we propose a patch-wise iterative method (PIM) aimed at crafting adversarial examples with high transferability. Specifically, we introduce an amplification factor to the step size in each iteration, and any pixel's overall gradient that overflows the ϵ-constraint is properly assigned to its surrounding regions by a project kernel. However, targeted attacks aim to push the adversarial examples into the territory of a specific class, and the amplification factor may lead to underfitting. We therefore introduce a temperature and propose a patch-wise++ iterative method (PIM++) to further improve transferability without significantly sacrificing white-box attack performance. Our method can be integrated into any gradient-based attack method. Compared with current state-of-the-art attack methods, we significantly improve the success rate by 35.9% for defense models and 32.7% for normally trained models on average.
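Since the abstract compresses the whole algorithm into a few sentences, a minimal sketch may help show how the three ingredients interact: the amplified step size, the project kernel that redistributes overflowing noise to neighbouring pixels, and the temperature used for targeted attacks. The sketch below assumes a PyTorch image classifier with inputs in [0, 1]; the function names (`project_kernel`, `pim_plus_plus_targeted`), the uniform kernel weights, and all hyper-parameter defaults are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def project_kernel(k: int = 3, channels: int = 3) -> torch.Tensor:
    # Uniform "project kernel" W_p: spreads a pixel's clipped-off noise
    # evenly over its k x k neighbourhood; the centre weight is zero so a
    # pixel never projects onto itself. The paper's exact kernel may differ.
    w = torch.ones(k, k) / (k * k - 1)
    w[k // 2, k // 2] = 0.0
    return w.expand(channels, 1, k, k).clone()  # depthwise-conv layout

def pim_plus_plus_targeted(model, x, target, eps=16 / 255, T=10,
                           amp=10.0, tau=1.5, k=3):
    # Targeted PIM++ sketch. `amp` is the amplification factor applied to
    # the eps/T step size and `tau` the temperature on the logits; both
    # values here are illustrative defaults, not the paper's tuned settings.
    beta = (eps / T) * amp                  # amplified step size
    gamma = beta                            # project factor (simplified)
    W_p = project_kernel(k, x.size(1)).to(x.device)
    x_adv = x.clone().detach()

    for _ in range(T):
        x_adv.requires_grad_(True)
        logits = model(x_adv) / tau         # temperature softens the logits
        loss = F.cross_entropy(logits, target)
        grad = torch.autograd.grad(loss, x_adv)[0]

        with torch.no_grad():
            # Targeted attack: step against the gradient of the target loss.
            noise = (x_adv - beta * grad.sign()) - x
            # "Cut noise": the part of the perturbation outside the eps-ball.
            cut = (noise.abs() - eps).clamp(min=0) * noise.sign()
            # Reassign the overflow to surrounding pixels via W_p.
            spread = F.conv2d(cut, W_p, padding=k // 2, groups=x.size(1))
            noise = noise + gamma * spread.sign()
            # Clip back into the eps-ball and the valid pixel range [0, 1].
            x_adv = (x + noise.clamp(-eps, eps)).clamp(0, 1).detach()

    return x_adv
```

Called as `x_adv = pim_plus_plus_targeted(model, images, target_labels)` on a batch of [0, 1]-normalized images, the sketch only changes how each step is sized and redistributed, which is why this kind of method composes naturally with other gradient-based attacks such as momentum or input-diversity variants.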


Related research:

- Patch-wise Attack for Fooling Deep Neural Network (07/14/2020): By adding human-imperceptible noise to clean images, the resultant adver...
- Boosting Adversarial Transferability with Learnable Patch-wise Masks (06/28/2023): Adversarial examples have raised widespread attention in security-critic...
- Improving Adversarial Transferability with Scheduled Step Size and Dual Example (01/30/2023): Deep neural networks are widely known to be vulnerable to adversarial ex...
- Curls & Whey: Boosting Black-Box Adversarial Attacks (04/02/2019): Image classifiers based on deep neural networks suffer from harassment c...
- What Do Deep Nets Learn? Class-wise Patterns Revealed in the Input Space (01/18/2021): Deep neural networks (DNNs) have been widely adopted in different applic...
- Adapting Step-size: A Unified Perspective to Analyze and Improve Gradient-based Methods for Adversarial Attacks (01/27/2023): Learning adversarial examples can be formulated as an optimization probl...
- Improving Adversarial Transferability with Spatial Momentum (03/25/2022): Deep Neural Networks (DNN) are vulnerable to adversarial examples. Altho...
