Patch-wise Attack for Fooling Deep Neural Network

07/14/2020
by Lianli Gao, et al.

By adding human-imperceptible noise to clean images, the resulting adversarial examples can also fool unknown models. The features a deep neural network (DNN) extracts for a pixel are influenced by its surrounding regions, and different DNNs generally focus on different discriminative regions during recognition. Motivated by this, we propose a patch-wise iterative algorithm, a black-box attack against mainstream normally trained and defense models, which differs from existing attack methods that manipulate pixel-wise noise. In this way, our adversarial examples achieve strong transferability without sacrificing white-box attack performance. Specifically, we introduce an amplification factor to the step size in each iteration, and the portion of a pixel's overall gradient that overflows the ϵ-constraint is properly assigned to its surrounding regions by a project kernel. Our method can be generally integrated into any gradient-based attack method. Compared with the current state-of-the-art attacks, we significantly improve the success rate by 9.2% against defense models and 3.7% against normally trained models on average. Our code is available at <https://github.com/qilong-zhang/Patch-wise-iterative-attack>
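The update is easy to state in code. Below is a minimal PyTorch sketch of a patch-wise iterative attack in the spirit of the description above: the per-step size is amplified by a factor beta, and the accumulated noise that overflows the ϵ-ball (the "cut noise") is spread over neighboring pixels through a project kernel. The function name, the uniform kernel, and the default hyperparameters are illustrative assumptions, not the authors' reference implementation; see the linked repository for that.

```python
# Minimal sketch of a patch-wise iterative attack (PI-FGSM-style update).
# Assumptions: `model` maps [0, 1] images to logits; beta/gamma and the
# uniform kw x kw project kernel are illustrative, not the paper's exact values.
import torch
import torch.nn.functional as F

def patchwise_attack(model, x, y, eps=16/255, steps=10,
                     beta=10.0, gamma=16.0, kw=3):
    step = beta * eps / steps        # amplified step size
    proj_step = gamma * eps / steps  # step size of the projection term
    c = x.size(1)
    # Uniform depthwise project kernel: spreads overflow over a kw x kw patch.
    kernel = torch.ones(c, 1, kw, kw, device=x.device) / (kw * kw)

    x_adv = x.clone().detach()
    a = torch.zeros_like(x)          # accumulated amplified noise
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]

        a = a + step * grad.sign()
        # "Cut noise": the part of the accumulated noise exceeding eps.
        cut = torch.clamp(a.abs() - eps, min=0) * a.sign()
        # Assign the overflow to surrounding pixels via the project kernel.
        projected = F.conv2d(cut, kernel, padding=kw // 2, groups=c)

        x_adv = x_adv.detach() + step * grad.sign() + proj_step * projected.sign()
        # Stay inside the eps-ball around x and the valid image range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```

Running this against a pretrained classifier such as `torchvision.models.resnet50` only requires handling input normalization consistently; the clamp to [0, 1] above assumes unnormalized images in that range.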

Related research

12/31/2020 · Patch-wise++ Perturbation for Adversarial Targeted Attacks
  Although great progress has been made on adversarial attacks for deep ne...

06/03/2021 · Improving the Transferability of Adversarial Examples with New Iteration Framework and Input Dropout
  Deep neural networks(DNNs) is vulnerable to be attacked by adversarial e...

06/28/2023 · Boosting Adversarial Transferability with Learnable Patch-wise Masks
  Adversarial examples have raised widespread attention in security-critic...

01/18/2021 · What Do Deep Nets Learn? Class-wise Patterns Revealed in the Input Space
  Deep neural networks (DNNs) have been widely adopted in different applic...

07/27/2022 · Hardly Perceptible Trojan Attack against Neural Networks with Bit Flips
  The security of deep neural networks (DNNs) has attracted increasing att...

05/21/2018 · Adversarial Noise Layer: Regularize Neural Network By Adding Noise
  In this paper, we introduce a novel regularization method called Adversa...

05/02/2023 · Boosting Adversarial Transferability via Fusing Logits of Top-1 Decomposed Feature
  Recent research has shown that Deep Neural Networks (DNNs) are highly vu...
