Natural Color Fool: Towards Boosting Black-box Unrestricted Attacks

10/05/2022
by Shengming Yuan, et al.

Unrestricted color attacks, which manipulate the semantically meaningful colors of an image, have shown their stealthiness and success in fooling both human eyes and deep neural networks. However, current works usually sacrifice the flexibility of the unrestricted setting to ensure the naturalness of adversarial examples; as a result, the black-box attack performance of these methods is limited. To boost the transferability of adversarial examples without damaging image quality, we propose a novel Natural Color Fool (NCF), which is guided by realistic color distributions sampled from a publicly available dataset and optimized by our neighborhood search and initialization reset. Extensive experiments and visualizations demonstrate the effectiveness of the proposed method. Notably, on average, our NCF outperforms state-of-the-art approaches by 15.0% in evading defense methods. Our code is available at https://github.com/ylhz/Natural-Color-Fool.
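The abstract names two optimization components, neighborhood search and initialization reset, without further detail. The sketch below only illustrates that general recipe (random restarts plus local search over a color mapping) in PyTorch; it uses a simple per-channel affine color shift as a stand-in for the paper's distribution-guided color transfer, and every function name, signature, and hyper-parameter is a hypothetical assumption rather than the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def neighborhood_search_attack(model, image, label,
                               n_resets=5, n_steps=20, sigma=0.05):
    # Toy black-box search over a per-channel affine color mapping.
    # Assumptions: `model` maps a (1, 3, H, W) tensor in [0, 1] to class
    # logits, `label` is a LongTensor of shape (1,). Names and defaults
    # are illustrative, not taken from the released NCF code.
    best_adv, best_loss = image.clone(), float("-inf")
    for _ in range(n_resets):                        # initialization reset
        # random restart of the color-mapping parameters
        scale = 1.0 + 0.2 * torch.randn(1, 3, 1, 1)
        shift = 0.1 * torch.randn(1, 3, 1, 1)
        for _ in range(n_steps):                     # neighborhood search
            cand_scale = scale + sigma * torch.randn_like(scale)
            cand_shift = shift + sigma * torch.randn_like(shift)
            adv = (image * cand_scale + cand_shift).clamp(0.0, 1.0)
            with torch.no_grad():
                loss = F.cross_entropy(model(adv), label).item()
            if loss > best_loss:                     # keep the most misleading neighbor
                scale, shift = cand_scale, cand_shift
                best_loss, best_adv = loss, adv
    return best_adv
```

Because the search queries only model outputs and never gradients, it fits the black-box setting described in the abstract; the actual NCF replaces the affine color shift here with color distributions sampled from a public dataset.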


