Towards Imperceptible Universal Attacks on Texture Recognition

11/24/2020
by Yingpeng Deng, et al.

Although deep neural networks (DNNs) have been shown to be susceptible to image-agnostic adversarial attacks on natural image classification problems, the effects of such attacks on DNN-based texture recognition have yet to be explored. As part of our work, we find that limiting the perturbation's l_p norm in the spatial domain may not be a suitable way to restrict the perceptibility of universal adversarial perturbations for texture images. Based on the fact that human perception is affected by local visual frequency characteristics, we propose a frequency-tuned universal attack method to compute universal perturbations in the frequency domain. Our experiments indicate that our proposed method can produce less perceptible perturbations, yet with similar or higher white-box fooling rates on various DNN texture classifiers and texture datasets, as compared to existing universal attack techniques. We also demonstrate that our approach can improve attack robustness against defended models as well as cross-dataset transferability for texture recognition problems.
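The core idea of applying a universal perturbation in the frequency domain rather than the spatial domain can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the use of an orthonormal 2-D DCT, and the per-frequency `weights` array (a stand-in for whatever perceptual sensitivity model tunes the perturbation) are all assumptions for the example.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix: rows index frequency, columns index space.
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] *= 1 / np.sqrt(2)
    return M * np.sqrt(2 / n)

def apply_frequency_tuned_perturbation(image, freq_pert, weights):
    """Add a universal perturbation defined in the 2-D DCT domain.

    image:     (H, W) grayscale image with values in [0, 1] (hypothetical setup)
    freq_pert: (H, W) universal perturbation coefficients in the DCT domain
    weights:   (H, W) per-frequency weights, e.g. allowing larger magnitudes at
               frequencies where distortion is less visible; this is a
               placeholder for a perceptual (frequency-sensitivity) model
    """
    H, W = image.shape
    Dh, Dw = dct_matrix(H), dct_matrix(W)
    coeffs = Dh @ image @ Dw.T            # forward 2-D DCT
    coeffs += weights * freq_pert         # frequency-weighted perturbation
    perturbed = Dh.T @ coeffs @ Dw        # inverse 2-D DCT back to pixels
    return np.clip(perturbed, 0.0, 1.0)
```

Because the DCT is orthonormal, a zero perturbation leaves the image unchanged, and constraining `weights * freq_pert` per frequency band plays the role that an l_p bound plays for spatial-domain attacks.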


