Frequency-driven Imperceptible Adversarial Attack on Semantic Similarity

03/10/2022
by Cheng Luo et al.

Current adversarial attack research reveals the vulnerability of learning-based classifiers to carefully crafted perturbations. However, most existing attack methods have inherent limitations in cross-dataset generalization, as they rely on a classification layer with a closed set of categories. Furthermore, the perturbations generated by these methods may appear in regions easily perceptible to the human visual system (HVS). To circumvent the former problem, we propose a novel algorithm that attacks semantic similarity on feature representations. In this way, we are able to fool classifiers without limiting attacks to a specific dataset. For imperceptibility, we introduce a low-frequency constraint that restricts perturbations to high-frequency components, ensuring perceptual similarity between adversarial examples and the originals. Extensive experiments on three datasets (CIFAR-10, CIFAR-100, and ImageNet-1K) and three public online platforms indicate that our attack yields misleading and transferable adversarial examples across architectures and datasets. Additionally, visualization results and quantitative performance (in terms of four different metrics) show that the proposed algorithm generates more imperceptible perturbations than state-of-the-art methods. Code is made available at.
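The abstract pairs two mechanisms: an attack on semantic similarity in feature space (so no closed-set classification layer is needed) and a low-frequency constraint that confines the perturbation to high-frequency image content. The following is a minimal, hypothetical PyTorch sketch of how those two losses could be combined in a PGD-style step; it is not the authors' released code. The feature extractor, step sizes, and the pooling-based low-pass filter are illustrative assumptions (a wavelet decomposition would be a more faithful frequency split).

```python
# Hypothetical sketch of the two ideas in the abstract: a semantic-similarity
# attack in feature space plus a low-frequency constraint on the perturbation.
# All names and hyperparameters are illustrative, not the authors' code.
import torch
import torch.nn.functional as F

def low_pass(x: torch.Tensor, k: int = 4) -> torch.Tensor:
    """Crude low-pass filter: average-pool down, then upsample back.
    A stand-in for a proper frequency decomposition such as a DWT."""
    h, w = x.shape[-2:]
    return F.interpolate(F.avg_pool2d(x, k), size=(h, w),
                         mode="bilinear", align_corners=False)

def ssa_step(feat_extractor, x, x_adv, alpha=0.01, lam=0.1, eps=8 / 255):
    """One PGD-style step: push adversarial features away from the clean
    features while keeping low-frequency image content unchanged."""
    x_adv = x_adv.clone().detach().requires_grad_(True)
    f_clean = feat_extractor(x).detach()
    f_adv = feat_extractor(x_adv)
    # Attack semantic similarity: minimize the cosine similarity between
    # adversarial and clean feature representations.
    sim_loss = F.cosine_similarity(f_adv.flatten(1), f_clean.flatten(1)).mean()
    # Low-frequency constraint: penalize any change to the low-frequency
    # components, so the perturbation lives in high frequencies the HVS
    # is less sensitive to.
    lf_loss = F.l1_loss(low_pass(x_adv), low_pass(x))
    loss = sim_loss + lam * lf_loss
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv - alpha * x_adv.grad.sign()   # descend the joint loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)    # keep perturbation bounded
        return x_adv.clamp(0, 1)
```

Iterating `ssa_step` from `x_adv = x` yields the attack loop. The full method reportedly also contrasts the adversarial features against those of dissimilar samples rather than only repelling the clean features; that refinement is omitted here for brevity.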

research · 11/28/2022 · Imperceptible Adversarial Attack via Invertible Neural Networks
Adding perturbations via utilizing auxiliary gradient information or dis...

research · 07/03/2021 · Demiguise Attack: Crafting Invisible Semantic Adversarial Perturbations with Perceptual Similarity
Deep neural networks (DNNs) have been found to be vulnerable to adversar...

research · 10/03/2019 · Perturbations are not Enough: Generating Adversarial Examples with Spatial Distortions
Deep neural network image classifiers are reported to be susceptible to ...

research · 12/02/2021 · Adversarial Robustness of Deep Reinforcement Learning based Dynamic Recommender Systems
Adversarial attacks, e.g., adversarial perturbations of the input and ad...

research · 07/04/2017 · UPSET and ANGRI: Breaking High Performance Image Classifiers
In this paper, targeted fooling of high performance image classifiers is...

research · 01/29/2020 · Semantic Adversarial Perturbations using Learnt Representations
Adversarial examples for image classifiers are typically created by sear...

research · 01/01/2020 · Erase and Restore: Simple, Accurate and Resilient Detection of L_2 Adversarial Examples
By adding carefully crafted perturbations to input images, adversarial e...
