A Perceptual Distortion Reduction Framework for Adversarial Perturbation Generation

by Ruijie Yang, et al.

Most adversarial attack methods suffer from large perceptual distortions, such as visible artifacts, when the attack strength is relatively high. A certain portion of these distortions contributes little to the attack success rate; this portion, induced by unnecessary modifications and the lack of a proper perceptual distortion constraint, is the target of the proposed framework. In this paper, we propose a perceptual distortion reduction framework that tackles the problem from two perspectives. First, we guide the perturbation-addition process to reduce unnecessary modifications by proposing an activated-region transfer attention mask, which transfers the activated regions of the target model from the correct prediction to incorrect ones. Note that an ensemble model is adopted to predict the activated regions of unseen models in the black-box setting of our framework. Second, we propose a perceptual distortion constraint and add it to the objective function of the adversarial attack, so that perceptual distortion and attack success rate are jointly optimized. Extensive experiments verify the effectiveness of our framework on several baseline methods.
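The abstract's two ideas — restricting modifications to an attention mask and jointly optimizing attack success against a distortion penalty — can be illustrated with a minimal numpy sketch. Everything here is an illustrative stand-in, not the paper's method: the "model" is a toy linear classifier, the mask simply keeps the most strongly activated coordinates (standing in for the activated-region transfer mask), and the perceptual penalty is a plain squared-L2 term (standing in for the paper's perceptual distortion constraint); `lam` and `alpha` are assumed hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "classifier": score = w . x; the attack tries to push the
# score toward the opposite sign (i.e., flip the prediction).
w = rng.normal(size=64)
x = rng.normal(size=64)

# Hypothetical attention mask (assumption): perturb only the coordinates
# with the strongest activations w_i * x_i, a crude stand-in for the
# paper's activated-region transfer mask.
act = np.abs(w * x)
mask = (act > np.quantile(act, 0.5)).astype(float)

lam = 0.5     # weight of the distortion penalty (assumed value)
alpha = 0.05  # step size (assumed value)
delta = np.zeros_like(x)

for _ in range(100):
    # Gradient ascent on the joint objective:
    #   attack term:      -(w . (x + delta)) * sign(w . x)  -> push score past 0
    #   distortion term:  -lam * ||delta||^2                -> keep delta small
    attack_grad = -w * np.sign(w @ x)
    dist_grad = 2.0 * lam * delta
    delta += alpha * mask * (attack_grad - dist_grad)

adv = x + delta  # adversarial example: modified only inside the mask
```

After the loop, `delta` is exactly zero outside the mask, and the classifier's score has moved toward the decision boundary by an amount controlled by `lam`: a larger penalty weight trades attack strength for lower distortion, which is the joint optimization the abstract describes.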






Towards Visual Distortion in Black-Box Attacks

Constructing adversarial examples in a black-box threat model injures th...

Adversarial Distortion for Learned Video Compression

In this paper, we present a novel adversarial lossy video compression mo...

On Perceptual Lossy Compression: The Cost of Perceptual Reconstruction and An Optimal Training Framework

Lossy compression algorithms are typically designed to achieve the lowes...

SMART: Skeletal Motion Action Recognition aTtack

Adversarial attack has inspired great interest in computer vision, by sh...

Quantifying Perceptual Distortion of Adversarial Examples

Recent work has shown that additive threat models, which only permit the...

GreedyFool: Distortion-Aware Sparse Adversarial Attack

Modern deep neural networks (DNNs) are vulnerable to adversarial samples....

Towards Leveraging the Information of Gradients in Optimization-based Adversarial Attack

In recent years, deep neural networks demonstrated state-of-the-art perf...