A Perceptual Distortion Reduction Framework for Adversarial Perturbation Generation

05/01/2021
by Ruijie Yang, et al.

Most adversarial attack methods suffer from large perceptual distortions, such as visible artifacts, when the attack strength is relatively high. A certain portion of these distortions contributes little to the attack success rate; this portion, induced by unnecessary modifications and the lack of a proper perceptual distortion constraint, is the target of the proposed framework. In this paper, we propose a perceptual distortion reduction framework that tackles the problem from two perspectives. First, we guide the perturbation addition process to reduce unnecessary modifications by proposing an activated region transfer attention mask, which transfers the activated regions of the target model from the correct prediction to incorrect ones. Note that an ensemble model is adopted to predict the activated regions of unseen models in the black-box setting of our framework. Second, we propose a perceptual distortion constraint and add it to the objective function of the adversarial attack, so that perceptual distortion and attack success rate are jointly optimized. Extensive experiments verify the effectiveness of our framework on several baseline methods.
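To make the two ideas in the abstract concrete, here is a minimal, hypothetical PyTorch sketch of a jointly optimized, mask-guided iterative attack. It assumes a precomputed activated-region mask `mask`, an L-infinity budget, and a simple pixel-space L2 term standing in for the paper's perceptual distortion constraint; the function name and parameters are illustrative, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def masked_pdr_attack(model, x, y, mask, eps=8/255, alpha=2/255, steps=10, lam=1.0):
    """Sketch of an iterative attack that (a) confines modifications to an
    attention mask over activated regions and (b) penalizes perceptual
    distortion jointly with the adversarial loss."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        attack_loss = F.cross_entropy(logits, y)   # push prediction off the true label
        distortion = ((x_adv - x) ** 2).mean()     # stand-in for the perceptual term
        loss = attack_loss - lam * distortion      # jointly optimize both objectives
        grad, = torch.autograd.grad(loss, x_adv)
        # mask the update so perturbations land only in the activated regions
        x_adv = x_adv.detach() + alpha * mask * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # stay within the L-inf budget
        x_adv = x_adv.clamp(0, 1).detach()
    return x_adv
```

In this sketch, increasing `lam` trades attack strength for lower distortion; the paper's actual constraint and mask construction differ from these stand-ins.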
