Generate High-Resolution Adversarial Samples by Identifying Effective Features

01/21/2020
by Sizhe Chen, et al.

With the growing prevalence of deep learning in computer vision, adversarial samples that weaken neural networks have emerged in large numbers, revealing deep-rooted defects in these models. Most adversarial attacks compute an imperceptible perturbation in image space to fool a DNN. Under this strategy, the perturbation looks like noise and can therefore be mitigated. Attacks in feature space produce semantic perturbations, but they have so far only handled low-resolution samples, because a high-resolution image is expressed by a great number of coupled features. In this paper, we propose Attack by Identifying Effective Features (AIEF), which learns a different weight for each feature it attacks. Effective features, those with large weights, influence the victim model strongly while distorting the image little, and are thus more effective to attack. By concentrating the attack on them, AIEF produces high-resolution adversarial samples with acceptable distortion. We demonstrate the effectiveness of AIEF by attacking different tasks with different generative models.
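Since the abstract is compact, a minimal sketch may make the idea concrete. Below is a hedged, PyTorch-style sketch of a weighted feature-space attack in the spirit of AIEF: a feature perturbation and a per-feature weight vector are optimized jointly, so the attack concentrates on features that move the victim's prediction while barely distorting the image. The toy generator, victim classifier, softmax weighting, and loss coefficient are all illustrative assumptions, not the paper's actual formulation.

```python
# Hedged sketch of a weighted feature-space attack in the spirit of AIEF.
# The toy generator, toy victim classifier, softmax weighting, and loss
# coefficients are illustrative assumptions, not the paper's method.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical stand-ins: a generator mapping a 64-d feature code to a
# flattened image, and a 10-class victim classifier on that image.
G = nn.Sequential(nn.Linear(64, 3 * 64 * 64), nn.Tanh())
victim = nn.Sequential(nn.Linear(3 * 64 * 64, 10))

z = torch.randn(1, 64)            # feature code of the source image
target = torch.tensor([3])        # class the attacker wants to induce
x_clean = G(z).detach()           # reference image for the distortion term

delta = torch.zeros_like(z, requires_grad=True)    # feature-space perturbation
logit_w = torch.zeros_like(z, requires_grad=True)  # learned per-feature weights

opt = torch.optim.Adam([delta, logit_w], lr=1e-2)

for step in range(300):
    w = torch.softmax(logit_w, dim=-1)  # weights sum to 1 over features
    z_adv = z + w * delta               # perturb mostly high-weight ("effective") features
    x_adv = G(z_adv)
    attack_loss = F.cross_entropy(victim(x_adv), target)  # fool the victim model
    distortion = (x_adv - x_clean).pow(2).mean()          # keep the image close
    loss = attack_loss + 10.0 * distortion                # assumed trade-off coefficient
    opt.zero_grad()
    loss.backward()
    opt.step()

# Final adversarial sample from the learned weights and perturbation.
x_adv = G(z + torch.softmax(logit_w, dim=-1) * delta).detach()
```

The joint optimization can lower both loss terms at once only by routing the perturbation through features with a high influence-to-distortion ratio, so the learned weights play roughly the role of the "effective features" described above.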


Related research

07/20/2020 · Evaluating a Simple Retraining Strategy as a Defense Against Adversarial Attacks
Though deep neural networks (DNNs) have shown superiority over other tec...

09/13/2017 · A Learning and Masking Approach to Secure Learning
Deep Neural Networks (DNNs) have been shown to be vulnerable against adv...

04/02/2019 · Adversarial Attacks against Deep Saliency Models
Currently, a plethora of saliency models based on deep neural networks h...

02/25/2022 · Projective Ranking-based GNN Evasion Attacks
Graph neural networks (GNNs) offer promising learning methods for graph-...

04/26/2020 · Towards Feature Space Adversarial Attack
We propose a new type of adversarial attack to Deep Neural Networks (DNN...

12/08/2020 · Mitigating the Impact of Adversarial Attacks in Very Deep Networks
Deep Neural Network (DNN) models have vulnerabilities related to securit...

04/19/2023 · GREAT Score: Global Robustness Evaluation of Adversarial Perturbation using Generative Models
Current studies on adversarial robustness mainly focus on aggregating lo...
