Identification of Attack-Specific Signatures in Adversarial Examples

10/13/2021
by Hossein Souri et al.

The adversarial attack literature contains a myriad of algorithms for crafting perturbations that yield pathological behavior in neural networks. In many cases, multiple algorithms target the same tasks and even enforce the same constraints. In this work, we show that different attack algorithms produce adversarial examples that are distinct not only in their effectiveness but also in how they qualitatively affect their victims. We begin by demonstrating that one can determine, from an adversarial example alone, which attack algorithm crafted it. Then, we leverage recent advances in parameter-space saliency maps to show, both visually and quantitatively, that adversarial attack algorithms differ in which parts of the network and image they target. Our findings suggest that prospective adversarial attacks should be compared not only via their success rates at fooling models but also via the deeper downstream effects they have on their victims.
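The attack-attribution idea in the abstract can be made concrete with a short sketch. The following is a minimal illustration, not the authors' code: craft adversarial examples with several standard attack algorithms, label each example with the algorithm that produced it, and train a secondary network to recover that label. The `fgsm` and `pgd` functions are the usual textbook L-infinity attacks; `victim`, `attribution_net`, and `make_attribution_batch` are hypothetical names introduced here for illustration.

```python
# Minimal sketch of attack attribution (illustrative; not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Single-step L-inf attack (Goodfellow et al., 2015)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def pgd(model, x, y, eps, alpha=0.01, steps=10):
    """Iterative L-inf attack with projection (Madry et al., 2018)."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the eps-ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

ATTACKS = {0: fgsm, 1: pgd}  # attribution label -> attack-crafting function

def make_attribution_batch(victim, x, y, eps=8 / 255):
    """Craft adversarial examples and label each with its attack's id."""
    xs, labels = [], []
    for attack_id, attack in ATTACKS.items():
        xs.append(attack(victim, x, y, eps))
        labels.append(torch.full((x.size(0),), attack_id, dtype=torch.long))
    return torch.cat(xs), torch.cat(labels)

# The attribution network sees only the adversarial image and predicts
# which algorithm crafted it; train it with ordinary cross-entropy.
attribution_net = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
    nn.Flatten(), nn.Linear(32 * 16, len(ATTACKS)),
)
```

One design choice in such an experiment is whether the attribution network sees the full adversarial image or only the perturbation `x_adv - x`; the latter isolates the attack's signature from the image content.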

Related Research

08/16/2021 · Deep adversarial attack
Target...

10/06/2019 · Unrestricted Adversarial Attacks for Semantic Segmentation
Semantic segmentation is one of the most impactful applications of machi...

04/02/2019 · Adversarial Attacks against Deep Saliency Models
Currently, a plethora of saliency models based on deep neural networks h...

01/31/2023 · Reverse engineering adversarial attacks with fingerprints from adversarial examples
In spite of intense research efforts, deep neural networks remain vulner...

03/10/2020 · SAD: Saliency-based Defenses Against Adversarial Examples
With the rise in popularity of machine and deep learning models, there i...

02/14/2019 · Can Intelligent Hyperparameter Selection Improve Resistance to Adversarial Examples?
Convolutional Neural Networks and Deep Learning classification systems i...

12/02/2021 · A Unified Framework for Adversarial Attack and Defense in Constrained Feature Space
The generation of feasible adversarial examples is necessary for properl...
