Recovering Localized Adversarial Attacks

10/21/2019
by Jan Philip Göpfert, et al.

Deep convolutional neural networks have achieved great success in recent years, particularly in the domain of computer vision. They are fast, convenient, and, thanks to mature frameworks, relatively easy to implement and deploy. However, their reasoning is hidden inside a black box, in spite of a number of proposed approaches that try to provide human-understandable explanations for a network's predictions. It is still a matter of debate which of these explainers are best suited for which situations, and how to quantitatively evaluate and compare them. In this contribution, we focus on the capabilities of explainers for convolutional deep neural networks in an extreme situation: a setting in which humans and networks fundamentally disagree. Deep neural networks are susceptible to adversarial attacks that deliberately modify input samples to mislead a network's classification without affecting how a human observer interprets the input. Our goal is to evaluate explainers by investigating whether they can identify the adversarially attacked regions of an image. In particular, we quantitatively and qualitatively investigate the capability of three popular explainers of classifications, namely classic saliency, guided backpropagation, and LIME, with respect to their ability to identify the attacked regions as the explanatory regions for the (incorrect) prediction in representative image classification examples. We find that LIME outperforms the other explainers.
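To make the comparison concrete, the following is a minimal sketch, not the authors' code, of how one might score an explainer in this setting: compute a classic saliency map for an attacked image and measure how much of the explanation falls inside the attacked region. The pretrained ResNet, the random stand-in image, the rectangular attack mask, and the mass-overlap score are all illustrative assumptions, since the paper's exact metric is not given here.

# A minimal sketch (assumed setup, not the paper's code): compute a classic
# saliency map for an attacked image and score how much of the explanation
# mass lies inside the known attacked region.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Stand-ins: a preprocessed "attacked" image and a boolean mask marking the
# region the attack was allowed to modify (both hypothetical placeholders).
x = torch.randn(1, 3, 224, 224, requires_grad=True)
attack_mask = torch.zeros(224, 224, dtype=torch.bool)
attack_mask[80:140, 80:140] = True

# Classic saliency: gradient of the predicted class score with respect to
# the input, reduced over color channels via the maximum absolute value.
logits = model(x)
pred = logits.argmax(dim=1).item()
logits[0, pred].backward()
saliency = x.grad.abs().max(dim=1).values.squeeze(0)  # shape (224, 224)

# One plausible localization score: the fraction of total saliency mass
# that falls inside the attacked region (higher = better localized).
score = (saliency[attack_mask].sum() / saliency.sum()).item()
print(f"saliency mass inside attacked region: {score:.3f}")

Swapping in guided backpropagation would change only how the saliency map is produced; LIME instead perturbs superpixels and fits a local surrogate model, so its explanation is a set of highlighted image segments rather than a pixel-wise gradient map.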

Related research

02/25/2019  Adversarial attacks hidden in plain sight
Convolutional neural networks have been used to achieve a string of succ...

01/08/2020  Explainable Deep Convolutional Candlestick Learner
Candlesticks are graphical representations of price movements for a give...

12/05/2019  Scratch that! An Evolution-based Adversarial Attack against Neural Networks
Recent research has shown that Deep Neural Networks (DNNs) for image cla...

02/04/2022  Pixle: a fast and effective black-box attack based on rearranging pixels
Recent research has found that neural networks are vulnerable to several...

06/08/2023  Degraded Polygons Raise Fundamental Questions of Neural Network Perception
It is well-known that modern computer vision systems often exhibit behav...

03/06/2020  Explaining Away Attacks Against Neural Networks
We investigate the problem of identifying adversarial attacks on image-b...

02/15/2022  Random Walks for Adversarial Meshes
A polygonal mesh is the most-commonly used representation of surfaces in...
