Adversarial attacks hidden in plain sight

02/25/2019
by Jan Philip Göpfert, et al.

Convolutional neural networks have achieved a string of successes in recent years, but their lack of interpretability remains a serious issue. Adversarial examples are designed to deliberately fool neural networks into making any desired incorrect classification, potentially with very high certainty. We underline the severity of the issue by presenting a technique that allows such adversarial attacks to be hidden in regions of high complexity, such that they are imperceptible even to an astute observer.
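The abstract does not spell out how the high-complexity regions are identified or how the perturbation is constrained, so the sketch below is only a rough, hedged illustration of the general idea, not the authors' method: a standard targeted gradient-sign attack whose per-pixel step size is scaled by a local-variance "complexity" map, so that larger changes land in busy, textured regions. The model, image tensor, target labels, and all parameter values are assumptions made for the example.

# Hedged sketch (not the authors' method): targeted gradient-sign attack whose
# per-pixel step size is scaled by a local-variance "complexity" map, so that
# the perturbation concentrates in textured regions of the image.
# `model`, `image`, `target_class`, and all parameter values are assumptions.
import torch
import torch.nn.functional as F


def complexity_map(image, window=7):
    # Local standard deviation per pixel, averaged over channels, rescaled to [0, 1].
    pad = window // 2
    mean = F.avg_pool2d(image, window, stride=1, padding=pad)
    mean_sq = F.avg_pool2d(image ** 2, window, stride=1, padding=pad)
    std = (mean_sq - mean ** 2).clamp(min=0).sqrt().mean(dim=1, keepdim=True)
    return std / (std.max() + 1e-12)


def masked_targeted_attack(model, image, target_class, eps=8 / 255, steps=10):
    # image: (N, C, H, W) tensor in [0, 1]; target_class: (N,) long tensor.
    mask = complexity_map(image)                      # high where the image is complex
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), target_class)
        grad, = torch.autograd.grad(loss, adv)
        # Step toward the target class; the mask shrinks changes in smooth regions.
        adv = adv.detach() - (2.0 * eps / steps) * mask * grad.sign()
        adv = image + (adv - image).clamp(-eps, eps)  # keep the attack within an eps-ball
        adv = adv.clamp(0.0, 1.0).detach()
    return adv

The local-variance map is only one plausible stand-in for "complexity"; an edge-density or frequency-domain measure would serve the same illustrative purpose.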


Related research:

06/09/2019  On the Vulnerability of Capsule Networks to Adversarial Attacks
10/21/2019  Recovering Localized Adversarial Attacks
06/26/2019  Defending Adversarial Attacks by Correcting logits
04/27/2020  Adversarial Fooling Beyond "Flipping the Label"
08/24/2022  Trace and Detect Adversarial Attacks on CNNs using Feature Response Maps
12/18/2019  Adversarial VC-dimension and Sample Complexity of Neural Networks
