
Adversarial attacks hidden in plain sight

by Jan Philip Göpfert et al.

Convolutional neural networks have achieved a string of successes in recent years, but their lack of interpretability remains a serious issue. Adversarial examples are inputs designed to deliberately fool neural networks into making any desired incorrect classification, potentially with very high certainty. We underline the severity of the issue by presenting a technique that hides such adversarial attacks in image regions of high visual complexity, making them imperceptible even to an astute observer.
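The paper's exact construction is not reproduced on this page, but the core idea of confining a perturbation to visually complex regions can be sketched. The snippet below is a minimal illustration, assuming a PyTorch classifier `model`, an input `image` of shape (1, C, H, W) with values in [0, 1], and local standard deviation as a stand-in for the paper's complexity measure; the names `local_complexity` and `masked_pgd` are hypothetical, and the masked PGD attack is one plausible realization, not necessarily the authors' method.

```python
import torch
import torch.nn.functional as F

def local_complexity(image, window=7):
    """Per-pixel local standard deviation as a rough proxy for visual
    complexity. image: (1, C, H, W) tensor with values in [0, 1]."""
    pad = window // 2
    kernel = torch.ones(1, 1, window, window, device=image.device) / window**2
    gray = image.mean(dim=1, keepdim=True)                     # (1, 1, H, W)
    mean = F.conv2d(F.pad(gray, [pad] * 4, mode="reflect"), kernel)
    sq_mean = F.conv2d(F.pad(gray**2, [pad] * 4, mode="reflect"), kernel)
    std = (sq_mean - mean**2).clamp_min(0).sqrt()
    return std / (std.max() + 1e-8)                            # normalize to [0, 1]

def masked_pgd(model, image, label, eps=8 / 255, alpha=2 / 255, steps=20):
    """PGD-style attack whose per-pixel budget is scaled by the complexity
    map, so the perturbation concentrates in textured regions where it is
    hard to perceive, and vanishes in smooth regions where it would stand out."""
    mask = local_complexity(image)                             # (1, 1, H, W)
    budget = eps * mask                                        # per-pixel epsilon
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), label)
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv + alpha * mask * grad.sign()             # masked ascent step
            delta = torch.max(torch.min(adv - image, budget), -budget)
            adv = (image + delta).clamp(0, 1)                  # project to budget
    return adv.detach()
```

Under these assumptions, `masked_pgd(model, x, y)` returns an adversarial version of `x` whose perturbation is restricted, pixel by pixel, to regions the complexity map marks as textured.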



