On the Vulnerability of Capsule Networks to Adversarial Attacks

06/09/2019
by Felix Michels et al.

This paper extensively evaluates the vulnerability of capsule networks to a range of adversarial attacks. Recent work suggests that these architectures are more robust to adversarial attacks than other neural networks. However, our experiments show that capsule networks can be fooled as easily as convolutional neural networks.
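The abstract does not name the attacks it evaluates, but gradient-based methods such as the fast gradient sign method (FGSM) are the standard way to probe this kind of vulnerability. Below is a minimal PyTorch-style sketch, assuming a hypothetical classifier `model` that maps a batch of images to class logits; the function name and the epsilon value are illustrative, not taken from the paper:

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        """Fast Gradient Sign Method (Goodfellow et al., 2015).

        Perturbs input x in the direction of the sign of the loss
        gradient, bounded by epsilon in the L-infinity norm.
        """
        # Work on a leaf copy of the input so gradients flow to the pixels.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # One signed gradient step, then clamp back to the valid pixel range.
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

Because the step only needs the gradient of a scalar loss with respect to the input, the same attack applies unchanged to a capsule network whose output can be reduced to class scores, which is what makes a like-for-like comparison with convolutional networks possible.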

Related research

01/28/2019
CapsAttacks: Robust and Imperceptible Adversarial Attacks on Capsule Networks
Capsule Networks envision an innovative point of view about the represen...

02/25/2019
Adversarial attacks hidden in plain sight
Convolutional neural networks have been used to achieve a string of succ...

04/04/2020
Understanding (Non-)Robust Feature Disentanglement and the Relationship Between Low- and High-Dimensional Adversarial Attacks
Recent work has put forth the hypothesis that adversarial vulnerabilitie...

12/21/2020
Exploiting Vulnerability of Pooling in Convolutional Neural Networks by Strict Layer-Output Manipulation for Adversarial Attacks
Convolutional neural networks (CNN) have been more and more applied in m...

05/01/2020
Universal Adversarial Attacks with Natural Triggers for Text Classification
Recent work has demonstrated the vulnerability of modern text classifier...

02/23/2019
A Deep, Information-theoretic Framework for Robust Biometric Recognition
Deep neural networks (DNN) have been a de facto standard for nowadays bi...

10/04/2019
Requirements for Developing Robust Neural Networks
Validation accuracy is a necessary, but not sufficient, measure of a neu...
