Effective and Efficient Vote Attack on Capsule Networks

02/19/2021
by Jindong Gu, et al.

Standard Convolutional Neural Networks (CNNs) can be easily fooled by images with small, quasi-imperceptible artificial perturbations. As alternatives to CNNs, the recently proposed Capsule Networks (CapsNets) have been shown to be more robust to white-box attacks than CNNs under popular attack protocols. In addition, the class-conditional reconstruction part of CapsNets is also used to detect adversarial examples. In this work, we investigate the adversarial robustness of CapsNets, especially how the inner workings of CapsNets change when the output capsules are attacked. The first observation is that adversarial examples mislead CapsNets by manipulating the votes from primary capsules. The second observation is that directly applying multi-step attack methods designed for CNNs to CapsNets incurs a high computational cost, due to the computationally expensive routing mechanism. Motivated by these two observations, we propose a novel vote attack in which we attack the votes of CapsNets directly. Our vote attack is not only effective but also efficient, since it circumvents the routing process. Furthermore, we integrate our vote attack into the detection-aware attack paradigm, which can successfully bypass the class-conditional reconstruction based detection method. Extensive experiments demonstrate the superior attack performance of our vote attack on CapsNets.
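The core idea described above can be illustrated with a toy sketch: perturb the input so that the primary-capsule votes for the true class collapse, without ever running the routing iterations. Everything below (the linear primary-capsule stage, the shapes, and the single FGSM-style step) is an illustrative assumption for exposition, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "primary capsule" stage: a linear map from the input to votes.
# Shapes and names are illustrative assumptions, not the paper's code.
n_in, n_primary, n_classes, dim = 16, 8, 3, 4
W = rng.normal(size=(n_primary * n_classes * dim, n_in)) / np.sqrt(n_in)

def votes(x):
    """Votes u_{j|i}: what each primary capsule i predicts for class capsule j."""
    return (W @ x).reshape(n_primary, n_classes, dim)

def class_scores(x):
    # Stand-in for routing: average the votes per class and use the
    # length of the resulting capsule vector as the class score.
    s = votes(x).mean(axis=0)            # (n_classes, dim)
    return np.linalg.norm(s, axis=1)     # capsule lengths as class scores

x = rng.normal(size=n_in)
y = int(np.argmax(class_scores(x)))      # "ground-truth" class of this input

# Vote attack sketch: push the true class's averaged vote toward zero.
# Because the toy model is linear, the gradient of ||mean vote_y||^2 w.r.t. x
# is available in closed form -- no routing iterations are needed, which is
# what makes attacking votes cheaper than attacking the routed output.
Wy = W.reshape(n_primary, n_classes, dim, n_in)[:, y].mean(axis=0)  # (dim, n_in)
grad = 2 * Wy.T @ (Wy @ x)               # d/dx ||Wy x||^2
eps = 0.1
x_adv = x - eps * np.sign(grad)          # one FGSM-style step on the votes

print("true-class score before/after:",
      class_scores(x)[y], class_scores(x_adv)[y])
```

In a real CapsNet the vote tensor is produced by convolutional primary capsules and learned transformation matrices, and the gradient would come from automatic differentiation; the point of the sketch is only that the attack loss is defined on the votes themselves, so the (iterative, expensive) routing step never appears in the attack's backward pass.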
