Towards Robust Stacked Capsule Autoencoder with Hybrid Adversarial Training

02/28/2022
by   Jiazhu Dai, et al.

Capsule networks (CapsNets) are neural networks that classify images based on the spatial relationships of their features. By analyzing the poses of features and their relative positions, they are better able to recognize images after affine transformations. The stacked capsule autoencoder (SCAE) is a state-of-the-art CapsNet and the first to achieve unsupervised classification with CapsNets. However, the security vulnerabilities and robustness of the SCAE have rarely been explored. In this paper, we propose an evasion attack against the SCAE, in which the attacker generates adversarial perturbations by reducing the contribution of the object capsules in the SCAE that are related to the original category of the image. The perturbations are then applied to the original images, and the perturbed images are misclassified. Furthermore, we propose a defense method called Hybrid Adversarial Training (HAT) against such evasion attacks. HAT combines adversarial training and adversarial distillation to achieve better robustness and stability. We evaluate the defense method, and the experimental results show that the refined SCAE model can achieve 82.14% classification accuracy. The source code is available at https://github.com/FrostbiteXSW/SCAE_Defense.
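The two ideas in the abstract — perturbing an image to suppress the object capsules tied to its true class, and a hybrid loss mixing adversarial training with distillation — might be sketched, very loosely, as follows. The linear "capsule presence" model, the chosen capsule indices, and the loss weights and temperature are all illustrative assumptions, not the paper's actual SCAE attack or HAT implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for SCAE object-capsule presences: a linear map + sigmoid.
# (hypothetical; the real SCAE encoder is far more complex)
W = rng.normal(size=(10, 64))        # 10 object capsules, 64-dim "image"
x = rng.normal(size=64)              # original (flattened) image

def capsule_presence(x):
    """Presence probability of each object capsule."""
    return 1.0 / (1.0 + np.exp(-W @ x))

def evasion_attack(x, true_caps, eps=0.05):
    """One FGSM-style step that reduces the summed presence of the
    capsules assumed to be tied to the image's original category."""
    p = capsule_presence(x)
    # d p_k / d x = p_k (1 - p_k) W[k] for a sigmoid of a linear map
    grad = sum(p[k] * (1 - p[k]) * W[k] for k in true_caps)
    return x - eps * np.sign(grad)   # descend on the true-class contribution

def softmax(z, T=1.0):
    e = np.exp(z / T - np.max(z / T))
    return e / e.sum()

def hat_loss(student_logits, teacher_logits, label, T=2.0, alpha=0.5):
    """Hybrid objective (sketch): hard-label cross-entropy on the adversarial
    example plus a distillation KL term toward a teacher's soft targets.
    alpha and T are illustrative, not values from the paper."""
    ce = -np.log(softmax(student_logits)[label])
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s)))
    return alpha * ce + (1 - alpha) * (T ** 2) * kl

true_caps = [3, 7]                   # capsules assumed tied to the label
x_adv = evasion_attack(x, true_caps)
```

After the step, the targeted capsules' summed presence drops relative to the clean image, which is the mechanism the attack relies on; HAT then trains on such perturbed inputs with the combined loss.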


