Disentangling Improves VAEs' Robustness to Adversarial Attacks

06/01/2019
by Matthew Willetts, et al.

This paper is concerned with the robustness of VAEs to adversarial attacks. We highlight that conventional VAEs are brittle under attack, but that methods recently introduced for disentanglement, such as β-TCVAE (Chen et al., 2018), improve robustness, as demonstrated through a variety of previously proposed adversarial attacks (Tabacof et al., 2016; Gondim-Ribeiro et al., 2018; Kos et al., 2018). This motivated us to develop Seatbelt-VAE, a new hierarchical disentangled VAE that is designed to be significantly more robust to adversarial attacks than existing approaches, while retaining high-quality reconstructions.
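The attacks referenced above (e.g. Tabacof et al., 2016) work by perturbing an input so that the VAE's encoder maps it close to the latent code of an attacker-chosen target, which the decoder then reconstructs as the target. The following is a minimal, hypothetical sketch of that latent-space attack idea using a stand-in linear encoder; the encoder, weights, and hyperparameters here are illustrative assumptions, not the paper's actual models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a VAE encoder mean: z = W @ x (linear, for clarity only).
W = rng.normal(size=(4, 16))

def encode(x):
    return W @ x

def latent_attack(x, z_target, lam=0.1, lr=0.01, steps=200):
    """Latent-space attack sketch: find a small perturbation d such that
    encode(x + d) approaches z_target, with an L2 penalty on d.
    Objective: ||encode(x + d) - z_target||^2 + lam * ||d||^2."""
    d = np.zeros_like(x)
    for _ in range(steps):
        resid = encode(x + d) - z_target       # error in latent space
        grad = 2 * W.T @ resid + 2 * lam * d   # exact gradient of the objective
        d -= lr * grad                         # plain gradient descent step
    return d

x = rng.normal(size=16)           # input being attacked
x_target = rng.normal(size=16)    # attacker's target input
z_target = encode(x_target)       # ... and its latent code

d = latent_attack(x, z_target)
# After the attack, the perturbed input's latent code is far closer to the target's.
print(np.linalg.norm(encode(x) - z_target),
      np.linalg.norm(encode(x + d) - z_target))
```

In a real attack the gradient would be taken through a deep encoder with automatic differentiation; the robustness question the paper studies is how much such a perturbation must distort the input before the latent code (and hence the reconstruction) matches the target.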


