Learning from Attacks: Attacking Variational Autoencoder for Improving Image Classification

03/11/2022
by Jianzhang Zheng, et al.

Adversarial attacks are often considered threats to the robustness of Deep Neural Networks (DNNs), and various defense techniques have been developed to mitigate their potential negative impact on task predictions. This work analyzes adversarial attacks from a different perspective: adversarial examples contain implicit information that is useful for prediction tasks such as image classification, and the adversarial perturbations crafted against DNNs trained for data self-expression can be treated as extracted abstract representations that facilitate specific learning tasks. We propose an algorithmic framework that leverages the strengths of DNNs for both data self-expression and task-specific prediction to improve image classification. The framework jointly learns a DNN for attacking Variational Autoencoder (VAE) networks and a DNN for classification, coined Attacking VAE for Improved Classification (AVIC). Experimental results show that AVIC achieves higher accuracy on standard datasets than training with clean examples or traditional adversarial training.
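The abstract describes the AVIC pipeline only at a high level. Below is a minimal, hypothetical PyTorch sketch of the core idea: perturb inputs so as to increase a VAE's reconstruction/ELBO loss, then train a classifier on those perturbed examples. Note that the paper jointly learns a dedicated attacker network together with the classifier; this sketch substitutes a simple PGD-style attack against a small, fixed VAE, and every architecture, loss choice, and hyperparameter here is an illustrative assumption rather than the authors' configuration.

# Hypothetical sketch of the AVIC idea: craft perturbations against a VAE's
# reconstruction objective, then train a classifier on the perturbed inputs.
# All choices below (architectures, losses, attack budget) are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallVAE(nn.Module):
    """Toy VAE for 28x28 grayscale images (assumed architecture)."""
    def __init__(self, latent_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 784), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        recon = self.dec(z).view_as(x)
        return recon, mu, logvar


def vae_loss(recon, x, mu, logvar):
    # Standard ELBO-style objective: reconstruction error plus KL term.
    rec = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl


def attack_vae(vae, x, eps=0.1, alpha=0.02, steps=5):
    """PGD-style perturbation that increases the VAE loss (assumed attack;
    the paper instead trains an attacker network)."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        recon, mu, logvar = vae(x_adv)
        loss = vae_loss(recon, x_adv, mu, logvar)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x.clone() + (x_adv - x).clamp(-eps, eps)
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()


def train_step(vae, clf, optimizer, x, y):
    """One AVIC-style step: classify the VAE-attacked examples."""
    x_adv = attack_vae(vae, x)
    logits = clf(x_adv)
    loss = F.cross_entropy(logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    vae = SmallVAE()  # assumed pretrained and kept fixed in this sketch
    clf = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(),
                        nn.Linear(128, 10))
    opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
    x = torch.rand(8, 1, 28, 28)          # dummy batch standing in for real images
    y = torch.randint(0, 10, (8,))
    print("loss:", train_step(vae, clf, opt, x, y))

In this sketch only the classifier is updated and the attack is a fixed gradient procedure; in the framework described by the abstract, the attacking network and the classifier are learned jointly.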


Related research

research · 05/27/2017 · A Multi-strength Adversarial Training Method to Mitigate Adversarial Attacks
Some recent works revealed that deep neural networks (DNNs) are vulnerab...

research · 09/10/2018 · Classification by Re-generation: Towards Classification Based on Variational Inference
As Deep Neural Networks (DNNs) are considered the state-of-the-art in ma...

research · 06/19/2019 · Global Adversarial Attacks for Assessing Deep Learning Robustness
It has been shown that deep neural networks (DNNs) may be vulnerable to ...

research · 05/20/2022 · Robust Sensible Adversarial Learning of Deep Neural Networks for Image Classification
The idea of robustness is central and critical to modern statistical ana...

research · 05/31/2018 · Resisting Adversarial Attacks using Gaussian Mixture Variational Autoencoders
Susceptibility of deep neural networks to adversarial attacks poses a ma...

research · 12/07/2018 · Adversarial Defense of Image Classification Using a Variational Auto-Encoder
Deep neural networks are known to be vulnerable to adversarial attacks. ...

research · 08/31/2023 · Adversarial Finetuning with Latent Representation Constraint to Mitigate Accuracy-Robustness Tradeoff
This paper addresses the tradeoff between standard accuracy on clean exa...
