Disentangled Deep Autoencoding Regularization for Robust Image Classification

02/27/2019
by   Zhenyu Duan, et al.

Despite achieving revolutionary successes in machine learning, deep convolutional neural networks have recently been found to be vulnerable to adversarial attacks and to generalize poorly to novel test images under reasonably large geometric transformations. Inspired by a recent neuroscience discovery revealing that the primate brain employs disentangled shape and appearance representations for object recognition, we propose a general disentangled deep autoencoding regularization framework that can easily be applied to any deep-embedding-based classification model to improve the robustness of deep neural networks. Our framework learns disentangled appearance and geometric codes for robust image classification; it is the first disentangling-based defense against adversarial attacks, and it is complementary to standard defense methods. Extensive experiments on several benchmark datasets show that our regularization framework, by leveraging disentangled embeddings, significantly outperforms unregularized convolutional neural networks in both robustness against adversarial attacks and generalization to novel test data.
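The abstract describes the framework only at a high level: a shared encoder whose embedding is split into an appearance code and a geometric code, with an autoencoding (reconstruction) term added to the classification loss as a regularizer. The following is a minimal, hypothetical PyTorch sketch of that idea; the architecture, the code dimension, the choice to classify from the appearance code alone, and the weight `lam` are all illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of a disentangled autoencoding regularizer:
# the encoder's embedding is split into an "appearance" code and a
# "geometric" code, a decoder reconstructs the input from both, and
# the reconstruction loss is added to the usual classification loss.
# All layer sizes are illustrative (assumes 1x28x28 inputs).
import torch
import torch.nn as nn

class DisentangledAEClassifier(nn.Module):
    def __init__(self, num_classes=10, code_dim=32):
        super().__init__()
        # Shared convolutional encoder producing both codes at once.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, 2 * code_dim),
        )
        # Decoder reconstructs the image from the concatenated codes.
        self.decoder = nn.Sequential(
            nn.Linear(2 * code_dim, 32 * 7 * 7),
            nn.Unflatten(1, (32, 7, 7)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        # Classification head reads only the appearance code
        # (an assumption about how the codes are used).
        self.classifier = nn.Linear(code_dim, num_classes)

    def forward(self, x):
        z = self.encoder(x)
        z_app, z_geo = z.chunk(2, dim=1)  # split into the two codes
        logits = self.classifier(z_app)
        recon = self.decoder(torch.cat([z_app, z_geo], dim=1))
        return logits, recon

def loss_fn(logits, recon, x, y, lam=0.1):
    # Cross-entropy plus a weighted reconstruction regularizer;
    # the weight lam is an assumed hyperparameter.
    return (nn.functional.cross_entropy(logits, y)
            + lam * nn.functional.mse_loss(recon, x))
```

Training then proceeds as with any classifier, except that both outputs feed the combined loss, so the embedding is pressured to retain enough appearance and geometric information to reconstruct the input rather than only class-discriminative features.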

Related research

- TensorShield: Tensor-based Defense Against Adversarial Attacks on Images (02/18/2020). Recent studies have demonstrated that machine learning approaches like d...
- Attending Category Disentangled Global Context for Image Classification (12/17/2018). In this paper, we propose a general framework for image classification u...
- Immuno-mimetic Deep Neural Networks (Immuno-Net) (06/27/2021). Biomimetics has played a key role in the evolution of artificial neural ...
- Large Neural Networks Learning from Scratch with Very Few Data and without Regularization (05/18/2022). Recent findings have shown that Neural Networks generalize also in over-...
- Improving the Accuracy and Robustness of CNNs Using a Deep CCA Neural Data Regularizer (09/06/2022). As convolutional neural networks (CNNs) become more accurate at object r...
- Isometric Representations in Neural Networks Improve Robustness (11/02/2022). Artificial and biological agents cannot learn given completely random an...
- Disentangled Text Representation Learning with Information-Theoretic Perspective for Adversarial Robustness (10/26/2022). Adversarial vulnerability remains a major obstacle to constructing relia...
