
Disentangled Deep Autoencoding Regularization for Robust Image Classification

by Zhenyu Duan, et al.

Despite their revolutionary successes in machine learning, deep convolutional neural networks have recently been found to be vulnerable to adversarial attacks and to generalize poorly to novel test images with reasonably large geometric transformations. Inspired by a recent neuroscience discovery revealing that the primate brain employs disentangled shape and appearance representations for object recognition, we propose a general disentangled deep autoencoding regularization framework that can be easily applied to any deep-embedding-based classification model to improve the robustness of deep neural networks. Our framework learns a disentangled appearance code and geometric code for robust image classification; it is the first disentangling-based defense against adversarial attacks and is complementary to standard defense methods. Extensive experiments on several benchmark datasets show that our regularization framework, leveraging the disentangled embedding, significantly outperforms traditional unregularized convolutional neural networks in both robustness against adversarial attacks and generalization to novel test data.
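The core idea of the abstract — a classification loss augmented by an autoencoding regularizer over an embedding split into appearance and geometric codes — can be illustrated with a toy sketch. This is not the authors' implementation; the linear encoder/decoder, the dimensions, the weight `lam`, and all names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative assumptions, not from the paper).
d_in, d_app, d_geo, n_classes = 16, 4, 4, 3

# Hypothetical linear encoder, decoder, and classifier weights.
W_enc = rng.normal(size=(d_in, d_app + d_geo)) * 0.1
W_dec = rng.normal(size=(d_app + d_geo, d_in)) * 0.1
W_cls = rng.normal(size=(d_app + d_geo, n_classes)) * 0.1

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def regularized_loss(x, y, lam=0.5):
    """Cross-entropy classification loss plus an autoencoding
    regularizer on an embedding z = [appearance | geometry]."""
    z = x @ W_enc                               # shared embedding
    z_app, z_geo = z[:, :d_app], z[:, d_app:]   # disentangled codes
    logits = np.concatenate([z_app, z_geo], axis=1) @ W_cls
    p = softmax(logits)
    ce = -np.log(p[np.arange(len(y)), y] + 1e-12).mean()
    x_rec = z @ W_dec                           # reconstruct input from both codes
    rec = ((x - x_rec) ** 2).mean()
    return ce + lam * rec                       # regularized objective

x = rng.normal(size=(8, d_in))
y = rng.integers(0, n_classes, size=8)
loss = regularized_loss(x, y)
print(loss > 0.0)
```

In a real model the encoder and classifier would be a deep CNN trained end to end, with the reconstruction term acting as the regularizer that encourages the embedding to retain disentangled appearance and geometric information.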


TensorShield: Tensor-based Defense Against Adversarial Attacks on Images

Recent studies have demonstrated that machine learning approaches like d...

Attending Category Disentangled Global Context for Image Classification

In this paper, we propose a general framework for image classification u...

Immuno-mimetic Deep Neural Networks (Immuno-Net)

Biomimetics has played a key role in the evolution of artificial neural ...

Large Neural Networks Learning from Scratch with Very Few Data and without Regularization

Recent findings have shown that Neural Networks generalize also in over-...

Improving the Accuracy and Robustness of CNNs Using a Deep CCA Neural Data Regularizer

As convolutional neural networks (CNNs) become more accurate at object r...

Isometric Representations in Neural Networks Improve Robustness

Artificial and biological agents cannot learn given completely random an...

Disentangled Text Representation Learning with Information-Theoretic Perspective for Adversarial Robustness

Adversarial vulnerability remains a major obstacle to constructing relia...