Graph Interpolating Activation Improves Both Natural and Robust Accuracies in Data-Efficient Deep Learning

07/16/2019
by Bao Wang, et al.

Improving the accuracy and robustness of deep neural nets (DNNs) and adapting them to small training data are primary tasks in deep learning research. In this paper, we replace the output activation function of DNNs, typically the data-agnostic softmax function, with a graph Laplacian-based high-dimensional interpolating function which, in the continuum limit, converges to the solution of a Laplace-Beltrami equation on a high-dimensional manifold. Furthermore, we propose end-to-end training and testing algorithms for this new architecture. The proposed DNN with graph interpolating activation integrates the advantages of both deep learning and manifold learning. Compared with conventional DNNs that use the softmax function as the output activation, the new framework offers three major advantages. First, it is better suited to data-efficient learning, in which high-capacity DNNs are trained without a large amount of training data. Second, it markedly improves both natural accuracy on clean images and robust accuracy on adversarial images crafted by both white-box and black-box attacks. Third, it is a natural choice for semi-supervised learning. For reproducibility, the code is available at <https://github.com/BaoWangMath/DNN-DataDependentActivation>.
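
To make the replacement concrete, below is a minimal NumPy sketch of graph Laplacian-based label interpolation on deep features: one-hot training labels are extended harmonically over a similarity graph built from the feature vectors, and the resulting soft scores play the role that softmax logits would otherwise play at the output layer. This is plain Laplace (harmonic) interpolation with an assumed dense Gaussian affinity; the function name, the bandwidth sigma, and the dense linear solve are illustrative choices, not the exact weighted nonlocal Laplacian activation or end-to-end training procedure of the paper.

```python
# Minimal sketch of graph Laplacian label interpolation on feature vectors.
# This is plain harmonic (Laplace) interpolation, not the exact graph
# interpolating activation from the paper; the Gaussian kernel, sigma, and
# all names below are illustrative assumptions.
import numpy as np

def graph_laplacian_interpolation(feat_labeled, labels, feat_unlabeled,
                                  n_classes, sigma=1.0):
    """Interpolate one-hot labels from labeled to unlabeled feature vectors.

    feat_labeled   : (n_l, d) features of labeled (training) samples
    labels         : (n_l,)   integer class labels
    feat_unlabeled : (n_u, d) features of unlabeled (test) samples
    Returns (n_u, n_classes) soft label scores for the unlabeled points.
    """
    X = np.vstack([feat_labeled, feat_unlabeled])
    n_l = feat_labeled.shape[0]

    # Dense Gaussian affinity matrix W and unnormalized graph Laplacian L = D - W.
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W

    # One-hot labels on the labeled block.
    F_l = np.eye(n_classes)[labels]              # (n_l, n_classes)

    # Harmonic extension: solve L_uu F_u = -L_ul F_l for the unlabeled block.
    L_uu = L[n_l:, n_l:]
    L_ul = L[n_l:, :n_l]
    F_u = np.linalg.solve(L_uu, -L_ul @ F_l)
    return F_u

# Usage: predicted classes for the unlabeled points (hypothetical arrays).
# preds = graph_laplacian_interpolation(Z_train, y_train, Z_test, 10).argmax(axis=1)
```

Because the interpolation uses the labeled and unlabeled feature vectors jointly, the output layer can exploit the manifold structure of the data, which is what the abstract credits for the gains in the data-efficient and semi-supervised settings.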

Related research

09/23/2018 - Adversarial Defense via Data Dependent Activation Function and Total Variation Minimization
  We improve the robustness of deep neural nets to adversarial attacks by ...

09/16/2021 - KATANA: Simple Post-Training Robustness Using Test Time Augmentations
  Although Deep Neural Networks (DNNs) achieve excellent performance on ma...

01/08/2023 - RobArch: Designing Robust Architectures against Adversarial Attacks
  Adversarial Training is the most effective approach for improving the ro...

03/02/2020 - Sparsity Meets Robustness: Channel Pruning for the Feynman-Kac Formalism Principled Robust Deep Neural Nets
  Deep neural nets (DNNs) compression is crucial for adaptation to mobile ...

10/31/2019 - Robust and Undetectable White-Box Watermarks for Deep Neural Networks
  Training deep neural networks (DNN) is expensive in terms of computation...

10/16/2022 - Stability of Accuracy for the Training of DNNs Via the Uniform Doubling Condition
  We study the stability of accuracy for the training of deep neural netwo...

02/01/2018 - Deep Learning with Data Dependent Implicit Activation Function
  Though deep neural networks (DNNs) achieve remarkable performances in ma...
