Rethinking Uncertainty in Deep Learning: Whether and How it Improves Robustness

11/27/2020
by Yilun Jin, et al.

Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks, and many remedies have been proposed. While adversarial training (AT) is regarded as the most robust defense, it suffers from degraded performance both on clean examples and under other types of attacks, e.g., attacks with larger perturbations. Meanwhile, regularizers that encourage uncertain outputs, such as entropy maximization (EntM) and label smoothing (LS), can maintain accuracy on clean examples and improve performance under weak attacks, yet their ability to defend against strong attacks remains in doubt. In this paper, we revisit uncertainty promotion regularizers, including EntM and LS, in the context of adversarial learning. We show that EntM and LS alone provide robustness only under small perturbations. In contrast, we show that uncertainty promotion regularizers complement AT in a principled manner, consistently improving performance both on clean examples and under various attacks, especially attacks with large perturbations. We further analyze how uncertainty promotion regularizers enhance the performance of AT from the perspective of the Jacobian matrices ∇_X f(X;θ), and find that EntM effectively shrinks the norm of the Jacobian matrices and hence promotes robustness.



Related research

- Adversarial Feature Desensitization (06/08/2020)
- Enhancing Adversarial Robustness via Test-time Transformation Ensembling (07/29/2021)
- Improved and Interpretable Defense to Transferred Adversarial Examples by Jacobian Norm with Selective Input Gradient Regularization (07/09/2022)
- Beneficial Perturbations Network for Defending Adversarial Examples (09/27/2020)
- Predicting on the Edge: Identifying Where a Larger Model Does Better (02/15/2022)
- Robust Trajectory Prediction against Adversarial Attacks (07/29/2022)
- Contextual Fusion For Adversarial Robustness (11/18/2020)
