An Empirical Evaluation on Robustness and Uncertainty of Regularization Methods

03/09/2020
by   Sanghyuk Chun, et al.

Despite the apparent human-level performance of deep neural networks (DNNs), they behave fundamentally differently from humans. They easily change their predictions when small corruptions such as blur and noise are applied to the input (lack of robustness), and they often produce confident predictions on out-of-distribution samples (improper uncertainty measures). While a number of studies have aimed to address these issues, the proposed solutions are typically expensive and complicated (e.g., Bayesian inference and adversarial training). Meanwhile, many simple and cheap regularization methods have been developed to enhance the generalization of classifiers. Such regularization methods have largely been overlooked as baselines for addressing the robustness and uncertainty issues, as they are not specifically designed for that purpose. In this paper, we provide extensive empirical evaluations of the robustness and uncertainty estimates of image classifiers (CIFAR-100 and ImageNet) trained with state-of-the-art regularization methods. Furthermore, experimental results show that certain regularization methods can serve as strong baselines for the robustness and uncertainty estimation of DNNs.
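To make the two failure modes named in the abstract concrete, here is a minimal NumPy sketch of how they are commonly quantified: a prediction flip rate under Gaussian input noise (robustness) and the expected calibration error (uncertainty quality). The toy linear classifier, its random weights, and all function names are illustrative stand-ins, not the paper's actual models or protocol.

```python
import numpy as np

rng = np.random.default_rng(0)


def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)


# Toy linear "classifier" standing in for a trained DNN (weights are made up).
W = rng.normal(size=(16, 3))


def predict(x):
    """Return class probabilities for a batch of inputs."""
    return softmax(x @ W)


def flip_rate(x, sigma, n_trials=100):
    """Fraction of inputs whose predicted class changes under additive
    Gaussian noise of scale sigma -- a simple corruption-robustness probe."""
    clean = predict(x).argmax(-1)
    flips = 0.0
    for _ in range(n_trials):
        noisy = predict(x + rng.normal(scale=sigma, size=x.shape)).argmax(-1)
        flips += (noisy != clean).mean()
    return flips / n_trials


def expected_calibration_error(probs, labels, n_bins=10):
    """Average gap between confidence and accuracy over confidence bins:
    a well-calibrated model has ECE near 0."""
    conf = probs.max(-1)
    correct = (probs.argmax(-1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(conf[mask].mean() - correct[mask].mean())
    return ece
```

With metrics like these, a regularizer can be compared against expensive alternatives (e.g., Bayesian inference) simply by training with and without it and checking whether the flip rate and ECE drop.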


