Recent studies have shown that the inner mechanisms of DNNs differ from those of humans. For example, DNNs are easily fooled by human-imperceptible adversarial perturbations (adversarial robustness (Szegedy et al., 2013; Goodfellow et al., 2015)) and by semantics-preserving transformations such as noising, blurring, and texture corruption (natural robustness (Geirhos et al., 2018b; Hendrycks and Dietterich, 2019; Geirhos et al., 2018a)). Another limitation of DNNs is their inability to produce sound uncertainty estimates for their predictions: they are known to be inept at producing well-calibrated predictive uncertainties (known unknowns) and at detecting out-of-distribution (OOD) samples (unknown unknowns) (Hendrycks and Gimpel, 2017).
For adversarial robustness, it has been shown that augmenting adversarial perturbations during training, known as adversarial training, makes a model more adversarially robust (Kurakin et al., 2016; Madry et al., 2017; Xie et al., 2018). However, it is computationally challenging to employ on large-scale datasets (Kurakin et al., 2016; Xie et al., 2018). Moreover, adversarially trained models overfit to the specific attack type used for training (Sharma and Chen, 2017), and their performance on unperturbed images drops (Tsipras et al., 2019). On the other hand, methods that improve robustness to non-adversarial corruptions are relatively less studied. It has recently been shown that training models with a specific augmented noise type enhances performance on that noise but does not generalize to other, unseen noise types (Geirhos et al., 2018b). The ImageNet-C dataset (Hendrycks and Dietterich, 2019) was proposed to evaluate robustness to corruption types including blur and noise, under the constraint that a network does not observe the distortions at training time. Its authors showed that natural robustness is improved via adversarial training (Kannan et al., 2018) and Stylized-ImageNet augmentation (Geirhos et al., 2018a), but they did not consider more common and simpler regularization types; we provide those baseline experiments in this paper.
Efforts to improve uncertainty estimates of DNNs have followed two distinguishable paths: improving the calibration of predictive uncertainty and out-of-distribution (OOD) sample detection. On the predictive uncertainty side, variants of Bayesian neural networks (Gal and Ghahramani, 2016; Kendall and Gal, 2017) and ensemble methods (Lakshminarayanan et al., 2017) have mainly been proposed. These approaches, however, are expensive and often require modifications to the training and inference stages. On the OOD detection front, methods including threshold-based binary classifiers (Hendrycks and Gimpel, 2017) and real or GAN-generated OOD sample augmentation (Lee et al., 2018) have brought about improvements in OOD detection. These approaches have demonstrated sub-optimal performance in our experiments, even compared to simple baselines.
As an independent line of research, many regularization techniques have been proposed to improve the generalization of DNN classifiers. For example, Batch Normalization (BN) (Ioffe and Szegedy, 2015) and data augmentation strategies such as random crop and random flip (Krizhevsky et al., 2012; Szegedy et al., 2016a) have become standard design choices for deep models. Despite their simplicity and efficiency, the effects of state-of-the-art regularization techniques such as label smoothing (Szegedy et al., 2016b), Mixup (Zhang et al., 2017), and CutMix (Yun et al., 2019) on the robustness and uncertainty of deep models are still rarely investigated. A few works have indeed examined the effects of certain regularization techniques on DNN robustness (Zhang et al., 2017; Kannan et al., 2018; Yun et al., 2019), but we provide a more extensive analysis from both the robustness and uncertainty perspectives.
We empirically evaluate state-of-the-art regularization techniques and show that they improve classification, robustness, and uncertainty estimates for large-scale classifiers at marginal additional cost. We argue that certain regularization techniques should be considered strong baselines for future research on the robustness and uncertainty of DNNs.
| Method | Clean Top-1 Err. | FGSM Top-1 Err. | CIFAR-100-C Top-1 Err. | Occlusion Top-1 Err. | Calibration Err. | Detection Err. |
|---|---|---|---|---|---|---|
| Cutout + ShakeDrop | 15.91 | 88.66 | 50.00 | 26.19 | 6.63 | 19.55 |
| Mixup + ShakeDrop | 14.91 | 61.91 | 40.60 | 57.07 | 7.28 | 22.92 |
| CutMix + ShakeDrop | 13.81 | 70.75 | 43.36 | 35.83 | 2.46 | 19.82 |
| Adversarial Logit Pairing | | | | | | |
| w/o Random Crop & Flip | 21.83 | 90.63 | 48.71 | 77.46 | 7.99 | 26.91 |
| Add Gaussian Noise | 19.49 | 85.08 | 42.01 | 73.23 | 9.79 | 25.16 |
| OOD augment (SVHN) | 38.80 | 97.35 | 67.03 | 79.13 | 46.37 | 43.53 |
| OOD augment (GAN) | 34.78 | 94.65 | 57.09 | 85.30 | 38.22 | 33.35 |
2 Revisiting Regularization Methods
In this section, we revisit several regularization methods including the state-of-the-art regularization methods used in our experiments.
With proper data augmentation, a model can generalize better to unseen samples. For example, random cropping and flipping are widely used to improve classification performance (Krizhevsky et al., 2012; Szegedy et al., 2016a; Huang et al., 2017). However, it is not always straightforward to identify which augmentation types improve generalization. For example, adversarial samples, geometric transformations, and pixel inversion are rarely helpful for improving classification performance (Tsipras et al., 2019; Cubuk et al., 2018). One of the most effective augmentation methods is Mixup (Zhang et al., 2017), which generates in-between-class samples by pixel-level interpolation. Another example is Cutout, which erases the pixels in a randomly sampled region (DeVries and Taylor, 2017; Zhong et al., 2017). The recently proposed CutMix fills the region with pixels from another image instead of erasing them (Yun et al., 2019). While simple and efficient, Mixup, Cutout, and CutMix have shown significant improvements in classification performance. We examine their contribution to robustness and uncertainty estimates in our experiments.
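The mixing operations above can be sketched in a few lines of NumPy. This is a minimal sketch, not the papers' reference implementations; the Beta(α, α) sampling with α = 1.0 is an assumed default.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0, rng=np.random.default_rng(0)):
    """Mixup: pixel-level interpolation of two images and their one-hot labels."""
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1 - lam) * x2
    y = lam * y1 + (1 - lam) * y2
    return x, y

def cutmix(x1, y1, x2, y2, alpha=1.0, rng=np.random.default_rng(0)):
    """CutMix: paste a random box from image 2 into image 1; mix labels by area."""
    h, w = x1.shape[:2]
    lam = rng.beta(alpha, alpha)
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = rng.integers(h), rng.integers(w)
    top, bot = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    left, right = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
    x = x1.copy()
    x[top:bot, left:right] = x2[top:bot, left:right]
    lam_adj = 1 - (bot - top) * (right - left) / (h * w)  # fraction of x1 kept
    y = lam_adj * y1 + (1 - lam_adj) * y2
    return x, y
```

Note that CutMix recomputes the mixing ratio from the actual (clipped) box area, so the label weights match the pixel contributions.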
Deep models often suffer from over-confident predictions; they can produce high-confidence predictions even on random Gaussian noise inputs (Hendrycks and Gimpel, 2017). One straightforward way to mitigate the issue is to penalize over-confident predictions by perturbing the target label. For example, label smoothing (Szegedy et al., 2016b) replaces the one-hot ground-truth label with a smoothed distribution that assigns probability 1 − ε to the target class and ε/(K − 1) to each non-target class, where ε is a smoothing parameter (commonly set to 0.1) and K is the number of classes. By smoothing target predictions, models learn to avoid overconfident predictions. Other examples are Mixup (Zhang et al., 2017) and CutMix (Yun et al., 2019), which blend two one-hot labels into one smooth label according to the mix ratio. Label smoothing is also known to offer a modest amount of robustness to adversarial perturbations (Kannan et al., 2018) and is thus widely used in adversarial training to achieve better adversarial robustness. We consider label smoothing as one of the axes of our investigation.
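A minimal sketch of the smoothing operation described above (ε = 0.1 is the commonly used default):

```python
import numpy as np

def smooth_labels(targets, num_classes, eps=0.1):
    """Label smoothing: the target class gets 1 - eps, and the eps mass is
    spread uniformly over the K - 1 non-target classes."""
    y = np.full((len(targets), num_classes), eps / (num_classes - 1))
    y[np.arange(len(targets)), targets] = 1.0 - eps
    return y
```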
Other strategies for deep networks:
Many works have achieved more stable convergence and better generalization performance via weight regularization (weight decay) or feature-level manipulations such as Dropout (Srivastava et al., 2014) and Batch Normalization (Ioffe and Szegedy, 2015). More recently, randomly adding noise to intermediate features (Ghiasi et al., 2018; Gastaldi, 2017; Huang et al., 2016; Yamada et al., 2018) or adding extra paths to the model (Hu et al., 2017, 2018) has been proposed. We present robustness and uncertainty experiments on a selection of the above regularization techniques.
3 Benchmarks for Robustness and Uncertainty Estimation
In this section, we describe the settings for the benchmarks used in our experiments. We test five benchmarks: robustness to adversarial attacks, robustness to natural corruptions, robustness to occlusions, confidence calibration error, and out-of-distribution detection.
To evaluate adversarial robustness, we use the FGSM attack (Goodfellow et al., 2015). Note that our baseline regularization methods cannot provide a provable defense against adversarial attacks, whereas adversarial training and ALP can mitigate the effect of such attacks.
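For illustration, FGSM can be sketched on a model with an analytic input gradient, here a logistic-regression scorer rather than a DNN (a DNN implementation would obtain the gradient via autograd); the budget ε = 8/255 is an assumed common value, not the paper's setting.

```python
import numpy as np

def fgsm_logreg(x, y, w, eps=8 / 255):
    """FGSM on a logistic-regression 'model' so the input gradient is analytic:
    d loss / d x = (sigmoid(w.x) - y) * w. Step by eps in the sign direction."""
    p = 1.0 / (1.0 + np.exp(-x @ w))
    grad = (p - y) * w
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)  # stay in the valid pixel range
```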
To evaluate robustness against natural corruptions, we employ the naturally corrupted ImageNet (ImageNet-C) proposed by Hendrycks and Dietterich (2019). ImageNet-C contains corruption transforms categorized into “noise”, “blur”, “weather”, and “digital”, each applied at five severity levels. For the CIFAR-100 experiments, we create a corrupted CIFAR-100 (CIFAR-100-C) using the transforms proposed in ImageNet-C. We report the average accuracy over all transforms.
In the occlusion robustness benchmark, we generate occluded samples by filling zeros (black pixels) in a centered square whose side length is half the image width, i.e., 16×16 for CIFAR-100 and 112×112 for ImageNet.
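The masking step for this benchmark can be sketched as:

```python
import numpy as np

def occlude_center(img):
    """Zero out a centered square whose side is half the image width,
    e.g. 16x16 for 32x32 CIFAR-100 images and 112x112 for 224x224 ImageNet."""
    out = img.copy()
    h, w = out.shape[:2]
    side_h, side_w = h // 2, w // 2
    top, left = (h - side_h) // 2, (w - side_w) // 2
    out[top:top + side_h, left:left + side_w] = 0.0
    return out
```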
To show how the methods affect prediction confidence, we evaluate the expected calibration error (ECE) (Guo et al., 2017). We view a classification system as a probabilistic confidence estimator, whose confidence measures how trustworthy each prediction is. Predictions are grouped into equal-size confidence bins following (Guo et al., 2017); we refer the reader to that work for further details of the evaluation.
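A sketch of the ECE computation; the bin count of 15 follows the default in Guo et al. (2017) and is an assumption here.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """ECE (Guo et al., 2017): bin predictions by confidence and sum the
    |accuracy - mean confidence| gaps, weighted by the fraction of samples
    in each bin."""
    ece = 0.0
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece
```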
Finally, we test the baseline OOD detection performance of each model using the threshold-based detector proposed in (Hendrycks and Gimpel, 2017). We consider the seven datasets used in (Liang et al., 2018): cropped Tiny ImageNet, resized Tiny ImageNet, cropped LSUN (Yu et al., 2015), resized LSUN, iSUN, Gaussian noise, and uniform noise. We report the average detection error over the seven datasets.
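The baseline detector reduces to thresholding the maximum softmax probability; a minimal sketch (the threshold itself is method-specific and chosen on held-out data):

```python
import numpy as np

def max_softmax_score(logits):
    """Baseline OOD score (Hendrycks & Gimpel, 2017): the maximum softmax
    probability; in-distribution inputs tend to score higher."""
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stabilization
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return p.max(axis=1)

def detect_ood(logits, threshold):
    """Flag a sample as OOD when its max softmax probability falls below
    the threshold."""
    return max_softmax_score(logits) < threshold
```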
4 Main Results
| Method | Clean Top-1 Err. | FGSM Top-1 Err. | Occlusion Top-1 Err. | CIFAR-100-C mCE | Noise Top-1 Err. | Blur Top-1 Err. | Weather Top-1 Err. | Digital Top-1 Err. |
|---|---|---|---|---|---|---|---|---|
| Adversarial Logit Pairing | 24.75 | 51.32 | 92.27 | 50.04 | 69.94 | 51.75 | 40.62 | 44.70 |
| Add Gaussian Noise | 19.49 | 85.08 | 73.23 | 42.01 | 54.63 | 48.42 | 31.54 | 38.48 |
| Method | Clean Top-1 Err. | FGSM Top-1 Err. | CIFAR-100-C Top-1 Err. | Occlusion Top-1 Err. | Calibration Err. | Detection Err. |
|---|---|---|---|---|---|---|
| Cutout + SD + LS | 13.49 | 69.59 | 43.86 | 26.33 | 1.45 | 18.40 |
| Mixup + SD + LS | 14.79 | 56.32 | 40.32 | 56.76 | 15.85 | 18.54 |
| CutMix + SD + LS | 13.83 | 62.72 | 44.99 | 34.96 | 5.26 | 18.89 |
| Adversarial Logit Pairing | 24.75 | 51.32 | 50.04 | 92.27 | 6.67 | 21.57 |
| Add Gaussian Noise | 19.49 | 85.08 | 42.01 | 73.23 | 9.79 | 25.16 |
| OOD augment (SVHN) | 38.80 | 97.35 | 67.03 | 79.13 | 46.37 | 43.53 |
| OOD augment (GAN) | 34.78 | 94.65 | 57.09 | 85.30 | 38.22 | 33.35 |
4.1 Training Settings
We first describe the settings for training the models used in the robustness and uncertainty benchmarks. To ensure the effectiveness of each regularization method, we employ powerful baselines: PyramidNet-200 (Han et al., 2017) for the CIFAR-100 experiments and ResNet-50 (He et al., 2016) for the ImageNet experiments.
We consider the state-of-the-art regularization methods Cutout (DeVries and Taylor, 2017), Mixup (Zhang et al., 2017), CutMix (Yun et al., 2019), label smoothing (Szegedy et al., 2016b), ShakeDrop (Yamada et al., 2018), and their combinations in our experiments. We optimize the models with SGD with momentum; the learning rate follows a step-decay schedule. We also employ random crop and random flip augmentation for all methods, unless specified otherwise.
As comparison methods for adversarial robustness, we train the baseline model with adversarial training (Kurakin et al., 2016; Madry et al., 2017) and adversarial logit pairing (ALP) (Kannan et al., 2018). We use the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2015) as the threat model. All results are evaluated with label smoothing applied, which yields better performance. We mix the clean and adversarial samples with the same ratio as proposed in (Kurakin et al., 2016). The optimizer for adversarial training is Adam (Kingma and Ba, 2014).
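The mixed-batch construction for adversarial training can be sketched as follows; the 50/50 clean/adversarial split is an assumed reading of "the same ratio", and `attack_fn` stands for any attack (e.g., FGSM).

```python
import numpy as np

def mixed_training_batch(x, y, attack_fn, rng=np.random.default_rng(0)):
    """Build a training batch that is half clean and half adversarial, in the
    spirit of Kurakin et al. (2016). attack_fn(x, y) returns perturbed inputs."""
    n = len(x)
    idx = rng.permutation(n)[: n // 2]  # which samples to perturb
    x_out = x.copy()
    x_out[idx] = attack_fn(x[idx], y[idx])
    return x_out, y
```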
As the baseline method for CIFAR-100-C, we consider Gaussian noise augmentation, a perturbation type also contained in the CIFAR-100-C dataset (Hendrycks and Dietterich, 2019). As the baseline for out-of-distribution (OOD) detection, we augment OOD samples whose target labels are set to the uniform distribution, as proposed in (Lee et al., 2018). We use the two types of OOD samples considered in (Lee et al., 2018): the Street View House Numbers (SVHN) dataset and GAN-generated samples. In our experiments, we use WGAN-GP (Gulrajani et al., 2017) instead of DCGAN (Radford et al., 2015).
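A sketch of the two baselines above; the noise scale σ = 0.1 for the Gaussian augmentation is an assumed value, not the paper's setting.

```python
import numpy as np

def gaussian_noise_augment(x, sigma=0.1, rng=np.random.default_rng(0)):
    """Gaussian-noise augmentation baseline: add zero-mean noise and clip
    back to the valid pixel range."""
    return np.clip(x + rng.normal(scale=sigma, size=x.shape), 0.0, 1.0)

def ood_targets(n_samples, num_classes):
    """Uniform-distribution targets for augmented OOD samples (Lee et al.,
    2018): the model is pushed toward maximal uncertainty on them."""
    return np.full((n_samples, num_classes), 1.0 / num_classes)
```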
4.2 CIFAR-100 Results
In this section, we evaluate the effects of the state-of-the-art regularization techniques on the various robustness and uncertainty benchmarks on CIFAR-100. We show that well-regularized models are powerful baselines.
In Table 1, we report classification, adversarial and natural robustness, and uncertainty measure evaluations. Classification performances are measured on CIFAR-100 test set; adversarial robustness is measured against the FGSM (Goodfellow et al., 2015) attack on CIFAR-100; natural robustness is measured on CIFAR-C (Hendrycks and Dietterich, 2019). Uncertainty qualities are measured in terms of expected calibration error (Guo et al., 2017) and OOD detection error rates (Hendrycks and Gimpel, 2017). We report the OOD detection errors at method-specific optimal thresholds.
Here we analyze the following questions from Table 1.
Can data augmentation improve robustness against various perturbations at once? Data augmentation is a straightforward way to improve robustness against a specific type of noise, e.g., adversarial perturbations, Gaussian noise, or occlusion. In Table 2, we observe that each augmentation method improves robustness mainly against its targeted noise. For example, ALP improves adversarial robustness but fails to improve robustness against occlusion and other natural corruptions. Similarly, Cutout is the only augmentation method that improves occlusion robustness, yet it degrades other types of robustness, such as adversarial robustness, compared to the baseline. Adding Gaussian noise to the input enhances robustness to common corruptions, especially the “noise” category. In summary, we observe that it is difficult to improve robustness against various types of corruption at once; a similar phenomenon was also observed by (Geirhos et al., 2018b).
| Method | | | | | | | | | |
|---|---|---|---|---|---|---|---|---|---|
| ShakeDrop + LS | 61.45 | 21.92 | 72.65 | 42.85 | 74.47 | 82.15 | 60.47 | 75.67 | 73.10 |
| Cutout + LS | 61.90 | 22.02 | 75.24 | 29.08 | 79.80 | 84.51 | 62.72 | 79.93 | 76.54 |
| Mixup + LS | 58.54 | 22.41 | 69.43 | 42.31 | 65.36 | 82.95 | 53.37 | 73.94 | 69.14 |
| CutMix + LS | 61.02 | 21.87 | 67.41 | 31.51 | 77.01 | 84.61 | 63.13 | 81.56 | 76.55 |
| CutMix + SD | 61.75 | 21.60 | 80.00 | 31.28 | 77.06 | 84.18 | 61.04 | 77.07 | 74.69 |
| CutMix + SD + LS | 60.96 | 21.90 | 68.65 | 31.62 | 76.04 | 84.53 | 62.82 | 81.16 | 76.14 |
Can label smoothing help adversarial robustness and uncertainty estimates? In our experiments, adding label smoothing (LS) alone does not generally improve classification accuracy. Surprisingly, however, we observe that LS improves robustness against adversarial perturbations, calibration error, and OOD detection performance (Table 1). For example, by adding LS, Cutout + ShakeDrop achieves 13.49% classification top-1 error and 69.59% FGSM top-1 error, where the corresponding performances without LS are 15.91% and 88.66%. We believe this is because a model trained with LS produces low-confidence predictions in general (Figure 1). In particular, LS shows impressive improvements in the expected calibration error, except for the Mixup and CutMix families. We believe this is because Mixup and CutMix already contain a label-mixing stage that lowers prediction confidences; further adding label smoothing makes the overall confidences too low.
Can well-regularized models be powerful baselines for robustness and uncertainty estimation? In Table 3, we observe that well-regularized models such as Cutout + ShakeDrop + label smoothing, Mixup + ShakeDrop + label smoothing, and CutMix + ShakeDrop outperform methods targeted at improved robustness and uncertainty estimation (ALP and OOD augmentation) on many evaluation metrics. For example, the ALP model shows 92.27% occlusion top-1 error, while the Cutout- and CutMix-based models show 26.33% and 34.96% top-1 error, respectively. It is notable that OOD augmentation is not effective for CIFAR-100, while it has been shown to be effective for simpler datasets such as SVHN and CIFAR-10 (Lee et al., 2018).
4.3 ImageNet Experiments
In this section, we report experimental results on ImageNet. We use ResNet-50 (He et al., 2016) as the baseline model and train the models with the same training scheme as in (Yun et al., 2019). We evaluate only the robustness benchmarks, i.e., adversarial robustness against FGSM, natural robustness on ImageNet-C, and robustness to occlusion.
In Table 4, we report the top-1 error on clean images, attacked images, occluded images, and naturally corrupted images (subsets of ImageNet-C), together with their average. We also report the mCE (mean Corruption Error) normalized by AlexNet (Krizhevsky et al., 2012) performance, as proposed in (Hendrycks and Dietterich, 2019).
As in the CIFAR-100 experiments, regularized models provide better overall performance. For example, CutMix alone already achieves a low average error, and adding ShakeDrop and LS improves the average error further. Table 4 also shows that label smoothing remains effective in improving model robustness in the ImageNet experiments. Mixup helps robustness against common corruptions, while CutMix shows better classification performance, adversarial robustness, and occlusion robustness.
Interestingly, in our experiments, Mixup + label smoothing achieves state-of-the-art ImageNet-C mCE, surpassing the previous best model trained on Stylized-ImageNet (Geirhos et al., 2018a). Note that Stylized-ImageNet requires heavy pre-computation to generate the stylized images, as well as additional fine-tuning on ImageNet data.
The methods used in our experiments improve overall robustness and uncertainty performance at negligible additional cost. We believe that well-regularized models should be considered powerful baselines for robustness and uncertainty estimation benchmarks.
5 Conclusion

In this paper, we have empirically compared the robustness and uncertainty estimates of state-of-the-art regularization methods against prior methods specifically designed for those aspects. We observe that methods proposed to solve a specific problem are effective only on their targeted task. For example, adversarial training improves only adversarial robustness while degrading classification performance, robustness against common corruptions and occlusion, and uncertainty estimates. On the other hand, good combinations of simple and cheap regularization techniques improve overall robustness and uncertainty estimation performance, and even surpass specialized methods on certain uncertainty and robustness tasks. We believe that well-regularized models have largely been overlooked in robustness and uncertainty studies, and that they should be considered powerful baselines in future work.
- AutoAugment: learning augmentation policies from data. arXiv preprint arXiv:1805.09501. Cited by: §2.
- Improved regularization of convolutional neural networks with Cutout. arXiv preprint arXiv:1708.04552. Cited by: §2, §4.1.
- Dropout as a Bayesian approximation: representing model uncertainty in deep learning. In International Conference on Machine Learning, pp. 1050–1059. Cited by: §1.
- Shake-Shake regularization. arXiv preprint arXiv:1705.07485. Cited by: §2.
- ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. arXiv preprint arXiv:1811.12231. Cited by: §1, §1, §4.3.
- Generalisation in humans and deep neural networks. In Advances in Neural Information Processing Systems, pp. 7538–7550. Cited by: §1, §1, §4.2.
- DropBlock: a regularization method for convolutional networks. In Advances in Neural Information Processing Systems, pp. 10750–10760. Cited by: §2.
- Explaining and harnessing adversarial examples. In International Conference on Learning Representations, Cited by: §1, §3, §4.1, §4.2.
- Improved training of wasserstein gans. In Advances in Neural Information Processing Systems, pp. 5767–5777. Cited by: §4.1.
- On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1321–1330. Cited by: §3, §4.2.
- Deep pyramidal residual networks. In CVPR, Cited by: §4.1.
- Deep residual learning for image recognition. In CVPR, Cited by: §4.1, §4.3.
- Benchmarking neural network robustness to common corruptions and perturbations. In International Conference on Learning Representations, Cited by: §1, §1, §3, §4.1, §4.2, §4.3.
- A baseline for detecting misclassified and out-of-distribution examples in neural networks. In International Conference on Learning Representations, Cited by: §1, §1, §2, §3, §4.2.
- Gather-excite: exploiting feature context in convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 9423–9433. Cited by: §2.
- Squeeze-and-excitation networks. arXiv preprint arXiv:1709.01507. Cited by: §2.
- Densely connected convolutional networks. In CVPR, Cited by: §2.
- Deep networks with stochastic depth. In ECCV, Cited by: §2.
- Batch normalization: accelerating deep network training by reducing internal covariate shift. In ICML, Cited by: §1, §2.
- Adversarial logit pairing. arXiv preprint arXiv:1803.06373. Cited by: §1, §1, §2, §4.1.
- What uncertainties do we need in Bayesian deep learning for computer vision? In Advances in Neural Information Processing Systems 30, pp. 5574–5584. Cited by: §1.
- NSML: meet the mlaas platform with a real-world case study. arXiv preprint arXiv:1810.09957. Cited by: §4.1.
- Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §4.1.
- ImageNet classification with deep convolutional neural networks. In NIPS, Cited by: §1, §2, §4.3.
- Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236. Cited by: §1, §4.1.
- Simple and scalable predictive uncertainty estimation using deep ensembles. In Advances in Neural Information Processing Systems 30, pp. 6402–6413. Cited by: §1.
- Training confidence-calibrated classifiers for detecting out-of-distribution samples. In International Conference on Learning Representations, Cited by: §1, §4.1, §4.2.
- Enhancing the reliability of out-of-distribution image detection in neural networks. In International Conference on Learning Representations, Cited by: §3.
- Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083. Cited by: §1, §4.1.
- Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434. Cited by: §4.1.
- Attacking the Madry defense model with L1-based adversarial examples. arXiv preprint arXiv:1710.10733. Cited by: §1.
- Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15, pp. 1929–1958. Cited by: §2.
- NSML: a machine learning platform that enables you to focus on your models. arXiv preprint arXiv:1712.05902. Cited by: §4.1.
- Inception-v4, Inception-ResNet and the impact of residual connections on learning. In ICLR Workshop, Cited by: §1, §2.
- Rethinking the Inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2818–2826. Cited by: §1, §2, §4.1.
- Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199. Cited by: §1.
- Robustness may be at odds with accuracy. In International Conference on Learning Representations, Cited by: §1, §2.
- Feature denoising for improving adversarial robustness. arXiv preprint arXiv:1812.03411. Cited by: §1.
- ShakeDrop regularization for deep residual learning. arXiv preprint arXiv:1802.02375. Cited by: §2, §4.1.
- LSUN: construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365. Cited by: §3.
- CutMix: regularization strategy to train strong classifiers with localizable features. arXiv preprint arXiv:1905.04899. Cited by: §1, §2, §2, §4.1, §4.3.
- Mixup: beyond empirical risk minimization. arXiv preprint arXiv:1710.09412. Cited by: §1, §2, §2, §4.1.
- Random erasing data augmentation. arXiv preprint arXiv:1708.04896. Cited by: §2.