Research in neural network robustness has tried to quantify the problem by establishing benchmarks that directly measure it (Hendrycks & Dietterich, 2018; Gu et al., 2019) and by comparing the performance of humans and neural networks (Geirhos et al., 2018b). Others have tried to understand robustness by highlighting systemic failure modes of current learning methods. For instance, networks exhibit excessive invariance to visual features (Jacobsen et al., 2018), texture bias (Geirhos et al., 2018a), sensitivity to worst-case (adversarial) perturbations (Goodfellow et al., 2014), and a propensity to rely solely on non-robust but highly predictive features for classification (Doersch et al., 2015; Ilyas et al., 2019). Of particular relevance to our work, Ford et al. (2019) show that achieving adversarial robustness requires robustness to noise-corrupted data.
Another line of work has attempted to increase model robustness directly, whether by projecting out superficial statistics (Wang et al., 2019), via architectural improvements (Cubuk et al., 2017), through pre-training schemes (Hendrycks et al., 2019), or through the use of data augmentation. Data augmentation increases the size and diversity of the training set, and provides a simple way to learn invariances that are challenging to encode architecturally (Cubuk et al., 2018). Recent work in this area includes learning better individual transformations (DeVries & Taylor, 2017; Zhang et al., 2017; Zhong et al., 2017), inferring combinations of transformations (Cubuk et al., 2018), unsupervised methods (Xie et al., 2019), theory of data augmentation (Dao et al., 2018), and applications to one-shot learning (Asano et al., 2019).
Despite these advances, individual data augmentation methods that improve robustness tend to do so at the expense of clean accuracy. Some have even claimed that there exists a fundamental trade-off between the two (Tsipras et al., 2018). Because of this, many recent works focus on improving one or the other (Madry et al., 2017; Geirhos et al., 2018a). In this work, we propose a data augmentation that overcomes this trade-off, achieving both improved robustness and improved clean accuracy. Our contributions are as follows:
We characterize a trade-off between robustness and accuracy in two standard data augmentations: Cutout and Gaussian (Section 2.1).
We start by considering two data augmentations: Cutout (DeVries & Taylor, 2017) and Gaussian (Grandvalet & Canu, 1997). The former sets a random patch of the input image to a constant (the mean pixel value in the dataset) and is successful at improving clean accuracy. The latter adds independent Gaussian noise to each pixel of the input image, which can directly increase robustness to Gaussian noise.
To apply Gaussian, we uniformly sample a standard deviation σ from 0 up to some maximum value σ_max, and add i.i.d. noise sampled from N(0, σ²) to each pixel. To apply Cutout, we use a fixed patch size W, and randomly set a square region of size W × W to the constant mean of each RGB channel in the dataset. As in DeVries & Taylor (2017), the patch location is randomly sampled and can lie outside of the CIFAR-10 (or ImageNet) image, but its center is constrained to lie within it. The patch size W and σ_max are selected based on the method described in Section 3.2.
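To make the two operations concrete, below is a minimal NumPy sketch of both augmentations as described above. The function names, the [0, 1] pixel-scaling convention, and the `channel_means` argument are ours; the paper's implementations may differ in detail.

```python
import numpy as np

def gaussian(image, sigma_max, rng=np.random):
    """Add i.i.d. Gaussian noise to every pixel, with sigma ~ U(0, sigma_max).

    Assumes `image` is a float array scaled to [0, 1]; the paper applies
    augmentations to unnormalized pixels, so scale sigma accordingly.
    """
    sigma = rng.uniform(0.0, sigma_max)
    noisy = image + rng.normal(0.0, sigma, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)

def cutout(image, patch_size, channel_means, rng=np.random):
    """Set a patch_size x patch_size square to the per-channel dataset mean.

    The patch center is sampled uniformly within the image, so the patch
    itself may extend past the borders, as in DeVries & Taylor (2017).
    """
    h, w, _ = image.shape
    cy, cx = rng.randint(h), rng.randint(w)
    y0, y1 = max(0, cy - patch_size // 2), min(h, cy + patch_size // 2)
    x0, x1 = max(0, cx - patch_size // 2), min(w, cx + patch_size // 2)
    out = image.copy()
    out[y0:y1, x0:x1, :] = channel_means  # e.g. the CIFAR-10 channel means
    return out
```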
2.1 Cutout and Gaussian exhibit a trade-off between accuracy and robustness
We compare the effectiveness of Gaussian and Cutout for accuracy and robustness by measuring the performance of models trained with each augmentation on clean data, as well as on data corrupted by Gaussian noise at various standard deviations. Figure 2 highlights an apparent trade-off between these methods. In accordance with previous work (DeVries & Taylor, 2017), Cutout improves accuracy on clean test data, but we find that it does not lead to increased robustness. Conversely, training with Gaussian at higher σ can increase robustness to Gaussian noise, but it also decreases accuracy on clean data. Any robustness gains are therefore offset by poor overall performance.
At first glance, these results seem to reinforce the findings of previous work (Tsipras et al., 2018), indicating that robustness comes at the cost of generalization. In the following sections, we explore whether there exist augmentation strategies that do not exhibit this limitation.
Each of the two methods seen so far achieves one half of our stated goal: either improving robustness or improving clean test accuracy, but never both. To overcome the limitations of existing data augmentation techniques, we introduce Patch Gaussian, a new technique that combines the noise robustness of Gaussian with the improved clean accuracy of Cutout.
3.1 Patch Gaussian
Patch Gaussian works by adding a W × W patch of Gaussian noise to the image (Figure 3); a TensorFlow implementation of Patch Gaussian can be found in the Appendix (Figure 10). As with Cutout, the center of the patch is sampled to lie within the image. By varying the size W of this patch and the maximum standard deviation σ_max of the sampled noise, we can interpolate between Gaussian (which applies additive Gaussian noise to the whole image) and an approximation of Cutout (which removes all information inside the patch). See Figure 9 for more examples.
All image transformations, including Patch Gaussian, are performed on images with unnormalized pixel values in the [0, 255] range. For all images, standard random flipping and cropping is applied immediately after the augmentations mentioned above on CIFAR-10 (and immediately before them on ImageNet). After noise-based augmentations, images are clipped back to the valid range.
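Here is a minimal NumPy sketch of Patch Gaussian under the conventions above. The reference implementation is the TensorFlow version in Appendix Figure 10; the function name and argument conventions below are ours.

```python
import numpy as np

def patch_gaussian(image, patch_size, sigma_max, rng=np.random):
    """Add Gaussian noise only within a patch_size x patch_size region.

    `image` holds unnormalized pixel values in [0, 255]; sigma here is
    expressed in the same units as the pixel values.
    """
    h, w, c = image.shape
    # Sample full-image noise, exactly as plain Gaussian augmentation would.
    sigma = rng.uniform(0.0, sigma_max)
    noise = rng.normal(0.0, sigma, size=(h, w, c))
    # Mask the noise to a square patch whose center lies within the image.
    cy, cx = rng.randint(h), rng.randint(w)
    y0, y1 = max(0, cy - patch_size // 2), min(h, cy + patch_size // 2)
    x0, x1 = max(0, cx - patch_size // 2), min(w, cx + patch_size // 2)
    mask = np.zeros((h, w, 1))
    mask[y0:y1, x0:x1, :] = 1.0
    # Clip back to the valid range after adding the masked noise.
    return np.clip(image + noise * mask, 0.0, 255.0)
```

Setting `patch_size` to the full image width recovers Gaussian, while a very large `sigma_max` inside a fixed patch approaches Cutout.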
3.2 Hyper-parameter selection
Our goal is to learn models that achieve both good clean accuracy and improved robustness to corruptions. When selecting hyper-parameters, we need to decide how to weight these two metrics. Here, we focused on identifying the models that were most robust while still achieving a minimum accuracy Z on the clean test data. Values of Z vary per dataset and model, and can be found in the Appendix (Table 6). If no model reaches clean accuracy Z, we report the model with the highest clean accuracy, unless otherwise specified. We find that patch sizes around 25 on CIFAR (250 on ImageNet, i.e., W uniformly sampled with maximum value 250) with σ_max around 1.0 generally perform the best. A complete list of selected hyper-parameters for all augmentations can be found in Table 6.
Here, robustness is defined as the average accuracy of the model when tested on data corrupted by Gaussian noise at several standard deviations, relative to the clean accuracy. This metric is correlated with mCE (Ford et al., 2019), which ensures that model robustness is generally useful beyond Gaussian corruptions. By picking models based on their Gaussian noise robustness, we ensure that our selection process does not overfit to the Common Corruptions benchmark (Hendrycks & Dietterich, 2018).
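As a sketch, the selection procedure of this and the previous paragraph might look as follows; `models`, the dictionary keys, and the per-σ accuracy list are hypothetical placeholders for our evaluation outputs.

```python
import numpy as np

def robustness(clean_acc, noisy_accs):
    """Average accuracy under several Gaussian-noise sigmas, relative to clean."""
    return np.mean(noisy_accs) / clean_acc

def select_model(models, z):
    """Pick the most robust model with clean accuracy >= z; if none qualifies,
    fall back to the model with the highest clean accuracy."""
    eligible = [m for m in models if m["clean"] >= z]
    if eligible:
        return max(eligible, key=lambda m: robustness(m["clean"], m["noisy"]))
    return max(models, key=lambda m: m["clean"])
```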
3.3 Models, Datasets, & Implementation Details
We run our experiments on the CIFAR-10 (Krizhevsky & Hinton, 2009) and ImageNet (Deng et al., 2009) datasets. On CIFAR-10, we use the Wide-ResNet-28-10 model (Zagoruyko & Komodakis, 2016) as well as the Shake-shake-112 model (Gastaldi, 2017), trained for 200 and 600 epochs respectively. The Wide-ResNet model uses an initial learning rate of 0.1 with a cosine decay schedule; weight decay is set to 5e-4 and batch size to 128. We train all models, including the baseline, with standard data augmentation of horizontal flips and pad-and-crop. Our code uses the same hyper-parameters as Cubuk et al. (2018), available at https://github.com/tensorflow/models/tree/master/research/autoaugment.
On ImageNet, we use the ResNet-50 and ResNet-200 models (He et al., 2016), trained for 90 epochs. We use a weight decay rate of 1e-4, a global batch size of 512, and a learning rate of 0.2, decayed by a factor of 10 at epochs 30, 60, and 80. We use standard data augmentation of horizontal flips and crops. All CIFAR-10 and ImageNet experiments use the hyper-parameters listed above, unless specified otherwise. Our code uses the same hyper-parameters as the open-sourced implementation available at https://github.com/tensorflow/tpu/tree/master/models/official/resnet.
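For reference, the two learning-rate schedules described above can be sketched in plain Python. The exact cosine form and the absence of warmup are assumptions; the epoch boundaries and base rates are those listed above.

```python
import math

def cifar_lr(epoch, base_lr=0.1, total_epochs=200):
    """Cosine decay used for the CIFAR-10 Wide-ResNet runs."""
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * epoch / total_epochs))

def imagenet_lr(epoch, base_lr=0.2):
    """ImageNet step schedule: decay by 10x at epochs 30, 60, and 80."""
    drops = sum(epoch >= boundary for boundary in (30, 60, 80))
    return base_lr / (10.0 ** drops)
```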
We show that models trained with Patch Gaussian overcome the trade-off observed in Fig. 2, remaining robust while maintaining generalization accuracy (Section 4.1). In doing so, we establish a new state of the art on the CIFAR-C and ImageNet-C Common Corruptions benchmarks (Hendrycks & Dietterich, 2018) (Section 4.2). We then show that Patch Gaussian can be used in complement to other common regularization strategies (Section 4.3) and data augmentation policies (Cubuk et al., 2018) (Section 4.4), and that it can also improve the training of object detection models (Section 4.5).
4.1 Patch Gaussian overcomes the trade-off and improves both accuracy and robustness
We train models with various Patch Gaussian hyper-parameters and find that the model selected by the method in Section 3.2 achieves improved robustness to Gaussian noise, like Gaussian, while also improving clean accuracy, much like Cutout. In Figure 4, we visualize these results in an ablation study, either varying the patch size W while fixing σ_max at its selected value, or varying the noise level σ_max while fixing the patch size at its selected value.
4.2 Training with Patch Gaussian leads to improved Common Corruption robustness
In this section, we look at how our augmentations affect robustness in a more realistic setting, beyond robustness to additive Gaussian noise. Rather than focusing on adversarial examples, which are worst-case bounded perturbations, we consider a more general set of corruptions (Gilmer et al., 2018) that models are likely to encounter in real-world settings: the Common Corruptions benchmark (Hendrycks & Dietterich, 2018). This benchmark, also referred to as CIFAR-C and ImageNet-C, is composed of images transformed with 15 corruptions at 5 severities each. The corruptions are designed to model those commonly found in real-world settings, such as brightness changes, different weather conditions, and different kinds of noise.
Tables 1 and 2 show that Patch Gaussian achieves state-of-the-art mean Corruption Error (mCE) on both of these benchmarks. However, ImageNet-C was released in compressed JPEG format (ECMA International, 2009), which alters the corruptions applied to the raw pixels. Therefore, we report results on the benchmark as released (“Original mCE”) as well as on a version of 12 corruptions without the extra compression (“mCE”), available at https://github.com/tensorflow/datasets under imagenet2012_corrupted.
(Tables 1 and 2. Columns: Augmentation, Test Accuracy, mCE, mCE (-noise).)
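The uncompressed corrupted images can be loaded through tensorflow_datasets. The `<corruption>_<severity>` config naming below follows the catalog linked above but should be treated as an assumption:

```python
import tensorflow_datasets as tfds

# ImageNet-C Gaussian noise at severity 1, without the extra JPEG compression
# of the original release. Other corruptions follow the same naming pattern.
dataset = tfds.load("imagenet2012_corrupted/gaussian_noise_1",
                    split="validation")
```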
To compute mCE, we normalize the corruption error of each model and dataset to that of the baseline trained with only flip and crop data augmentation. The one exception is the Original mCE for ImageNet, where we use the AlexNet baseline to be directly comparable with previous work (Hendrycks & Dietterich, 2018; Geirhos et al., 2018a).
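Concretely, the computation follows Hendrycks & Dietterich (2018). In the sketch below, the error dictionaries and corruption list are placeholders for our evaluation outputs:

```python
import numpy as np

def mean_corruption_error(model_err, baseline_err, corruptions,
                          severities=range(1, 6)):
    """mCE: for each corruption, sum errors over severities, normalize by the
    baseline's summed error, then average the ratios over corruptions.

    `model_err[c][s]` and `baseline_err[c][s]` are top-1 error rates on
    corruption `c` at severity `s`.
    """
    ratios = []
    for c in corruptions:
        model_sum = sum(model_err[c][s] for s in severities)
        baseline_sum = sum(baseline_err[c][s] for s in severities)
        ratios.append(model_sum / baseline_sum)
    return np.mean(ratios)
```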
Because Patch Gaussian is a noise-based augmentation, we wanted to verify whether its gains on this benchmark were solely due to improved performance on the noise-based corruptions (Gaussian Noise, Shot Noise, and Impulse Noise). To do this, we also measure each model’s average performance on all other corruptions, reported as “Original mCE (-noise)” and “mCE (-noise)”. We observe that Patch Gaussian outperforms all other models even on corruptions like fog, where Gaussian hurts performance (Ford et al., 2019). Scores for each corruption can be found in the Appendix (Tables 7 and 8).
Comparing the lower-capacity ResNet-50 and Wide-ResNet models with the higher-capacity ResNet-200 and Shake-shake-112 models, we find diminished gains in clean accuracy and robustness. Still, Patch Gaussian achieves a substantial improvement in mCE relative to other augmentation strategies.
4.3 Patch Gaussian can be combined with other regularization strategies
Since Patch Gaussian has a regularizing effect on the models trained above, we compare it with other regularization methods: larger weight decay, label smoothing, and Dropblock (Table 3). We find that while label smoothing improves clean accuracy, it weakens robustness on all corruption metrics we consider. This agrees with the theoretical prediction of Cubuk et al. (2017), which argued that increasing the confidence of models would improve robustness, whereas label smoothing reduces the confidence of predictions. We find that increasing the weight decay from the default value used in all models improves neither clean accuracy nor robustness.
Here, we focus on analyzing the interaction of different regularization methods with Patch Gaussian. Previous work indicates that improvements in clean accuracy appear only after training with Dropblock for 270 epochs (Ghiasi et al., 2018), but we did not find that training for 270 epochs changed our analysis. Thus, we present models trained for 90 epochs for direct comparison with other results. Due to the shorter training time, Dropblock does not improve clean accuracy, yet it does make the model more robust (relative to the baseline) according to all corruption metrics we consider.
We find that using label smoothing in addition to Patch Gaussian has a mixed effect: it improves clean accuracy while slightly improving all robustness metrics except the Original mCE. Combining Dropblock with Patch Gaussian reduces clean accuracy relative to the Patch Gaussian-only model, as Dropblock appears to be a strong regularizer when used for only 90 epochs. However, using Dropblock and Patch Gaussian together leads to the best robustness performance. These results indicate that Patch Gaussian can be used in conjunction with existing regularization strategies.
4.4 Patch Gaussian can be combined with AutoAugment policies for improved results
Knowing that Patch Gaussian can be combined with other regularizers, it is natural to ask whether it can also be combined with other data augmentation policies. Table 4 reports models trained with AutoAugment (Cubuk et al., 2018). For a fair comparison of mCE scores, we train all models with the best AutoAugment policy, but without the contrast and Inception color pre-processing operations, as those are present in the Common Corruptions benchmark. This process is imperfect, since AutoAugment has many operations, some of which could still be correlated with corruptions. Regardless, we find that Patch Gaussian improves accuracy and robustness over using AutoAugment alone.
Because AutoAugment leads to state-of-the-art accuracies, we are interested in how much further results can be improved by combining it with Patch Gaussian. Therefore, unlike in previous experiments, models are trained for 180 epochs to yield the best possible results.
4.5 Patch Gaussian improves performance in object detection
Since Patch Gaussian can be combined with both regularization strategies and data augmentation policies, we next ask whether it is useful beyond classification tasks. We train a RetinaNet detector (Lin et al., 2017) with a ResNet-50 backbone (He et al., 2016) on the COCO dataset (Lin et al., 2014). Images for both the baseline and Patch Gaussian models are horizontally flipped half of the time after being resized. We train both models for 150 epochs using a learning rate of 0.08, with standard settings for the weight decay and the focal loss parameters α and γ.
Despite being designed for classification, Patch Gaussian improves detection performance on all metrics when tested on the clean COCO validation set (Table 5). On the primary COCO metric, mean average precision (mAP), the model trained with Patch Gaussian achieves 1% higher mAP than the baseline, whereas the model trained with Gaussian suffers a 2.9% loss.
Next, we evaluate these models on the validation set corrupted by i.i.d. Gaussian noise. We find that the models trained with Gaussian and with Patch Gaussian achieve the highest mAP of 26.1% on the corrupted data, whereas the baseline achieves 11.6%. Interestingly, the Patch Gaussian model achieves better results on the harder metrics of small object detection and stricter intersection-over-union (IOU) thresholds, whereas the Gaussian model does better on the easier metrics of large object detection and a less strict IOU threshold.
Overall, as was observed for the classification tasks, training object detection models with Patch Gaussian leads to significantly more robust models without sacrificing clean accuracy.
In an attempt to understand Patch Gaussian’s performance, we perform a frequency-based analysis of models trained with various augmentations using the method introduced in Yin et al. (2019).
First, we perturb each image in the dataset with noise sampled at each orientation and frequency in Fourier space. Then, we measure changes in the network activations and test error when evaluated on these Fourier-noise-corrupted images: we measure the change in the ℓ2 norm of the tensor directly after the first convolution, as well as the change in absolute test error. This procedure yields a heatmap indicating model sensitivity to perturbations at different frequencies and orientations in the Fourier domain. Each image in Fig. 5 shows first-layer (or test error) sensitivity as a function of the frequency and orientation of the sampled noise, with the middle of the image containing the lowest frequencies and the edges containing the highest.
For CIFAR-10 models, we present this analysis over the entire Fourier domain, with noise sampled at a fixed ℓ2 norm. For ImageNet, we focus our analysis on the lower frequencies, which are more visually salient, and likewise add noise at a fixed ℓ2 norm.
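As a sketch of the perturbation behind each heatmap cell, following Yin et al. (2019): build a real-valued Fourier basis image at frequency (i, j), scale it to a fixed ℓ2 norm, and add it to each test image with a random sign. The normalization convention and names below are ours.

```python
import numpy as np

def fourier_basis_noise(h, w, i, j, l2_norm, rng=np.random):
    """Real 2-D Fourier basis image at frequency (i, j) with given l2 norm."""
    freq = np.zeros((h, w), dtype=complex)
    freq[i % h, j % w] = 1.0
    basis = np.real(np.fft.ifft2(freq))    # cosine pattern at frequency (i, j)
    basis = basis / np.linalg.norm(basis) * l2_norm
    return rng.choice([-1.0, 1.0]) * basis  # random sign, as in Yin et al.

# One heatmap cell: average change in test error (or in the l2 norm of the
# first-convolution activations) over images perturbed with this noise.
```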
Note that for Cutout and Gaussian we chose larger patch sizes and σ values than those selected by the method in Section 3.2, in order to highlight the effect of these augmentations on sensitivity. Heatmaps for other models can be found in the Appendix (Figure 12).
5.1 Frequency-based analysis of models trained with Patch Gaussian
We confirm the finding of Yin et al. (2019) that Gaussian encourages the model to learn a low-pass filter of the inputs. Models trained with this augmentation therefore have low test error sensitivity at high frequencies, which could help robustness. However, valuable high-frequency information (Brendel & Bethge, 2019) is thrown out at low layers, which could explain the lower test accuracy.
We further find that Cutout encourages the use of high-frequency information, which could help explain its improved generalization performance. Yet it does not encourage lower test error sensitivity, which explains why it does not improve robustness either.
Patch Gaussian, on the other hand, seems to allow high-frequency information through at lower layers while still encouraging relatively lower test error sensitivity at high frequencies. Indeed, when we measure accuracy on images filtered with a high-pass filter, we see that Patch Gaussian models maintain accuracy much like the baseline and Cutout models, whereas Gaussian fails to. See Figure 5 for full results.
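The high-pass filtering used for this check can be sketched as follows; the cutoff radius is a placeholder, and the paper's exact filtering convention may differ.

```python
import numpy as np

def high_pass(image, cutoff):
    """Keep only Fourier components farther than `cutoff` from zero frequency,
    applied channel-wise to an (H, W, C) image."""
    h, w, _ = image.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Distance of each frequency bin from the (centered) zero frequency.
    dist = np.sqrt((yy - h // 2) ** 2 + (xx - w // 2) ** 2)
    mask = (dist >= cutoff).astype(float)[..., None]
    freq = np.fft.fftshift(np.fft.fft2(image, axes=(0, 1)), axes=(0, 1))
    filtered = np.fft.ifft2(np.fft.ifftshift(freq * mask, axes=(0, 1)),
                            axes=(0, 1))
    return np.real(filtered)
```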
Understanding the impact of data distributions and noise on representations has been well studied in neuroscience (Barlow et al., 1961; Simoncelli & Olshausen, 2001; Karklin & Simoncelli, 2011). The data augmentations we propose here alter the distribution of inputs that the network sees, and are thus expected to alter the kinds of representations that are learned. Prior work on efficient coding (Karklin & Simoncelli, 2011) and autoencoders (Poole et al., 2014) has shown how filter properties change with noise in the unsupervised setting, resulting in lower-frequency filters with Gaussian, as we observe in Fig. 5. Consistent with prior work on natural image statistics (Torralba & Oliva, 2003), we find that networks are least sensitive to low-frequency noise, where the spectral density is largest. Performance drops at higher frequencies, where the noise we add grows large relative to the typical spectral density observed at those frequencies. In future work, we hope to better understand the relationship between naturally occurring properties of images and sensitivity, and to investigate whether training with more naturalistic noise can yield similar gains in corruption robustness.
In this work, we introduced a single data augmentation operation, Patch Gaussian, which improves robustness to common corruptions without incurring a drop in clean accuracy. For models that are large relative to the dataset size (like ResNet-200 on ImageNet and all models on CIFAR-10), Patch Gaussian improves clean accuracy and robustness concurrently. We showed that Patch Gaussian achieves this by interpolating between two standard data augmentation operations, Cutout and Gaussian. We also demonstrated that Patch Gaussian can be used in conjunction with other regularization and data augmentation strategies and can improve the performance of object detection models, indicating that it is generally useful. Finally, we analyzed the sensitivity to noise at different frequencies of models trained with Cutout and Gaussian, and showed that Patch Gaussian combines their strengths without inheriting their weaknesses.
We would like to thank Benjamin Caine, Trevor Gale, Keren Gu, Jonathon Shlens, Brandon Yang, Nic Ford, Samy Bengio, Alex Alemi, Matthew Streeter, Robert Geirhos, Alex Berardino, Jon Barron, Barret Zoph, and the Google Brain and Google AI Residency teams.
- Asano et al. (2019) Asano, Y. M., Rupprecht, C., and Vedaldi, A. Surprising effectiveness of few-image unsupervised feature learning, 2019.
- Azulay & Weiss (2018) Azulay, A. and Weiss, Y. Why do deep convolutional networks generalize so poorly to small image transformations? arXiv preprint arXiv:1805.12177, 2018.
- Barlow et al. (1961) Barlow, H. B. et al. Possible principles underlying the transformation of sensory messages. Sensory communication, 1:217–234, 1961.
- Brendel & Bethge (2019) Brendel, W. and Bethge, M. Approximating cnns with bag-of-local-features models works surprisingly well on imagenet. arXiv preprint arXiv:1904.00760, 2019.
- Cubuk et al. (2017) Cubuk, E. D., Zoph, B., Schoenholz, S. S., and Le, Q. V. Intriguing properties of adversarial examples. arXiv preprint arXiv:1711.02846, 2017.
- Cubuk et al. (2018) Cubuk, E. D., Zoph, B., Mane, D., Vasudevan, V., and Le, Q. V. Autoaugment: Learning augmentation policies from data. arXiv preprint arXiv:1805.09501, 2018.
- Dao et al. (2018) Dao, T., Gu, A., Ratner, A. J., Smith, V., De Sa, C., and Ré, C. A kernel theory of modern data augmentation. arXiv preprint arXiv:1803.06084, 2018.
- Deng et al. (2009) Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.
- DeVries & Taylor (2017) DeVries, T. and Taylor, G. W. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552, 2017.
- Dodge & Karam (2017) Dodge, S. and Karam, L. A study and comparison of human and deep learning recognition performance under visual distortions. In 2017 26th International Conference on Computer Communication and Networks (ICCCN), pp. 1–7. IEEE, 2017.
- Doersch et al. (2015) Doersch, C., Gupta, A., and Efros, A. A. Unsupervised visual representation learning by context prediction. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1422–1430, 2015.
- ECMA International (2009) ECMA International. JPEG File Interchange Format (JFIF). 2009.
- Ford et al. (2019) Ford, N., Gilmer, J., Carlini, N., and Cubuk, D. Adversarial examples are a natural consequence of test error in noise. arXiv preprint arXiv:1901.10513, 2019.
- Gastaldi (2017) Gastaldi, X. Shake-shake regularization. arXiv preprint arXiv:1705.07485, 2017.
- Geirhos et al. (2018a) Geirhos, R., Rubisch, P., Michaelis, C., Bethge, M., Wichmann, F. A., and Brendel, W. Imagenet-trained cnns are biased towards texture; increasing shape bias improves accuracy and robustness. arXiv preprint arXiv:1811.12231, 2018a.
- Geirhos et al. (2018b) Geirhos, R., Temme, C. R., Rauber, J., Schütt, H. H., Bethge, M., and Wichmann, F. A. Generalisation in humans and deep neural networks. In Advances in Neural Information Processing Systems, pp. 7538–7550, 2018b.
- Ghiasi et al. (2018) Ghiasi, G., Lin, T.-Y., and Le, Q. V. Dropblock: A regularization method for convolutional networks. In Advances in Neural Information Processing Systems, pp. 10727–10737, 2018.
- Gilmer et al. (2018) Gilmer, J., Adams, R. P., Goodfellow, I., Andersen, D., and Dahl, G. E. Motivating the rules of the game for adversarial example research. arXiv preprint arXiv:1807.06732, 2018.
- Goodfellow et al. (2014) Goodfellow, I. J., Shlens, J., and Szegedy, C. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
- Grandvalet & Canu (1997) Grandvalet, Y. and Canu, S. Noise injection for inputs relevance determination. Advances in intelligent systems, 41:378, 1997.
- Gu et al. (2019) Gu, K., Yang, B., Ngiam, J., Le, Q., and Shlens, J. Using videos to evaluate image model robustness. arXiv preprint arXiv:1904.10076, 2019.
- Han et al. (2017) Han, D., Kim, J., and Kim, J. Deep pyramidal residual networks. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6307–6315. IEEE, 2017.
- He et al. (2016) He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778, 2016.
- Hendrycks & Dietterich (2018) Hendrycks, D. and Dietterich, T. G. Benchmarking neural network robustness to common corruptions and surface variations. arXiv preprint arXiv:1807.01697, 2018.
- Hendrycks et al. (2019) Hendrycks, D., Lee, K., and Mazeika, M. Using pre-training can improve model robustness and uncertainty. arXiv preprint arXiv:1901.09960, 2019.
- Hu et al. (2017) Hu, J., Shen, L., and Sun, G. Squeeze-and-excitation networks. arXiv preprint arXiv:1709.01507, 2017.
- Huang et al. (2018) Huang, Y., Cheng, Y., Chen, D., Lee, H., Ngiam, J., Le, Q. V., and Chen, Z. Gpipe: Efficient training of giant neural networks using pipeline parallelism. arXiv preprint arXiv:1811.06965, 2018.
- Ilyas et al. (2019) Ilyas, A., Santurkar, S., Tsipras, D., Engstrom, L., Tran, B., and Madry, A. Adversarial examples are not bugs, they are features. arXiv preprint arXiv:1905.02175, 2019.
- Jacobsen et al. (2018) Jacobsen, J.-H., Behrmann, J., Zemel, R., and Bethge, M. Excessive invariance causes adversarial vulnerability. arXiv preprint arXiv:1811.00401, 2018.
- Karklin & Simoncelli (2011) Karklin, Y. and Simoncelli, E. P. Efficient coding of natural images with a population of noisy linear-nonlinear neurons. In Advances in Neural Information Processing Systems, pp. 999–1007, 2011.
- Karpathy (2011) Karpathy, A. Lessons learned from manually classifying cifar-10. Published online at http://karpathy.github.io/2011/04/27/manually-classifying-cifar10, 2011.
- Krizhevsky & Hinton (2009) Krizhevsky, A. and Hinton, G. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
- Krizhevsky et al. (2012) Krizhevsky, A., Sutskever, I., and Hinton, G. E. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012.
- Lin et al. (2014) Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., and Zitnick, C. L. Microsoft coco: Common objects in context. In European conference on computer vision, pp. 740–755. Springer, 2014.
- Lin et al. (2017) Lin, T.-Y., Goyal, P., Girshick, R., He, K., and Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pp. 2980–2988, 2017.
- Liu et al. (2018) Liu, H., Simonyan, K., and Yang, Y. Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055, 2018.
- Madry et al. (2017) Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
- Poole et al. (2014) Poole, B., Sohl-Dickstein, J., and Ganguli, S. Analyzing noise in autoencoders and deep networks, 2014.
- Recht et al. (2018) Recht, B., Roelofs, R., Schmidt, L., and Shankar, V. Do cifar-10 classifiers generalize to cifar-10? arXiv preprint arXiv:1806.00451, 2018.
- Recht et al. (2019) Recht, B., Roelofs, R., Schmidt, L., and Shankar, V. Do imagenet classifiers generalize to imagenet? arXiv preprint arXiv:1902.10811, 2019.
- Rosenfeld et al. (2018) Rosenfeld, A., Zemel, R., and Tsotsos, J. K. The elephant in the room. arXiv preprint arXiv:1808.03305, 2018.
- Simoncelli & Olshausen (2001) Simoncelli, E. P. and Olshausen, B. A. Natural image statistics and neural representation. Annual review of neuroscience, 24(1):1193–1216, 2001.
- Simonyan & Zisserman (2015) Simonyan, K. and Zisserman, A. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, 2015.
- Szegedy et al. (2015) Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A., et al. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
- Szegedy et al. (2017) Szegedy, C., Ioffe, S., Vanhoucke, V., and Alemi, A. A. Inception-v4, inception-resnet and the impact of residual connections on learning. In AAAI, 2017.
- Torralba & Oliva (2003) Torralba, A. and Oliva, A. Statistics of natural image categories. Network: computation in neural systems, 14(3):391–412, 2003.
- Tsipras et al. (2018) Tsipras, D., Santurkar, S., Engstrom, L., Turner, A., and Madry, A. Robustness may be at odds with accuracy. stat, 1050:11, 2018.
- Wang et al. (2019) Wang, H., He, Z., Lipton, Z. C., and Xing, E. P. Learning robust representations by projecting superficial statistics out. arXiv preprint arXiv:1903.06256, 2019.
- Xie et al. (2019) Xie, Q., Dai, Z., Hovy, E., Luong, M.-T., and Le, Q. V. Unsupervised data augmentation, 2019.
- Yin et al. (2019) Yin, D., Gontijo Lopes, R., Shlens, J., Cubuk, E. D., and Gilmer, J. A fourier perspective on model robustness in computer vision. ICML Workshop on Uncertainty and Robustness in Deep Learning, 2019.
- Zagoruyko & Komodakis (2016) Zagoruyko, S. and Komodakis, N. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
- Zhang et al. (2017) Zhang, H., Cisse, M., Dauphin, Y. N., and Lopez-Paz, D. mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412, 2017.
- Zhong et al. (2017) Zhong, Z., Zheng, L., Kang, G., Li, S., and Yang, Y. Random erasing data augmentation. arXiv preprint arXiv:1708.04896, 2017.
- Zoph & Le (2017) Zoph, B. and Le, Q. V. Neural architecture search with reinforcement learning. In International Conference on Learning Representations, 2017.
- Zoph et al. (2017) Zoph, B., Vasudevan, V., Shlens, J., and Le, Q. V. Learning transferable architectures for scalable image recognition. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2017.
| Augmentation | Test Accuracy | Original mCE | Original mCE (-noise) |
| Patch Gaussian | 75.6% | 0.693 | 0.718 |
* The model trained with Gaussian exhibits high-variance filters. We posit these have not been trained and have little importance, given the low sensitivity of this model to high frequencies. Future work will investigate the importance of such filters on sensitivity.