Does Data Augmentation Benefit from Split BatchNorms
Data augmentation has emerged as a powerful technique for improving the performance of deep neural networks and has led to state-of-the-art results in computer vision. However, state-of-the-art data augmentation strongly distorts training images, leading to a disparity between examples seen during training and inference. In this work, we explore a recently proposed training paradigm to correct for this disparity: using an auxiliary BatchNorm for the potentially out-of-distribution, strongly augmented images. Our experiments then focus on how to define the BatchNorm parameters used at evaluation. To eliminate the train-test disparity, we experiment with using the batch statistics defined by clean training images only, yet surprisingly find that this does not improve model performance. Instead, we investigate using BatchNorm parameters defined by weak augmentations and find that this method significantly improves performance on common image classification benchmarks such as CIFAR-10, CIFAR-100, and ImageNet. We then explore a fundamental trade-off between accuracy and robustness that arises from using different BatchNorm parameters, providing greater insight into the benefits of data augmentation on model performance.
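The auxiliary-BatchNorm idea described in the abstract can be illustrated with a minimal sketch. The module below is an assumption-laden illustration, not the authors' implementation: the class name SplitBatchNorm2d and the use_aux flag are hypothetical. It keeps one set of BatchNorm statistics for clean or weakly augmented batches and a second, auxiliary set for strongly augmented batches; which set of running statistics to apply at evaluation time is exactly the question the paper studies.

```python
import torch
import torch.nn as nn


class SplitBatchNorm2d(nn.Module):
    """Illustrative split BatchNorm: two sets of statistics, one for
    clean/weakly augmented inputs and an auxiliary one for strongly
    augmented inputs. Names and routing are assumptions for this sketch."""

    def __init__(self, num_features: int):
        super().__init__()
        self.main_bn = nn.BatchNorm2d(num_features)  # clean / weak augmentation
        self.aux_bn = nn.BatchNorm2d(num_features)   # strong augmentation

    def forward(self, x: torch.Tensor, use_aux: bool = False) -> torch.Tensor:
        # During training, route strongly augmented batches through the
        # auxiliary BatchNorm so they do not distort the main running stats.
        if self.training and use_aux:
            return self.aux_bn(x)
        # At evaluation, this sketch defaults to the main (clean/weak)
        # statistics; the paper compares different choices of statistics here.
        return self.main_bn(x)
```

In training, clean or weakly augmented batches would be forwarded with use_aux=False and strongly augmented batches with use_aux=True, so each set of running statistics only sees its own distribution.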