Permuted AdaIN: Enhancing the Representation of Local Cues in Image Classifiers

10/09/2020
by Oren Nuriel et al.

Recent work has shown that convolutional neural network classifiers overly rely on texture at the expense of shape cues, which adversely affects the classifier's performance in shifted domains. In this work, we make a related but distinct distinction: between local image cues, including shape and texture, and global image statistics. We provide a method that enhances the representation of local cues in the hidden layers of image classifiers. Our method, called Permuted Adaptive Instance Normalization (pAdaIN), samples a random permutation π that rearranges the samples in a given batch. Adaptive Instance Normalization (AdaIN) is then applied between the activations of each (non-permuted) sample i and the corresponding activations of sample π(i), thus swapping statistics between the samples of the batch. Because the global image statistics are distorted, this swapping procedure encourages the network to rely on local image cues. By applying the random permutation with probability p and the identity permutation otherwise, one can control the strength of this effect. With a suitable choice of p, selected without considering the test data, our method consistently outperforms baseline methods in image classification, as well as in the setting of domain generalization.
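The abstract describes the swapping step concretely enough to sketch it. The following is a minimal, illustrative PyTorch sketch of that operation, assuming 4D activations of shape (N, C, H, W); the module name PermutedAdaIN, the default value of p, and the eps term are assumptions made here for illustration, not the authors' released implementation.

import torch
import torch.nn as nn

class PermutedAdaIN(nn.Module):
    """Illustrative sketch of the batch-statistic swap described above.

    With probability p, each sample's activations are re-normalized to the
    per-channel mean and standard deviation of another sample in the batch,
    chosen by a random permutation; otherwise the input passes through
    unchanged (the identity permutation).
    """

    def __init__(self, p=0.01, eps=1e-5):  # p is an example value, not a prescribed default
        super().__init__()
        self.p = p
        self.eps = eps

    def forward(self, x):  # x: (N, C, H, W)
        # Apply the swap only during training, and only with probability p.
        if not self.training or torch.rand(1).item() > self.p:
            return x
        perm = torch.randperm(x.size(0), device=x.device)
        # Per-sample, per-channel (instance) statistics over spatial dims.
        mean = x.mean(dim=(2, 3), keepdim=True)
        std = x.var(dim=(2, 3), keepdim=True).add(self.eps).sqrt()
        # AdaIN step: normalize sample i, then re-scale and re-shift it with
        # the statistics of sample perm(i), swapping global statistics.
        x_norm = (x - mean) / std
        return x_norm * std[perm] + mean[perm]

In this sketch the permutation is drawn once per forward pass, so in a given step the whole batch either keeps its own statistics or swaps them, matching the probability-p gating described in the abstract.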

