
Proxy-Normalizing Activations to Match Batch Normalization while Removing Batch Dependence

by Antoine Labatie et al.

We investigate the reasons for the performance degradation incurred with batch-independent normalization. We find that the prototypical techniques of layer normalization and instance normalization both induce the appearance of failure modes in the neural network's pre-activations: (i) layer normalization induces a collapse towards channel-wise constant functions; (ii) instance normalization induces a lack of variability in instance statistics, symptomatic of an alteration of the expressivity. To alleviate failure mode (i) without aggravating failure mode (ii), we introduce the technique "Proxy Normalization" that normalizes post-activations using a proxy distribution. When combined with layer normalization or group normalization, this batch-independent normalization emulates batch normalization's behavior and consistently matches or exceeds its performance.
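The core idea described above can be sketched in a few lines: after a batch-independent normalization step with channel-wise affine parameters, the post-activations are re-normalized by the statistics of the activation function applied to a Gaussian "proxy" variable whose parameters mirror that affine transform. The sketch below is a minimal, hedged illustration of this mechanism, not the paper's reference implementation; the function name `proxy_norm`, the `(N, C)` tensor layout, and the Monte Carlo estimation of the proxy statistics (the paper could equally use quadrature) are all assumptions made here for clarity.

```python
import numpy as np

def proxy_norm(y, act_fn, beta, gamma, eps=1e-5, n_samples=4096, seed=0):
    """Sketch of Proxy Normalization for pre-activations y of shape (N, C).

    y is assumed to already be normalized batch-independently (e.g. by layer
    or group norm) and then shifted/scaled channel-wise by (beta, gamma).
    The post-activation act_fn(y) is normalized by the mean and variance of
    act_fn applied to a Gaussian proxy variable with matching parameters.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_samples, 1))        # proxy samples ~ N(0, 1)
    # Proxy variable per channel: beta + (1 + gamma) * N(0, 1), pushed
    # through the activation; statistics are estimated by Monte Carlo.
    proxy = act_fn(beta + (1.0 + gamma) * z)       # shape (n_samples, C)
    mu = proxy.mean(axis=0)                        # proxy mean, shape (C,)
    var = proxy.var(axis=0)                        # proxy variance, shape (C,)
    # Normalize the actual post-activations with the proxy statistics.
    return (act_fn(y) - mu) / np.sqrt(var + eps)
```

Because the proxy statistics depend only on `(beta, gamma)` and not on the batch, this normalization stays batch-independent while keeping post-activations approximately zero-mean and unit-variance channel-wise, which is the batch-norm-like behavior the abstract refers to.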



Related articles:

- U-Net Training with Instance-Layer Normalization: "Normalization layers are essential in a Deep Convolutional Neural Networ..."
- Stochastic Normalizations as Bayesian Learning: "In this work we investigate the reasons why Batch Normalization (BN) imp..."
- A New Look at Ghost Normalization: "Batch normalization (BatchNorm) is an effective yet poorly understood te..."
- Mode Normalization: "Normalization methods are a central building block in the deep learning ..."
- Permuted AdaIN: Enhancing the Representation of Local Cues in Image Classifiers: "Recent work has shown that convolutional neural network classifiers over..."
- Normalizing the Normalizers: Comparing and Extending Network Normalization Schemes: "Normalization techniques have only recently begun to be exploited in sup..."