An Investigation into the Stochasticity of Batch Whitening

03/27/2020, by Lei Huang, et al.

Batch Normalization (BN) is extensively employed in various network architectures by performing standardization within mini-batches. A full understanding of the process has been a central target in the deep learning community. Unlike existing works, which usually only analyze the standardization operation, this paper investigates the more general Batch Whitening (BW). Our work originates from the observation that while various whitening transformations equivalently improve the conditioning, they show significantly different behaviors in discriminative scenarios and in training Generative Adversarial Networks (GANs). We attribute this phenomenon to the stochasticity that BW introduces. We quantitatively investigate the stochasticity of different whitening transformations and show that it correlates well with the optimization behavior during training. We also investigate how the stochasticity relates to the estimation of population statistics during inference. Based on our analysis, we provide a framework for designing and comparing BW algorithms in different scenarios. Our proposed BW algorithm improves residual networks by a significant margin on ImageNet classification. Besides, we show that the stochasticity of BW can improve the GAN's performance, at the cost, however, of training stability.


1 Introduction

Normalization techniques have been extensively used in learning algorithms for data preprocessing [25, 23, 12]. It has been shown that centering, scaling and decorrelating the inputs speeds up training [25]. Furthermore, whitening the input, which combines all of the above operations, improves the conditioning of the Hessian, making gradient descent updates similar to Newton updates [25, 46, 16].

Batch Normalization (BN) [19] extends the idea of normalizing the input to the activations of intermediate layers of Deep Neural Networks (DNNs), and represents a milestone technique in the deep learning community [12, 43, 47]. BN standardizes the activations by executing centering and scaling within a mini-batch of data, facilitating optimization [19, 22, 36] and generalization [4, 5]. Batch Whitening (BW) further extends the scope of BN by removing the correlation of the standardized activations [16]. It has been shown to improve performance in discriminative scenarios [16] and Generative Adversarial Networks (GANs) [39].

(a) BN standardization
(b) PCA whitening
(c) ZCA whitening
(d) CD whitening
Figure 1: We sample 1000 examples (black points) from a Gaussian distribution in a 16-dimensional space, and show the examples in a two-dimensional sub-space (the 6th and 16th dimensions). Given an example (red diamond), combined with different mini-batches, we plot the normalized outputs (yellow pentagrams), where (a), (b), (c) and (d) show the results of BN standardization, PCA, ZCA and CD whitening, respectively.

Despite BW's theoretical support in improving conditioning, there remain some intriguing observations relating to BW that are not well explored. Firstly, while various whitening transformations equivalently improve the conditioning [20], they show significant differences in performance: 1) Principal Component Analysis (PCA) whitening hardly converges, while Zero-phase Component Analysis (ZCA) whitening works well in discriminative scenarios [16]; 2) Cholesky Decomposition (CD) whitening achieves significantly better performance than ZCA in GAN training, while it performs slightly worse in discriminative cases [39]. Secondly, while group based whitening (where features are divided into groups and whitening is performed within each one) is essential in discriminative scenarios [16, 31], full feature whitening has been shown to achieve better performance in training GANs [39].

This paper focuses on explaining the above observations of BW. We find that the stochasticity introduced by normalization over batch data (Figure 1) can be key to uncovering the intriguing phenomena of BW. We quantitatively investigate the magnitude of the stochasticity of different whitening transformations, using the evaluation method called Stochastic Normalization Disturbance (SND) [17]. By doing so, we demonstrate that PCA whitening has significantly larger stochasticity, which is difficult to control by either increasing the batch size or using group based whitening. On the other hand, ZCA whitening has the smallest stochasticity, while CD whitening has a moderate value; more importantly, their stochasticity can be well controlled. This suggests that ZCA whitening should have better optimization behavior in training, while PCA whitening has problems converging, due to the increased stochasticity, which slows down the progress of the optimization [42, 17].

We also investigate the stochasticity during inference, which is caused by the estimation of the population statistics, averaged over the mini-batch statistics during training. We show that, in terms of estimating the population statistics of the whitening matrix, it is more stable to use the mini-batch covariance matrix indirectly (computing the whitening matrix after training) than the mini-batch whitening matrix directly. We further provide an empirical investigation to understand the reasons behind this observation, and find that the stochastic sequences of the mini-batch whitening matrix have a larger diversity than those of the covariance matrix.

Based on the above analyses, we provide a general framework for designing and comparing BW algorithms in different scenarios. We design new BW algorithms and apply them to residual networks [12] on the ImageNet dataset [8], significantly improving the performance over the original networks. We further conduct thorough experiments on training GANs. We show that full feature whitening, when combined with coloring, can improve the final evaluation scores. However, it reduces the training stability and is more sensitive to the hyper-parameter configurations. We attribute this phenomenon to two main effects caused by the stochasticity that BW introduces: 1) Strong stochasticity can increase the diversity of generated images and thus improve the GAN's performance; 2) At the same time, high stochasticity harms optimization and is thus more sensitive to the hyper-parameters. We argue that controlling the magnitude of whitening (stochasticity) is also important in training GANs, and we validate this argument with our experiments.

2 Related Work

The previous analyses on BN mainly focus on the optimization. One main argument is that BN can improve the conditioning of the optimization problem. This argument was initially introduced in the BN paper [19] and further refined in [36], showing that BN leads to a smoother landscape of the optimization problem under certain assumptions. Ghorbani et al. [10] investigated this explanation by computing the spectrum of the Hessian for a large-scale dataset. It is believed that the improved conditioning enables large learning rates, thus improving the generalization, as shown in [5]. Another argument is that BN can adaptively adjust the learning rate [7, 15, 2] due to its scale invariance [19, 4]. This effect has been further discussed in combination with weight decay [49]. Other works have included an investigation into the signal propagation and gradient back-propagation [48]. Different from these approaches, our work focuses on analyzing the stochasticity of whitening over batch data.

The stochasticity introduced by normalization over batch data was first mentioned in the BN paper [19], and further explored in [3, 44] from the perspective of Bayesian optimization. This stochasticity results in differences between the training distribution (using mini-batch statistics) and the test distribution (using estimated population statistics) [18], which is believed to be the main cause of the small-batch problem of BN [47]. To address this issue, a number of approaches have been proposed [47, 34, 29, 18, 45, 41]. Furthermore, it has been observed that BN also encounters difficulties in optimization during training [37, 17]. This phenomenon is explored by the stochastic analysis in [17]. Different from the above research, which focuses on standardization, we analyze, for the first time, the stochasticity of batch whitening. We propose that analyzing whitening, rather than standardization, has several advantages for understanding the behaviors of normalization over batch data: 1) There are an infinite number of whitening transformations, and the main ones show significant differences, as discussed in Section 1; 2) The extent of the whitening (stochasticity) can be well controlled by the batch and group size, which provides more information for designing experiments.

Our work is related to previously proposed whitening methods for the activations of DNNs. One approach is to consider the whitening matrix as model parameters to be estimated using the full data [9, 28]. This kind of whitening has also been exploited in image style transfer tasks [26, 38]. Another line of research is batch whitening, which is what this paper discusses. This approach treats the normalization as a function over a mini-batch input. The main works include PCA whitening, ZCA whitening [16] and its approximation ItN [17], and CD whitening [39]. Pan et al. [31] propose switchable whitening to learn different batch/instance whitening/standardization operations in DNNs; however, they used only the ZCA whitening transformation. Our work aims to understand the behaviors of different whitening transformations based on stochastic analysis.

3 Stochasticity Analysis of Batch Whitening

Let $X \in \mathbb{R}^{d \times m}$ be a data matrix denoting the mini-batch input of size $m$ in a certain layer. To simplify the discussion, we assume that the data is centered, by performing $X := X - \mu \mathbf{1}^T$, where $\mu = \frac{1}{m} X \mathbf{1}$ is the mean of $X$ and $\mathbf{1}$ is a column vector of all ones. Whitening performs normalization over the mini-batch input as follows:

$$\widehat{X} = \Phi X, \qquad (1)$$

where $\Phi \in \mathbb{R}^{d \times d}$ is the mini-batch whitening matrix, derived from the corresponding covariance matrix $\Sigma = \frac{1}{m} X X^T + \epsilon I$. The population statistic of the whitening matrix $\widehat{\Phi}$, used for inference, is usually calculated by a running average over the mini-batches as follows:

$$\widehat{\Phi} := (1 - \lambda)\, \widehat{\Phi} + \lambda\, \Phi. \qquad (2)$$

It is clear that both the whitened output (Eqn. 1) and the population statistics (Eqn. 2) can be viewed as stochastic variables, because they depend on the mini-batch inputs which are sampled over datasets. For illustration, we defer the analysis of the stochasticity to Sections 3.2 and 3.3, and first provide a review of the whitening transformations.
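As a minimal sketch of Eqns. (1)-(2), assuming ZCA whitening, a d×m input layout, and hypothetical names (`batch_whiten`, with illustrative momentum λ = 0.1 and ε = 1e-5, not the paper's settings), the computation could look like:

```python
import numpy as np

def batch_whiten(X, Phi_pop, momentum=0.1, eps=1e-5, training=True):
    """One ZCA-whitening pass over a d x m mini-batch (Eqn. 1), with a
    running-average update of the population whitening matrix (Eqn. 2)."""
    d, m = X.shape
    Xc = X - X.mean(axis=1, keepdims=True)       # center the mini-batch
    if not training:
        return Phi_pop @ Xc, Phi_pop             # inference: population statistics
    Sigma = Xc @ Xc.T / m + eps * np.eye(d)      # mini-batch covariance
    lam, D = np.linalg.eigh(Sigma)               # Sigma = D diag(lam) D^T
    Phi = D @ np.diag(lam ** -0.5) @ D.T         # ZCA whitening matrix
    Phi_pop = (1 - momentum) * Phi_pop + momentum * Phi   # Eqn. 2
    return Phi @ Xc, Phi_pop
```

The whitened output of a training pass has (up to ε) an identity covariance matrix, which is the defining property of a whitening transformation.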

3.1 Whitening Transformations

There are an infinite number of possible whitening matrices, as shown in [20, 16], since any whitened data remains whitened under a rotation. This paper focuses on the whitening transformations based on PCA, ZCA and CD, since these three transformations have shown significant differences in performance when used in training DNNs [16, 39]. Note that BN [19] can be viewed as a special case of batch whitening, since it performs standardization without removing correlations, using $\Phi_{BN} = \Lambda_{diag}^{-1/2}$, where $\Lambda_{diag}$ is obtained by setting the off-diagonal elements of $\Sigma$ to zero. To simplify the description, this paper regards BN as a (reduced) whitening transformation, unless otherwise stated.

PCA Whitening

uses $\Phi_{PCA} = \Lambda^{-1/2} D^T$, where $\Lambda = \mathrm{diag}(\sigma_1, \ldots, \sigma_d)$ and $D = [\mathbf{d}_1, \ldots, \mathbf{d}_d]$ are the eigenvalues and associated eigenvectors of $\Sigma$, i.e. $\Sigma = D \Lambda D^T$. Under this transformation, the variables are first rotated by the eigen-matrix ($D^T$) of the covariance, then scaled by the square root inverse of the eigenvalues ($\Lambda^{-1/2}$). PCA whitening over batch data suffers significant instability in training DNNs, and hardly converges, due to the so-called Stochastic Axis Swapping (SAS), as explained in [16].

ZCA Whitening

uses $\Phi_{ZCA} = D \Lambda^{-1/2} D^T$, where the PCA-whitened input is rotated back by the corresponding rotation matrix $D$. ZCA whitening works by stretching/squeezing the dimensions along the eigenvectors. It has been shown that ZCA whitening avoids the SAS and achieves better performance than standardization (used in BN) on discriminative classification tasks [16].

CD Whitening

uses $\Phi_{CD} = L^{-1}$, where $L$ is the lower triangular matrix from the Cholesky decomposition of the covariance, with $L L^T = \Sigma$. This kind of whitening works by recursively decorrelating the current dimension over the previously decorrelated ones, resulting in the triangular form of its whitening matrix. CD whitening has been shown to achieve state-of-the-art performance in training GANs, while ZCA whitening has degenerated performance [39].
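The three transformations (plus BN's diagonal special case) can be compared directly in NumPy. The following sketch (hypothetical variable names; ε omitted for clarity) verifies that each whitening matrix maps the mini-batch covariance to the identity, while BN only standardizes the diagonal:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 8, 4096
A = rng.normal(size=(d, d))
X = A @ rng.normal(size=(d, m))          # correlated data
X -= X.mean(axis=1, keepdims=True)       # centered
Sigma = X @ X.T / m                      # mini-batch covariance

lam, D = np.linalg.eigh(Sigma)           # Sigma = D diag(lam) D^T
Phi_pca = np.diag(lam ** -0.5) @ D.T     # PCA: rotate, then scale
Phi_zca = D @ Phi_pca                    # ZCA: rotate back
L = np.linalg.cholesky(Sigma)            # Sigma = L L^T
Phi_cd = np.linalg.inv(L)                # CD: triangular whitening matrix
Phi_bn = np.diag(np.diag(Sigma) ** -0.5) # BN: scaling only, no decorrelation

for Phi in (Phi_pca, Phi_zca, Phi_cd):
    W = Phi @ X
    assert np.allclose(W @ W.T / m, np.eye(d), atol=1e-6)
# BN standardizes each dimension but leaves off-diagonal correlations intact
assert np.allclose(np.diag(Phi_bn @ Sigma @ Phi_bn.T), np.ones(d))
```

Note that `Phi_zca = D @ Phi_pca`: ZCA differs from PCA only by a rotation, which is exactly why all three whitening matrices improve the conditioning equally while behaving differently as stochastic estimators.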

The primary motivation of this paper is to investigate the following problem: while all whitening methods equivalently improve the conditioning of the layer input, why do they show significantly different behaviors in training DNNs? In the following sections, we provide a unified analysis and demonstrate that the key is the stochasticity introduced by normalization over batch data.

Figure 2: SND comparison of different batch whitening methods. We sample 60,000 examples from a Gaussian distribution as the training set. To calculate SND, we use and . We show (a) the SND with respect to the dimensions ranging from to , under a batch size of 1024; (b) the SND with respect to the batch size ranging from to , under a dimension of 128.

3.2 Stochasticity During Training

Given a sample $\mathbf{x}$ from a distribution $P_{\mathcal{D}}$, we take a sample set $X^B$ with a size of $m$. The whitened output can be formulated as $\widehat{\mathbf{x}} = F_{X^B}(\mathbf{x})$. For a certain $\mathbf{x}$, $X^B$ can be viewed as a random variable [3, 44, 17]. $\widehat{\mathbf{x}}$ is thus another random variable showing stochasticity. Here, we investigate the stochasticity effects of different whitening transformations.

To provide a more intuitive illustration, we conduct a toy experiment to show how the normalized output of one sample changes when combined with different sample sets, under the different whitening methods. Figure 1 (a), (b), (c) and (d) show the results when performing BN, PCA, ZCA and CD whitening, respectively. It is clear that the distribution of the PCA-whitened output of one sample is very sparse, which means that the output has significant diversity. This suggests that PCA whitening introduces large stochasticity. In contrast, the BN-standardized output shows a tight Gaussian-style distribution, which suggests that BN has smaller stochasticity. Note that BN cannot guarantee that the normalized output has an identity covariance matrix, while the other whitening methods can. Similarly, we also observe that ZCA whitening provides reduced stochasticity compared to CD. In fact, ZCA whitening has been shown to minimize the total squared distance between the original and whitened variables [20, 16]. We conjecture that this property results in ZCA whitening having smaller stochasticity than CD.

Figure 3: Experiments on a 4-layer MLP with 256 neurons in each layer, for MNIST classification. We use a batch size of 1024 and report the training errors. (a) The results of full whitening methods; (b) The results of group based whitening, where 'ZCA-16' indicates ZCA whitening with a group size of 16.

3.2.1 Quantitative Analysis

To provide a quantitative comparison, we exploit the evaluation metric called Stochastic Normalization Disturbance (SND), introduced in [17]. The empirical estimation of the SND for a sample $\mathbf{x}$ over the normalization $F(\cdot)$ is defined as:

$$\widehat{\Delta}_F(\mathbf{x}) = \frac{1}{s} \sum_{i=1}^{s} \Big\| F_{X^B_i}(\mathbf{x}) - \frac{1}{s} \sum_{j=1}^{s} F_{X^B_j}(\mathbf{x}) \Big\|, \qquad (3)$$

where $s$ denotes the number of mini-batches $X^B_i$ that are randomly sampled from the dataset. SND can be used to evaluate the stochasticity of a sample after the normalization operation [17]. Further, a normalization operation $F$'s empirical SND is defined as $\widehat{\Delta}_F = \frac{1}{N} \sum_{i=1}^{N} \widehat{\Delta}_F(\mathbf{x}_i)$, given $N$ samples. $\widehat{\Delta}_F$ describes the magnitude of stochasticity for the corresponding normalization operation.
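A minimal sketch of the empirical SND estimate of Eqn. (3) follows; the helper names (`zca`, `snd`) are hypothetical, and the sample count `s` and batch size are illustrative choices, not the paper's settings:

```python
import numpy as np

def zca(X, eps=1e-5):
    """ZCA-whiten a d x m mini-batch (used here as the normalization F)."""
    Xc = X - X.mean(axis=1, keepdims=True)
    S = Xc @ Xc.T / X.shape[1] + eps * np.eye(X.shape[0])
    lam, D = np.linalg.eigh(S)
    return (D @ np.diag(lam ** -0.5) @ D.T) @ Xc

def snd(x, data, normalize, batch_size=64, s=100, rng=None):
    """Empirical SND of one sample x (Eqn. 3): the average deviation of its
    normalized output over s randomly sampled mini-batches."""
    rng = rng or np.random.default_rng(0)
    outs = []
    for _ in range(s):
        idx = rng.choice(data.shape[1], batch_size - 1, replace=False)
        B = np.column_stack([x, data[:, idx]])   # x combined with a mini-batch
        outs.append(normalize(B)[:, 0])          # normalized output of x
    outs = np.stack(outs)
    return np.mean(np.linalg.norm(outs - outs.mean(axis=0), axis=1))
```

Averaging `snd` over many samples gives the operation-level SND used in Figure 2; swapping `normalize` for a BN or CD implementation reproduces the comparison between transformations.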

Here, we conduct experiments to quantitatively evaluate the effects of different normalization methods. Notably, the stochasticity is related to the batch size and the dimension. Figure 2 (a) shows the SND of different normalization methods with respect to the dimension, when fixing the batch size to 1024. We find that PCA whitening shows the largest SND and BN the smallest, over all dimensions, which is consistent with the observation shown in Figure 1. We notice that all whitening methods have an increased SND as the dimension increases. Besides, ZCA has a smaller SND than CD over all dimensions, which is also consistent with Figure 1. Figure 2 (b) shows the SND of different normalization methods with respect to the batch size, when fixing the dimension to 128. An interesting observation is that PCA whitening has nearly the same, large SND across different batch sizes. This suggests that PCA whitening is extremely unstable, no matter how accurate the estimation of the mini-batch covariance matrix is. This effect is in accordance with the explanation of Stochastic Axis Swapping (SAS) in [16], where a small change over the examples (when performing PCA whitening) results in a large change of the representation.

To further investigate how this stochasticity affects DNN training, we perform experiments on a four-layer Multilayer Perceptron (MLP) with 256 neurons in each layer. We evaluate the training loss with respect to the epochs, and show the results in Figure 3 (a). We find that, among all the whitening methods, ZCA works the best, while PCA is the worst. We argue that this correlates closely with the SND they produce. Evidently, the increased stochasticity can slow down training, even though all the whitening methods have equivalently improved conditioning. An interesting observation is that, in this case, BN works better than ZCA whitening. This is surprising, since ZCA has improved conditioning over BN by removing correlations [16], and it should theoretically have better optimization behavior. However, the amplified stochasticity of ZCA whitening mitigates this advantage in optimization, thus resulting in degraded performance. Therefore, from an optimization perspective, we should control the extent of the stochasticity.

Figure 4: Group-based whitening experiments. (a) We show the SND of different normalization operations with respect to the group size. The experimental setup is the same as Figure 2 and the input dimension is . (b) We show the spectrum of covariance matrix of the ZCA whitened output (Note that CD/PCA whitening has the same spectrum as ZCA whitening.), where ‘G16’ indicates whitening with a group size of 16.

3.2.2 Controlling the Stochasticity by Groups

Huang et al. [16] proposed to use groups to control the extent of whitening. They argue that this method reduces the inaccuracy in estimating the full covariance matrix when the batch size is not sufficiently large. Here, we empirically show how group based whitening affects the SND, providing a good trade-off between introduced stochasticity and improved conditioning. This is essential for achieving a better optimization behavior.

We evaluate the SND of different whitening transformations by varying the group size from 2 to 512, as shown in Figure 4 (a). We also display the spectrum of the covariance matrix of the whitened output (based on groups) in Figure 4 (b). We find that the group size effectively controls the SND of ZCA/CD whitening. With decreasing group size, ZCA and CD show reduced stochasticity (Figure 4 (a)), while also having degraded conditioning (Figure 4 (b)), since the output is only partially whitened. Besides, we observe that PCA whitening still has a large SND over all group sizes, with no significant differences among them. This observation further corroborates the explanation of SAS given in [16], i.e., that PCA whitening is extremely unstable.

We also show the SND of the approximate ZCA whitening method (called ItN [17]) in Figure 4 (a), which uses Newton's iteration to approximately calculate the whitening matrix. We denote by 'ItN5' the ItN method with an iteration number of 5. An interesting observation is that ItN has a smaller SND than BN when using a large group size (e.g., 256) with a small iteration number (e.g., T=5). This suggests that we can further combine the group size and iteration number to control the stochasticity of ItN, providing an efficient and stable solution to approximate ZCA whitening [17]. The above observations furnish substantial insights into the application of whitening in neural networks (especially in scenarios that require decorrelating the activations in a certain layer), and we will further elaborate on this in Section 4.

We also use group-based ZCA/CD whitening methods on the four-layer MLP experiments. The results are shown in Figure 3 (b). We observe that ZCA and CD whitening, with a group size of 16 to control the stochasticity, achieve better training behaviors than BN.
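Group-based whitening can be sketched as follows (a hypothetical `group_zca` helper; the paper's channel-grouping details may differ): the d features are split into groups of `group_size` and each group is whitened independently, trading full decorrelation for reduced stochasticity:

```python
import numpy as np

def group_zca(X, group_size=16, eps=1e-5):
    """Group-based ZCA whitening of a d x m input: whiten each block of
    `group_size` features independently, leaving cross-group correlations."""
    d, m = X.shape
    assert d % group_size == 0, "d must be divisible by the group size"
    out = np.empty_like(X)
    for g in range(0, d, group_size):
        Xg = X[g:g + group_size]
        Xg = Xg - Xg.mean(axis=1, keepdims=True)
        S = Xg @ Xg.T / m + eps * np.eye(group_size)
        lam, D = np.linalg.eigh(S)
        out[g:g + group_size] = (D @ np.diag(lam ** -0.5) @ D.T) @ Xg
    return out
```

Each group's output covariance is (approximately) the identity, while the full output is only block-diagonally whitened, which is exactly the partial conditioning visible in the spectrum of Figure 4 (b).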

(a) ZCA Whitening
(b) CD Whitening
Figure 5: Comparison of estimating the population whitening matrix using different estimation objects (the covariance matrix vs. the whitening matrix). We train MLPs with varying widths (number of neurons in each layer) and batch sizes, for MNIST classification. We evaluate the difference in test accuracy between the two estimation methods. (a) and (b) show the results using ZCA and CD whitening, respectively, under a learning rate of 1. We also tried other learning rates and obtained similar observations (see Appendix B for details).

3.3 Stochasticity During Inference

In the previous section we showed that different whitening transformations, by introducing different magnitudes of stochasticity, result in significantly different training behaviors from an optimization perspective. It is clear that this stochasticity will also affect the final test performance during inference, because the population statistics are estimated by a running average (Eqn. 2) over a stochastic sequence of mini-batch statistics, where the sequence length T is the total number of training iterations. The more diverse this stochastic sequence, the more difficult it is to estimate the population statistics accurately. Rather than estimating the population whitening matrix directly, Siarohin et al. [39] proposed to first estimate the population statistic of the covariance matrix, and then compute the whitening matrix from it after training. However, no further analysis was provided to explain this choice.

Here, we provide an empirical investigation into how the estimation object (the covariance matrix vs. the whitening matrix) in Eqn. 2 affects the test performance. We conduct experiments on MLPs with varying width (the number of neurons in each layer) and batch size, for MNIST classification. Figure 5 (a) and (b) show the results of ZCA and CD whitening, respectively. We find that estimating via the covariance matrix performs better than estimating the whitening matrix directly, especially in scenarios with large width and small batch size (intuitively, estimation in a high-dimensional space with a small batch size is noisier). This suggests that using the covariance matrix to estimate the population whitening matrix indirectly is more stable than estimating it directly. Another interesting observation, from comparing Figure 5 (a) and (b), is that the difference between the two estimation methods is smaller for CD whitening than for ZCA whitening.
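The two estimation objects can be contrasted on synthetic data. This sketch (hypothetical names; the dimensions, batch size, iteration count and momentum are illustrative) simulates T noisy mini-batch covariances and compares the whitening error of the two running averages:

```python
import numpy as np

def zca_matrix(S):
    lam, D = np.linalg.eigh(S)
    return D @ np.diag(lam ** -0.5) @ D.T

rng = np.random.default_rng(0)
d, m, T = 16, 32, 500
A = rng.normal(size=(d, d))                     # ground-truth covariance = A A^T
Sigma_pop = np.zeros((d, d))
Phi_pop = np.zeros((d, d))
for t in range(T):                              # simulate T training iterations
    X = A @ rng.normal(size=(d, m))
    X -= X.mean(axis=1, keepdims=True)
    S = X @ X.T / m + 1e-5 * np.eye(d)
    Sigma_pop = 0.9 * Sigma_pop + 0.1 * S              # indirect: average Sigma
    Phi_pop = 0.9 * Phi_pop + 0.1 * zca_matrix(S)      # direct: average Phi

Phi_from_sigma = zca_matrix(Sigma_pop)          # compute Phi after training
Sigma_true = A @ A.T
I = np.eye(d)
err_indirect = np.linalg.norm(Phi_from_sigma @ Sigma_true @ Phi_from_sigma.T - I)
err_direct = np.linalg.norm(Phi_pop @ Sigma_true @ Phi_pop.T - I)
```

Averaging mini-batch covariances is unbiased, whereas averaging the inverse square roots is not (the map S → S^{-1/2} is nonlinear), so the indirect estimate typically whitens the true covariance more accurately, consistent with the observation above.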

We further analyze how the diversity of the stochastic sequence $\{\Phi_t\}_{t=1}^T$ affects the estimation of the population statistics. Intuitively, if the stochastic sequence has high diversity, the estimation will be worse. We view each element of $\Phi_t$ as an independent stochastic variable and calculate the standard deviation of each element during training, as follows:

$$\sigma_{\Phi}^{ij} = \sqrt{\frac{1}{T} \sum_{t=1}^{T} \big( \Phi_t^{ij} - \bar{\Phi}^{ij} \big)^2}, \quad \text{with } \bar{\Phi}^{ij} = \frac{1}{T} \sum_{t=1}^{T} \Phi_t^{ij}, \qquad (4)$$

where $\Phi_t^{ij}$ ($\bar{\Phi}^{ij}$) indicates the (i,j)-th element of $\Phi_t$ ($\bar{\Phi}$). Furthermore, we calculate the normalized standard deviation $\sigma_{\Sigma}^{ij}$ of each element of the covariance matrix, which, as defined in Eqn. 4, is calculated over $\{\Sigma_t\}_{t=1}^T$. Figure 6 (a) and (b) show the histograms of $\sigma_{\Phi}$ and $\sigma_{\Sigma}$, respectively, when using ZCA whitening. We clearly find that the whitening matrix has a larger standard deviation on average, and thus a larger diversity in general, than the covariance matrix. This reveals why using the covariance matrix is more stable for estimating the population statistics than using the whitening matrix directly.

(a) histogram of $\sigma_{\Phi}$
(b) histogram of $\sigma_{\Sigma}$
Figure 6: Analysis of the diversity of the stochastic sequences in estimating the population statistics. We report the histograms of $\sigma_{\Phi}$ and $\sigma_{\Sigma}$.

4 Evaluation on Vision Tasks

Based on the previous analysis, we can design new BW algorithms, and construct more effective DNNs by using BW. We investigate this in classification and training GANs. The code to reproduce the experiments is available at https://github.com/huangleiBuaa/StochasticityBW.

4.1 The Landscape of Batch Whitening Algorithms

We provide a general view of batch whitening algorithms in Algorithm 1 for vectorial inputs. Back-propagation is necessary to pass through the whitening transformation, and we provide the details in Appendix A for completeness. For feature map inputs, which additionally have height and width dimensions, the whitening transformation is performed over the unrolled input, since each spatial position of the feature map can be viewed as a sample [19].

Note that an extra step to recover the representation capacity of normalization is given in Line 11 of Algorithm 1, which has been shown to be empirically effective in [19, 4, 16, 47]. There are two alternatives for this recovery operation: one is a dimension-wise scale and shift [19, 16]; the other is a coloring transformation, which was proposed in [39] to achieve better performance in training GANs. By combining the whitening transformation, estimation object and recovery operation, we can design different BW algorithms. Table 1 shows the scope of values for the different components this paper discusses. Note that ItN [17] is an efficient and numerically stable approximation of ZCA whitening. We use the 'Scale Shift' operation for recovery in all algorithms, unless otherwise stated.

1:  Input: mini-batch inputs $X \in \mathbb{R}^{d \times m}$.
2:  Output: $Y \in \mathbb{R}^{d \times m}$.
3:  if Training then
4:     Calculate covariance matrix: $\Sigma = \frac{1}{m} X X^T + \epsilon I$.
5:     Calculate whitening matrix: $\Phi = \phi(\Sigma)$.
6:     Calculate whitened output: $\widehat{X} = \Phi X$.
7:     Update population statistics (Eqn. 2).
8:  else
9:     Calculate whitened output: $\widehat{X} = \widehat{\Phi} X$.
10:  end if
11:  Recover representation: $Y = f(\widehat{X})$.
Algorithm 1 A general view of batch whitening algorithms.
COMPONENT VALUE
Whitening transformation ‘ZCA’, ‘PCA’, ‘CD’, ‘ItN’
Estimation object ‘$\Sigma$’ (covariance matrix), ‘$\Phi$’ (whitening matrix)
Recovery operation ‘Scale Shift’, ‘Coloring’
Table 1: The scope of values in Algorithm 1 for the different components this paper discusses. The Cartesian product of the values constitutes the landscape of the batch whitening algorithms used in this study.
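Algorithm 1 can be sketched as a compact module with pluggable components; the `BatchWhitening` class below is hypothetical, shows only the ZCA/CD transformations, the two estimation objects, and a scale-shift recovery, and omits back-propagation details:

```python
import numpy as np

class BatchWhitening:
    """A sketch of Algorithm 1 with a pluggable whitening transformation
    ('zca' or 'cd'), estimation object ('sigma' or 'phi'), and a
    scale-shift recovery; feature-map inputs would be unrolled first."""

    def __init__(self, d, transform="zca", estimate="sigma",
                 momentum=0.1, eps=1e-5):
        self.transform, self.estimate = transform, estimate
        self.momentum, self.eps = momentum, eps
        self.pop = np.eye(d)                          # running Sigma or Phi
        self.gamma, self.beta = np.ones((d, 1)), np.zeros((d, 1))

    def _phi(self, S):
        if self.transform == "cd":
            return np.linalg.inv(np.linalg.cholesky(S))
        lam, D = np.linalg.eigh(S)                    # default: ZCA
        return D @ np.diag(lam ** -0.5) @ D.T

    def __call__(self, X, training=True):
        Xc = X - X.mean(axis=1, keepdims=True)        # center
        if training:
            S = Xc @ Xc.T / X.shape[1] + self.eps * np.eye(X.shape[0])
            Phi = self._phi(S)                        # mini-batch whitening matrix
            target = S if self.estimate == "sigma" else Phi
            self.pop = (1 - self.momentum) * self.pop + self.momentum * target
        else:
            Phi = self._phi(self.pop) if self.estimate == "sigma" else self.pop
        return self.gamma * (Phi @ Xc) + self.beta    # scale-shift recovery
```

Instantiating with `transform="cd", estimate="phi"` or similar combinations enumerates the algorithm landscape of Table 1 (a coloring recovery would replace the element-wise scale-shift with a learned full matrix).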

4.2 Investigation on Discriminative Scenarios

In this section, we first investigate how different whitening transformations affect the training of the VGG network [40] on the CIFAR-10 dataset [23], from the perspective of optimization. We then show the performance on large-scale ImageNet classification [8], applying the designed BW algorithms to residual networks [12].

4.2.1 VGG on CIFAR-10

We use the VGG networks [40] tailored for CIFAR inputs (16 convolutional layers and 1 fully-connected layer). We add the normalization methods after each convolutional layer of the VGGs. We compare several methods, including: 1) the full whitening methods 'PCA', 'ZCA' and 'CD'; 2) the approximate whitening 'ItN' and the standardization 'BN'; 3) group-based whitening methods with varying group sizes. We denote by 'ZCA-16' ZCA whitening with group size 16. We focus on comparing the training performance from an optimization perspective, and we use the mini-batch covariance matrix as the estimation object for all methods during inference. We use SGD with a batch size of 256 to optimize the model. We set the initial learning rate to 0.1, then divide it by 5 after 60 epochs and finish training at 120 epochs.

(a) Training error
(b) Test error
Figure 7: Experiments on VGG for CIFAR-10 classification.

The main experimental observations include: 1) 'PCA' fails training under all configurations, meaning that either the training loss does not decrease or numerical instability appears. This observation is consistent with the previous MLP experiments. 2) 'ZCA-16' trains well, while the other 'ZCA'-related configurations fail due to numerical instability. This is caused by the back-propagation through the eigendecomposition, which requires distinct eigenvalues of the mini-batch covariance matrix [16, 39]. 3) 'CD' has no numerical instability and can ensure full feature whitening of the models. However, fully whitening features with 'CD' performs significantly worse than the group-based configurations shown in Figure 7. This again suggests that it is essential to control the extent of whitening for discriminative models. 4) We find that 'ItN' (the approximation of ZCA whitening) works the best.

Figure 8: Stability experiments on GAN with hinge loss [30, 6] for unconditional image generation. The box shows the quartiles while the whiskers extend to show the rest of the distribution (we limit the FID range to (20, 100) for better presentation; the full range of FID is shown in Appendix C.2.1). Note that all methods use the covariance matrix to estimate population statistics.

4.2.2 Residual Network on ImageNet

We investigate the effectiveness of the various whitening algorithms on residual networks for ImageNet classification with 1000 classes [8]. We use the official 1.28M training images as the training set, and evaluate the top-1 accuracy on the validation set of 50k images.

Ablation Study on ResNet-18

We first perform an ablation study on the 18-layer residual network (ResNet-18) to explore multiple positions for replacing BN with BW. We consider three architectures: 1) ARC-1, where we only replace the first BN module of ResNet-18, as proposed in [16]; 2) ARC-2, where, based on ARC-1, we additionally plug in a BW layer after the last average pooling (before the last linear layer) to learn decorrelated feature representations, as proposed in [17]; 3) ARC-3, where, based on ARC-2, we also replace the BN modules of several further blocks. We compare all the whitening transformations and estimation objects in Table 1; for each transformation, the two variants estimate the population statistics using the covariance matrix and the whitening matrix, respectively. For 'PCA', 'ZCA' and 'CD', we use group-based whitening with two group-size configurations and report the better performance.

Method ARC-1 ARC-2 ARC-3
Baseline [12] 70.31
PCA [16] 59.93 (↓10.38)
PCA 70.01 (↓0.30)
ZCA [16] 70.58 (↑0.27)
ZCA 70.62 (↑0.31)
CD 70.46 (↑0.15) 70.80 (↑0.49) 68.15 (↓2.16)
CD [39] 70.55 (↑0.24) 70.89 (↑0.58) 68.56 (↓1.75)
ItN [17] 70.62 (↑0.31) 71.14 (↑0.83) 71.26 (↑0.95)
ItN 70.63 (↑0.32) 71.33 (↑1.02) 71.62 (↑1.31)
Table 2: Comparison of validation accuracy (%, single model and single-crop) on an 18-layer residual network for ImageNet. Numbers in parentheses give the change relative to the baseline.

We follow the same experimental setup as described in  [12], except that we use one GPU and train over 100 epochs. We apply SGD with a mini-batch size of 256, momentum of 0.9 and weight decay of 0.0001. The initial learning rate is set to 0.1 and divided by 10 at 30, 60 and 90 epochs.

The results are shown in Table 2. For ARC-1, we find that all whitening methods, except the PCA-related ones, improve the performance over the baseline. We observe that ZCA whitening and its approximation ItN achieve slightly better performance than CD, and this observation is consistent with the results in [39]. This suggests that ZCA whitening, which minimizes the distortion introduced by whitening under the L2 distance [16], usually works better than other whitening methods for discriminative classification tasks. Under ARC-2 and ARC-3, ZCA/PCA-related methods suffer from numerical instability. We also observe that CD-related methods have significantly degenerated performance under ARC-3. This implies that the stochasticity introduced by decorrelating multiple layers with CD whitening harms the learning. We find that ItN-related methods can effectively control the stochasticity and achieve further performance improvements on ARC-3. We also tried a ResNet-18 in which all BN layers are replaced with ItN; however, this network showed no performance improvement over ARC-3, while the introduced computational cost was significant, as already observed in [17]. These results demonstrate that controlling the extent of whitening (stochasticity) is important for achieving performance improvements over standardization.

From all the architectures and whitening methods, we observe that using the covariance matrix to estimate the population statistics is better than using the whitening matrix, especially on ARC.

Results on ResNet-50

Based on the above observations, we further apply ItN to ResNet-50. In addition to the standard step learning rate decay [12] used in the previous setup, we also consider the cosine learning rate decay [27], which is also a basic setting [13] when training on ImageNet; we want to show that the improvement of the proposed method holds under different setups. For cosine decay, we start with a learning rate of 0.1 and decay it to 0.00001 over 100 epochs. We find that the proposed models significantly improve the performance over the original one under all configurations. The additional time costs of ‘ItN-ARC’ and ‘ItN-ARC’ over the original are reported in Appendix C.3. Note that ItN consistently provides a slight improvement over the original ItN proposed in [17] for all configurations in this experiment; see Appendix C.1 for details. Moreover, ItN can bear a larger group size and a smaller batch size than ItN, which is particularly important in the scenario of training GANs, as we will discuss.
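For concreteness, the two learning rate schedules used in these experiments can be sketched as follows (a minimal illustration; the function names are ours):

```python
import math

def step_lr(epoch, base_lr=0.1):
    # Step decay: divide the learning rate by 10 at epochs 30, 60 and 90.
    return base_lr * 0.1 ** sum(epoch >= m for m in (30, 60, 90))

def cosine_lr(epoch, total=100, lr_max=0.1, lr_min=0.00001):
    # Cosine decay [27]: anneal from lr_max down to lr_min over `total` epochs.
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * epoch / total))
```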

Method Step decay Cosine decay
Baseline 76.20 76.62
ItN-ARC 77.18 (0.98) 77.68 (1.06)
ItN-ARC 77.28 (1.08) 77.92 (1.30)
Table 3: Results using ResNet-50 for ImageNet. We evaluate the top-1 validation accuracy (%, single model and single-crop).

4.3 Investigation on Training GANs

In this section, we provide an empirical analysis of BW algorithms for training GANs. We focus on investigating the effects of different BW algorithms in stabilizing the training and improving the performance. We evaluate on unconditional image generation for the CIFAR-10 dataset. For quantitative evaluation of the generated samples, we use the two most common metrics: Fréchet Inception Distance (FID) [14] (the lower the better) and Inception Score (IS) [35] (the higher the better). Following the setup in [39], we only apply BW on the generator. Unless otherwise noted, the batch whitening methods used in this section use the covariance matrix to estimate the whitening matrix.

Method | IS (ScaleShift) | FID (ScaleShift) | IS (Coloring) | FID (Coloring)
BN | 7.243 ± 0.073 | 27.89 | 7.317 ± 0.093 | 27.61
CD | 6.986 ± 0.065 | 30.11 | 7.612 ± 0.112 | 24.44
ZCA-64 | 7.265 ± 0.069 | 26.80 | 7.412 ± 0.118 | 25.79
ItN | 7.246 ± 0.104 | 27.44 | 7.599 ± 0.089 | 24.68
Table 4: Results on stability experiments shown in Section 4.3.1. We report the best FID and the corresponding IS from all the configurations for different whitening methods.
Figure 9: Comparison of BW methods between using Covariance Matrix (CM) and Whitening Matrix (WM) as estimation object.

4.3.1 Stability Experiments

Following the analysis in [24], we conduct experiments to demonstrate the effects of BW algorithms in stabilizing the training of GANs. We use the DCGAN [33] architecture and the hinge loss [30, 6]. We provide the implementation details in Appendix C.2.

We use the Adam optimizer [21] and train for 100 epochs. We consider 15 hyper-parameter settings (see Appendix C.2.1 for details) by varying the learning rate, the first and second momentum of Adam, and the number of discriminator updates per generator update. We use both the ‘ScaleShift’ (denoted ‘-S’) and ‘Coloring’ (denoted ‘-C’) operations to recover the representation of BW. We calculate the FID distribution of the trained models under the 15 configurations; the results are shown in Figure 8. The lower the variance, the more stable the model from an optimization perspective. In Table 4, we also provide the best FID and the corresponding IS over all configurations for each whitening method. Note that full ZCA whitening also suffers from numerical instability in this experiment, so we use group-based ZCA whitening instead.

We observe that ‘CD’ whitening combined with the coloring transformation (CD-C) achieves the best performance. However, it has the worst stability among all whitening methods. We argue that full-feature whitening introduces strong stochasticity, which benefits the diversity of the generated examples; however, the strong stochasticity also harms training and makes the model sensitive to the hyperparameters. We find that the coloring operation can improve performance, especially for the whitening transformations (e.g., BN benefits less). Nevertheless, it also makes training unstable and sensitive to the hyperparameters. Besides, we observe that ‘ZCA-64’ achieves better performance than ‘CD-64’. This suggests that the advantage of ZCA whitening over CD in discriminative scenarios persists when training GANs. We argue that controlling the magnitude of whitening (stochasticity) is still important in training GANs. For example, ItN, the approximation of ZCA whitening, has nearly the same performance as CD (Table 4) and shows better stability (Figure 8), due to its effective control of the stochasticity, as explained in [17].

We also obtained similar results when using the non-saturating loss [11], as shown in Appendix C.2.1.

Comparison of Estimation Object

Considering that the CD whitening in [39] uses the covariance matrix to indirectly estimate the whitening matrix when training a GAN, we also compare the whitening methods under different estimation objects. The results are shown in Figure 9. We observe that the methods using the covariance matrix as the estimation object achieve consistently better FID scores and are generally more stable than those using the whitening matrix.
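The two estimation schemes can be sketched as follows (a simplified NumPy illustration, ours, with an assumed momentum of 0.9 for the running averages):

```python
import numpy as np

def zca_matrix(sigma):
    # ZCA whitening matrix computed from a covariance matrix.
    lam, d = np.linalg.eigh(sigma)
    return d @ np.diag(lam ** -0.5) @ d.T

momentum, dim = 0.9, 4
running_sigma = np.eye(dim)   # estimation object: covariance matrix
running_phi = np.eye(dim)     # estimation object: whitening matrix

rng = np.random.default_rng(0)
for _ in range(200):          # simulated mini-batch updates during training
    x = 2.0 * rng.normal(size=(dim, 64))
    xc = x - x.mean(axis=1, keepdims=True)
    sigma = xc @ xc.T / xc.shape[1]
    running_sigma = momentum * running_sigma + (1 - momentum) * sigma
    running_phi = momentum * running_phi + (1 - momentum) * zca_matrix(sigma)

# Inference-time whitening matrix under the two schemes:
phi_from_sigma = zca_matrix(running_sigma)  # covariance as estimation object
phi_direct = running_phi                    # whitening matrix as estimation object
```

Because the map from a covariance matrix to its whitening matrix is nonlinear, the two inference-time matrices generally differ: averaging whitening matrices is not the same as whitening with the averaged covariance, which is one way to see why the choice of estimation object matters.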

Method | IS (DCGAN) | FID (DCGAN) | IS (ResNet) | FID (ResNet)
CD | 7.820 ± 0.099 | 22.89 | 8.522 ± 0.113 | 18.34
ZCA-64 | 7.672 ± 0.068 | 22.91 | 8.492 ± 0.106 | 17.86
ItN | 7.652 ± 0.070 | 22.50 | 8.375 ± 0.093 | 17.55
Table 5: Performance comparison between different whitening methods on the DCGAN and ResNet architectures used in [39]. For fairness, we report all the results at the end of the training.

4.3.2 Validation on Larger Architectures

Based on our previous analysis, we observe that ItN and ZCA whitening (with an appropriate group size) have similar performance to CD in training GANs when using the covariance matrix as the estimation object. We further apply ZCA whitening and ItN to the DCGAN and ResNet models with which CD whitening achieves nearly state-of-the-art performance on CIFAR-10. We use the code and the same setup as provided in [39], replacing all the CD whitening modules in the models with ItN/ZCA whitening. The results are shown in Table 5. We observe that CD whitening has no significant advantage over ItN/ZCA-G64. Ultimately, ItN achieves the best performance in terms of FID, while CD whitening is the best in terms of IS. This suggests that there is still room to improve the performance of batch whitening in training GANs by further finely controlling the stochasticity.

5 Conclusions

In this paper, we provided a stochasticity analysis of batch whitening that thoroughly explains why different whitening transformations show significant differences in performance when training DNNs, despite their equivalent improvement of the conditioning. Our analysis provides insights for designing new normalization algorithms and constructing new network architectures. We believe that our analysis will open new avenues of research in better understanding normalization over batch data.

Acknowledgement We thank Anna Hennig and Ying Hu for their help with proofreading.

References

  • [1] Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. Tensorflow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), 2016.
  • [2] Sanjeev Arora, Zhiyuan Li, and Kaifeng Lyu. Theoretical analysis of auto rate-tuning by batch normalization. In ICLR, 2019.
  • [3] Andrei Atanov, Arsenii Ashukha, Dmitry Molchanov, Kirill Neklyudov, and Dmitry Vetrov. Uncertainty estimation via stochastic batch normalization. In ICLR Workshop, 2018.
  • [4] Lei Jimmy Ba, Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. CoRR, abs/1607.06450, 2016.
  • [5] Johan Bjorck, Carla Gomes, and Bart Selman. Understanding batch normalization. In NIPS, 2018.
  • [6] Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelity natural image synthesis. In ICLR, 2019.
  • [7] Minhyung Cho and Jaehyung Lee. Riemannian approach to batch normalization. In NIPS, 2017.
  • [8] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR, 2009.
  • [9] Guillaume Desjardins, Karen Simonyan, Razvan Pascanu, and koray kavukcuoglu. Natural neural networks. In NIPS, 2015.
  • [10] Behrooz Ghorbani, Shankar Krishnan, and Ying Xiao. An investigation into neural net optimization via hessian eigenvalue density. In ICML, 2019.
  • [11] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS. 2014.
  • [12] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
  • [13] Tong He, Zhi Zhang, Hang Zhang, Zhongyue Zhang, Junyuan Xie, and Mu Li.

    Bag of tricks for image classification with convolutional neural networks.

    In CVPR, 2019.
  • [14] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In NIPS. 2017.
  • [15] Elad Hoffer, Ron Banner, Itay Golan, and Daniel Soudry. Norm matters: efficient and accurate normalization schemes in deep networks. In NIPS, 2018.
  • [16] Lei Huang, Dawei Yang, Bo Lang, and Jia Deng. Decorrelated batch normalization. In CVPR, 2018.
  • [17] Lei Huang, Yi Zhou, Fan Zhu, Li Liu, and Ling Shao. Iterative normalization: Beyond standardization towards efficient whitening. In CVPR, 2019.
  • [18] Sergey Ioffe. Batch renormalization: Towards reducing minibatch dependence in batch-normalized models. In NIPS, 2017.
  • [19] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
  • [20] Agnan Kessy, Alex Lewin, and Korbinian Strimmer. Optimal whitening and decorrelation. The American Statistician, 72(4):309–314, 2018.
  • [21] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
  • [22] Jonas Kohler, Hadi Daneshmand, Aurelien Lucchi, Ming Zhou, Klaus Neymeyr, and Thomas Hofmann. Towards a theoretical understanding of batch normalization. arXiv preprint arXiv:1805.10694, 2018.
  • [23] Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009.
  • [24] Karol Kurach, Mario Lučić, Xiaohua Zhai, Marcin Michalski, and Sylvain Gelly. A large-scale study on regularization and normalization in GANs. In ICML, 2019.
  • [25] Yann LeCun, Léon Bottou, Genevieve B. Orr, and Klaus-Robert Müller. Effiicient backprop. In Neural Networks: Tricks of the Trade, pages 9–50, 1998.
  • [26] Yijun Li, Chen Fang, Jimei Yang, Zhaowen Wang, Xin Lu, and Ming-Hsuan Yang. Universal style transfer via feature transforms. In NIPS, 2017.
  • [27] Ilya Loshchilov and Frank Hutter.

    SGDR: stochastic gradient descent with restarts.

    In ICLR, 2017.
  • [28] Ping Luo. Learning deep architectures via generalized whitened neural networks. In ICML, 2017.
  • [29] Ping Luo, Jiamin Ren, and Zhanglin Peng. Differentiable learning-to-normalize via switchable normalization. arXiv preprint arXiv:1806.10779, 2018.
  • [30] Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. In ICLR, 2018.
  • [31] Xingang Pan, Xiaohang Zhan, Jianping Shi, Xiaoou Tang, and Ping Luo. Switchable whitening for deep representation learning. In ICCV, 2019.
  • [32] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer.

    Automatic differentiation in PyTorch.

    In NIPS Autodiff Workshop, 2017.
  • [33] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016.
  • [34] Mengye Ren, Renjie Liao, Raquel Urtasun, Fabian H. Sinz, and Richard S. Zemel. Normalizing the normalizers: Comparing and extending network normalization schemes. In ICLR, 2017.
  • [35] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, Xi Chen, and Xi Chen. Improved techniques for training gans. In NIPS, 2016.
  • [36] Shibani Santurkar, Dimitris Tsipras, Andrew Ilyas, and Aleksander Madry. How does batch normalization help optimization? In NIPS, 2018.
  • [37] Alexander Shekhovtsov and Boris Flach. Stochastic normalizations as bayesian learning. In ACCV, 2018.
  • [38] Lu Sheng, Ziyi Lin, Jing Shao, and Xiaogang Wang. Avatar-net: Multi-scale zero-shot style transfer by feature decoration. In CVPR, 2018.
  • [39] Aliaksandr Siarohin, Enver Sangineto, and Nicu Sebe. Whitening and coloring batch transform for gans. In ICLR, 2019.
  • [40] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
  • [41] Saurabh Singh and Abhinav Shrivastava. Evalnorm: Estimating batch normalization statistics for evaluation. In ICCV, 2019.
  • [42] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res., 15(1):1929–1958, Jan. 2014.
  • [43] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna.

    Rethinking the inception architecture for computer vision.

    In CVPR, 2016.
  • [44] Mattias Teye, Hossein Azizpour, and Kevin Smith. Bayesian uncertainty estimation for batch normalized deep networks. In ICML, 2018.
  • [45] Guangrun Wang, Jiefeng Peng, Ping Luo, Xinjiang Wang, and Liang Lin. Kalman normalization: Normalizing internal representations across network layers. In NIPS, 2018.
  • [46] Simon Wiesler and Hermann Ney. A convergence analysis of log-linear training. In NIPS, pages 657–665, 2011.
  • [47] Yuxin Wu and Kaiming He. Group normalization. In ECCV, 2018.
  • [48] Greg Yang, Jeffrey Pennington, Vinay Rao, Jascha Sohl-Dickstein, and Samuel S. Schoenholz. A mean field theory of batch normalization. In ICLR, 2019.
  • [49] Guodong Zhang, Chaoqi Wang, Bowen Xu, and Roger B. Grosse. Three mechanisms of weight decay regularization. In ICLR, 2019.

Appendix A Back-propagation

Given the Batch Whitening (BW) algorithms described in Algorithm 1 of Section 4.1, we provide the corresponding back-propagation during training in Algorithm I. Note that the gradient in Line 3 of Algorithm I is easy to calculate, given the specific recovery operation shown in the paper. Here, we mainly elaborate on the nontrivial back-propagation through the whitening transformation, i.e., Line 5 of Algorithm I.

We consider the following whitening transformations discussed in the paper: Principal Component Analysis (PCA) whitening [16], Zero-phase Component Analysis (ZCA) whitening [16], Cholesky Decomposition (CD) whitening [39] and ItN [17] that approximates ZCA whitening. The back-propagations of all whitening transformations are derived in their original papers [16, 39, 17]. Here, we provide the results for completeness.

1:  Input: mini-batch gradients with respect to the layer outputs.
2:  Output: mini-batch gradients with respect to the layer inputs.
3:  Calculate the gradient with respect to the whitened output, by back-propagating through the recovery operation.
4:  Calculate the gradient with respect to the whitening matrix $\Phi$.
5:  Calculate the gradient with respect to the covariance matrix $\Sigma$, given $\frac{\partial L}{\partial \Phi}$ (Sections A.1–A.4).
6:  Calculate the gradient with respect to the input, by back-propagating through the whitening and centering operations.
Algorithm I Back-propagation of batch whitening algorithms.

a.1 PCA Whitening

PCA whitening uses $\Phi_{PCA} = \Lambda^{-1/2} D^T$, where $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_d)$ and $D = [d_1, \ldots, d_d]$ are the eigenvalues and associated eigenvectors of $\Sigma$, i.e. $\Sigma = D \Lambda D^T$.

The backward pass of PCA whitening is:

$\frac{\partial L}{\partial D} = \left(\frac{\partial L}{\partial \Phi_{PCA}}\right)^T \Lambda^{-1/2}$  (5)
$\frac{\partial L}{\partial \Lambda} = -\frac{1}{2} \Lambda^{-3/2} \left(\frac{\partial L}{\partial \Phi_{PCA}} D\right)_{diag}$  (6)
$\frac{\partial L}{\partial \Sigma} = D \left( K^T \odot \left(D^T \frac{\partial L}{\partial D}\right) + \left(\frac{\partial L}{\partial \Lambda}\right)_{diag} \right) D^T$  (7)

where $K$ is 0-diagonal with $K_{ij} = \frac{1}{\lambda_i - \lambda_j}$, the operator $\odot$ is an element-wise matrix multiplication, and $(\cdot)_{diag}$ sets the off-diagonal elements of a matrix to zero.

a.2 ZCA Whitening

ZCA whitening uses $\Phi_{ZCA} = D \Lambda^{-1/2} D^T$, where the PCA-whitened input is rotated back by the corresponding rotation matrix $D$.

The backward pass of ZCA whitening is:

$\frac{\partial L}{\partial D} = \left(\frac{\partial L}{\partial \Phi_{ZCA}} + \left(\frac{\partial L}{\partial \Phi_{ZCA}}\right)^T\right) D \Lambda^{-1/2}$  (8)
$\frac{\partial L}{\partial \Lambda} = -\frac{1}{2} \Lambda^{-3/2} \left(D^T \frac{\partial L}{\partial \Phi_{ZCA}} D\right)_{diag}$  (9)
$\frac{\partial L}{\partial \Sigma} = D \left( K^T \odot \left(D^T \frac{\partial L}{\partial D}\right) + \left(\frac{\partial L}{\partial \Lambda}\right)_{diag} \right) D^T$  (10)

where $K$ and $(\cdot)_{diag}$ are defined as in Section A.1.
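As a sanity check on the eigendecomposition-based backward pass described in this section, the sketch below (ours) back-propagates a toy loss, the sum of the entries of the ZCA whitening matrix, down to the covariance matrix, and compares the analytic gradient with central finite differences over symmetric perturbations:

```python
import numpy as np

def zca_and_grad(sigma):
    # Forward: Phi = D Lambda^{-1/2} D^T.
    lam, d = np.linalg.eigh(sigma)
    phi = d @ np.diag(lam ** -0.5) @ d.T
    # Backward for the toy loss L = sum(Phi), i.e. dL/dPhi = ones.
    j = np.ones_like(phi)
    dl_dd = (j + j.T) @ d @ np.diag(lam ** -0.5)
    dl_dlam = -0.5 * lam ** -1.5 * np.diag(d.T @ j @ d)
    k = lam[None, :] - lam[:, None]      # k[a, b] = lam_b - lam_a
    np.fill_diagonal(k, np.inf)          # diagonal of the 1/k mask becomes zero
    m = (d.T @ dl_dd) / k                # chain rule through the eigenvectors
    dl_dsigma = d @ (m + np.diag(dl_dlam)) @ d.T
    return phi, 0.5 * (dl_dsigma + dl_dsigma.T)  # symmetrized gradient

rng = np.random.default_rng(1)
a = rng.normal(size=(5, 5))
sigma = a @ a.T + 5.0 * np.eye(5)        # well-conditioned SPD input
phi, grad = zca_and_grad(sigma)

h = 1e-5                                 # central finite differences
for i in range(5):
    for j_ in range(5):
        e = np.zeros((5, 5)); e[i, j_] = e[j_, i] = 1.0  # symmetric direction
        lp = zca_and_grad(sigma + h * e)[0].sum()
        lm = zca_and_grad(sigma - h * e)[0].sum()
        num = (lp - lm) / (2 * h)
        ana = (grad * e).sum()           # <grad, direction>
        assert abs(num - ana) < 1e-5
```

ZCA is used here (rather than PCA) because $\Phi_{ZCA} = \Sigma^{-1/2}$ is invariant to eigenvector sign flips, which keeps the finite-difference reference well defined.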
1:  Input: covariance matrix $\Sigma$ and iteration number $T$.
2:  Output: approximate ZCA whitening matrix $\Phi$.
3:  $\Sigma_N = \Sigma / \mathrm{tr}(\Sigma)$.
4:  $P_0 = I$.
5:  for $k = 1$ to $T$ do
6:     $P_k = \frac{1}{2}(3 P_{k-1} - P_{k-1}^3 \Sigma_N)$
7:  end for
8:  $\Phi = P_T / \sqrt{\mathrm{tr}(\Sigma)}$
Algorithm II Whitening activations with Newton’s iteration.
(a) ZCA Whitening
(b) CD Whitening
Figure A1: Comparison of estimating the population statistics of the whitening matrix using different estimation objects (covariance matrix / whitening matrix). We train MLPs with varying widths (the number of neurons in each layer) and batch sizes for MNIST classification, and evaluate the difference in test accuracy between the two estimation schemes. (a) and (b) show the results using ZCA and CD whitening respectively, under a learning rate of 0.5.

a.3 CD Whitening

CD whitening uses $\Phi_{CD} = L^{-1}$, where $L$ is a lower triangular matrix obtained from the Cholesky decomposition $\Sigma = L L^T$. The backward pass of CD whitening is:

$\frac{\partial L}{\partial L} = -L^{-T} \frac{\partial L}{\partial \Phi_{CD}} L^{-T}$  (11)
$\frac{\partial L}{\partial \Sigma} = \frac{1}{2} L^{-T} \left( \Psi\!\left(L^T \frac{\partial L}{\partial L}\right) + \Psi\!\left(L^T \frac{\partial L}{\partial L}\right)^T \right) L^{-1}$  (12)

where $\Psi(\cdot)$ takes the lower-triangular part of a matrix and halves its diagonal elements.

a.4 ItN Whitening

ItN, which approximates the ZCA whitening using Newton’s iteration, is shown in Algorithm II, where $T$ is the iteration number and $\mathrm{tr}(\cdot)$ denotes the trace of a matrix.

Given $\frac{\partial L}{\partial \Phi}$, the back-propagation is:

$\frac{\partial L}{\partial \Sigma} = \frac{1}{\mathrm{tr}(\Sigma)} \left( \frac{\partial L}{\partial \Sigma_N} - \mathrm{tr}\!\left(\left(\frac{\partial L}{\partial \Sigma_N}\right)^T \Sigma_N\right) I \right) - \frac{1}{2}\, \mathrm{tr}(\Sigma)^{-3/2}\, \mathrm{tr}\!\left(\left(\frac{\partial L}{\partial \Phi}\right)^T P_T\right) I$,  where  $\frac{\partial L}{\partial \Sigma_N} = -\frac{1}{2} \sum_{k=1}^{T} (P_{k-1}^T)^3 \frac{\partial L}{\partial P_k}$.  (13)

Here, $\frac{\partial L}{\partial P_k}$ can be calculated by the following iterations, for $k = T, \ldots, 1$, starting from $\frac{\partial L}{\partial P_T} = \mathrm{tr}(\Sigma)^{-1/2} \frac{\partial L}{\partial \Phi}$:

$\frac{\partial L}{\partial P_{k-1}} = \frac{3}{2} \frac{\partial L}{\partial P_k} - \frac{1}{2} \left( \frac{\partial L}{\partial P_k} \Sigma_N (P_{k-1}^T)^2 + P_{k-1}^T \frac{\partial L}{\partial P_k} \Sigma_N P_{k-1}^T + (P_{k-1}^T)^2 \frac{\partial L}{\partial P_k} \Sigma_N \right)$  (14)
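The forward computation of Algorithm II can be sketched as follows (our NumPy illustration of Newton's iteration, not the official code; the trace pre-normalization maps all eigenvalues into (0, 1], which guarantees convergence):

```python
import numpy as np

def itn_whitening(sigma, t=5):
    # Newton's iteration approximating sigma^{-1/2}, following Algorithm II.
    dim = sigma.shape[0]
    sigma_n = sigma / np.trace(sigma)        # normalize so the iteration converges
    p = np.eye(dim)
    for _ in range(t):                       # P_k = (3 P - P^3 Sigma_N) / 2
        p = 0.5 * (3.0 * p - p @ p @ p @ sigma_n)
    return p / np.sqrt(np.trace(sigma))      # undo the trace normalization

rng = np.random.default_rng(0)
a = rng.normal(size=(4, 4))
sigma = a @ a.T / 4.0 + 0.1 * np.eye(4)
phi = itn_whitening(sigma, t=8)
cov = phi @ sigma @ phi.T                    # approximately the identity
assert np.allclose(cov, np.eye(4), atol=0.05)  # loose: accuracy depends on conditioning
```

Unlike eigendecomposition or Cholesky decomposition, only matrix multiplications are needed, and a small iteration number yields partial whitening, which is how ItN controls the extent of whitening (and thus the stochasticity).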

Appendix B Stochasticity During Inference

In Section 3.3 of the paper, we mentioned that we try other learning rates in the experiments to compare the estimation methods during inference. Here, we provide the results under a learning rate of 0.5, shown in Figure A1.

Appendix C More Details for Experiments

c.1 Classification on ImageNet

In Section 4.2.2 of the paper, we mentioned that ItN consistently provides a slight improvement over the original ItN proposed in [17] (which uses the whitening matrix as the estimation object). Here, we provide the results, including the original ItN, in Table A1. We use an iteration number of 5 for all ItN-related methods. For the original ItN, we use a group size of 64, as recommended in [17]. For our ItN, we use a larger group size of 256, since it can still provide a good estimation of the population statistics in high dimensions during inference, as shown in the paper.

Method Step decay Cosine decay
Baseline 76.20 76.62
ItN-ARC [17] 77.07 (0.87) 77.47 (0.85)
ItN-ARC [17] 76.97 (0.77) 77.43 (0.81)
ItN-ARC 77.18 (0.98) 77.68 (1.06)
ItN-ARC 77.28 (1.08) 77.92 (1.30)
Table A1: Results using ResNet-50 for ImageNet. We evaluate the top-1 validation accuracy (%, single model and single-crop).
PARAMETER VALUE
Learning Rate
Table A2: The scope of hyperparameter ranges used in the stability experiments. The Cartesian product of the values suffices to uncover most of the recent results from the literature [24].
Method | IS (ScaleShift) | FID (ScaleShift) | IS (Coloring) | FID (Coloring)
BN | 7.141 ± 0.066 | 30.26 | 7.264 ± 0.122 | 28.35
CD | 7.135 ± 0.066 | 30.40 | 7.362 ± 0.069 | 25.97
ZCA-64 | 7.128 ± 0.067 | 29.24 | 7.445 ± 0.094 | 26.78
ItN | 7.369 ± 0.104 | 29.02 | 7.396 ± 0.054 | 28.24
Table A3: Results on stability experiments with non-saturating loss. We report the best FID and the corresponding IS from all the configurations for different whitening methods.
Figure A2: Stability experiments on GAN with hinge loss [30, 6] for unconditional image generation. We show the FIDs with full range.
Figure A3: Comparison of BW methods when using the Covariance Matrix (CM) and Whitening Matrix (WM) as the estimation object. We show the FIDs with full range.

c.2 Training GANs

c.2.1 Stability Experiments

Experimental Setup

The DCGAN architecture in these experiments is shown in Table A7. Note that spectral normalization [30] is applied on the discriminator, following the setup shown in [39, 30]. We evaluate the FID between the 10k test data and 10k randomly generated examples. We use the Adam optimizer [21] and train for 100 epochs. We consider 15 hyper-parameter settings (Table A2) by varying the learning rate, the first and second momentum of Adam, and the number of discriminator updates per generator update. The learning rate is linearly decreased until it reaches 0.

Results

In Figures 8 and 9 of the paper, we only show the results with FID ranging in (20, 100) for better visualization. Here, we provide the corresponding FIDs with the full range in Figures A2 and A3.

We perform experiments on DCGAN with the non-saturating loss [11]. Figure A4 shows the results. Here, we obtain similar observations as for the DCGAN with the hinge loss, shown in Section 4.3.1 of the paper.

Figure A4: Stability experiments on GAN with non-saturating loss [11] for unconditional image generation (FID ranging in (20, 100)).

c.2.2 Validation on Larger Architecture

Experimental Setup

The architectures in these experiments are shown in Table A8 (DCGAN) and Table A9 (ResNet). The experimental setup follows [39]: we evaluate the FID between the 10k test data and 10k randomly generated examples, and use the Adam optimizer [21] with a learning rate that is linearly decreased until it reaches 0, with the momentum parameters of Adam and the number of discriminator updates per generator update set as in [39]. For DCGAN, we train the model for 100 epochs, while for ResNet, we train for 50 epochs.

c.3 Comparison in Running Time

The main disadvantage of whitening methods, compared to standardization, is their high computational cost. Here, we provide the time costs of the trained models discussed in the paper.

We note that the computational cost of the Singular Value Decomposition (SVD) used in ZCA/PCA whitening, and of the Cholesky decomposition, depends highly on the specific deep learning platform (e.g., PyTorch [32] or Tensorflow [1]). We thus conduct experiments on both PyTorch (version 1.0.1) and Tensorflow (version 1.13), with CUDA (version 10.0.130). All implementations of BW are based on the APIs provided by PyTorch and Tensorflow. We hope that our experiments can provide good guidelines for applying BW in practice.

c.3.1 Classification Models Using PyTorch

Table A4 shows the time costs of the VGG models for CIFAR-10 classification, described in Section 4.2.1 of the paper. Figure A5 and Table A5 show the time costs of the ResNet-18 and ResNet-50 described in Section 4.2.2 of the paper, respectively. Note that we run ResNet-18 on one GPU, while ResNet-50 on two GPUs.

Our main observations include: (1) ItN is more computationally efficient. (2) CD whitening is slower than ZCA whitening, which is contrary to the observation in [17], where CD whitening was significantly more efficient than ZCA. The main reason is that CD whitening requires the Cholesky decomposition and a matrix inverse, which are slower than SVD under the PyTorch platform. We note that under Tensorflow, which was used to implement CD whitening in [39], CD whitening is more efficient than ZCA, as shown in Section C.3.2. (3) Group-based whitening with small groups requires more training time than with larger groups. The main reason is our unoptimized implementation, where we whiten each group sequentially instead of in parallel, because the platform does not provide an easy way to use the linear algebra libraries of CUDA in parallel, as also mentioned in [16]. We believe BW would be more widely used if the group-based methods could be easily implemented in parallel.
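The sequential group loop described above can be sketched as follows (a simplified NumPy illustration, ours; a small ridge is added to each group covariance for numerical stability):

```python
import numpy as np

def zca(sigma):
    lam, d = np.linalg.eigh(sigma)
    return d @ np.diag(lam ** -0.5) @ d.T

def group_whiten(x, group_size, eps=1e-5):
    # x: (channels, batch). Whitens each channel group independently,
    # looping over groups sequentially (as in the unoptimized implementation).
    c, m = x.shape
    out = np.empty_like(x)
    for start in range(0, c, group_size):
        g = x[start:start + group_size]
        gc = g - g.mean(axis=1, keepdims=True)
        sigma = gc @ gc.T / m + eps * np.eye(gc.shape[0])
        out[start:start + group_size] = zca(sigma) @ gc
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(128, 512))
y = group_whiten(x, group_size=32)  # 4 sequential eigendecompositions instead of 1
```

Each group triggers its own eigendecomposition on the device, so small groups multiply the number of sequential linear-algebra calls, matching the timing trend in Table A4; whitening the groups in parallel (a batched eigendecomposition) would remove this overhead.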

(a) ARC
(b) ARC
Figure A5: Time cost (s/iteration) on ResNet-18 for ImageNet classification described in Section 4.2.2 of the paper. We compare the BW methods with different group sizes under (a) ARC and (b) ARC, and the results are averaged over 100 iterations.
(a) GPU
(b) CPU
Figure A6: Time cost (ms/iteration) on GANs for stability experiments described in Section 4.3.1 of the paper. We compare the BW methods with different group sizes running on (a) a GPU and (b) a CPU. We evaluate the time cost of the generator where BW is applied, and the results are averaged over 100 iterations.
Method Time (Group Size)
16 32 64 128 256 512
BN 0.049 0.049 0.049 0.049 0.049 0.049
ItN 0.624 0.338 0.192 0.13 0.11 0.106
ZCA 2.245 1.536 0.906 0.498 0.505 0.512
CD 9.8 4.818 2.391 1.29 0.907 0.793
Table A4: Time cost (s/iteration) on VGG for CIFAR-10 classification described in Section 4.2.1 of the paper. We compare the BW methods with different group sizes, and the results are averaged over 100 iterations.
Method Time
Baseline (BN) 0.418
ItN-G256-ARC 0.464
ItN-G256-ARC 0.546
ItN-G64-ARC 0.522
ItN-G64-ARC 0.662
Table A5: Time cost (s/iteration) on ResNet-50 for ImageNet classification described in Section 4.2.2 of the paper. The results are averaged over 100 iterations.
Method DCGAN Resnet
CPU GPU CPU GPU
CD 0.023 0.019 0.036 0.038
ZCA-64 0.018 0.544 0.036 0.894
ItN 0.017 0.016 0.034 0.034
Table A6: Time cost (s/iteration) on large GAN architecture described in Section 4.3.2 of the paper. We evaluate the time cost of the generator where BW is applied, and the results are averaged over 100 iterations.
dense,
, stride=2 deconv. BN 256 ReLU
, stride=2 deconv. BN 128 ReLU
, stride=2 deconv. BN 64 ReLU
, stride=1 conv. 3 Tanh
(a) Generator
RGB image
, stride=1 conv 32 lReLU
, stride=2 conv 64 lReLU
, stride=1 conv 64 lReLU
, stride=2 conv 128 lReLU
, stride=1 conv 128 lReLU
, stride=2 conv 256 lReLU
, stride=1 conv 256 lReLU
dense 1
(b) Discriminator
Table A7: DCGAN architectures in stability experiments.
dense,
, stride=2 deconv. BN 512 ReLU
, stride=2 deconv. BN 256 ReLU
, stride=2 deconv. BN 128 ReLU
, stride=1 conv. 3 Tanh
(a) Generator
RGB image
, stride=1 conv 64 lReLU
, stride=2 conv 128 lReLU
, stride=1 conv 128 lReLU
, stride=2 conv 256 lReLU
, stride=1 conv 256 lReLU
, stride=2 conv 512 lReLU
, stride=1 conv 512 lReLU
dense 1
(b) Discriminator
Table A8: Large architecture of DCGAN.
dense,
ResBlock up 256
ResBlock up 256
ResBlock up 256
BN, ReLU, conv, 3 Tanh
(a) Generator
RGB image
ResBlock down 128
ResBlock down 128
ResBlock 128
ResBlock 128
ReLU
Global sum pooling
dense 1
(b) Discriminator
Table A9: Large architecture of ResNet. The ResBlock follows [39].

c.3.2 GANs Using Tensorflow

Figure A6 shows the time costs of the GAN models described in Section 4.3.1 of the paper. We perform experiments with the BW operation running on both a GPU and a CPU, shown in Figures A6 (a) and A6 (b), respectively. We observe that ZCA whitening running on GPUs has a significantly higher computational cost than the other methods, which is consistent with the observation in [39]. We find this can be alleviated by running ZCA whitening on CPUs.

Table A6 shows the time cost of the GAN models described in Section 4.3.2 of the paper.