1 Introduction
The resolution and quality of images produced by generative methods, especially generative adversarial networks (GANs) [15], are improving rapidly [23, 31, 5]. The current state-of-the-art method for high-resolution image synthesis is StyleGAN [24], which has been shown to work reliably on a variety of datasets. Our work focuses on fixing its characteristic artifacts and improving the result quality further.
The distinguishing feature of StyleGAN [24] is its unconventional generator architecture. Instead of feeding the input latent code $z \in \mathcal{Z}$ only to the beginning of the network, the mapping network $f$ first transforms it to an intermediate latent code $w \in \mathcal{W}$. Affine transforms then produce styles that control the layers of the synthesis network via adaptive instance normalization (AdaIN) [20, 9, 12, 8]. Additionally, stochastic variation is facilitated by providing additional random noise maps to the synthesis network. It has been demonstrated [24, 38] that this design allows the intermediate latent space $\mathcal{W}$ to be much less entangled than the input latent space $\mathcal{Z}$. In this paper, we focus all analysis solely on $\mathcal{W}$, as it is the relevant latent space from the synthesis network’s point of view.
Many observers have noticed characteristic artifacts in images generated by StyleGAN [3]. We identify two causes for these artifacts, and describe changes in architecture and training methods that eliminate them. First, we investigate the origin of common blob-like artifacts, and find that the generator creates them to circumvent a design flaw in its architecture. In Section 2, we redesign the normalization used in the generator, which removes the artifacts. Second, we analyze artifacts related to progressive growing [23], which has been highly successful in stabilizing high-resolution GAN training. We propose an alternative design that achieves the same goal (training starts by focusing on low-resolution images and then progressively shifts focus to higher and higher resolutions) without changing the network topology during training. This new design also allows us to reason about the effective resolution of the generated images, which turns out to be lower than expected, motivating a capacity increase (Section 4).
Quantitative analysis of the quality of images produced using generative methods continues to be a challenging topic. Fréchet inception distance (FID) [19] measures differences in the density of two distributions in the high-dimensional feature space of an InceptionV3 classifier [39]. Precision and Recall (P&R) [36, 27] provide additional visibility by explicitly quantifying the percentage of generated images that are similar to training data and the percentage of training data that can be generated, respectively. We use these metrics to quantify the improvements.

Both FID and P&R are based on classifier networks that have recently been shown to focus on textures rather than shapes [11], and consequently, the metrics do not accurately capture all aspects of image quality. We observe that the perceptual path length (PPL) metric [24], introduced as a method for estimating the quality of latent space interpolations, correlates with consistency and stability of shapes. Based on this, we regularize the synthesis network to favor smooth mappings (Section 3) and achieve a clear improvement in quality. To counter its computational expense, we also propose executing all regularizations less frequently, observing that this can be done without compromising effectiveness.

Finally, we find that projection of images to the latent space works significantly better with the new, path-length regularized generator than with the original StyleGAN. This has practical importance since it allows us to tell reliably whether a given image was generated using a particular generator (Section 5).
Our implementation and trained models are available at https://github.com/NVlabs/stylegan2
2 Removing normalization artifacts
We begin by observing that most images generated by StyleGAN exhibit characteristic blob-shaped artifacts that resemble water droplets. As shown in Figure 1, even when the droplet may not be obvious in the final image, it is present in the intermediate feature maps of the generator.¹ The anomaly starts to appear around 64×64 resolution, is present in all feature maps, and becomes progressively stronger at higher resolutions. The existence of such a consistent artifact is puzzling, as the discriminator should be able to detect it.

¹ In rare cases (perhaps 0.1% of images) the droplet is missing, leading to severely corrupted images. See Appendix A for details.
We pinpoint the problem to the AdaIN operation that normalizes the mean and variance of each feature map separately, thereby potentially destroying any information found in the magnitudes of the features relative to each other. We hypothesize that the droplet artifact is a result of the generator intentionally sneaking signal strength information past instance normalization: by creating a strong, localized spike that dominates the statistics, the generator can effectively scale the signal as it likes elsewhere. Our hypothesis is supported by the finding that when the normalization step is removed from the generator, as detailed below, the droplet artifacts disappear completely.
2.1 Generator architecture revisited
We will first revise several details of the StyleGAN generator to better facilitate our redesigned normalization. These changes have either a neutral or small positive effect on their own in terms of quality metrics.
Figure 2a shows the original StyleGAN synthesis network [24], and in Figure 2b we expand the diagram to full detail by showing the weights and biases and breaking the AdaIN operation into its two constituent parts: normalization and modulation. This allows us to redraw the conceptual gray boxes so that each box indicates the part of the network where one style is active (i.e., a “style block”). Interestingly, the original StyleGAN applies bias and noise within the style block, causing their relative impact to be inversely proportional to the current style’s magnitudes. We observe that more predictable results are obtained by moving these operations outside the style block, where they operate on normalized data. Furthermore, we notice that after this change it is sufficient for the normalization and modulation to operate on the standard deviation alone (i.e., the mean is not needed). The application of bias, noise, and normalization to the constant input can also be safely removed without observable drawbacks. This variant is shown in Figure 2c, and serves as a starting point for our redesigned normalization.
2.2 Instance normalization revisited
Given that instance normalization appears to be too strong, how can we relax it while retaining the scale-specific effects of the styles? We rule out batch normalization [21] as it is incompatible with the small minibatches required for high-resolution synthesis. Alternatively, we could simply remove the normalization. While actually improving FID slightly [27], this makes the effects of the styles cumulative rather than scale-specific, essentially losing the controllability offered by StyleGAN (see video). We will now propose an alternative that removes the artifacts while retaining controllability. The main idea is to base normalization on the expected statistics of the incoming feature maps, but without explicit forcing.

Recall that a style block in Figure 2c consists of modulation, convolution, and normalization. Let us start by considering the effect of a modulation followed by a convolution. The modulation scales each input feature map of the convolution based on the incoming style, which can alternatively be implemented by scaling the convolution weights:
$w'_{ijk} = s_i \cdot w_{ijk}$,  (1)

where $w$ and $w'$ are the original and modulated weights, respectively, $s_i$ is the scale corresponding to the $i$-th input feature map, and $j$ and $k$ enumerate the output feature maps and spatial footprint of the convolution, respectively.
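The equivalence in Equation 1 (scaling the input activations versus baking the style into the weights) can be checked numerically. The sketch below uses NumPy with illustrative shapes; it is not the official implementation:

```python
import numpy as np

def modulate_weights(w, s):
    """Scale each input feature map of a convolution by the incoming style.

    w: convolution weights of shape [out_ch, in_ch, kh, kw] (indices j, i, k)
    s: per-input-channel style scales of shape [in_ch]
    Returns w'_{ijk} = s_i * w_{ijk}.
    """
    return w * s[np.newaxis, :, np.newaxis, np.newaxis]

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 4, 3, 3))
s = rng.standard_normal(4)
x = rng.standard_normal((4, 5, 5))

# naive: modulate the activations, then convolve (here: a single dot product
# over one 3x3 window, which is enough to check the equivalence)
y1 = np.einsum('oikl,ikl->o', w, x[:, :3, :3] * s[:, None, None])
# baked-in: modulate the weights instead
y2 = np.einsum('oikl,ikl->o', modulate_weights(w, s), x[:, :3, :3])
assert np.allclose(y1, y2)
```

Because convolution is linear in both its inputs and its weights, either form produces identical outputs; folding the style into the weights is what later allows the whole style block to become one convolution.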
Now, the purpose of instance normalization is to essentially remove the effect of $s$ from the statistics of the convolution’s output feature maps. We observe that this goal can be achieved more directly. Let us assume that the input activations are i.i.d. random variables with unit standard deviation. After modulation and convolution, the output activations have standard deviation of

$\sigma_j = \sqrt{ \sum_{i,k} {w'_{ijk}}^2 }$,  (2)
i.e., the outputs are scaled by the $L_2$ norm of the corresponding weights. The subsequent normalization aims to restore the outputs back to unit standard deviation. Based on Equation 2, this is achieved if we scale (“demodulate”) each output feature map $j$ by $1/\sigma_j$. Alternatively, we can again bake this into the convolution weights:

$w''_{ijk} = w'_{ijk} \Big/ \sqrt{ \sum_{i,k} {w'_{ijk}}^2 + \epsilon }$,  (3)

where $\epsilon$ is a small constant to avoid numerical issues.
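Demodulation (Equations 2 and 3) then divides each output feature map’s weights by the norm $\sigma_j$. A minimal NumPy sketch, illustrative rather than the grouped-convolution implementation of Appendix B:

```python
import numpy as np

def demodulate_weights(w_mod, eps=1e-8):
    """Rescale each *output* feature map so that, for i.i.d. unit-variance
    inputs, the outputs return to unit standard deviation (Equations 2-3).

    w_mod: modulated weights of shape [out_ch, in_ch, kh, kw]
    """
    # sigma_j = sqrt(sum over i,k of w'^2) per output channel j (Eq. 2)
    sigma = np.sqrt(np.sum(w_mod ** 2, axis=(1, 2, 3)) + eps)
    return w_mod / sigma[:, None, None, None]  # Eq. 3

rng = np.random.default_rng(1)
w_mod = rng.standard_normal((8, 4, 3, 3)) * 3.0  # arbitrary style scale
w_demod = demodulate_weights(w_mod)

# each output feature map's weight norm is now ~1, so unit-variance inputs
# yield approximately unit-variance outputs
norms = np.sqrt(np.sum(w_demod ** 2, axis=(1, 2, 3)))
assert np.allclose(norms, 1.0, atol=1e-3)
```

Note that the rescaling relies only on the weights, never on the actual feature-map contents, which is what makes it weaker than instance normalization.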
Table 1:

Configuration                      | FFHQ, 1024×1024                        | LSUN Car, 512×384
                                   | FID   Path length  Precision  Recall   | FID   Path length  Precision  Recall
a  Baseline StyleGAN [24]          | 4.40  195.9        0.721      0.399    | 3.27  1484.5       0.701      0.435
b  + Weight demodulation           | 4.39  173.8        0.702      0.425    | 3.04  862.4        0.685      0.488
c  + Lazy regularization           | 4.38  167.2        0.719      0.427    | 2.83  981.6        0.688      0.493
d  + Path length regularization    | 4.34  139.2        0.715      0.418    | 3.43  651.2        0.697      0.452
e  + No growing, new G & D arch.   | 3.31  116.7        0.705      0.449    | 3.19  471.2        0.690      0.454
f  + Large networks                | 2.84  129.4        0.689      0.492    | 2.32  415.5        0.678      0.514
We have now baked the entire style block into a single convolution layer whose weights are adjusted based on $s$ using Equations 1 and 3 (Figure 2d). Compared to instance normalization, our demodulation technique is weaker because it is based on statistical assumptions about the signal instead of the actual contents of the feature maps. Similar statistical analysis has been extensively used in modern network initializers [13, 18], but we are not aware of it being previously used as a replacement for data-dependent normalization. Our demodulation is also related to weight normalization [37] that performs the same calculation as a part of reparameterizing the weight tensor. Prior work has identified weight normalization as beneficial in the context of GAN training [42].

Our new design removes the characteristic artifacts (Figure 3) while retaining full controllability, as demonstrated in the accompanying video. FID remains largely unaffected (Table 1, rows a, b), but there is a notable shift from precision to recall. We argue that this is generally desirable, since recall can be traded into precision via truncation, whereas the opposite is not true [27]. In practice, our design can be implemented efficiently using grouped convolutions, as detailed in Appendix B. To avoid having to account for the activation function in Equation 3, we scale our activation functions so that they retain the expected signal variance.
3 Image quality and generator smoothness
While GAN metrics such as FID or Precision and Recall (P&R) successfully capture many aspects of the generator, they continue to have somewhat of a blind spot for image quality. For an example, refer to Figures 13 and 14 that contrast generators with identical FID and P&R scores but markedly different overall quality.
We believe that the key to the apparent inconsistency lies in the particular choice of feature space rather than the foundations of FID or P&R. It was recently discovered that classifiers trained using ImageNet [35] tend to base their decisions much more on texture than shape [11], while humans strongly focus on shape [28]. This is relevant in our context because FID and P&R use high-level features from InceptionV3 [39] and VGG16, respectively, which were trained in this way and are thus expected to be biased towards texture detection. As such, images with, e.g., strong cat textures may appear more similar to each other than a human observer would agree, thus partially compromising density-based metrics (FID) and manifold coverage metrics (P&R).

We observe an interesting correlation between perceived image quality and perceptual path length (PPL) [24], a metric that was originally introduced for quantifying the smoothness of the mapping from a latent space to the output image by measuring average LPIPS distances [49] between generated images under small perturbations in latent space. Again consulting Figures 13 and 14, a smaller PPL (smoother generator mapping) appears to correlate with higher overall image quality, whereas other metrics are blind to the change. Figure 4 examines this correlation more closely through per-image PPL scores, computed by sampling the latent space around individual points, on StyleGAN trained on LSUN Cat: low PPL scores are indeed indicative of high-quality images, and vice versa. Figure 5a shows the corresponding histogram of per-image PPL scores and reveals the long tail of the distribution. The overall PPL for the model is simply the expected value of the per-image PPL scores.
It is not immediately obvious why a low PPL should correlate with image quality. We hypothesize that during training, as the discriminator penalizes broken images, the most direct way for the generator to improve is to effectively stretch the region of latent space that yields good images. This would lead to the low-quality images being squeezed into small latent space regions of rapid change. While this improves the average output quality in the short term, the accumulating distortions impair the training dynamics and consequently the final image quality.
This empirical correlation suggests that favoring a smooth generator mapping by encouraging low PPL during training may improve image quality, which we show to be the case below. As the resulting regularization term is somewhat expensive to compute, we first describe a general optimization that applies to all regularization techniques.
3.1 Lazy regularization
Typically the main loss function (e.g., logistic loss [15]) and regularization terms (e.g., $R_1$ [30]) are written as a single expression and are thus optimized simultaneously. We observe that the regularization terms can typically be computed much less frequently than the main loss function, thus greatly diminishing their computational cost and the overall memory usage. Table 1, row c shows that no harm is caused when regularization is performed only once every 16 minibatches, and we adopt the same strategy for our new regularizer as well. Appendix B gives implementation details.

3.2 Path length regularization
Excess path distortion in the generator is evident as poor local conditioning: any small region in $\mathcal{W}$ becomes arbitrarily squeezed and stretched as it is mapped by $g(w)$. In line with earlier work [33], we consider a generator mapping $g(w) : \mathcal{W} \mapsto \mathcal{Y}$ from the latent space to image space to be well-conditioned if, at each point in latent space, small displacements yield changes of equal magnitude in image space regardless of the direction of perturbation.

At a single $w \in \mathcal{W}$, the local metric scaling properties of the generator mapping $g(w)$ are captured by the Jacobian matrix $J_w = \partial g(w) / \partial w$. Motivated by the desire to preserve the expected lengths of vectors regardless of the direction, we formulate our regularizer as
$\mathbb{E}_{w, y} \left( \lVert J_w^{\mathsf{T}} y \rVert_2 - a \right)^2$,  (4)

where $y$ are random images with normally distributed pixel intensities, and $w \sim f(z)$, where $z$ are normally distributed. We show in Appendix C that, in high dimensions, this prior is minimized when $J_w$ is orthogonal (up to a global scale) at any $w$. An orthogonal matrix preserves lengths and introduces no squeezing along any dimension.
To avoid explicit computation of the Jacobian matrix, we use the identity $J_w^{\mathsf{T}} y = \nabla_w (g(w) \cdot y)$, which is efficiently computable using standard backpropagation [6]. The constant $a$ is set dynamically during optimization as the long-running exponential moving average of the lengths $\lVert J_w^{\mathsf{T}} y \rVert_2$, allowing the optimization to find a suitable global scale by itself.

Our regularizer is closely related to the Jacobian clamping regularizer presented by Odena et al. [33]. Practical differences include that we compute the products $J_w^{\mathsf{T}} y$ analytically whereas they use finite differences for estimating $J_w \delta$ with normally distributed $\delta$. It should be noted that spectral normalization [31] of the generator [45] only constrains the largest singular value, posing no constraints on the others, and hence does not necessarily lead to better conditioning.
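The identity $J_w^{\mathsf{T}} y = \nabla_w (g(w) \cdot y)$ and the penalty of Equation 4 can be verified on a toy generator. In a real implementation the gradient would come from the framework’s backpropagation; here it is derived by hand for a small $\tanh$ network (all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((12, 6))
g = lambda w: np.tanh(A @ w)          # toy nonlinear "generator"

def jacobian(w, h=1e-6):
    """Finite-difference Jacobian of g at w, for checking only."""
    J = np.zeros((12, 6))
    for i in range(6):
        e = np.zeros(6); e[i] = h
        J[:, i] = (g(w + e) - g(w - e)) / (2 * h)
    return J

w = rng.standard_normal(6)
y = rng.standard_normal(12)           # random "image" direction

# grad_w (g(w) . y) via the chain rule for this toy g:
# d/dw [tanh(Aw) . y] = A^T ((1 - tanh(Aw)^2) * y)
grad = A.T @ ((1 - np.tanh(A @ w) ** 2) * y)
assert np.allclose(grad, jacobian(w).T @ y, atol=1e-5)   # J_w^T y identity

a = 1.0                                # in practice: running average of ||J^T y||
penalty = (np.linalg.norm(grad) - a) ** 2                # Equation 4, one sample
assert penalty >= 0
```

The point of the identity is that one backward pass computes $J_w^{\mathsf{T}} y$ without ever materializing the (enormous) Jacobian of a real image generator.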
In practice, we notice that path length regularization leads to more reliable and consistently behaving models, making architecture exploration easier. Figure 5b shows that path length regularization clearly improves the distribution of per-image PPL scores. Table 1, row d shows that regularization reduces PPL, as expected, but there is a tradeoff between FID and PPL in LSUN Car and other datasets that are less structured than FFHQ. Furthermore, we observe that a smoother generator is easier to invert (Section 5).
4 Progressive growing revisited
Progressive growing [23] has been very successful in stabilizing high-resolution image synthesis, but it causes its own characteristic artifacts. The key issue is that the progressively grown generator appears to have a strong location preference for details; the accompanying video shows that when features like teeth or eyes should move smoothly over the image, they may instead remain stuck in place before jumping to the next preferred location. Figure 6 shows a related artifact. We believe the problem is that in progressive growing each resolution serves momentarily as the output resolution, forcing it to generate maximal frequency details, which then leads the trained network to have excessively high frequencies in the intermediate layers, compromising shift invariance [48]. Appendix A shows an example. These issues prompt us to search for an alternative formulation that would retain the benefits of progressive growing without the drawbacks.
4.1 Alternative network architectures
While StyleGAN uses simple feedforward designs in the generator (synthesis network) and discriminator, there is a vast body of work dedicated to the study of better network architectures. In particular, skip connections [34, 22], residual networks [17, 16, 31], and hierarchical methods [7, 46, 47] have proven highly successful also in the context of generative methods. As such, we decided to re-evaluate the network design of StyleGAN and search for an architecture that produces high-quality images without progressive growing.
Figure 7a shows MSG-GAN [22], which connects the matching resolutions of the generator and discriminator using multiple skip connections. The MSG-GAN generator is modified to output a mipmap [41] instead of an image, and a similar representation is computed for each real image as well. In Figure 7b we simplify this design by upsampling and summing the contributions of RGB outputs corresponding to different resolutions. In the discriminator, we similarly provide the downsampled image to each resolution block. We use bilinear filtering in all upsampling and downsampling operations. In Figure 7c we further modify the design to use residual connections.³
³ In residual network architectures, the addition of two paths leads to a doubling of signal variance, which we cancel by multiplying by $1/\sqrt{2}$. This is crucial for our networks, whereas in classification resnets [17] the issue is typically hidden by batch normalization. This design is similar to LAPGAN [7] without the per-resolution discriminators employed by Denton et al.

Table 2:

FFHQ             | D original   | D input skips | D residual
                 | FID    PPL   | FID    PPL    | FID    PPL
G original       | 4.32   237   | 4.18   207    | 3.58   238
G output skips   | 4.33   149   | 3.77   116    | 3.31   117
G residual       | 4.35   187   | 3.96   201    | 3.79   203

LSUN Car         | D original   | D input skips | D residual
                 | FID    PPL   | FID    PPL    | FID    PPL
G original       | 3.75   905   | 3.23   758    | 3.25   802
G output skips   | 3.77   544   | 3.86   316    | 3.19   471
G residual       | 3.93   981   | 3.40   667    | 2.66   645
Table 2 compares three generator and three discriminator architectures: original feedforward networks as used in StyleGAN, skip connections, and residual networks, all trained without progressive growing. FID and PPL are provided for each of the 9 combinations. We can see two broad trends: skip connections in the generator drastically improve PPL in all configurations, and a residual discriminator network is clearly beneficial for FID. The latter is perhaps not surprising, since the structure of the discriminator resembles classifiers, where residual architectures are known to be helpful. However, a residual architecture was harmful in the generator; the lone exception was FID in LSUN Car when both networks were residual.
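The variance-doubling issue noted in footnote 3 is easy to demonstrate: summing two independent unit-variance signals yields variance 2, which the $1/\sqrt{2}$ multiplier cancels (NumPy sketch, illustrative only):

```python
import numpy as np

def residual_add(a, b):
    """Sum two unit-variance paths and rescale by 1/sqrt(2) so the
    result keeps unit variance (footnote 3)."""
    return (a + b) / np.sqrt(2.0)

rng = np.random.default_rng(0)
a = rng.standard_normal(100_000)
b = rng.standard_normal(100_000)
out = residual_add(a, b)
# a plain a + b would have std ~ sqrt(2) ~ 1.41; the rescaled sum stays ~ 1
assert abs(out.std() - 1.0) < 0.02
```

Without such scaling, variance would compound through every residual block, which is why the correction matters when no batch normalization is present to absorb it.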
For the rest of the paper we use a skip generator and a residual discriminator, and do not use progressive growing. This corresponds to configuration e in Table 1, and as can be seen in the table, switching to this setup significantly improves FID and PPL.
4.2 Resolution usage
The key aspect of progressive growing, which we would like to preserve, is that the generator will initially focus on low-resolution features and then slowly shift its attention to finer details. The architectures in Figure 7 make it possible for the generator to first output low-resolution images that are not affected by the higher-resolution layers in a significant way, and later shift its focus to the higher-resolution layers as training proceeds. Since this is not enforced in any way, the generator will do it only if it is beneficial. To analyze the behavior in practice, we need to quantify how strongly the generator relies on particular resolutions over the course of training.
Since the skip generator (Figure 7b) forms the image by explicitly summing RGB values from multiple resolutions, we can estimate the relative importance of the corresponding layers by measuring how much they contribute to the final image. In Figure 8a, we plot the standard deviation of the pixel values produced by each tRGB layer as a function of training time. We calculate the standard deviations over 1024 random samples of $w$ and normalize the values so that they sum to 100%.
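The contribution measurement described above can be sketched as follows, assuming the per-resolution tRGB outputs have already been upsampled to a common size (names are illustrative):

```python
import numpy as np

def resolution_contributions(trgb_outputs):
    """Relative contribution of each tRGB layer (the Figure 8 analysis, sketched).

    trgb_outputs: list of pixel-value arrays, one per resolution, each
    already upsampled to the final image size.
    Returns per-layer standard deviations normalized to sum to 100%.
    """
    stds = np.array([x.std() for x in trgb_outputs])
    return 100.0 * stds / stds.sum()

rng = np.random.default_rng(0)
# three mock tRGB outputs with increasing signal strength
layers = [rng.standard_normal((64, 64, 3)) * s for s in (0.1, 0.3, 0.6)]
contrib = resolution_contributions(layers)
assert abs(contrib.sum() - 100.0) < 1e-6
assert contrib[2] > contrib[1] > contrib[0]
```

In the paper the statistics are gathered over many sampled latent codes; here a single mock sample per layer suffices to show the normalization.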
Table 3:

Dataset       | Resolution | StyleGAN (a)  | Ours (f)
              |            | FID    PPL    | FID    PPL
LSUN Car      | 512×384    | 3.27   1485   | 2.32   416
LSUN Cat      | 256×256    | 8.53   924    | 6.93   439
LSUN Church   | 256×256    | 4.21   742    | 3.86   342
LSUN Horse    | 256×256    | 3.83   1405   | 3.43   338
At the start of training, we can see that the new skip generator behaves similarly to progressive growing, now achieved without changing the network topology. It would thus be reasonable to expect the highest resolution to dominate towards the end of training. The plot, however, shows that this fails to happen in practice, which indicates that the generator may not be able to fully utilize the target resolution. To verify this, we inspected the generated images manually and noticed that they generally lack some of the pixel-level detail present in the training data: the images could be described as sharpened versions of lower-resolution images rather than true full-resolution images.
This leads us to hypothesize that there is a capacity problem in our networks, which we test by doubling the number of feature maps in the highest-resolution layers of both networks.⁴ This brings the behavior more in line with expectations: Figure 8b shows a significant increase in the contribution of the highest-resolution layers, and Table 1, row f shows that FID and Recall improve markedly.

⁴ We double the number of feature maps in the highest resolutions while keeping other parts of the networks unchanged. This increases the total number of trainable parameters in the generator by 22% (25M → 30M) and in the discriminator by 21% (24M → 29M).
Table 3 compares StyleGAN and our improved variant in several LSUN categories, again showing clear improvements in FID and significant advances in PPL. It is possible that further increases in model size could provide additional benefits.
5 Projection of images to latent space
Inverting the synthesis network $g$ is an interesting problem that has many applications. Manipulating a given image in the latent feature space requires finding a matching latent code $w$ for it first. Also, as the image quality of GANs improves, it becomes more important to be able to attribute a potentially synthetic image to the network that generated it.
Previous research [1, 10] suggests that instead of finding a common latent code $w$, the results improve if a separate $w$ is chosen for each layer of the generator. The same approach was used in an early encoder implementation [32]. While extending the latent space in this fashion finds a closer match to a given image, it also enables projecting arbitrary images that should have no latent representation. Focusing on the forensic detection of generated images, we concentrate on finding latent codes in the original, unextended latent space, as these correspond to images that the generator could have produced.
Our projection method differs from previous methods in two ways. First, we add ramped-down noise to the latent code during optimization in order to explore the latent space more comprehensively. Second, we also optimize the stochastic noise inputs of the StyleGAN generator, regularizing them to ensure they do not end up carrying coherent signal. The regularization is based on enforcing the autocorrelation coefficients of the noise maps to match those of unit Gaussian noise over multiple scales. Details of our projection method can be found in Appendix D.
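The idea behind the noise regularization, penalizing autocorrelation at multiple scales so the maps stay close to unit Gaussian noise, can be sketched as follows. This is an illustrative simplification of the procedure detailed in Appendix D:

```python
import numpy as np

def noise_regularization(n):
    """Multi-scale penalty pushing a noise map toward zero autocorrelation
    at one-pixel shifts, as for unit Gaussian noise (a sketch)."""
    loss = 0.0
    while min(n.shape) >= 8:
        # squared means of products of vertically / horizontally adjacent pixels
        loss += np.mean(n * np.roll(n, 1, axis=0)) ** 2
        loss += np.mean(n * np.roll(n, 1, axis=1)) ** 2
        # 2x2 average-pool to the next scale; x2 keeps unit variance for
        # uncorrelated noise
        h, w = n.shape
        n = n.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)) * 2.0
    return loss

rng = np.random.default_rng(0)
white = rng.standard_normal((64, 64))
smooth = np.cumsum(white, axis=1)            # strongly correlated "signal"
smooth = (smooth - smooth.mean()) / smooth.std()
# a noise map carrying coherent signal is penalized far more than white noise
assert noise_regularization(white) < noise_regularization(smooth)
```

Penalizing the regularizer during projection therefore forces any coherent image content into the latent code rather than the noise inputs.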
5.1 Detection of generated images
It is possible to train classifiers to detect GAN-generated images with reasonably high confidence [29, 44, 40, 50]. However, given the rapid pace of progress, this may not be a lasting situation. Projection-based methods are unique in that they can provide evidence, in the form of a matching latent vector, that an image was synthesized by a specific network [2]. There is also no reason why their effectiveness would diminish as the quality of synthetic images improves, unlike classifier-based methods that may have fewer clues to work with in the future.
Figure 10: Example images and their projected and re-synthesized counterparts. For each configuration, the top row shows the target images and the bottom row shows the synthesis of the corresponding projected latent vector and noise inputs. Top: with the baseline StyleGAN, projection often finds a reasonably close match for generated images, but especially the backgrounds differ from the originals. Middle: the images generated using our best architecture can be projected almost perfectly back into generator inputs, allowing unambiguous attribution to the generating model. Bottom: projected real images (from the training set) show clear differences to the originals, as expected. All tests were done using the same projection method and hyperparameters.
It turns out that our improvements to StyleGAN make it easier to detect generated images using projection-based methods, even though the quality of generated images is higher. We measure how well the projection succeeds by computing the LPIPS [49] distance between the original and re-synthesized image as $D_{\mathrm{LPIPS}}[x,\, g(\tilde{g}^{-1}(x))]$, where $x$ is the image being analyzed and $\tilde{g}^{-1}$ denotes the approximate projection operation. Figure 9 shows histograms of these distances for the LSUN Car and FFHQ datasets using the original StyleGAN and our best architecture, and Figure 10 shows example projections. As the latter illustrates, the images generated using our improved architecture can be projected into generator inputs so well that they can be unambiguously attributed to the generating network. With the original StyleGAN, even though it should technically be possible to find a matching latent vector, it appears that the latent space is in practice too complex for this to succeed reliably. Our improved model, with its smoother latent space, suffers considerably less from this problem.
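The projection-quality measure above reduces to a small amount of glue code once a projector and a perceptual distance are available. The toy below uses an exactly invertible linear “generator” and an L2 distance in place of LPIPS (all names illustrative):

```python
import numpy as np

def projection_error(x, project, synthesize, distance):
    """Distance between an image and its re-synthesis from the projected
    latent code: D(x, g(g~^{-1}(x))). The paper uses LPIPS as D."""
    return distance(x, synthesize(project(x)))

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
synthesize = lambda w: A @ w                   # toy "generator"
project = lambda x: np.linalg.solve(A, x)      # exact inverse in this toy case
distance = lambda a, b: float(np.linalg.norm(a - b))

x = synthesize(rng.standard_normal(8))         # an image the generator made
# images the generator produced project back almost perfectly
assert projection_error(x, project, synthesize, distance) < 1e-6
```

For a real generator the projection is an optimization rather than a closed-form inverse, and the histogram of these errors over many images is what separates generated from real inputs (Figure 9).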
6 Conclusions and future work
We have identified and fixed several image quality issues in StyleGAN, improving the quality further and considerably advancing the state of the art in several datasets. In some cases the improvements are more clearly seen in motion, as demonstrated in the accompanying video. Appendix A includes further examples of results obtainable using our method. Despite the improved quality, it is easier to detect images generated by our method using projectionbased methods, compared to the original StyleGAN.
Training performance has also improved. At 1024×1024 resolution, the original StyleGAN (config a in Table 1) trains at 37 images per second on an NVIDIA DGX-1 with 8 Tesla V100 GPUs, while our config e trains 40% faster at 61 img/s. Most of the speedup comes from simplified dataflow due to weight demodulation, lazy regularization, and code optimizations. Config f (larger networks) trains at 31 img/s, and is thus only slightly more expensive to train than the original StyleGAN. With config f, the total training time was 9 days for FFHQ and 13 days for LSUN Car.
As future work, it could be fruitful to study further improvements to the path length regularization, e.g., by replacing the pixelspace distance with a datadriven featurespace metric.
7 Acknowledgements
We thank Ming-Yu Liu for an early review, Timo Viitanen for his help with the code release, and Tero Kuosmanen for compute infrastructure.
References
 [1] Rameen Abdal, Yipeng Qin, and Peter Wonka. Image2StyleGAN: How to embed images into the StyleGAN latent space? In ICCV, 2019.
 [2] Michael Albright and Scott McCloskey. Source generator attribution via inversion. In CVPR Workshops, 2019.
 [3] Carl Bergstrom and Jevin West. Which face is real? http://www.whichfaceisreal.com/learn.html, Accessed November 15, 2019.
 [4] Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
 [5] Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelity natural image synthesis. CoRR, abs/1809.11096, 2018.
 [6] Yann N. Dauphin, Harm de Vries, and Yoshua Bengio. Equilibrated adaptive learning rates for nonconvex optimization. CoRR, abs/1502.04390, 2015.
 [7] Emily L. Denton, Soumith Chintala, Arthur Szlam, and Robert Fergus. Deep generative image models using a Laplacian pyramid of adversarial networks. CoRR, abs/1506.05751, 2015.
 [8] Vincent Dumoulin, Ethan Perez, Nathan Schucher, Florian Strub, Harm de Vries, Aaron Courville, and Yoshua Bengio. Featurewise transformations. Distill, 2018. https://distill.pub/2018/featurewisetransformations.
 [9] Vincent Dumoulin, Jonathon Shlens, and Manjunath Kudlur. A learned representation for artistic style. CoRR, abs/1610.07629, 2016.
 [10] Aviv Gabbay and Yedid Hoshen. Style generator inversion for image enhancement and animation. CoRR, abs/1906.11880, 2019.
 [11] Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A. Wichmann, and Wieland Brendel. ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. CoRR, abs/1811.12231, 2018.
 [12] Golnaz Ghiasi, Honglak Lee, Manjunath Kudlur, Vincent Dumoulin, and Jonathon Shlens. Exploring the structure of a realtime, arbitrary neural artistic stylization network. CoRR, abs/1705.06830, 2017.

 [13] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pages 249–256, 2010.
 [14] G. H. Golub and C. F. Van Loan. Matrix Computations. Johns Hopkins Studies in the Mathematical Sciences. Johns Hopkins University Press, 2013.
 [15] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. In NIPS, 2014.
 [16] Ishaan Gulrajani, Faruk Ahmed, Martín Arjovsky, Vincent Dumoulin, and Aaron C. Courville. Improved training of Wasserstein GANs. CoRR, abs/1704.00028, 2017.
 [17] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015.
 [18] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing humanlevel performance on ImageNet classification. CoRR, abs/1502.01852, 2015.
 [19] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Proc. NIPS, pages 6626–6637, 2017.
 [20] Xun Huang and Serge J. Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. CoRR, abs/1703.06868, 2017.
 [21] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. CoRR, abs/1502.03167, 2015.
 [22] Animesh Karnewar, Oliver Wang, and Raghu Sesha Iyengar. MSG-GAN: multi-scale gradient GAN for stable image synthesis. CoRR, abs/1903.06048, 2019.
 [23] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. CoRR, abs/1710.10196, 2017.
 [24] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In Proc. CVPR, 2018.
 [25] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
 [26] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, pages 1097–1105, 2012.
 [27] Tuomas Kynkäänniemi, Tero Karras, Samuli Laine, Jaakko Lehtinen, and Timo Aila. Improved precision and recall metric for assessing generative models. In Proc. NeurIPS, 2019.
 [28] Barbara Landau, Linda B. Smith, and Susan S. Jones. The importance of shape in early lexical learning. Cognitive Development, 3(3), 1988.
 [29] Haodong Li, Han Chen, Bin Li, and Shunquan Tan. Can forensic detectors identify GAN generated images? In Proc. Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), 2018.
 [30] Lars Mescheder, Andreas Geiger, and Sebastian Nowozin. Which training methods for GANs do actually converge? CoRR, abs/1801.04406, 2018.
 [31] Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. CoRR, abs/1802.05957, 2018.
 [32] Dmitry Nikitko. StyleGAN – Encoder for official TensorFlow implementation. https://github.com/Puzer/stylegan-encoder/, 2019.
 [33] Augustus Odena, Jacob Buckman, Catherine Olsson, Tom B. Brown, Christopher Olah, Colin Raffel, and Ian Goodfellow. Is generator conditioning causally related to GAN performance? CoRR, abs/1802.08768, 2018.
 [34] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In Proc. Medical Image Computing and Computer-Assisted Intervention (MICCAI), pages 234–241, 2015.
 [35] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael S. Bernstein, Alexander C. Berg, and Fei-Fei Li. ImageNet large scale visual recognition challenge. In Proc. CVPR, 2015.
 [36] Mehdi S. M. Sajjadi, Olivier Bachem, Mario Lucic, Olivier Bousquet, and Sylvain Gelly. Assessing generative models via precision and recall. CoRR, abs/1806.00035, 2018.
 [37] Tim Salimans and Diederik P. Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. CoRR, abs/1602.07868, 2016.
 [38] Yujun Shen, Jinjin Gu, Xiaoou Tang, and Bolei Zhou. Interpreting the latent space of GANs for semantic face editing. CoRR, abs/1907.10786, 2019.
 [39] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for largescale image recognition. CoRR, abs/1409.1556, 2014.
 [40] Run Wang, Lei Ma, Felix Juefei-Xu, Xiaofei Xie, Jian Wang, and Yang Liu. FakeSpotter: A simple baseline for spotting AI-synthesized fake faces. CoRR, abs/1909.06122, 2019.
 [41] Lance Williams. Pyramidal parametrics. SIGGRAPH Comput. Graph., 17(3):1–11, 1983.
 [42] Sitao Xiang and Hao Li. On the effects of batch and weight normalization in generative adversarial networks. CoRR, abs/1704.03971, 2017.
 [43] Fisher Yu, Yinda Zhang, Shuran Song, Ari Seff, and Jianxiong Xiao. LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. CoRR, abs/1506.03365, 2015.
 [44] Ning Yu, Larry Davis, and Mario Fritz. Attributing fake images to GANs: Analyzing fingerprints in generated images. CoRR, abs/1811.08180, 2018.
 [45] Han Zhang, Ian Goodfellow, Dimitris Metaxas, and Augustus Odena. Selfattention generative adversarial networks. CoRR, abs/1805.08318, 2018.
 [46] Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaolei Huang, Xiaogang Wang, and Dimitris N. Metaxas. StackGAN: text to photo-realistic image synthesis with stacked generative adversarial networks. In ICCV, 2017.
 [47] Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dimitris N. Metaxas. StackGAN++: realistic image synthesis with stacked generative adversarial networks. CoRR, abs/1710.10916, 2017.
 [48] Richard Zhang. Making convolutional networks shift-invariant again. In Proc. ICML, 2019.
 [49] Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proc. CVPR, 2018.
 [50] Xu Zhang, Svebor Karaman, and Shih-Fu Chang. Detecting and simulating artifacts in GAN fake images. CoRR, abs/1907.06515, 2019.
Appendix A Image quality
We include several large images that illustrate various aspects related to image quality. Figure 11 shows hand-picked examples illustrating the quality and diversity achievable using our method in FFHQ, while Figure 12 shows uncurated results for all datasets mentioned in the paper.
Figures 13 and 14 demonstrate cases where FID and P&R give non-intuitive results, but PPL seems to be more in line with human judgement.
We also include images relating to StyleGAN artifacts. Figure 15 shows a rare case where the blob artifact fails to appear in StyleGAN activations, leading to a seriously broken image. Figure 16 visualizes the activations inside Table 1 configurations A and F. It is evident that progressive growing leads to higher-frequency content in the intermediate layers, compromising the shift invariance of the network. We hypothesize that this causes the observed uneven location preference for details when progressive growing is used.
Appendix B Implementation details
We implemented our techniques on top of the official TensorFlow implementation of StyleGAN (https://github.com/NVlabs/stylegan), corresponding to configuration A in Table 1. We kept most of the details unchanged, including the dimensionality of z and w (512), mapping network architecture (8 fully connected layers, 100× lower learning rate than the synthesis network), equalized learning rate for all trainable parameters [23], leaky ReLU activation with α = 0.2, bilinear filtering [48] in all up/downsampling layers [24], minibatch standard deviation layer at the end of the discriminator [23], exponential moving average of generator weights [23], style mixing regularization [24], non-saturating logistic loss [15] with R1 regularization [30], Adam optimizer [25] with the same hyperparameters (β1 = 0, β2 = 0.99, ϵ = 10⁻⁸), and training datasets [24, 43]. We performed all training runs on an NVIDIA DGX-1 with 8 Tesla V100 GPUs using TensorFlow 1.14.0 and cuDNN 7.4.2.
Generator redesign
In configurations B–F we replace the original StyleGAN generator with our revised architecture. In addition to the changes highlighted in Section 2, we initialize the components of the constant input using N(0, 1) and simplify the noise broadcast operations to use a single shared scaling factor for all feature maps. We employ weight modulation and demodulation in all convolution layers, except for the output layers (tRGB in Figure 7), where we leave out the demodulation. At 1024×1024 output resolution, the generator contains a total of 18 affine transformation layers, where the first one corresponds to 4×4 resolution, the next two correspond to 8×8, and so forth.
Weight demodulation
Considering the practical implementation of Equations 1 and 3, it is important to note that the resulting set of weights is different for each sample in a minibatch, which rules out a direct implementation using standard convolution primitives. Instead, we choose to employ grouped convolutions [26], originally proposed as a way to reduce computational costs by dividing the input feature maps into multiple independent groups, each with its own dedicated set of weights. We implement Equations 1 and 3 by temporarily reshaping the weights and activations so that each convolution sees one sample with N groups instead of N samples with one group. This approach is highly efficient because the reshaping operations do not actually modify the contents of the weight and activation tensors.
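A minimal NumPy sketch of this reshaping idea, restricted to 1×1 convolutions for brevity (all shapes and names are illustrative, not from the official code): the naive per-sample loop and the single grouped operation compute the same result.

```python
import numpy as np

def modulated_conv_reference(x, w, s, demodulate=True, eps=1e-8):
    """Naive reference: loop over the minibatch, scaling the shared 1x1
    conv weights w [Cout, Cin] by each sample's style s [N, Cin]."""
    out = []
    for i in range(x.shape[0]):
        wi = w * s[i][None, :]                                   # modulate
        if demodulate:                                           # demodulate
            wi = wi / np.sqrt((wi ** 2).sum(axis=1, keepdims=True) + eps)
        out.append(np.einsum('oc,chw->ohw', wi, x[i]))
    return np.stack(out)

def modulated_conv_grouped(x, w, s, demodulate=True, eps=1e-8):
    """Same computation as one batched operation: the N samples act as N
    independent groups of a single grouped convolution, so no per-sample
    loop (and no copy of the activations) is needed."""
    ws = w[None, :, :] * s[:, None, :]                           # [N, Cout, Cin]
    if demodulate:
        ws = ws / np.sqrt((ws ** 2).sum(axis=2, keepdims=True) + eps)
    # Each group n applies its own weight block ws[n] to its own channel
    # slice x[n]: a block-diagonal matmul, i.e. a grouped 1x1 convolution.
    return np.einsum('noc,nchw->nohw', ws, x)
```

In a deep learning framework the `einsum` becomes a grouped conv primitive after reshaping the batch of N samples into one sample with N·Cin channels.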
Lazy regularization
In configurations C–F we employ lazy regularization (Section 3.1) by evaluating the regularization terms (R1 and path length) in a separate regularization pass that we execute once every k training iterations. We share the internal state of the Adam optimizer between the main loss and the regularization terms, so that the optimizer first sees gradients from the main loss for k iterations, followed by gradients from the regularization terms for one iteration. To compensate for the fact that we now perform k + 1 training iterations instead of k, we adjust the optimizer hyperparameters λ′ = c·λ, β1′ = β1^c, and β2′ = β2^c, where c = k/(k + 1). We also multiply the regularization term by k to balance the overall magnitude of its gradients. We use k = 16 for the discriminator and k = 8 for the generator.
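A minimal helper sketching this hyperparameter adjustment, assuming the compensation factor takes the form c = k/(k + 1):

```python
def lazy_reg_adam_params(lr, beta1, beta2, k):
    """Adjusted Adam hyperparameters for lazy regularization: running the
    regularization pass once every k minibatches means k + 1 optimizer
    steps happen where k used to, so the effective step size and momentum
    decay rates are rescaled by c = k / (k + 1)."""
    c = k / (k + 1)
    return lr * c, beta1 ** c, beta2 ** c
```

For example, with k = 16 the learning rate shrinks by 16/17 and the momentum coefficients are raised to the power 16/17.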
Path length regularization
Configurations D–F include our new path length regularizer (Section 3.2). We initialize the target scale a to zero and track it on a per-GPU basis as an exponential moving average of the path lengths ||J_wᵀ y||₂. We weight our regularization term by
γ_pl = ln 2 / (r² (ln r − ln 2)), (5)
where r specifies the output resolution (e.g., r = 1024). We have found these parameter choices to work reliably across all configurations and datasets. To ensure that our regularizer interacts correctly with style mixing regularization, we compute it as an average over all individual layers of the synthesis network. Appendix C provides a detailed analysis of the effects of our regularizer on the mapping between W and the image space.
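A small helper evaluating this weight; the closed form γ_pl = ln 2 / (r² (ln r − ln 2)) is our reading of Eq. 5 and should be checked against the official implementation:

```python
import math

def path_length_weight(resolution):
    """Path length regularization weight as a function of the output
    resolution r, assuming gamma_pl = ln 2 / (r^2 (ln r - ln 2))."""
    r = float(resolution)
    return math.log(2) / (r ** 2 * (math.log(r) - math.log(2)))
```

Note that the weight decreases with resolution, since the expected gradient magnitudes grow with the number of output pixels.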
Progressive growing
In configurations A–D we use progressive growing with the same parameters as Karras et al. [24] (start at 8×8 resolution, train for 600k images per resolution, fade in the next resolution over 600k images, and gradually increase the learning rate at higher resolutions). In configurations E–F we disable progressive growing and set the learning rate to a fixed value λ = 0.002, which we found to provide the best results. In addition, we use output skips in the generator and residual connections in the discriminator, as detailed in Section 4.1.
Datasetspecific tuning
Similar to Karras et al. [24], we augment the FFHQ dataset with horizontal flips to effectively increase the number of training images from 70k to 140k, and we do not perform any augmentation for the LSUN datasets. We have found that the optimal choices for the training length and the R1 regularization weight γ tend to vary considerably between datasets and configurations. We use γ = 10 for all training runs except configuration E in Table 1, as well as LSUN Church and LSUN Horse in Table 3, where we use γ = 100. It is possible that further tuning of γ could provide additional benefits.
Performance optimizations
We profiled our training runs extensively and found that, in our case, the default primitives for image filtering, up/downsampling, bias addition, and leaky ReLU had surprisingly high overheads in terms of training time and GPU memory footprint. This motivated us to optimize these operations using hand-written CUDA kernels. We implemented filtered up/downsampling as a single fused operation, and bias addition and activation as another. In configuration E at 1024×1024 resolution, our optimizations improved the overall training time by about 30% and the memory footprint by about 20%.
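As an illustration of the kind of fusion involved, here is the bias-plus-activation pair written as a single pass in NumPy (a conceptual sketch only; the actual speedup comes from the fused CUDA kernel avoiding extra memory round trips):

```python
import numpy as np

def fused_bias_leaky_relu(x, b, alpha=0.2):
    """Bias addition and leaky ReLU combined into one elementwise pass
    over an NCHW activation tensor x, with a per-channel bias b."""
    y = x + b.reshape(1, -1, 1, 1)          # per-channel bias
    return np.where(y >= 0.0, y, alpha * y) # leaky ReLU, slope alpha
```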
Appendix C Effects of path length regularization
The path length regularizer described in Section 3.2 is of the form
L_PL = E_{w,y} ( ||J_wᵀ y||₂ − a )², (6)
where y ∼ N(0, I) is a unit normal distributed random variable in the space of generated images (of dimension M, namely the number of RGB image dimensions), J_w is the Jacobian matrix of the generator function g at a latent space point w, and a is a global value that expresses the desired scale of the gradients.
C.1 Effect on pointwise Jacobians
The value of this prior is minimized when the inner expectation over y is minimized at every latent space point w separately. In this subsection, we show that the inner expectation is (approximately) minimized when the Jacobian matrix J_w is orthogonal, up to a global scaling factor. The general strategy is to use the well-known fact that, in a high dimension M, the density of a unit normal distribution is concentrated on a spherical shell of radius √M. The inner expectation is then minimized when the matrix scales the function under expectation to have its minimum on this shell. This is achieved by any orthogonal matrix (with a suitable global scale that is the same at every w).
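The concentration fact is easy to verify numerically; a quick NumPy check with an arbitrarily chosen dimension:

```python
import numpy as np

# Empirical check of the concentration argument: the norm of an
# M-dimensional unit normal sample is tightly concentrated around sqrt(M).
rng = np.random.default_rng(0)
M = 10000
norms = np.linalg.norm(rng.standard_normal((2000, M)), axis=1)
mean_norm = norms.mean()   # close to sqrt(M) = 100
spread = norms.std()       # close to 1/sqrt(2), tiny relative to sqrt(M)
```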
We begin by considering the inner expectation
E_y ( ||J_wᵀ y||₂ − a )².
We first note that the radial symmetry of the distribution of y, as well as of the L2 norm, allows us to focus on diagonal matrices only. This is seen using the singular value decomposition J_wᵀ = U Σ̃ Vᵀ, where U and V are orthogonal matrices and Σ̃ is a horizontal concatenation of a diagonal matrix Σ and a zero matrix [14]. Because rotating a unit normal random variable by an orthogonal matrix leaves the distribution unchanged, and rotating a vector leaves its norm unchanged, the expression simplifies to
E_y ( ||Σ̃ y||₂ − a )².
Furthermore, the zero matrix in Σ̃ drops the dimensions of y beyond the latent dimensionality L, effectively marginalizing its distribution over those dimensions. The marginalized distribution is again a unit normal distribution over the remaining dimensions. We are then left to consider the minimization of the expression
E_y ( ||Σ y||₂ − a )²
over diagonal square matrices Σ, where y is unit normal distributed in dimension L. To summarize, all matrices that share the same singular values as J_wᵀ produce the same value of the original loss.
Next, we show that this expression is minimized when the diagonal matrix Σ has the same value at every diagonal entry, i.e., it is a constant multiple of the identity matrix. We first write the expectation as an integral over the probability density p(y) of y:
E_y ( ||Σ y||₂ − a )² = ∫ ( ||Σ y||₂ − a )² p(y) dy.
Observing the radially symmetric form of the density, we change into polar coordinates y = r ŷ, where r is the distance from the origin and ŷ is a unit vector, i.e., a point on the (L − 1)-dimensional unit sphere. This change of variables introduces a Jacobian factor r^(L−1):
∫_ŷ ∫₀^∞ ( r ||Σ ŷ||₂ − a )² p(r ŷ) r^(L−1) dr dŷ.
The probability density p(r ŷ) r^(L−1) is then an L-dimensional unit normal density expressed in polar coordinates; it depends only on the radius r and not on the angle ŷ. A standard argument by Taylor approximation shows that when L is high, for any ŷ the density is well approximated by the density exp(−(r − √L)²), which is an (unnormalized) one-dimensional normal density in r, centered at √L, of standard deviation 1/√2 [4]. In other words, the density of the L-dimensional unit normal distribution is concentrated on a shell of radius √L. Substituting this density into the integral, the loss becomes approximately
∫_ŷ ∫₀^∞ ( r ||Σ ŷ||₂ − a )² exp(−(r − √L)²) dr dŷ, (7)
where the approximation becomes exact in the limit of infinite dimension L.
To minimize this loss, we set Σ such that the function ( r ||Σ ŷ||₂ − a )² obtains its minimal values on the spherical shell of radius √L. This is achieved by Σ = (a/√L) I, whereby the function becomes constant in ŷ and the expression reduces to
A ∫₀^∞ ( (a/√L) r − a )² exp(−(r − √L)²) dr,
where A is the surface area of the unit sphere (and, like the other constant factors, irrelevant for the minimization). Note that the zero of the parabola coincides with the maximum of the probability density at r = √L, and therefore this choice of Σ minimizes the inner integral in Eq. 7 separately for every ŷ.
In summary, we have shown that, assuming a high dimensionality L of the latent space, the value of the path length prior (Eq. 6) is minimized when all singular values of the Jacobian matrix of the generator are equal to a global constant at every latent space point w, i.e., when the Jacobians are orthogonal up to a globally constant scale.
While in theory a merely scales the values of the mapping without changing its properties and could be set to a fixed value (e.g., a = 1), in practice it does affect the dynamics of the training. If the imposed scale does not match the scale induced by the random initialization of the network, the training spends its critical early steps pushing the weights towards the required overall magnitudes rather than enforcing the actual objective of interest. This may degrade the internal state of the network weights and lead to suboptimal performance later in training. Empirically, we find that setting a fixed scale reduces the consistency of the training results across training runs and datasets. Instead, we set a dynamically based on a running average of the existing scale of the Jacobians, namely an exponential moving average of ||J_wᵀ y||₂. With this choice the prior targets the scale of the local Jacobians towards whatever global average already exists, rather than forcing a specific global average. This also eliminates the need to measure the appropriate scale of the Jacobians explicitly, as is done by Odena et al. [33], who consider a related conditioning prior.
[Figure 17: Singular values of the generator Jacobian matrix, with the largest value normalized to one. In both datasets, path length regularization (config D) and the novel architecture (config F) exhibit better conditioning; notably, the effect is more pronounced in the Cars dataset, which contains much more variability and where path length regularization has a relatively stronger effect on the PPL metric (Table 1).]
Figure 17 shows empirically measured magnitudes of the singular values of the Jacobian matrix for networks trained with and without path length regularization. While orthogonality is not reached, the singular values of the regularized network are closer to one another, implying better conditioning, with the strength of the effect correlated with the PPL metric (Table 1).
C.2 Effect on global properties of generator mapping
In the previous subsection, we found that the prior encourages the Jacobians of the generator mapping to be everywhere orthogonal. While Figure 17 shows that the mapping does not satisfy this constraint exactly in practice, it is instructive to consider what global properties the constraint implies for mappings that do. Without loss of generality, we assume a unit global scale for the Jacobian matrices to simplify the presentation.
The key property is that a mapping g with everywhere orthogonal Jacobians preserves the lengths of curves. To see this, let w(t) parametrize a curve in the latent space. Mapping the curve through the generator g, we obtain the curve g(w(t)) in the space of images. Its arc length is
ℓ = ∫ || (d/dt) g(w(t)) ||₂ dt, (8)
where the derivative is taken with respect to the curve parameter t. By the chain rule, this equals
ℓ = ∫ || J_{w(t)} w′(t) ||₂ dt, (9)
where J_{w(t)} is the Jacobian matrix of g evaluated at w(t), and the prime denotes the derivative with respect to t. By our assumption, the Jacobian is orthogonal, and consequently it leaves the 2-norm of the vector w′(t) unaffected:
ℓ = ∫ || w′(t) ||₂ dt. (10)
This is the length of the curve w(t) in the latent space, prior to mapping with g. Hence, the lengths of w(t) and g(w(t)) are equal, and so g preserves the length of any curve.
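A quick numerical sanity check of this norm-preservation property, using a random orthogonal matrix in place of the Jacobian (dimensions arbitrary):

```python
import numpy as np

# A linear map with orthogonal Jacobian preserves both vector norms and
# the (discretized) arc length of a curve.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((64, 64)))   # random orthogonal matrix

v = rng.standard_normal(64)
assert np.isclose(np.linalg.norm(Q @ v), np.linalg.norm(v))

# Discretize a smooth curve in R^64 and compare polyline lengths before
# and after mapping it through Q.
t = np.linspace(0.0, 1.0, 500)[:, None]
curve = np.concatenate([np.sin(3 * t), np.cos(2 * t)], axis=1) @ rng.standard_normal((2, 64))
seg = np.diff(curve, axis=0)                         # polyline segments
len_before = np.linalg.norm(seg, axis=1).sum()
len_after = np.linalg.norm(seg @ Q.T, axis=1).sum()  # curve mapped through Q
assert np.isclose(len_before, len_after)
```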
In the language of differential geometry, g isometrically embeds the Euclidean latent space into a submanifold of the image space, e.g., the manifold of images representing faces, embedded within the space of all possible RGB images. A consequence of isometry is that straight line segments in the latent space are mapped to geodesics, or shortest paths, on the image manifold: a straight line segment that connects two latent space points cannot be made any shorter, so neither can there be a shorter on-manifold image-space path between the corresponding images than the image of that segment. For example, a geodesic on the manifold of face images is a continuous morph between two faces that incurs the minimum total amount of change (as measured by distance in RGB space) when one sums up the image difference at each step of the morph.
Isometry is not achieved in practice, as demonstrated by the empirical experiments in the previous subsection. The full loss function of the training is a combination of potentially conflicting criteria, and it is not clear whether a genuinely isometric mapping would be capable of expressing the image manifold of interest. Nevertheless, a pressure to make the mapping as isometric as possible has desirable consequences. In particular, it discourages unnecessary “detours”: in an unconstrained generator mapping, a latent space interpolation between two similar images may pass through any number of distant images in RGB space. With regularization, the mapping is encouraged to place distant images in different regions of the latent space, so as to obtain short image paths between any two endpoints.
Appendix D Projection method details
Given a target image x, we seek to find the corresponding latent code w and per-layer noise maps, denoted n_i, where i is the layer index. The baseline StyleGAN generator at 1024×1024 resolution has 18 noise inputs, i.e., two for each resolution from 4×4 to 1024×1024 pixels. Our improved architecture has one fewer noise input because we do not add noise to the learned 4×4 constant (Figure 2).
Before optimization, we compute the latent center w̄ = E_z[f(z)] by running 10,000 random latent codes z through the mapping network f. We also approximate the scale of W by computing σ² = E_z[ ||f(z) − w̄||₂² ], i.e., the average squared Euclidean distance to the center.
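A sketch of this estimation step in NumPy; `mapping`, `n`, and `z_dim` are placeholders for the real mapping network and its settings:

```python
import numpy as np

def estimate_w_stats(mapping, n=10000, z_dim=512, seed=0):
    """Monte Carlo estimate of the center of W and its scale: run n random
    latents through the mapping network and return the mean w along with
    the average squared Euclidean distance to it."""
    rng = np.random.default_rng(seed)
    w = mapping(rng.standard_normal((n, z_dim)))
    w_avg = w.mean(axis=0)
    sigma_sq = ((w - w_avg) ** 2).sum(axis=1).mean()
    return w_avg, sigma_sq
```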
At the beginning of optimization, we initialize w = w̄ and draw the initial noise maps n_i from a unit normal distribution. The trainable parameters are the components of w as well as the components of all noise maps n_i. The optimization is run for 1000 iterations using the Adam optimizer [25] with default parameters. The learning rate is ramped up from zero linearly during the first 50 iterations and ramped down to zero using a cosine schedule during the last 250 iterations. In the first three quarters of the optimization we add Gaussian noise to w when evaluating the loss function, with the standard deviation of this noise scaled by a factor that goes from one to zero during the first 750 iterations. This adds stochasticity to the optimization and stabilizes the finding of the global optimum.
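The learning-rate schedule can be sketched as follows; `lr_max` is an illustrative placeholder, since the text's maximum learning rate value is not reproduced here:

```python
import math

def projection_lr(step, total=1000, lr_max=0.1, rampup=50, rampdown=250):
    """Projection learning-rate schedule: linear ramp-up from zero over
    the first 50 iterations, then a cosine ramp-down to zero over the
    last 250 of the 1000 iterations."""
    lr = lr_max
    if step < rampup:
        lr *= step / rampup                       # linear warm-up
    if step > total - rampdown:
        t = (total - step) / rampdown             # 1 -> 0 over the ramp-down
        lr *= 0.5 - 0.5 * math.cos(t * math.pi)   # cosine decay to zero
    return lr
```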
Given that we are explicitly optimizing the noise maps, we must be careful to prevent the optimization from sneaking actual signal into them. Thus we include several noise map regularization terms in our loss function, in addition to an image quality term. The image quality term is the LPIPS [49] distance between the target image and the synthesized image. For increased performance and stability, we downsample both images to 256×256 resolution before computing the LPIPS distance. Regularization of the noise maps is performed on multiple resolution scales. For this purpose, we form, for each noise map greater than 8×8 in size, a pyramid down to 8×8 resolution by averaging 2×2 pixel neighborhoods and multiplying by 2 at each step to retain the expected unit variance. These downsampled noise maps are used for regularization only and have no part in synthesis.
Let us denote the original noise maps by n_{i,0} and the downsampled versions by n_{i,j} with j > 0, and let r_{i,j} × r_{i,j} be the resolution of noise map n_{i,j}. The regularization term for noise map n_{i,j} is then
L_{i,j} = ( (1/r_{i,j}²) Σ_{x,y} n_{i,j}(x, y) · n_{i,j}(x − 1, y) )² + ( (1/r_{i,j}²) Σ_{x,y} n_{i,j}(x, y) · n_{i,j}(x, y − 1) )²,
where the noise map is considered to wrap at the edges. The regularization term is thus the sum of squares of the resolution-normalized autocorrelation coefficients at one-pixel shifts horizontally and vertically, which should be zero for a normally distributed signal. The overall loss is the LPIPS term plus the noise regularization terms summed over all noise maps and scales, multiplied by a noise regularization weight; in all our tests we have used weight α = 10⁵. In addition, we renormalize all noise maps to zero mean and unit variance after each optimization step. Figure 18 illustrates the effect of noise regularization on the resulting noise maps.
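A NumPy sketch of the noise pyramid and the autocorrelation term described above (square maps assumed; names illustrative):

```python
import numpy as np

def noise_pyramid(n):
    """Downsample a square noise map to 8x8 by averaging 2x2 pixel
    neighborhoods and multiplying by 2 at each step, which retains the
    expected unit variance of the noise."""
    maps = [n]
    while maps[-1].shape[0] > 8:
        m = maps[-1]
        m = 0.5 * (m[0::2, 0::2] + m[1::2, 0::2] + m[0::2, 1::2] + m[1::2, 1::2])
        maps.append(m)
    return maps

def noise_reg(n):
    """Regularization term for one noise map: squared resolution-normalized
    autocorrelation at one-pixel shifts horizontally and vertically, with
    wrap-around at the edges (zero in expectation for i.i.d. normal noise)."""
    r = n.shape[0]
    acf_x = (n * np.roll(n, 1, axis=1)).sum() / r ** 2
    acf_y = (n * np.roll(n, 1, axis=0)).sum() / r ** 2
    return acf_x ** 2 + acf_y ** 2
```

A map carrying structured signal (e.g. a constant patch) scores much higher than white noise, which is what pushes the optimizer to keep signal out of the noise inputs.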