Analyzing and Improving the Image Quality of StyleGAN

12/03/2019 · Tero Karras et al.

The style-based GAN architecture (StyleGAN) yields state-of-the-art results in data-driven unconditional generative image modeling. We expose and analyze several of its characteristic artifacts, and propose changes in both model architecture and training methods to address them. In particular, we redesign generator normalization, revisit progressive growing, and regularize the generator to encourage good conditioning in the mapping from latent vectors to images. In addition to improving image quality, this path length regularizer yields the additional benefit that the generator becomes significantly easier to invert. This makes it possible to reliably detect if an image is generated by a particular network. We furthermore visualize how well the generator utilizes its output resolution, and identify a capacity problem, motivating us to train larger models for additional quality improvements. Overall, our improved model redefines the state of the art in unconditional image modeling, both in terms of existing distribution quality metrics as well as perceived image quality.


1 Introduction

The resolution and quality of images produced by generative methods, especially generative adversarial networks (GAN) [15], are improving rapidly [23, 31, 5]. The current state-of-the-art method for high-resolution image synthesis is StyleGAN [24], which has been shown to work reliably on a variety of datasets. Our work focuses on fixing its characteristic artifacts and improving the result quality further.

The distinguishing feature of StyleGAN [24] is its unconventional generator architecture. Instead of feeding the input latent code $z \in \mathcal{Z}$ only to the beginning of the network, the mapping network $f$ first transforms it to an intermediate latent code $w \in \mathcal{W}$. Affine transforms then produce styles that control the layers of the synthesis network $g$ via adaptive instance normalization (AdaIN) [20, 9, 12, 8]. Additionally, stochastic variation is facilitated by providing additional random noise maps to the synthesis network. It has been demonstrated [24, 38] that this design allows the intermediate latent space $\mathcal{W}$ to be much less entangled than the input latent space $\mathcal{Z}$. In this paper, we focus all analysis solely on $\mathcal{W}$, as it is the relevant latent space from the synthesis network's point of view.

Figure 1: Instance normalization causes water droplet-like artifacts in StyleGAN images. These are not always obvious in the generated images, but if we look at the activations inside the generator network, the problem is always there, in all feature maps starting from the 64×64 resolution. It is a systemic problem that plagues all StyleGAN images.

Many observers have noticed characteristic artifacts in images generated by StyleGAN [3]. We identify two causes for these artifacts, and describe changes in architecture and training methods that eliminate them. First, we investigate the origin of common blob-like artifacts, and find that the generator creates them to circumvent a design flaw in its architecture. In Section 2, we redesign the normalization used in the generator, which removes the artifacts. Second, we analyze artifacts related to progressive growing [23], which has been highly successful in stabilizing high-resolution GAN training. We propose an alternative design that achieves the same goal — training starts by focusing on low-resolution images and then progressively shifts focus to higher and higher resolutions — without changing the network topology during training. This new design also allows us to reason about the effective resolution of the generated images, which turns out to be lower than expected, motivating a capacity increase (Section 4).

Quantitative analysis of the quality of images produced using generative methods continues to be a challenging topic. Fréchet inception distance (FID) [19] measures differences in the density of two distributions in the high-dimensional feature space of an InceptionV3 classifier [39]. Precision and Recall (P&R) [36, 27] provide additional visibility by explicitly quantifying the percentage of generated images that are similar to training data and the percentage of training data that can be generated, respectively. We use these metrics to quantify the improvements.

Both FID and P&R are based on classifier networks that have recently been shown to focus on textures rather than shapes [11], and consequently, the metrics do not accurately capture all aspects of image quality. We observe that the perceptual path length (PPL) metric [24], introduced as a method for estimating the quality of latent space interpolations, correlates with consistency and stability of shapes. Based on this, we regularize the synthesis network to favor smooth mappings (Section 3) and achieve a clear improvement in quality. To counter its computational expense, we also propose executing all regularizations less frequently, observing that this can be done without compromising effectiveness.

Finally, we find that projection of images to the latent space works significantly better with the new, path-length regularized generator than with the original StyleGAN. This has practical importance since it allows us to tell reliably whether a given image was generated using a particular generator (Section 5).

Our implementation and trained models are available at https://github.com/NVlabs/stylegan2

2 Removing normalization artifacts


(a) StyleGAN (b) StyleGAN (detailed) (c) Revised architecture (d) Weight demodulation

Figure 2: We redesign the architecture of the StyleGAN synthesis network. (a) The original StyleGAN, where A denotes a learned affine transform from $\mathcal{W}$ that produces a style and B is a noise broadcast operation. (b) The same diagram with full detail. Here we have broken the AdaIN to explicit normalization followed by modulation, both operating on the mean and standard deviation per feature map. We have also annotated the learned weights ($w$), biases ($b$), and constant input ($c$), and redrawn the gray boxes so that one style is active per box. The activation function (leaky ReLU) is always applied right after adding the bias. (c) We make several changes to the original architecture that are justified in the main text. We remove some redundant operations at the beginning, move the addition of $b$ and B to be outside the active area of a style, and adjust only the standard deviation per feature map. (d) The revised architecture enables us to replace instance normalization with a “demodulation” operation, which we apply to the weights associated with each convolution layer.

We begin by observing that most images generated by StyleGAN exhibit characteristic blob-shaped artifacts that resemble water droplets. As shown in Figure 1, even when the droplet may not be obvious in the final image, it is present in the intermediate feature maps of the generator. (In rare cases, perhaps 0.1% of images, the droplet is missing, leading to severely corrupted images; see Appendix A for details.) The anomaly starts to appear around 64×64 resolution, is present in all feature maps, and becomes progressively stronger at higher resolutions. The existence of such a consistent artifact is puzzling, as the discriminator should be able to detect it.

We pinpoint the problem to the AdaIN operation that normalizes the mean and variance of each feature map separately, thereby potentially destroying any information found in the magnitudes of the features relative to each other. We hypothesize that the droplet artifact is a result of the generator intentionally sneaking signal strength information past instance normalization: by creating a strong, localized spike that dominates the statistics, the generator can effectively scale the signal as it likes elsewhere. Our hypothesis is supported by the finding that when the normalization step is removed from the generator, as detailed below, the droplet artifacts disappear completely.

2.1 Generator architecture revisited

We will first revise several details of the StyleGAN generator to better facilitate our redesigned normalization. These changes have either a neutral or small positive effect on their own in terms of quality metrics.

Figure 2a shows the original StyleGAN synthesis network [24], and in Figure 2b we expand the diagram to full detail by showing the weights and biases and breaking the AdaIN operation to its two constituent parts: normalization and modulation. This allows us to re-draw the conceptual gray boxes so that each box indicates the part of the network where one style is active (i.e., “style block”). Interestingly, the original StyleGAN applies bias and noise within the style block, causing their relative impact to be inversely proportional to the current style’s magnitudes. We observe that more predictable results are obtained by moving these operations outside the style block, where they operate on normalized data. Furthermore, we notice that after this change it is sufficient for the normalization and modulation to operate on the standard deviation alone (i.e., the mean is not needed). The application of bias, noise, and normalization to the constant input can also be safely removed without observable drawbacks. This variant is shown in Figure 2c, and serves as a starting point for our redesigned normalization.

2.2 Instance normalization revisited

Given that instance normalization appears to be too strong, how can we relax it while retaining the scale-specific effects of the styles? We rule out batch normalization [21] as it is incompatible with the small minibatches required for high-resolution synthesis. Alternatively, we could simply remove the normalization. While actually improving FID slightly [27], this makes the effects of the styles cumulative rather than scale-specific, essentially losing the controllability offered by StyleGAN (see video). We will now propose an alternative that removes the artifacts while retaining controllability. The main idea is to base normalization on the expected statistics of the incoming feature maps, but without explicit forcing.

Recall that a style block in Figure 2c consists of modulation, convolution, and normalization. Let us start by considering the effect of a modulation followed by a convolution. The modulation scales each input feature map of the convolution based on the incoming style, which can alternatively be implemented by scaling the convolution weights:

$$w'_{ijk} = s_i \cdot w_{ijk}, \qquad (1)$$

where $w$ and $w'$ are the original and modulated weights, respectively, $s_i$ is the scale corresponding to the $i$th input feature map, and $j$ and $k$ enumerate the output feature maps and spatial footprint of the convolution, respectively.

Now, the purpose of instance normalization is to essentially remove the effect of $s$ from the statistics of the convolution's output feature maps. We observe that this goal can be achieved more directly. Let us assume that the input activations are i.i.d. random variables with unit standard deviation. After modulation and convolution, the output activations have standard deviation of

$$\sigma_j = \sqrt{\textstyle\sum_{i,k} {w'_{ijk}}^2}, \qquad (2)$$

i.e., the outputs are scaled by the $L_2$ norm of the corresponding weights. The subsequent normalization aims to restore the outputs back to unit standard deviation. Based on Equation 2, this is achieved if we scale (“demodulate”) each output feature map $j$ by $1/\sigma_j$. Alternatively, we can again bake this into the convolution weights:

$$w''_{ijk} = w'_{ijk} \Big/ \sqrt{\textstyle\sum_{i,k} {w'_{ijk}}^2 + \epsilon}, \qquad (3)$$

where $\epsilon$ is a small constant to avoid numerical issues.
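As a concrete illustration, the following minimal NumPy sketch applies Equations 1 and 3 directly to a weight tensor. It is not the authors' implementation: the names are illustrative, and the per-sample batching via grouped convolutions (Appendix B) is omitted here.

```python
import numpy as np

def modulate_demodulate(weights, styles, eps=1e-8):
    """Sketch of Eq. 1 (modulation) and Eq. 3 (demodulation) on conv weights.

    weights: [out_channels, in_channels, kh, kw]  -- the w_ijk tensor
    styles:  [in_channels]                        -- the per-input-map scales s_i
    """
    # Eq. 1: scale the weights of each input feature map i by its style s_i.
    w_mod = weights * styles[np.newaxis, :, np.newaxis, np.newaxis]

    # Eq. 2: the expected output std of map j is the L2 norm of its weights.
    sigma = np.sqrt(np.sum(w_mod ** 2, axis=(1, 2, 3)) + eps)

    # Eq. 3: demodulate so each output map j returns to expected unit std.
    return w_mod / sigma[:, np.newaxis, np.newaxis, np.newaxis]
```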

Configuration                      FFHQ, 1024×1024                          LSUN Car, 512×384
                                   FID    Path length  Precision  Recall    FID    Path length  Precision  Recall
a  Baseline StyleGAN [24]          4.40   195.9        0.721      0.399     3.27   1484.5       0.701      0.435
b  + Weight demodulation           4.39   173.8        0.702      0.425     3.04    862.4       0.685      0.488
c  + Lazy regularization           4.38   167.2        0.719      0.427     2.83    981.6       0.688      0.493
d  + Path length regularization    4.34   139.2        0.715      0.418     3.43    651.2       0.697      0.452
e  + No growing, new G & D arch.   3.31   116.7        0.705      0.449     3.19    471.2       0.690      0.454
f  + Large networks                2.84   129.4        0.689      0.492     2.32    415.5       0.678      0.514

Table 1: Main results. For each training run, we selected the training snapshot with the lowest FID. We computed each metric 10 times with different random seeds and report their average. The “path length” column corresponds to the PPL metric, computed based on path endpoints in $\mathcal{W}$ [24]. For LSUN datasets, we report path lengths without the center crop that was originally proposed for FFHQ. The FFHQ dataset contains 70k images, and we showed the discriminator 25M images during training. For LSUN Car the corresponding numbers were 893k and 57M.

We have now baked the entire style block into a single convolution layer whose weights are adjusted based on $s$ using Equations 1 and 3 (Figure 2d). Compared to instance normalization, our demodulation technique is weaker because it is based on statistical assumptions about the signal instead of the actual contents of the feature maps. Similar statistical analysis has been extensively used in modern network initializers [13, 18], but we are not aware of it being previously used as a replacement for data-dependent normalization. Our demodulation is also related to weight normalization [37] that performs the same calculation as a part of reparameterizing the weight tensor. Prior work has identified weight normalization as beneficial in the context of GAN training [42].

Our new design removes the characteristic artifacts (Figure 3) while retaining full controllability, as demonstrated in the accompanying video. FID remains largely unaffected (Table 1, rows a, b), but there is a notable shift from precision to recall. We argue that this is generally desirable, since recall can be traded into precision via truncation, whereas the opposite is not true [27]. In practice our design can be implemented efficiently using grouped convolutions, as detailed in Appendix B. To avoid having to account for the activation function in Equation 3, we scale our activation functions so that they retain the expected signal variance.

3 Image quality and generator smoothness

Figure 3: Replacing normalization with demodulation removes the characteristic artifacts from images and activations.

While GAN metrics such as FID or Precision and Recall (P&R) successfully capture many aspects of the generator, they continue to have somewhat of a blind spot for image quality. For an example, refer to Figures 13 and 14 that contrast generators with identical FID and P&R scores but markedly different overall quality.

We believe that the key to the apparent inconsistency lies in the particular choice of feature space rather than the foundations of FID or P&R. It was recently discovered that classifiers trained using ImageNet [35] tend to base their decisions much more on texture than shape [11], while humans strongly focus on shape [28]. This is relevant in our context because FID and P&R use high-level features from InceptionV3 [39] and VGG-16 [39], respectively, which were trained in this way and are thus expected to be biased towards texture detection. As such, images with, e.g., strong cat textures may appear more similar to each other than a human observer would agree, thus partially compromising density-based metrics (FID) and manifold coverage metrics (P&R).

Figure 4: Connection between perceptual path length and image quality using baseline StyleGAN (config a in Table 1). (a) Random examples with low PPL (≤ 10th percentile). (b) Examples with high PPL (≥ 90th percentile). There is a clear correlation between PPL scores and semantic consistency of the images.
Figure 5: (a) Distribution of PPL scores of individual images generated using a baseline StyleGAN (config a in Table 1, FID = 8.53, PPL = 924). The percentile ranges corresponding to Figure 4 are highlighted in orange. (b) Our method (config f) improves the PPL distribution considerably (showing a snapshot with the same FID = 8.53, PPL = 387).

We observe an interesting correlation between perceived image quality and perceptual path length (PPL) [24], a metric that was originally introduced for quantifying the smoothness of the mapping from a latent space to the output image by measuring average LPIPS distances [49] between generated images under small perturbations in latent space. Again consulting Figures 13 and 14, a smaller PPL (smoother generator mapping) appears to correlate with higher overall image quality, whereas other metrics are blind to the change. Figure 4 examines this correlation more closely through per-image PPL scores computed by sampling the latent space around individual points $w \in \mathcal{W}$, on StyleGAN trained on LSUN Cat: low PPL scores are indeed indicative of high-quality images, and vice versa. Figure 5a shows the corresponding histogram of per-image PPL scores and reveals the long tail of the distribution. The overall PPL for the model is simply the expected value of the per-image PPL scores.

It is not immediately obvious why a low PPL should correlate with image quality. We hypothesize that during training, as the discriminator penalizes broken images, the most direct way for the generator to improve is to effectively stretch the region of latent space that yields good images. This would lead to the low-quality images being squeezed into small latent space regions of rapid change. While this improves the average output quality in the short term, the accumulating distortions impair the training dynamics and consequently the final image quality.

This empirical correlation suggests that favoring a smooth generator mapping by encouraging low PPL during training may improve image quality, which we show to be the case below. As the resulting regularization term is somewhat expensive to compute, we first describe a general optimization that applies to all regularization techniques.

3.1 Lazy regularization

Typically the main loss function (e.g., logistic loss [15]) and regularization terms (e.g., $R_1$ [30]) are written as a single expression and are thus optimized simultaneously. We observe that the regularization terms can typically be computed much less frequently than the main loss function, thus greatly diminishing their computational cost and the overall memory usage. Table 1, row c shows that no harm is caused when regularization is performed only once every 16 minibatches, and we adopt the same strategy for our new regularizer as well. Appendix B gives implementation details.
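To make the schedule concrete, here is a hedged PyTorch sketch of a lazily regularized discriminator update, using the $R_1$ penalty as the example regularizer. The training-loop structure and names (D, d_optim, r1_gamma) are illustrative assumptions, not the paper's TensorFlow code.

```python
import torch
import torch.nn.functional as F

REG_INTERVAL = 16  # run the regularization pass once every 16 minibatches

def discriminator_step(step, D, d_optim, real_images, fake_images, r1_gamma=10.0):
    # Main pass: non-saturating logistic loss, executed on every minibatch.
    d_loss = F.softplus(D(fake_images.detach())).mean() \
           + F.softplus(-D(real_images)).mean()
    d_optim.zero_grad()
    d_loss.backward()
    d_optim.step()

    # Lazy regularization: the expensive R1 penalty only every REG_INTERVAL steps.
    if step % REG_INTERVAL == 0:
        real_images = real_images.detach().requires_grad_(True)
        scores = D(real_images)
        grad, = torch.autograd.grad(scores.sum(), real_images, create_graph=True)
        r1 = grad.pow(2).sum(dim=[1, 2, 3]).mean()
        # Scale by the interval so the time-averaged gradient magnitude is preserved.
        reg_loss = 0.5 * r1_gamma * r1 * REG_INTERVAL
        d_optim.zero_grad()
        reg_loss.backward()
        d_optim.step()
```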

3.2 Path length regularization

Excess path distortion in the generator is evident as poor local conditioning: any small region in $\mathcal{W}$ becomes arbitrarily squeezed and stretched as it is mapped by $g$. In line with earlier work [33], we consider a generator mapping from the latent space to image space to be well-conditioned if, at each point in latent space, small displacements yield changes of equal magnitude in image space regardless of the direction of perturbation.

At a single $w \in \mathcal{W}$, the local metric scaling properties of the generator mapping $g(w)$ are captured by the Jacobian matrix $J_w = \partial g(w) / \partial w$. Motivated by the desire to preserve the expected lengths of vectors regardless of the direction, we formulate our regularizer as

$$\mathbb{E}_{w,\, y \sim \mathcal{N}(0, I)} \left( \lVert J_w^T y \rVert_2 - a \right)^2, \qquad (4)$$

where $y$ are random images with normally distributed pixel intensities, and $w \sim f(z)$, where $z$ are normally distributed. We show in Appendix C that, in high dimensions, this prior is minimized when $J_w$ is orthogonal (up to a global scale) at any $w$. An orthogonal matrix preserves lengths and introduces no squeezing along any dimension.

To avoid explicit computation of the Jacobian matrix, we use the identity $J_w^T y = \nabla_w (g(w) \cdot y)$, which is efficiently computable using standard backpropagation [6]. The constant $a$ is set dynamically during optimization as the long-running exponential moving average of the lengths $\lVert J_w^T y \rVert_2$, allowing the optimization to find a suitable global scale by itself.
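A hedged PyTorch sketch of this regularizer is shown below. It assumes a generator G whose output is differentiable with respect to a latent tensor w of shape [batch, w_dim]; the 1/sqrt(pixels) scaling of y and the decay value are implementation assumptions rather than parts of Equation 4.

```python
import torch

def path_length_penalty(G, w, pl_mean, decay=0.01):
    """Penalize deviation of ||J_w^T y||_2 from its running average a (Eq. 4)."""
    images = G(w)                                      # [batch, C, H, W]; w must require grad
    # Random images y with normally distributed pixel intensities; the scaling keeps
    # the expected length roughly independent of resolution (an assumption).
    y = torch.randn_like(images) / (images.shape[2] * images.shape[3]) ** 0.5
    # J_w^T y via standard backpropagation: the gradient of <G(w), y> w.r.t. w.
    jty, = torch.autograd.grad((images * y).sum(), w, create_graph=True)
    lengths = jty.pow(2).flatten(1).sum(dim=1).sqrt()  # ||J_w^T y||_2 per sample
    # a is tracked as a long-running exponential moving average of the lengths.
    pl_mean = pl_mean + decay * (lengths.detach().mean() - pl_mean)
    penalty = (lengths - pl_mean).pow(2).mean()
    return penalty, pl_mean
```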

Our regularizer is closely related to the Jacobian clamping regularizer presented by Odena et al. [33]. Practical differences include that we compute the products $J_w^T y$ analytically, whereas they use finite differences to estimate $J_w \delta$ with $\delta \sim \mathcal{N}(0, I)$. It should be noted that spectral normalization [31] of the generator [45] only constrains the largest singular value, posing no constraints on the others and hence not necessarily leading to better conditioning.

In practice, we notice that path length regularization leads to more reliable and consistently behaving models, making architecture exploration easier. Figure 5b shows that path length regularization clearly improves the distribution of per-image PPL scores. Table 1, row d shows that regularization reduces PPL, as expected, but there is a tradeoff between FID and PPL in LSUN Car and other datasets that are less structured than FFHQ. Furthermore, we observe that a smoother generator is easier to invert (Section 5).

4 Progressive growing revisited

Figure 6: Progressive growing leads to “phase” artifacts. In this example the teeth do not follow the pose but stay aligned to the camera, as indicated by the blue line.

Progressive growing [23] has been very successful in stabilizing high-resolution image synthesis, but it causes its own characteristic artifacts. The key issue is that the progressively grown generator appears to have a strong location preference for details; the accompanying video shows that when features like teeth or eyes should move smoothly over the image, they may instead remain stuck in place before jumping to the next preferred location. Figure 6 shows a related artifact. We believe the problem is that in progressive growing each resolution serves momentarily as the output resolution, forcing it to generate maximal frequency details, which then causes the trained network to have excessively high frequencies in the intermediate layers, compromising shift invariance [48]. Appendix A shows an example. These issues prompt us to search for an alternative formulation that would retain the benefits of progressive growing without the drawbacks.

4.1 Alternative network architectures


(a) MSG-GAN (b) Input/output skips (c) Residual nets

Figure 7: Three generator (above the dashed line) and discriminator architectures. Up and Down denote bilinear up and downsampling, respectively. In residual networks these also include 1×1 convolutions to adjust the number of feature maps. tRGB and fRGB convert between RGB and high-dimensional per-pixel data. Architectures used in configurations e and f are highlighted in green.

While StyleGAN uses simple feedforward designs in the generator (synthesis network) and discriminator, there is a vast body of work dedicated to the study of better network architectures. In particular, skip connections [34, 22], residual networks [17, 16, 31], and hierarchical methods [7, 46, 47] have proven highly successful also in the context of generative methods. As such, we decided to re-evaluate the network design of StyleGAN and search for an architecture that produces high-quality images without progressive growing.

Figure 7a shows MSG-GAN [22], which connects the matching resolutions of the generator and discriminator using multiple skip connections. The MSG-GAN generator is modified to output a mipmap [41] instead of an image, and a similar representation is computed for each real image as well. In Figure 7b we simplify this design by upsampling and summing the contributions of RGB outputs corresponding to different resolutions. In the discriminator, we similarly provide the downsampled image to each resolution block of the discriminator. We use bilinear filtering in all up and downsampling operations. In Figure 7c we further modify the design to use residual connections. (In residual network architectures, the addition of two paths leads to a doubling of signal variance, which we cancel by multiplying with $1/\sqrt{2}$. This is crucial for our networks, whereas in classification resnets [17] the issue is typically hidden by batch normalization.) This design is similar to LAPGAN [7] without the per-resolution discriminators employed by Denton et al.
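The variance-compensated residual summation from the parenthetical note above can be written as a one-line helper; this is an illustrative sketch, not the authors' code.

```python
def residual_combine(main_out, skip_out):
    """Sum the two branches of a residual block and rescale by 1/sqrt(2),
    cancelling the doubling of signal variance noted above. Works on any
    tensor type (NumPy or PyTorch) that supports + and *."""
    return (main_out + skip_out) * (2 ** -0.5)
```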

FFHQ              D original        D input skips     D residual
                  FID     PPL       FID     PPL       FID     PPL
G original        4.32    237       4.18    207       3.58    238
G output skips    4.33    149       3.77    116       3.31    117
G residual        4.35    187       3.96    201       3.79    203

LSUN Car          D original        D input skips     D residual
                  FID     PPL       FID     PPL       FID     PPL
G original        3.75    905       3.23    758       3.25    802
G output skips    3.77    544       3.86    316       3.19    471
G residual        3.93    981       3.40    667       2.66    645

Table 2: Comparison of generator and discriminator architectures without progressive growing. The combination of generator with output skips and residual discriminator corresponds to configuration e in the main result table.

Table 2 compares three generator and three discriminator architectures: original feedforward networks as used in StyleGAN, skip connections, and residual networks, all trained without progressive growing. FID and PPL are provided for each of the 9 combinations. We can see two broad trends: skip connections in the generator drastically improve PPL in all configurations, and a residual discriminator network is clearly beneficial for FID. The latter is perhaps not surprising, since the structure of the discriminator resembles classifiers where residual architectures are known to be helpful. However, a residual architecture was harmful in the generator — the lone exception was FID in LSUN Car when both networks were residual.

For the rest of the paper we use a skip generator and a residual discriminator, and do not use progressive growing. This corresponds to configuration e in Table 1, and as can be seen in the table, switching to this setup significantly improves FID and PPL.

4.2 Resolution usage

Figure 8: Contribution of each resolution to the output of the generator as a function of training time. The vertical axis shows a breakdown of the relative standard deviations of different resolutions, and the horizontal axis corresponds to training progress, measured in millions of training images shown to the discriminator. We can see that in the beginning the network focuses on low-resolution images and progressively shifts its focus to larger resolutions as training progresses. In (a) the generator basically outputs a 512×512 image with some minor sharpening for 1024×1024, while in (b) the larger network focuses more on the high-resolution details.

The key aspect of progressive growing, which we would like to preserve, is that the generator will initially focus on low-resolution features and then slowly shift its attention to finer details. The architectures in Figure 7 make it possible for the generator to first output low resolution images that are not affected by the higher-resolution layers in a significant way, and later shift the focus to the higher-resolution layers as the training proceeds. Since this is not enforced in any way, the generator will do it only if it is beneficial. To analyze the behavior in practice, we need to quantify how strongly the generator relies on particular resolutions over the course of training.

Since the skip generator (Figure 7b) forms the image by explicitly summing RGB values from multiple resolutions, we can estimate the relative importance of the corresponding layers by measuring how much they contribute to the final image. In Figure 8a, we plot the standard deviation of the pixel values produced by each tRGB layer as a function of training time. We calculate the standard deviations over 1024 random samples of $w$ and normalize the values so that they sum to 100%.
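A sketch of this measurement, assuming the skip generator exposes the (upsampled) RGB contribution of each tRGB layer for a batch of samples; the function name and data layout are assumptions.

```python
import numpy as np

def resolution_contributions(trgb_outputs):
    """trgb_outputs: list with one array per tRGB layer, each of shape
    [N, 3, H, W], holding that resolution's contribution to the final image
    for N random samples of w. Returns relative standard deviations in %."""
    stds = np.array([out.std() for out in trgb_outputs])
    return 100.0 * stds / stds.sum()
```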

Dataset       Resolution    StyleGAN (a)      Ours (f)
                            FID     PPL       FID     PPL
LSUN Car      512×384       3.27    1485      2.32    416
LSUN Cat      256×256       8.53     924      6.93    439
LSUN Church   256×256       4.21     742      3.86    342
LSUN Horse    256×256       3.83    1405      3.43    338

Table 3: Improvement in LSUN datasets measured using FID and PPL. We trained Car for 57M images, Cat for 88M, Church for 48M, and Horse for 100M images.

At the start of training, we can see that the new skip generator behaves similar to progressive growing — now achieved without changing the network topology. It would thus be reasonable to expect the highest resolution to dominate towards the end of the training. The plot, however, shows that this fails to happen in practice, which indicates that the generator may not be able to “fully utilize” the target resolution. To verify this, we inspected the generated images manually and noticed that they generally lack some of the pixel-level detail that is present in the training data — the images could be described as being sharpened versions of 512×512 images instead of true 1024×1024 images.

This leads us to hypothesize that there is a capacity problem in our networks, which we test by doubling the number of feature maps in the highest-resolution layers of both networks. (We double the number of feature maps in the highest resolutions while keeping other parts of the networks unchanged. This increases the total number of trainable parameters in the generator by 22% (25M → 30M) and in the discriminator by 21% (24M → 29M).) This brings the behavior more in line with expectations: Figure 8b shows a significant increase in the contribution of the highest-resolution layers, and Table 1, row f shows that FID and Recall improve markedly.

Table 3 compares StyleGAN and our improved variant in several LSUN categories, again showing clear improvements in FID and significant advances in PPL. It is possible that further increases in the size could provide additional benefits.

5 Projection of images to latent space

Inverting the synthesis network $g$ is an interesting problem that has many applications. Manipulating a given image in the latent feature space requires finding a matching latent code $w$ for it first. Also, as the image quality of GANs improves, it becomes more important to be able to attribute a potentially synthetic image to the network that generated it.

Previous research [1, 10] suggests that instead of finding a common latent code $w$, the results improve if a separate $w$ is chosen for each layer of the generator. The same approach was used in an early encoder implementation [32]. While extending the latent space in this fashion finds a closer match to a given image, it also enables projecting arbitrary images that should have no latent representation. Focusing on the forensic detection of generated images, we concentrate on finding latent codes in the original, unextended latent space, as these correspond to images that the generator could have produced.

Our projection method differs from previous methods in two ways. First, we add ramped-down noise to the latent code during optimization in order to explore the latent space more comprehensively. Second, we also optimize the stochastic noise inputs of the StyleGAN generator, regularizing them to ensure they do not end up carrying coherent signal. The regularization is based on enforcing the autocorrelation coefficients of the noise maps to match those of unit Gaussian noise over multiple scales. Details of our projection method can be found in Appendix D.

5.1 Detection of generated images

It is possible to train classifiers to detect GAN-generated images with reasonably high confidence [29, 44, 40, 50]. However, given the rapid pace of progress, this may not be a lasting situation. Projection-based methods are unique in that they can provide evidence, in the form of a matching latent vector, that an image was synthesized by a specific network [2]. There is also no reason why their effectiveness would diminish as the quality of synthetic images improves, unlike classifier-based methods that may have fewer clues to work with in the future.

Figure 9: LPIPS distances between original and projected images. Distance histograms for generated images are shown in blue, real images in orange. Despite the higher image quality of our improved generator, it is much easier to project the generated images into its latent space $\mathcal{W}$. The same projection method was used in all cases.
Figure 10: Example images and their projected and re-synthesized counterparts. For each configuration, top row shows the target images and bottom row shows the synthesis of the corresponding projected latent vector and noise inputs. Top: With the baseline StyleGAN, projection often finds a reasonably close match for generated images, but especially the backgrounds differ from the originals. Middle: The images generated using our best architecture can be projected almost perfectly back into generator inputs, allowing unambiguous attribution to the generating model. Bottom: Projected real images (from the training set) show clear differences to the originals, as expected. All tests were done using the same projection method and hyperparameters.

It turns out that our improvements to StyleGAN make it easier to detect generated images using projection-based methods, even though the quality of generated images is higher. We measure how well the projection succeeds by computing the LPIPS [49] distance between original and re-synthesized image as $D_{\mathrm{LPIPS}}\!\left[x, g(\tilde{g}^{-1}(x))\right]$, where $x$ is the image being analyzed and $\tilde{g}^{-1}$ denotes the approximate projection operation. Figure 9 shows histograms of these distances for LSUN Car and FFHQ datasets using the original StyleGAN and our best architecture, and Figure 10 shows example projections. As the latter illustrates, the images generated using our improved architecture can be projected into generator inputs so well that they can be unambiguously attributed to the generating network. With the original StyleGAN, even though it should technically be possible to find a matching latent code, it appears that the latent space is in practice too complex for this to succeed reliably. Our improved model with a smoother latent space suffers considerably less from this problem.

6 Conclusions and future work

We have identified and fixed several image quality issues in StyleGAN, improving the quality further and considerably advancing the state of the art in several datasets. In some cases the improvements are more clearly seen in motion, as demonstrated in the accompanying video. Appendix A includes further examples of results obtainable using our method. Despite the improved quality, it is easier to detect images generated by our method using projection-based methods, compared to the original StyleGAN.

Training performance has also improved. At 1024×1024 resolution, the original StyleGAN (config a in Table 1) trains at 37 images per second on NVIDIA DGX-1 with 8 Tesla V100 GPUs, while our config e trains 40% faster at 61 img/s. Most of the speedup comes from simplified dataflow due to weight demodulation, lazy regularization, and code optimizations. Config f (larger networks) trains at 31 img/s, and is thus only slightly more expensive to train than the original StyleGAN. With config f, the total training time was 9 days for FFHQ and 13 days for LSUN Car.

As future work, it could be fruitful to study further improvements to the path length regularization, e.g., by replacing the pixel-space distance with a data-driven feature-space metric.

7 Acknowledgements

We thank Ming-Yu Liu for an early review, Timo Viitanen for his help with code release, and Tero Kuosmanen for compute infrastructure.

References

Appendix A Image quality

Figure 11: Four hand-picked examples illustrating the image quality and diversity achievable using our method (config f).
Figure 12: Uncurated results for each dataset used in Tables 1 and 3. The images correspond to random outputs produced by our generator (config f), with the truncation trick [24] applied at all resolutions.

Model 1: FID = 8.53, P = 0.64, R = 0.28, PPL = 924


Model 2: FID = 8.53, P = 0.62, R = 0.29, PPL = 387

Figure 13: Uncurated examples from two generative models trained on LSUN Cat without truncation. FID, precision, and recall are similar for models 1 and 2, even though the latter produces cat-shaped objects more often. Perceptual path length (PPL) indicates a clear preference for model 2. Model 1 corresponds to configuration a in Table 3, and model 2 is an early training snapshot of configuration f.

Model 1: FID = 3.27, P = 0.70, R = 0.44, PPL = 1485


Model 2: FID = 3.27, P = 0.67, R = 0.48, PPL = 437

Figure 14: Uncurated examples from two generative models trained on LSUN Car without truncation. FID, precision, and recall are similar for models 1 and 2, even though the latter produces car-shaped objects more often. Perceptual path length (PPL) indicates a clear preference for model 2. Model 1 corresponds to configuration a in Table 3, and model 2 is an early training snapshot of configuration f.
Figure 15: An example of the importance of the droplet artifact in the StyleGAN generator. We compare two generated images, one successful and one severely corrupted. The corresponding feature maps were normalized to the viewable dynamic range using instance normalization. For the top image, the droplet artifact starts forming around 64×64 resolution, is clearly visible at 128×128, and increasingly dominates the feature maps in higher resolutions. For the bottom image, the 64×64 activations are qualitatively similar to the top row, but the droplet does not materialize at 128×128. Consequently, the facial features are stronger in the normalized feature map. This leads to an overshoot in the following resolution, followed by multiple spurious droplets forming in subsequent resolutions. Based on our experience, it is rare that the droplet is missing from StyleGAN images, and indeed the generator fully relies on its existence.
Figure 16: Progressive growing leads to significantly higher frequency content in the intermediate layers. This compromises shift-invariance of the network and makes it harder to localize features precisely in the higher-resolution layers.

We include several large images that illustrate various aspects related to image quality. Figure 11 shows hand-picked examples illustrating the quality and diversity achievable using our method in FFHQ, while Figure 12 shows uncurated results for all datasets mentioned in the paper.

Figures 13 and 14 demonstrate cases where FID and P&R give non-intuitive results, but PPL seems to be more in line with human judgement.

We also include images relating to StyleGAN artifacts. Figure 15 shows a rare case where the blob artifact fails to appear in StyleGAN activations, leading to a seriously broken image. Figure 16 visualizes the activations inside Table 1 configurations a and f. It is evident that progressive growing leads to higher-frequency content in the intermediate layers, compromising shift invariance of the network. We hypothesize that this causes the observed uneven location preference for details when progressive growing is used.

Appendix B Implementation details

We implemented our techniques on top of the official TensorFlow implementation of StyleGAN (https://github.com/NVlabs/stylegan) corresponding to configuration a in Table 1. We kept most of the details unchanged, including the dimensionality of $\mathcal{Z}$ and $\mathcal{W}$ (512), mapping network architecture (8 fully connected layers, 100× lower learning rate), equalized learning rate for all trainable parameters [23], leaky ReLU activation with $\alpha = 0.2$, bilinear filtering [48] in all up/downsampling layers [24], minibatch standard deviation layer at the end of the discriminator [23], exponential moving average of generator weights [23], style mixing regularization [24], non-saturating logistic loss [15] with $R_1$ regularization [30], Adam optimizer [25] with the same hyperparameters ($\beta_1 = 0$, $\beta_2 = 0.99$, $\epsilon = 10^{-8}$), and training datasets [24, 43]. We performed all training runs on NVIDIA DGX-1 with 8 Tesla V100 GPUs using TensorFlow 1.14.0 and cuDNN 7.4.2.

Generator redesign

In configurations b–f we replace the original StyleGAN generator with our revised architecture. In addition to the changes highlighted in Section 2, we initialize the components of the constant input $c$ using $\mathcal{N}(0, 1)$ and simplify the noise broadcast operations to use a single shared scaling factor for all feature maps. We employ weight modulation and demodulation in all convolution layers, except for the output layers (tRGB in Figure 7) where we leave out the demodulation. With 1024×1024 output resolution, the generator contains a total of 18 affine transformation layers, where the first one corresponds to 4×4 resolution, the next two correspond to 8×8, and so forth.

Weight demodulation

Considering the practical implementation of Equations 1 and 3, it is important to note that the resulting set of weights will be different for each sample in a minibatch, which rules out direct implementation using standard convolution primitives. Instead, we choose to employ grouped convolutions [26] that were originally proposed as a way to reduce computational costs by dividing the input feature maps into multiple independent groups, each with their own dedicated set of weights. We implement Equations 1 and 3 by temporarily reshaping the weights and activations so that each convolution sees one sample with $N$ groups — instead of $N$ samples with one group. This approach is highly efficient because the reshaping operations do not actually modify the contents of the weight and activation tensors.
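The following PyTorch sketch shows one way to realize this reshaping trick. It mirrors the idea of the paper's TensorFlow implementation but is not that code, and it assumes odd kernel sizes and a [N, in_ch] style tensor.

```python
import torch
import torch.nn.functional as F

def modulated_conv2d(x, weight, styles, demodulate=True, eps=1e-8):
    """x: [N, in_ch, H, W], weight: [out_ch, in_ch, kh, kw], styles: [N, in_ch]."""
    N, in_ch, H, W = x.shape
    out_ch, _, kh, kw = weight.shape

    # Eq. 1: per-sample modulated weights, shape [N, out_ch, in_ch, kh, kw].
    w = weight.unsqueeze(0) * styles.view(N, 1, in_ch, 1, 1)

    # Eq. 3: demodulate each output feature map of each sample.
    if demodulate:
        sigma_inv = (w.pow(2).sum(dim=[2, 3, 4]) + eps).rsqrt()   # [N, out_ch]
        w = w * sigma_inv.view(N, out_ch, 1, 1, 1)

    # Reshape so one grouped convolution sees "one sample with N groups".
    x = x.reshape(1, N * in_ch, H, W)
    w = w.reshape(N * out_ch, in_ch, kh, kw)
    out = F.conv2d(x, w, padding=kh // 2, groups=N)   # assumes odd kernel size
    return out.reshape(N, out_ch, H, W)
```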

Lazy regularization

In configurations c–f we employ lazy regularization (Section 3.1) by evaluating the regularization terms ($R_1$ and path length) in a separate regularization pass that we execute once every $k$ training iterations. We share the internal state of the Adam optimizer between the main loss and the regularization terms, so that the optimizer first sees gradients from the main loss for $k$ iterations, followed by gradients from the regularization terms for one iteration. To compensate for the fact that we now perform $k+1$ training iterations instead of $k$, we adjust the optimizer hyperparameters $\lambda' = c \cdot \lambda$, $\beta_1' = (\beta_1)^c$, and $\beta_2' = (\beta_2)^c$, where $c = k/(k+1)$. We also multiply the regularization term by $k$ to balance the overall magnitude of its gradients. We use $k = 16$ for the discriminator and $k = 8$ for the generator.
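As a small worked example of this adjustment, under the assumption that lr, beta1, and beta2 are the base Adam hyperparameters:

```python
def lazy_adam_params(lr, beta1, beta2, k):
    """Adjusted Adam hyperparameters when k+1 iterations replace k."""
    c = k / (k + 1)
    return lr * c, beta1 ** c, beta2 ** c

# For the discriminator (k = 16): lr' = 16/17 * lr, beta' = beta ** (16/17).
```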

Path length regularization

Configurations d–f include our new path length regularizer (Section 3.2). We initialize the target scale $a$ to zero and track it on a per-GPU basis as the exponential moving average of the lengths $\lVert J_w^T y \rVert_2$ using decay coefficient $\beta_{\mathrm{pl}} = 0.99$. We weight our regularization term by

$$\gamma_{\mathrm{pl}} = \frac{\ln 2}{r^2 \left( \ln r - \ln 2 \right)}, \qquad (5)$$

where $r$ specifies the output resolution (e.g., $r = 1024$). We have found these parameter choices to work reliably across all configurations and datasets. To ensure that our regularizer interacts correctly with style mixing regularization, we compute it as an average over all individual layers of the synthesis network. Appendix C provides detailed analysis of the effects of our regularizer on the mapping between $\mathcal{W}$ and image space.
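For instance, the weight of Equation 5 can be evaluated as follows (a worked example, not library code):

```python
import math

def path_length_weight(r):
    """gamma_pl = ln 2 / (r^2 (ln r - ln 2)) for output resolution r (Eq. 5)."""
    return math.log(2) / (r ** 2 * (math.log(r) - math.log(2)))

# path_length_weight(1024) gives the weight used when training at 1024x1024.
```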

Progressive growing

In configurations a–d we use progressive growing with the same parameters as Karras et al. [24] (start at 8×8 resolution, train for 600k images per resolution, fade in the next resolution for 600k images, and gradually increase the learning rate towards the higher resolutions). In configurations e–f we disable progressive growing and set the learning rate to a fixed value of $2 \times 10^{-3}$, which we found to provide the best results. In addition, we use output skips in the generator and residual connections in the discriminator as detailed in Section 4.1.

Dataset-specific tuning

Similar to Karras et al. [24], we augment the FFHQ dataset with horizontal flips to effectively increase the number of training images from 70k to 140k, and we do not perform any augmentation for the LSUN datasets. We have found that the optimal choices for the training length and the $R_1$ regularization weight $\gamma$ tend to vary considerably between different datasets and configurations. We use $\gamma = 10$ for all training runs except for configuration e in Table 1, as well as LSUN Church and LSUN Horse in Table 3, where we use $\gamma = 100$. It is possible that further tuning of $\gamma$ could provide additional benefits.

Performance optimizations

We profiled our training runs extensively and found that — in our case — the default primitives for image filtering, up/downsampling, bias addition, and leaky ReLU had surprisingly high overheads in terms of training time and GPU memory footprint. This motivated us to optimize these operations using hand-written CUDA kernels. We implemented filtered up/downsampling as a single fused operation, and bias and activation as another one. In configuration e at 1024×1024 resolution, our optimizations improved the overall training time by about 30% and memory footprint by about 20%.

Appendix C Effects of path length regularization

The path length regularizer described in Section 3.2 is of the form

$$L_{\mathrm{PL}} = \mathbb{E}_{w, y} \left( \lVert J_w^T y \rVert_2 - a \right)^2, \qquad (6)$$

where $y$ is a unit normal distributed random variable in the space of generated images (of dimension $M$, namely the RGB image dimensions), $J_w = \partial g(w)/\partial w$ is the Jacobian matrix of the generator function $g$ at a latent space point $w$, and $a$ is a global value that expresses the desired scale of the gradients.

C.1 Effect on pointwise Jacobians

The value of this prior is minimized when the inner expectation over $y$ is minimized at every latent space point $w$ separately. In this subsection, we show that the inner expectation is (approximately) minimized when the Jacobian matrix $J_w$ is orthogonal, up to a global scaling factor. The general strategy is to use the well-known fact that, in a high dimension $n$, the density of a unit normal distribution is concentrated on a spherical shell of radius $\sqrt{n}$. The inner expectation is then minimized when the matrix scales the function under expectation to have its minima at this radius. This is achieved by any orthogonal matrix (with a suitable global scale that is the same at every $w$).

We begin by considering the inner expectation $\mathbb{E}_y \left( \lVert J_w^T y \rVert_2 - a \right)^2$. We first note that the radial symmetry of the distribution of $y$, as well as of the $\ell_2$ norm, allows us to focus on diagonal matrices only. This is seen using the singular value decomposition $J_w^T = U \tilde{\Sigma} V^T$, where $U$ and $V$ are orthogonal matrices, and $\tilde{\Sigma}$ is a horizontal concatenation of a diagonal matrix $\Sigma$ and a zero matrix [14]. Because rotating a unit normal random variable by an orthogonal matrix leaves the distribution unchanged, and rotating a vector leaves its norm unchanged, the expression simplifies to $\mathbb{E}_y \left( \lVert \tilde{\Sigma} V^T y \rVert_2 - a \right)^2$. Furthermore, the zero matrix in $\tilde{\Sigma}$ drops the dimensions of $V^T y$ beyond the latent dimension $L$, effectively marginalizing its distribution over those dimensions. The marginalized distribution is again a unit normal distribution over the remaining dimensions. We are then left to consider the minimization of the expression $\mathbb{E}_{\tilde{y}} \left( \lVert \Sigma \tilde{y} \rVert_2 - a \right)^2$ over diagonal square matrices $\Sigma$, where $\tilde{y}$ is unit normal distributed in dimension $L$. To summarize, all matrices that share the same singular values with $J_w^T$ produce the same value for the original loss.

Next, we show that this expression is minimized when the diagonal matrix $\Sigma$ has a specific identical value at every diagonal entry, i.e., it is a constant multiple of an identity matrix. We first write the expectation as an integral over the probability density $p(\tilde{y})$ of $\tilde{y}$:

$$\mathbb{E}_{\tilde{y}} \left( \lVert \Sigma \tilde{y} \rVert_2 - a \right)^2 = \int_{\mathbb{R}^L} \left( \lVert \Sigma \tilde{y} \rVert_2 - a \right)^2 p(\tilde{y}) \, d\tilde{y}.$$

Observing the radially symmetric form of the density, we change into polar coordinates $\tilde{y} = r\phi$, where $r \geq 0$ is the distance from the origin, and $\phi$ is a unit vector, i.e., a point on the $(L-1)$-dimensional unit sphere. This change of variables introduces a Jacobian factor $r^{L-1}$. The probability density is then an $L$-dimensional unit normal density expressed in polar coordinates, dependent only on the radius and not on the angle. A standard argument by Taylor approximation shows that when $L$ is high, for any direction $\phi$ the density is well approximated by a (unnormalized) one-dimensional normal density in $r$, centered at $\sqrt{L}$ and of standard deviation $1/\sqrt{2}$ [4]. In other words, the density of the $L$-dimensional unit normal distribution is concentrated on a shell of radius $\sqrt{L}$. Substituting this density into the integral, the loss becomes approximately

$$\int_{\phi} \int_0^\infty \left( \lVert \Sigma\, r\phi \rVert_2 - a \right)^2 \, \mathcal{N}\!\left(r;\, \sqrt{L},\, 1/\sqrt{2}\right) dr \, d\phi, \qquad (7)$$

where the approximation becomes exact in the limit of infinite dimension $L$.

To minimize this loss, we set $\Sigma$ such that the function $\left( \lVert \Sigma\, r\phi \rVert_2 - a \right)^2$ obtains its minimal values on the spherical shell of radius $\sqrt{L}$. This is achieved by $\Sigma = \frac{a}{\sqrt{L}} I$, whereby the function becomes constant in $\phi$ and the expression reduces to

$$A \int_0^\infty \left( \frac{a}{\sqrt{L}}\, r - a \right)^2 \mathcal{N}\!\left(r;\, \sqrt{L},\, 1/\sqrt{2}\right) dr,$$

where $A$ is the surface area of the unit sphere (and, like the other constant factors, irrelevant for the minimization). Note that the zero of the parabola coincides with the maximum of the probability density, and therefore this choice of $\Sigma$ minimizes the inner integral in Eq. 7 separately for every $\phi$.

In summary, we have shown that — assuming a high dimensionality of the latent space — the value of the path length prior (Eq. 6) is minimized when all singular values of the Jacobian matrix of the generator are equal to a global constant at every latent space point $w$, i.e., the Jacobians are orthogonal up to a globally constant scale.

While in theory $a$ merely scales the values of the mapping without changing its properties and could be set to a fixed value, in practice it does affect the dynamics of the training. If the imposed scale does not match the scale induced by the random initialization of the network, the training spends its critical early steps in pushing the weights towards the required overall magnitudes, rather than enforcing the actual objective of interest. This may degrade the internal state of the network weights and lead to sub-optimal performance in later training. Empirically we find that setting a fixed scale reduces the consistency of the training results across training runs and datasets. Instead, we set $a$ dynamically based on a running average of the existing scale of the Jacobians, namely $a \approx \mathbb{E}_{w,y} \lVert J_w^T y \rVert_2$. With this choice the prior targets the scale of the local Jacobians towards whatever global average already exists, rather than forcing a specific global average. This also eliminates the need to measure the appropriate scale of the Jacobians explicitly, as is done by Odena et al. [33] who consider a related conditioning prior.

Figure 17: The mean and standard deviation of the magnitudes of the sorted singular values of the Jacobian matrix evaluated at random latent space points $w$, with the largest singular value normalized to one. In both datasets, path length regularization (config d) and the novel architecture (config f) exhibit better conditioning; notably, the effect is more pronounced in the Cars dataset that contains much more variability, and where path length regularization has a relatively stronger effect on the PPL metric (Table 1).

Figure 17 shows empirically measured magnitudes of singular values of the Jacobian matrix for networks trained with and without path length regularization. While orthogonality is not reached, the eigenvalues of the regularized network are closer to one another, implying better conditioning, with the strength of the effect correlated with the PPL metric (Table 1).

C.2 Effect on global properties of generator mapping

In the previous subsection, we found that the prior encourages the Jacobians of the generator mapping to be everywhere orthogonal. While Figure 17 shows that the mapping does not satisfy this constraint exactly in practice, it is instructive to consider what global properties the constraint implies for mappings that do. Without loss of generality, we assume unit global scale for the matrices to simplify the presentation.

The key property is that a mapping with everywhere orthogonal Jacobians preserves the lengths of curves. To see this, let $w(t)$ parametrize a curve in the latent space. Mapping the curve through the generator $g$, we obtain a curve $x(t) = g(w(t))$ in the space of images. Its arc length is

$$L_x = \int \lVert x'(t) \rVert_2 \, dt, \qquad (8)$$

where the prime denotes the derivative with respect to $t$. By the chain rule, this equals

$$L_x = \int \lVert J_{w(t)}\, w'(t) \rVert_2 \, dt, \qquad (9)$$

where $J_{w(t)}$ is the Jacobian matrix of $g$ evaluated at $w(t)$. By our assumption, the Jacobian is orthogonal, and consequently it leaves the 2-norm of the vector unaffected:

$$L_x = \int \lVert w'(t) \rVert_2 \, dt. \qquad (10)$$

This is the length of the curve $w(t)$ in the latent space, prior to mapping with $g$. Hence, the lengths of $w(t)$ and $x(t)$ are equal, and so $g$ preserves the length of any curve.

In the language of differential geometry, $g$ isometrically embeds the Euclidean latent space $\mathcal{W}$ into a submanifold of the image space — e.g., the manifold of images representing faces, embedded within the space of all possible RGB images. A consequence of isometry is that straight line segments in the latent space are mapped to geodesics, or shortest paths, on the image manifold: a straight line segment $w(t)$ that connects two latent space points cannot be made any shorter, so neither can there be a shorter on-manifold image-space path between the corresponding images than $x(t) = g(w(t))$. For example, a geodesic on the manifold of face images is a continuous morph between two faces that incurs the minimum total amount of change (as measured by difference in RGB space) when one sums up the image difference in each step of the morph.

Isometry is not achieved in practice, as demonstrated in empirical experiments in the previous subsection. The full loss function of the training is a combination of potentially conflicting criteria, and it is not clear if a genuinely isometric mapping would be capable of expressing the image manifold of interest. Nevertheless, a pressure to make the mapping as isometric as possible has desirable consequences. In particular, it discourages unnecessary “detours”: in a non-constrained generator mapping, a latent space interpolation between two similar images may pass through any number of distant images in RGB space. With regularization, the mapping is encouraged to place distant images in different regions of the latent space, so as to obtain short image paths between any two endpoints.

Appendix D Projection method details

Given a target image $x$, we seek to find the corresponding $w \in \mathcal{W}$ and per-layer noise maps denoted $n_i$, where $i$ is the layer index and $r_i \times r_i$ denotes the resolution of the $i$th noise map. The baseline StyleGAN generator in 1024×1024 resolution has 18 noise inputs, i.e., two for each resolution from 4×4 to 1024×1024 pixels. Our improved architecture has one fewer noise input because we do not add noise to the learned 4×4 constant (Figure 2).

Before optimization, we compute $\mu_w = \mathbb{E}_z f(z)$ by running 10,000 random latent codes $z$ through the mapping network $f$. We also approximate the scale of $\mathcal{W}$ by computing $\sigma_w^2 = \mathbb{E}_z \lVert f(z) - \mu_w \rVert_2^2$, i.e., the average square Euclidean distance to the center.

At the beginning of optimization, we initialize $w = \mu_w$ and $n_i = \mathcal{N}(0, I)$ for all $i$. The trainable parameters are the components of $w$ as well as all components in all noise maps $n_i$. The optimization is run for 1000 iterations using the Adam optimizer [25] with default parameters. The maximum learning rate is 0.1, and it is ramped up from zero linearly during the first 50 iterations and ramped down to zero using a cosine schedule during the last 250 iterations. In the first three quarters of the optimization we add Gaussian noise to $w$ when evaluating the loss function as $\tilde{w} = w + \mathcal{N}(0,\, 0.05\,\sigma_w\, t)$, where $t$ goes from one to zero during the first 750 iterations. This adds stochasticity to the optimization and stabilizes the finding of the global optimum.
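The learning-rate schedule described above can be sketched as follows; the function name and argument defaults are illustrative assumptions matching the numbers in the text.

```python
import math

def projection_lr(step, total=1000, lr_max=0.1, rampup=50, rampdown=250):
    """Linear ramp-up over the first 50 iterations, cosine ramp-down to zero
    over the last 250, constant lr_max in between."""
    lr = lr_max * min(1.0, step / rampup)
    if step > total - rampdown:
        t = (total - step) / rampdown          # goes 1 -> 0 over the ramp-down
        lr = lr * 0.5 * (1.0 - math.cos(math.pi * t))
    return lr
```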

Given that we are explicitly optimizing the noise maps, we must be careful to avoid the optimization from sneaking actual signal into them. Thus we include several noise map regularization terms in our loss function, in addition to an image quality term. The image quality term is the LPIPS [49] distance between the target image $x$ and the synthesized image: $L_{\mathrm{image}} = D_{\mathrm{LPIPS}}\!\left[x,\, g(\tilde{w}, n_0, n_1, \ldots)\right]$. For increased performance and stability, we downsample both images to 256×256 resolution before computing the LPIPS distance. Regularization of the noise maps is performed on multiple resolution scales. For this purpose, we form for each noise map greater than 8×8 in size a pyramid down to 8×8 resolution by averaging 2×2 pixel neighborhoods and multiplying by 2 at each step to retain the expected unit variance. These downsampled noise maps are used for regularization only and have no part in synthesis.

Panels in Figure 18, left to right: generated target image without and with noise regularization; real target image without and with noise regularization.
Figure 18: Effect of noise regularization in latent-space projection where we also optimize the contents of the noise inputs of the synthesis network. Top to bottom: target image, re-synthesized image, contents of two noise maps at different resolutions. When regularization is turned off in this test, we only normalize the noise maps to zero mean and unit variance, which leads the optimization to sneak signal into the noise maps. Enabling the noise regularization prevents this. The model used here corresponds to configuration f in Table 1.

Let us denote the original noise maps by $n_{i,0}$ and the downsampled versions by $n_{i,j>0}$. Similarly, let $r_{i,j}$ be the resolution of an original ($j = 0$) or downsampled ($j > 0$) noise map, so that $r_{i,j} = r_i / 2^j$. The regularization term for noise map $n_{i,j}$ is then

$$L_{i,j} = \left( \frac{1}{r_{i,j}^2} \sum_{x,y} n_{i,j}(x, y)\, n_{i,j}(x-1, y) \right)^2 + \left( \frac{1}{r_{i,j}^2} \sum_{x,y} n_{i,j}(x, y)\, n_{i,j}(x, y-1) \right)^2,$$

where the noise map is considered to wrap at the edges. The regularization term is thus the sum of squares of the resolution-normalized autocorrelation coefficients at one-pixel shifts horizontally and vertically, which should be zero for a normally distributed signal. The overall loss term is then $L = L_{\mathrm{image}} + \alpha \sum_{i,j} L_{i,j}$. In all our tests, we have used noise regularization weight $\alpha = 10^5$. In addition, we renormalize all noise maps to zero mean and unit variance after each optimization step. Figure 18 illustrates the effect of noise regularization on the resulting noise maps.
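A hedged PyTorch sketch of this noise regularization follows; it implements the pyramid and the wrap-around one-pixel autocorrelation terms described above, with illustrative names rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def noise_regularization(noise_maps):
    """noise_maps: list of tensors of shape [1, 1, r, r]. Returns the sum of
    squared, resolution-normalized autocorrelations at one-pixel shifts,
    accumulated over a pyramid of each map down to 8x8."""
    loss = 0.0
    for n in noise_maps:
        while True:
            # torch.roll gives the wrap-around shift; mean() applies the 1/r^2 factor.
            loss = loss + (n * torch.roll(n, shifts=1, dims=3)).mean() ** 2 \
                        + (n * torch.roll(n, shifts=1, dims=2)).mean() ** 2
            if n.shape[3] <= 8:
                break
            # Average 2x2 neighborhoods and multiply by 2 to retain unit variance.
            n = F.avg_pool2d(n, kernel_size=2) * 2
    return loss
```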