1 Introduction
In recent years, unconditional Generative Adversarial Networks (GANs) have achieved impressive photorealism for image synthesis tasks.
While this has hampered the identification of generated images based on visual cues, multiple recent works show that it is straightforward to distinguish real and generated images based on their high-frequency content Jiang2020ARXIV ; Wang2020CVPR ; Dzanic2020NIPS ; Durall2020CVPR ; LiuZ2020CVPR ; Zhang2019IEEE ; Frank2020ICML . This has aroused considerable interest because it reveals a fundamental problem in state-of-the-art GANs:
Existing approaches evidently struggle to learn the correct data distribution.
While GAN training is notoriously hard, learning the distribution of high-frequency content is particularly problematic Jiang2020ARXIV ; Wang2020CVPR ; Dzanic2020NIPS ; Durall2020CVPR ; LiuZ2020CVPR ; Zhang2019IEEE ; Frank2020ICML .
This indicates a systematic problem in existing approaches that could make training suboptimal.
For example, if generating high frequencies is difficult for the generator but detecting them is straightforward for the discriminator, this imbalance could impair the stability of the training.
Conversely, if the discriminator struggles to detect high frequencies, generating fine details also becomes more difficult, which could impede convergence.
Therefore, we argue that it is important to better understand the expressivity of both the generator and the discriminator.
Indeed, numerous works suggest that the architecture of the generator can hamper the generation of high frequencies Chandrasegaran2021CVPR ; Durall2020CVPR ; Frank2020ICML ; Khayatkhoei2020ARXIV ; Jiang2020ARXIV ; Gal2021ARXIV .
However, another line of work questions the quality of the training signal provided by the discriminator Jung2021AAAI ; Chen2021AAAI ; Gal2021ARXIV .
But is there indeed a frequency bias that prevents learning high frequencies in existing GAN models?
This is a non-trivial question, as GAN training involves two players, and the architectures of both the generator and discriminator, the loss functions, as well as the dataset statistics can all affect the generated images. To narrow down potential factors, we first develop isolated testbeds for both the generator and discriminator. Then, we extend our findings to state-of-the-art GANs on large-scale datasets.



Generator: The majority of works attribute the high-frequency artifacts to the upsampling operations in the generator Chandrasegaran2021CVPR ; Durall2020CVPR ; Frank2020ICML ; Odena2016Distill ; Zhang2019IEEE . Upsampling typically follows a predefined scheme, e.g., bilinear or nearest neighbor interpolation, or zero insertion between pixels. While the latter introduces too many high frequencies, interpolation has been shown to reduce artifacts Chandrasegaran2021CVPR ; Durall2020CVPR . But is interpolation always a good scheme for upsampling? Our experiments indicate that both bilinear and nearest neighbor upsampling bias the generator towards predicting little high-frequency content, see Fig. 0(b). While zero insertion can be more flexible, it is prone to introducing checkerboard artifacts, i.e., too many high frequencies Durall2020CVPR ; Chandrasegaran2021CVPR ; Odena2016Distill . But why do the learnable filters in subsequent layers not learn to compensate for these artifacts? Depending on the training objective, checkerboard artifacts might be penalized only slightly. Our experiments evidence that when the loss function is sensitive to these artifacts, the generator is indeed capable of compensating for them.

Discriminator: This leads us to question the quality of the training signal: Can the discriminator detect high frequencies and provide the necessary supervision? Chen et al. Chen2021AAAI argue that the discriminator cannot detect high-frequency information due to its downsampling operations. Our experiments corroborate that downsampling can introduce artifacts in the training signal. However, we also observe that the discriminator can indeed provide a meaningful training signal for the spectral statistics at high frequencies when their magnitude is large enough, e.g., Fig. 0(c). Other works propose to train both the generator and discriminator in wavelet space Gal2021ARXIV or to add a discriminator on the spectrum of the images to reduce high-frequency artifacts Jung2021AAAI . Motivated by these approaches, we ask: Is it enough to consider only the spatial domain? In agreement with these works, our results suggest that the discriminator can benefit from (additional) input in frequency-based domains. Further, our testbed yields insights on which of these measures is most effective and reveals that the training signal from the discriminator might remain problematic.
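The effect of the upsampling scheme on the spectrum can be illustrated with a short 1-D sketch (our own illustration, not an experiment from the paper): zero insertion replicates the input spectrum, creating a spurious high-frequency copy, while nearest neighbor upsampling attenuates this replica:

```python
import numpy as np

x = np.cos(2 * np.pi * 4 * np.arange(32) / 32)   # low-frequency tone, 32 samples

zi = np.zeros(64)                                # zero insertion: interleave zeros
zi[::2] = x
nn = np.repeat(x, 2)                             # nearest neighbor: repeat samples

mag_zi = np.abs(np.fft.rfft(zi))
mag_nn = np.abs(np.fft.rfft(nn))
# Bin 4 holds the original tone; bin 28 holds its mirrored replica near Nyquist.
# Zero insertion keeps the replica at full strength (checkerboard component),
# while nearest neighbor strongly attenuates it (few high frequencies).
print(round(mag_zi[28], 1), round(mag_nn[28], 1))  # → 16.0 6.2
```

Learnable filters after the upsampling layer would have to remove the replica (for zero insertion) or restore attenuated detail (for interpolation), which is exactly what the loss must incentivize.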
Contributions: We take a sober look at explanations for high-frequency artifacts in generated images and unify the efforts that have been made so far. In particular, we develop isolated testbeds for both the generator and the discriminator, which allow us to analyze what makes existing efforts effective. The conclusions drawn in this paper shed new light on limitations of common design choices for both generator and discriminator: i) Bilinear and nearest neighbor upsampling bias the generator towards predicting little high-frequency content. ii) Zero insertion is prone to producing checkerboard artifacts in the generated images. However, with a suitable loss function, the learnable filters of the generator can compensate for the artifacts. This indicates that the upsampling in the generator alone cannot explain the spectral discrepancies. iii) In general, the discriminator is able to detect high frequencies and provide supervision to learn the correct spectral statistics. However, while the exponential decay of the spectrum for natural images creates the impression that the discriminator is insensitive to high frequencies, it actually struggles with low magnitudes. iv) We find that all commonly used downsampling operations in the discriminator can impair the quality of the training signal.
Lastly, we demonstrate that these findings extend to full GAN training. In agreement with Jung2021AAAI , we find that spectral discriminators can bring us one step closer to matching the spectral statistics. Nonetheless, generated images remain straightforward to classify based on their spectral statistics alone. While recent works on GANs largely focus on improving the generator, e.g., Karras2018ICLR ; Karras2019CVPR ; Karras2020CVPR , our findings suggest that the design of the discriminator plays an equally important role and deserves more attention in future work. We believe that our testbeds for both the generator and discriminator can be a useful tool for future investigations. We release our code and dataset at https://github.com/autonomousvision/frequency_bias.

2 Preliminaries
GAN Training: A generative adversarial network consists of a generator and a discriminator that are trained jointly in a two-player game. Given latent variables $z \sim p(z)$, the generator synthesizes images $G(z)$ while the discriminator tries to distinguish the synthesized images from real images $x$, sampled from the data distribution $p_\mathcal{D}$. Let $G_\theta$ and $D_\psi$ denote a generator and a discriminator with parameters $\theta$ and $\psi$, respectively. The parameters of the models are updated in alternating steps using a non-saturating GAN objective Goodfellow2014NIPS which is often combined with R1-regularization to stabilize training Mescheder2018ICML :
$$\mathcal{L}(\theta, \psi) = \mathbb{E}_{z \sim p(z)}\left[f\left(D_\psi(G_\theta(z))\right)\right] + \mathbb{E}_{x \sim p_\mathcal{D}}\left[f\left(-D_\psi(x)\right) + \frac{\gamma}{2}\left\lVert \nabla D_\psi(x) \right\rVert^2\right] \quad (1)$$
where $f(t) = -\log\left(1 + \exp(-t)\right)$ and $\gamma$ controls the strength of the regularizer.
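For concreteness, the objective can be sketched in NumPy; here a linear discriminator stands in for a CNN (an assumption of this sketch: its input gradient, needed for the R1 term, is simply its weight vector), and we use the numerically stable softplus form of the non-saturating loss:

```python
import numpy as np

def softplus(t):
    # numerically stable log(1 + exp(t))
    return np.logaddexp(0.0, t)

def d_loss_r1(w, b, x_real, x_fake, gamma=10.0):
    """Discriminator loss in the softplus form of the non-saturating
    objective, plus the R1 penalty. A linear D(x) = w.x + b stands in
    for a CNN, so the input gradient used by R1 is simply w."""
    d_real = x_real @ w + b
    d_fake = x_fake @ w + b
    loss = softplus(-d_real).mean() + softplus(d_fake).mean()
    r1 = 0.5 * gamma * np.sum(w ** 2)  # gamma/2 * ||grad_x D(x)||^2
    return loss + r1

rng = np.random.default_rng(0)
x_real = rng.normal(1.0, 0.1, size=(16, 4))
x_fake = rng.normal(-1.0, 0.1, size=(16, 4))
w, b = 0.01 * rng.normal(size=4), 0.0
print(d_loss_r1(w, b, x_real, x_fake))  # ≈ 2*log(2) plus a small R1 term at init
```

The softplus terms are the standard numerically stable implementation of the $f$-based objective; $\gamma = 10$ is an illustrative value.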
Image Processing in the Frequency Domain:
The discrete 2D Fourier transform maps a gray-scale image $I \in \mathbb{R}^{H \times W}$ to the frequency domain:
$$\tilde{I}[k, l] = \sum_{h=0}^{H-1} \sum_{w=0}^{W-1} e^{-2\pi i \frac{hk}{H}} \, e^{-2\pi i \frac{wl}{W}} \, I[h, w] \quad (2)$$
for $k = 0, \dots, H-1$ and $l = 0, \dots, W-1$. The power spectral density is estimated by the squared magnitudes of the Fourier components, $S(k, l) = |\tilde{I}[k, l]|^2$. Similar to Durall2020CVPR ; Dzanic2020NIPS we consider the reduced spectrum $\tilde{S}(r)$, i.e., the azimuthal average over the spectrum in normalized polar coordinates $(r, \theta)$,
$$\tilde{S}(r) = \frac{1}{2\pi} \int_0^{2\pi} S\left(r \cos\theta, \, r \sin\theta\right) d\theta \quad (3)$$
Since images are discretized in space, the maximum frequency is determined by the Nyquist frequency. For a square image, $H = W$, it is given by $f_{nyq} = \sqrt{2}\, H / 2$, i.e., $r = 1$ for $f = f_{nyq}$.
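A minimal NumPy sketch of the reduced spectrum, binning the 2D spectral density by normalized radius (the binning resolution is our choice, not prescribed by Eq. (3)):

```python
import numpy as np

def reduced_spectrum(img, n_bins=None):
    """Azimuthally averaged power spectrum of a square gray-scale image,
    a sketch of Eq. (3): bin the 2D spectral density by normalized radius.
    The number of bins is an illustrative choice."""
    h, w = img.shape
    assert h == w, "sketch assumes square images"
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2   # spectral density
    k = np.fft.fftshift(np.fft.fftfreq(h))                  # cycles/pixel
    r = np.sqrt(k[None, :] ** 2 + k[:, None] ** 2)
    r = r / r.max()             # r = 1 at the corner of the frequency plane
    n_bins = n_bins or h // 2
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(r.ravel(), edges) - 1, 0, n_bins - 1)
    counts = np.bincount(idx, minlength=n_bins)
    sums = np.bincount(idx, weights=spec.ravel(), minlength=n_bins)
    return sums / counts

s = reduced_spectrum(np.random.default_rng(0).normal(size=(64, 64)))
print(s.shape)  # → (32,)
```

Plotting `s` over the normalized radius gives the reduced-spectrum curves used throughout the paper's figures.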
Spectral Classifier: To detect generated images, Dzanic et al. Dzanic2020NIPS propose to classify real and generated images based on their reduced spectrum. More specifically, they fit a power-law function to the tail of each reduced spectrum for frequencies above a given threshold and train a binary classifier on the fit parameters of each spectrum.
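A sketch of this feature extraction, assuming a simple least-squares fit in log-log space (the classifier on top of the features, e.g., a KNN, is omitted; the threshold is an illustrative value):

```python
import numpy as np

def tail_powerlaw_features(reduced_spec, r, thresh=0.75):
    """Fit S(r) ~ a * r^b to the spectral tail (r >= thresh) by a
    least-squares line in log-log space; (log a, b) serve as the
    two classifier features. thresh is an illustrative choice."""
    mask = r >= thresh
    log_r, log_s = np.log(r[mask]), np.log(reduced_spec[mask])
    b, log_a = np.polyfit(log_r, log_s, 1)   # slope b, intercept log a
    return log_a, b

r = np.linspace(0.05, 1.0, 32)
spec = 2.0 * r ** -1.8                       # synthetic power-law spectrum
log_a, b = tail_powerlaw_features(spec, r)
print(round(float(np.exp(log_a)), 3), round(float(b), 3))  # → 2.0 -1.8
```

On an exact power law the fit recovers the amplitude and decay rate; on real spectra the two fit parameters summarize how the tail decays.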
3 Are high frequencies more difficult to generate?
In this section, we investigate if there is a frequency bias in the generator that impedes the generation of high frequencies. To this end, we first consider the generator in an isolated setting assuming pixel-level supervision – otherwise, even with a perfect discriminator, it would be impossible to generate a correct image. We extend our findings to the full GAN setting in Section 5.
There are two lines of argumentation in existing works: The first investigates artifacts that arise from the upsampling operations in the generator Chandrasegaran2021CVPR ; Durall2020CVPR ; Frank2020ICML . These works analyze the spectral properties of the generator predictions at convergence. We extend this analysis from the perspective of a frequency bias to see if low frequencies are learned earlier during training.
The second line of work argues for a frequency bias of the learnable filters Khayatkhoei2020ARXIV ; Jiang2020ARXIV . Motivated by their analysis, we investigate if learnable filters are at all able to compensate for the artifacts introduced by upsampling.
Experimental Setting:
To isolate the generator from the discriminator, we consider a conditional reconstruction task. In particular, we take 10 images from a dataset and pair them with 10 latent codes drawn from a normal distribution. Given a latent code, the generator is optimized to reconstruct the corresponding image with a pixel-wise L2-loss:
$$\mathcal{L}(\theta) = \sum_{i=1}^{10} \left\lVert G_\theta(z_i) - x_i \right\rVert_2^2 \quad (4)$$
We choose the generator from PGAN Karras2018ICLR because of its simple architecture comprising only convolutions and upsampling operations. Further, when combined with R1regularization it trains stably in a GAN setting Mescheder2018ICML , allowing us to use the same generator in Section 5. We reduce the number of channels for faster training because our primary focus is on the spectral properties. As upsampling operations, we investigate bilinear and nearest neighbor interpolation, zero insertion, and reshaping in the channel dimension ShiCVPR2016 . We ensure that all networks have a similar number of parameters. More details are provided in the supplementary.
Datasets: In natural images, the spectral density follows an exponential decay. To isolate the effects of frequency range and magnitude, we create a Toyset of images with two Gaussian peaks of equal magnitude in the spectrum, see Fig. 1(a), 1(b). We place the peaks relative to the Nyquist frequency and, together with a uniformly distributed phase, apply the inverse Fourier transform to create the images. We further test our setting on natural images with a downsampled version of CelebA Liu2015ICCVa . Both datasets have the same resolution.

Evaluation Metrics: In the spatial domain, we report PSNR on the RGB values; in the frequency domain, we evaluate the reduced spectrum, see Eq. (3). To analyze frequency-dependent convergence, we further visualize the evolution of the spectrum similar to Rahaman2019ICML . Here, the x-axis denotes the (frequency) radius in normalized polar coordinates and the y-axis denotes the training iterations. The color corresponds to the relative error of the average predicted reduced spectrum wrt. the ground truth, where positive and negative values indicate too many and too few predicted frequencies, respectively. We clip the colorbar where the relative error becomes too large.
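The Toyset construction can be sketched as follows; the peak positions and widths here are illustrative placeholders, not the paper's exact settings, and this sketch simply keeps the real part of the inverse FFT rather than enforcing Hermitian symmetry:

```python
import numpy as np

def toy_image(n=64, r_low=0.3, r_high=0.8, sigma=0.02, seed=0):
    """Sample an image whose spectrum has two Gaussian peaks of equal
    magnitude. Peak radii and width are illustrative placeholders.
    Magnitudes sit on two rings in frequency space, the phase is
    uniform, and we keep the real part of the inverse FFT (this
    sketch does not enforce Hermitian symmetry)."""
    rng = np.random.default_rng(seed)
    k = np.fft.fftshift(np.fft.fftfreq(n))
    r = np.sqrt(k[None, :] ** 2 + k[:, None] ** 2) / np.sqrt(0.5)  # r = 1 at the corner
    mag = np.exp(-(r - r_low) ** 2 / (2 * sigma ** 2)) \
        + np.exp(-(r - r_high) ** 2 / (2 * sigma ** 2))
    phase = rng.uniform(0.0, 2.0 * np.pi, size=(n, n))
    return np.real(np.fft.ifft2(np.fft.ifftshift(mag * np.exp(1j * phase))))

img = toy_image()
print(img.shape)  # → (64, 64)
```

Because the magnitudes are placed on rings at fixed radii, the reduced spectrum of such images exhibits exactly the two controlled peaks used in the analysis.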
Do generators exhibit a frequency bias? The spectral evolution in Fig. 2 shows different behavior for bilinear and nearest neighbor upsampling compared to zero insertion and reshaping. Particularly on the Toyset, a generator with bilinear or nearest neighbor upsampling learns the lower frequencies earlier in training than the high frequencies, see Fig. 1(c) and 1(d). While the generator with bilinear upsampling struggles with high frequencies throughout training, with nearest neighbor upsampling it eventually fits the high-frequency peak, suggesting that its bias towards little high-frequency content is not as strong. In contrast, a generator with zero insertion or reshaping learns both peaks at approximately equal speed but is prone to generating checkerboard artifacts, as indicated by the large error at the highest frequency in Fig. 1(e) and 1(f). Between the peaks of the Toyset, where frequencies have small magnitudes, all methods struggle to match the statistics of the training data because the L2-loss penalizes errors at frequencies with low magnitude only slightly. This explains why for CelebA there is an overall trend towards learning lower frequencies earlier during training. The PSNR values in Table 2 support these findings in the spatial domain. The lack of high frequencies for bilinear and nearest neighbor upsampling manifests in a low PSNR, particularly for the Toyset which contains many high frequencies by construction.





We show one sample from the dataset in the first column and the mean and standard deviation of the reduced spectra of all 10 samples in the second column for reference. For the Toyset, the dashed lines mark the means of the two Gaussian peaks. Upsampling with bilinear or nearest neighbor interpolation biases the generator towards predicting little high-frequency content. Conversely, zero insertion and reshaping are prone to introducing checkerboard artifacts, indicated by the large errors at the highest frequency. The color corresponds to the relative error of the average predicted reduced spectrum wrt. the ground truth and is clipped when the relative error becomes too large.

Can the generator learn to compensate for the artifacts? Since the L2-loss is less sensitive to frequencies with low magnitudes (see supplementary for a formal derivation), we now add an L2-loss on the logarithm of the reduced spectrum:
$$\mathcal{L}_{spec}(\theta) = \sum_{i=1}^{10} \left\lVert \log \tilde{S}_{G_\theta(z_i)} - \log \tilde{S}_{x_i} \right\rVert_2^2 \quad (5)$$
The logarithm penalizes errors at low magnitudes more strongly and can therefore reduce the checkerboard artifacts introduced by zero insertion and reshaping, see Fig. 3. Hence, given a suitable objective, learnable filters can indeed compensate for high-frequency artifacts introduced by zero insertion and reshaping. Interestingly, bilinear upsampling does not benefit as much from the spectral loss. This suggests that a strong bias towards little high-frequency content can be more difficult to compensate for. As expected, the additional loss does not alter the evaluation in the spatial domain significantly and yields similar PSNR values in Table 2.
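A sketch of this spectral loss on precomputed reduced spectra illustrates why the logarithm helps; the small epsilon for numerical stability is our addition:

```python
import numpy as np

def spectral_log_l2(spec_fake, spec_real, eps=1e-8):
    """L2 distance between log reduced spectra (a sketch of the loss in
    Eq. (5)); eps is our addition for numerical stability."""
    return np.mean((np.log(spec_fake + eps) - np.log(spec_real + eps)) ** 2)

real = np.array([1e2, 1.0, 1e-4])        # exponentially decaying spectrum
fake = np.array([1e2, 1.0, 1e-3])        # 10x error at a tiny magnitude
plain_l2 = np.mean((fake - real) ** 2)   # barely registers the error
log_l2 = spectral_log_l2(fake, real)     # penalizes the relative error
print(log_l2 > 100 * plain_l2)  # → True
```

A tenfold error at a low-magnitude frequency is nearly invisible to a plain L2 distance on the spectrum but is heavily penalized in log space, mirroring the argument in the text.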
Implications: Overall, these results lead us to conclude that different upsampling operations bias the generator towards different spectral properties. Nearest neighbor and particularly bilinear upsampling introduce a bias towards fitting functions with little high-frequency content. In Section 5, we will see that this can also be beneficial when working with natural images, as their spectral density follows an exponential decay. On the other hand, zero insertion and reshaping introduce a bias towards checkerboard artifacts. However, with a suitable loss function, the network filters can learn to compensate for these artifacts. Therefore, the upsampling in the generator alone cannot explain the spectral discrepancies. This suggests that the training signal provided by the discriminator might be suboptimal in the first place.
4 Can the discriminator provide a good training signal?
In this section, we investigate the quality of the training signal that the discriminator can provide. Only a few existing works consider the discriminator as a cause for the spectral discrepancies. Chen et al. Chen2021AAAI attribute the high-frequency artifacts to information loss in the downsampling operations. In particular, they analyze how high frequencies affect the output of the discriminator at convergence. Instead, we consider how downsampling affects the training signal by assessing the input to the discriminator. Further, we analyze how the spectral statistics evolve during training to see if downsampling introduces a bias towards correcting low frequencies earlier than high frequencies. To reduce spectral discrepancies, Chen et al. Chen2021AAAI propose a regularizer based on the reduced spectrum to perform hard example mining in the frequency domain, cf. Eq. (3). Similarly, Jung et al. Jung2021AAAI define an additional discriminator on the reduced spectrum. While spectral discrepancies are not the main focus of their work, Gal et al. Gal2021ARXIV also argue for "unfavorable loss functions" and propose to train both the generator and discriminator in wavelet space. In the second part of this section, we therefore aim to understand what makes (additional) inputs from other domains valuable and which of the proposed measures are most effective at correcting the high-frequency artifacts.
Experimental Setting: Similar to the generator experiments in Section 3, we propose a conditional reconstruction task to assess the quality of the training signal independently of the generator architecture. More specifically, we train a class-conditional GAN with a single sample per class. To minimize the impact of the generator, we directly optimize the pixel values of the fake images. In practice, we pair 10 images with 10 labels as training data and optimize 10 learnable tensors conditioned on the labels. We optimize the learnable tensors and the discriminator weights in an alternating fashion using the GAN two-player game. Similar to Section 3, we use the PGAN discriminator Karras2018ICLR with a reduced number of channels because our primary focus is on the spectral properties. As downsampling operations, we investigate strided convolution, average pooling, and blurring with subsequent average pooling Zhang2019ICML . We further train an MLP on the flattened input image as a baseline without any downsampling operations. When adding a spectrum discriminator, we weigh both discriminator losses equally as in Jung2021AAAI . Note that training a GAN on so few images is a non-trivial task that requires careful tuning to train stably. We ensured that this is the case for our results, see supplementary for details.

Datasets and Metrics: We use the same datasets and metrics as in Section 3 and evaluate them on the reconstructions guided by the gradients from the discriminator. This allows us to assess the quality of the training signal.
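As an illustration of this testbed, the following sketch alternates updates between one learnable "image" and a linear stand-in discriminator with manual gradients; this is a deliberate simplification of the paper's PGAN discriminator, with R1 reducing to weight decay for the linear case:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

rng = np.random.default_rng(0)
x_real = rng.normal(1.0, 0.1, size=16)   # one "real image", flattened
x_fake = rng.normal(0.0, 0.1, size=16)   # learnable pixel tensor, starts as noise
w, b = np.zeros(16), 0.0                 # linear stand-in discriminator D(x) = w.x + b
lr, gamma = 0.1, 0.5

for _ in range(500):
    # Discriminator step: ascend log D(real) + log(1 - D(fake));
    # for a linear D, the R1 penalty reduces to weight decay on w.
    gw = sigmoid(-(x_real @ w + b)) * x_real \
         - sigmoid(x_fake @ w + b) * x_fake - gamma * w
    gb = sigmoid(-(x_real @ w + b)) - sigmoid(x_fake @ w + b)
    w, b = w + lr * gw, b + lr * gb
    # "Generator" step: non-saturating update applied directly to the
    # learnable pixels, ascending log D(fake).
    x_fake = x_fake + lr * sigmoid(-(x_fake @ w + b)) * w

print(round(float(x_fake.mean()), 2))  # the fake pixels drift towards the real image
```

Because the pixels are updated only along the discriminator's input gradient, any artifact in that gradient appears directly in the reconstruction, which is exactly what this testbed exploits.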
How does downsampling affect the training signal?
Fig. 5 shows the evolution of the average spectrum of the reconstructed images.
On the Toyset, both the low and highfrequency peaks are learned approximately at an equal pace.
This suggests that the discriminator can indeed detect and correct the spectral statistics at high frequencies, regardless of the choice of downsampling operation.
However, all approaches struggle to correct frequencies of low magnitude.
For natural images, the power spectrum decays exponentially, which creates the impression that the discriminator struggles with high frequencies, while it actually struggles with the low magnitude of the high-frequency content.
In image space, strided convolution achieves the best PSNR amongst the downsampling operations.
However, it is much lower than the values for the generator in Table 2, which reflects the harder task due to instance- instead of pixel-level supervision.
Fig. 6 shows the learned tensors at the last training iteration. While the reconstruction from the MLP is reasonably good, the downsampling operations introduce artifacts in the training signal provided by the discriminator.
Is it enough to train in the spatial domain?
In Fig. 5 we compare the spectrum evolution for an additional spectral discriminator (SD) Jung2021AAAI , hard example mining in the frequency domain (F-Mining) Chen2021AAAI , and training in wavelet space (Wavelet) Gal2021ARXIV .
Since the spectral discriminator is directly applied to the reduced spectrum, it can guide the generator on the spectral statistics of the dataset and greatly reduces the error in Fig. 4(b).
In contrast, F-Mining and wavelets only slightly reduce the spectral discrepancies, e.g., on CelebA only in a limited frequency range.
While F-Mining increases the weight of samples with poor spectral realness, it does not make the discriminator more sensitive to slight changes in image space. This is in agreement with the findings in Chen2021AAAI .
Wavelets separate the input wrt. the frequency but not wrt. the magnitude which could explain why low magnitude artifacts remain difficult to detect.
In the spatial domain, however, the reconstructions with the spectral discriminator in Fig. 5(e) and 5(f) reveal a caveat: While the spectrum is better aligned, the reconstruction with the additional spectral discriminator qualitatively becomes worse.
As the spectrum computation and the azimuthal integration discard information, images with the same reduced spectra can look very different. Consequently, penalizing the reduced spectrum alone might not be sufficient for improving the training signal.
This is also reflected in the PSNR in Table 2 which, except for the MLP, remains largely unaffected by the spectral discriminator.
In the supplementary, we verify that replacing the reduced spectrum with the full Fourier transform indeed improves the image fidelity. However, as the spectral discriminator is an MLP, this does not trivially scale to real-world settings.
Implications: In contrast to existing hypotheses, our findings evidence that the discriminator generally struggles with frequencies that have a low magnitude, but high frequencies are not per se more difficult to detect. An additional discriminator on the reduced spectrum can greatly facilitate learning the spectral statistics of the data but might not improve the image fidelity. Even in our simple testbed, none of the convolutional architectures with downsampling is able to provide artifact-free supervision. This indicates that the discriminator might play a more important role in reducing high-frequency artifacts than currently anticipated in the field. While a simple spectral discriminator provides a good first step, we conclude that more work in this area is required to solve the problem. We believe that one key is to reduce the downsampling artifacts, e.g., by exploring alternative downsampling operations or by considering approaches that allow for pixel-level supervision of the discriminator as in Shocher2020ARXIV or Schonfeld2020CVPR .
5 Improving GANs
We now analyze full GAN training, i.e., when generator and discriminator are two competing neural networks. First, we extend the discriminator analysis from Section 4 and verify whether these results also hold for full GAN training. Next, we investigate if the most effective discriminator is able to resolve high-frequency artifacts of the different upsampling strategies discussed in Section 3. Lastly, we analyze its effect on StyleGAN2 Karras2020CVPR , a state-of-the-art GAN model, which shows the characteristic peak at the highest frequencies in the reduced spectrum.

Experimental Setting: We start by combining the architectures from Section 3 and Section 4 to obtain PGAN Karras2018ICLR with a reduced number of channels, which we train from scratch with R1-regularization and without progressive growing Karras2020CVPR ; Mescheder2018ICML until the discriminator has seen a fixed number of images. Next, we extend our analysis to StyleGAN2 at higher resolutions. The StyleGAN2 generator uses bilinear upsampling which, according to our previous findings, should not cause the elevated amount of high frequencies. Therefore, we finetune pretrained models with the most effective discriminator following the training protocol from Karras2020NIPS .

Datasets: We consider large-scale real-world datasets in this section. We train our version of PGAN on a downsampled version of FFHQ Karras2019CVPR and on a downsampled version of 200k images from LSUN Cats Yu2015ARXIV . We finetune StyleGAN2 on LSUN Cats, AFHQ Dog Choi2020CVPR , and FFHQ at higher resolutions. For AFHQ Dog, we use adaptive discriminator augmentation due to the small size of the dataset Karras2020NIPS .
Evaluation Metrics: In the spatial domain, we report FID Heusel2017NIPS on the full dataset and 50k generated images. To measure spectral discrepancies we deploy the spectral classifier described in Section 2. We follow the proposal in Chandrasegaran2021CVPR and replace the KNN classifier with an SVM to obtain a stronger classifier. We train and evaluate the classifier on disjoint sets containing equal numbers of real and fake images. Here, a classification accuracy of around 50% is ideal, as this indicates a perfect generator because the classifier is forced to make a random guess.

How do the isolated settings transfer to full GAN training?
We ablate PGAN with discriminators on different input domains in Table 4.
In agreement with the findings from Section 4, the wavelet discriminator (Wavelet) and hard frequency mining (F-Mining) improve the spectral statistics only slightly. Hence, generated images can still be classified with high accuracy. The most effective method to learn the spectral statistics remains the additional spectral discriminator (SD), as indicated by the lower accuracy of the spectral classifier on all datasets. Consistent with our observation on the testbed, the image quality in the spatial domain remains largely unaffected.
Recalling the implication of Section 3, that the generator can learn to compensate for high-frequency artifacts given a suitable training objective, we now investigate whether the spectral discriminator satisfies this requirement for different upsampling operations in the generator.
Consistent with our observation on the generator testbed, Fig. 7 shows that the spectral discriminator is also able to significantly reduce the peak at the highest frequency for both zero insertion and reshaping upsampling. However, the magnitude at the highest frequencies remains slightly elevated because the generator only receives supervision through the discriminator (real vs. fake) instead of the full ground-truth spectra considered in the testbed.
On the other hand, the bias towards little highfrequency content for bilinear and nearest neighbor upsampling aligns well with the spectral statistics of the datasets.
This also reflects in Table 4 where images generated with zero insertion and reshaping are still detected with higher accuracy than images generated with bilinear and nearest neighbor upsampling.
Considering both the spectral statistics and image fidelity, we observe that upsampling with nearest neighbor yields the best performance.
Lastly, we evaluate the effect of the spectral discriminator when applied to StyleGAN2 on higher-resolution datasets, namely LSUN Cats, AFHQ Dog and FFHQ. Table 5 shows mixed results: On AFHQ Dog, the spectral discriminator achieves a strong improvement on the spectrum but also significantly increases the FID. In contrast, on FFHQ the FID stays similar, but the spectral discriminator also only slightly improves the spectral statistics and is not able to correct the peak at the highest frequency.¹ This suggests that in existing architectures there is a trade-off between perceptual image quality and matching the spectral statistics of the data.

¹Jung et al. Jung2021AAAI conduct a similar experiment but evaluate the average over the logarithmic reduced spectra, i.e., $\mathbb{E}_x[\log \tilde{S}_x]$, while we consider the average reduced spectrum $\mathbb{E}_x[\tilde{S}_x]$. We find that the peak at the highest frequencies is only significant in the latter computation and confirmed this difference with the authors.
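The distinction in the footnote matters because averaging spectra and averaging log-spectra respond very differently to sporadic spikes; a toy illustration with synthetic tail values (the spike rate and magnitudes are our invented numbers):

```python
import numpy as np

# Tail value of the reduced spectrum across 1000 hypothetical images:
# mostly tiny, with 10 large spikes (sporadic high-frequency artifacts).
s_tail = np.full(1000, 1e-6)
s_tail[::100] = 1.0

log_of_mean = np.log(s_tail.mean())   # the rare spikes dominate the mean
mean_of_log = np.log(s_tail).mean()   # the spikes are averaged away
print(log_of_mean > mean_of_log + 5)  # → True: the two statistics disagree
```

By Jensen's inequality the log of the mean always upper-bounds the mean of the logs, and the gap is large precisely when artifacts are rare but strong, which is why the peak shows up in one computation but not the other.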
Implications: Our experiments show that both of our testbeds on the generator and the discriminator provide consistent observations which are aligned with those for full GAN training. Our findings suggest that the spectral discriminator can reduce the spectral discrepancies, but the misalignment in the high frequencies is not fully addressed. We further observe that nearest neighbor upsampling together with an additional spectral discriminator is a good combination for natural images, where high-frequency content is low. However, when applying existing measures to StyleGAN2, we observe that none of them can solve the spectral discrepancies completely. This suggests that the high-frequency artifacts in GANs are still an open problem that requires further research. Our work takes an important step towards understanding the underlying mechanisms and sheds light on the effectiveness and limitations of existing approaches.
6 Other Related Work
Deepfake Detection: In recent years, great progress on unconditional Generative Adversarial Networks has enabled photorealistic image synthesis with deep neural networks Goodfellow2014NIPS ; Karras2020CVPR ; Karras2019CVPR ; Karras2018ICLR ; Mescheder2018ICML ; Anokhin2021CVPR ; Skorokhodov2021CVPR . With the rapid development in image synthesis, fake image detection becomes equally important MarraGCV2018 ; Roessler2019ICCV ; Wang2020CVPR . Despite very high photorealism, multiple works show that generated images can still easily be distinguished from real images based on their high-frequency content Jiang2020ARXIV ; Wang2020CVPR ; Dzanic2020NIPS ; Durall2020CVPR ; LiuZ2020CVPR ; Zhang2019IEEE ; Frank2020ICML . Consistent behavior across various architectures suggests a systematic problem in state-of-the-art GANs. Thus, it is important to understand if there is a frequency bias in convolutional generators or discriminators.
Spectral Bias:
Fully connected ReLU networks are known to fit low-frequency modes of a target function faster than its high-frequency modes Jacot2018NIPS ; Rahaman2019ICML ; Xu2019ARXIV ; Arora2019ICML ; Basri2019NIPS . This behavior, referred to as spectral bias, has been shown to greatly impact the practical expressiveness of such models due to frequency-dependent learning rates Jacot2018NIPS ; Rahaman2019ICML ; Xu2019ARXIV ; Arora2019ICML ; Basri2019NIPS . For CNNs, a spectral bias is often mentioned to explain the good generalization of largely overparameterized networks WangH2020CVPR ; Ulyanov2018CVPR ; Xu2019ARXIV but is theoretically less well understood. Inspired by Rahaman et al. Rahaman2019ICML , who study the spectral bias of fully connected ReLU networks, we analyze if a similar bias exists in convolutional generators and discriminators. Note that Rahaman et al. Rahaman2019ICML consider one-dimensional regression tasks in which the spectrum is evaluated over the output samples. Instead, we calculate the spectrum over all pixels of one image. To highlight this difference, we use the term frequency bias in our work instead.

7 Conclusion
In this work, we take a thorough look at existing explanations for systematic artifacts in the spectral statistics of generated images and unify the efforts that have been made so far. While our experiments suggest that upsampling introduces a frequency bias in the generator, this alone does not explain the spectral discrepancies. In agreement with Jung2021AAAI ; Chen2021AAAI , we find that advancing the discriminator brings us one step closer to learning the correct data distribution. However, none of the existing measures can faithfully recover the spectral statistics yet, such that generated images are still straightforward to classify solely based on their spectra. While state-of-the-art GANs benefit from highly engineered generator architectures, e.g., Karras2018ICLR ; Karras2019CVPR ; Karras2020CVPR , our findings suggest that the design of the discriminator plays an equally important role and deserves more attention in future work. We encourage future work to explore alternative downsampling operations and to consider approaches that enable pixel-level supervision for the discriminator. Interesting starting points could, for example, be Shocher2020ARXIV or Schonfeld2020CVPR . Another interesting direction might be to explore discriminator augmentation Karras2020NIPS ; Zhao2020NIPS wrt. high-frequency artifacts. We believe that our testbeds for both the generator and discriminator may serve as useful tools for future investigations and will make our code publicly available upon acceptance. Lastly, we remark that working on generative models always bears the risk of manipulation and the creation of misleading content. While the insights from our work could be used to make Deepfakes harder to detect, we believe that a better understanding of existing architectures and their limitations is key to advancing the field, such that the benefits of our work outweigh the societal risks.
Acknowledgments
We acknowledge the financial support by the BMWi in the project KI Delta Learning (project number 19A19013O) and the support from the BMBF through the Tuebingen AI Center (FKZ: 01IS18039A). We thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Katja Schwarz. This work was supported by an NVIDIA research gift.
References

(1) I. Anokhin, K. Demochkin, T. Khakhulin, G. Sterkin, V. Lempitsky, and D. Korzhenkov. Image generators with conditionally-independent pixel synthesis. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2021.
(2) S. Arora, S. S. Du, W. Hu, Z. Li, and R. Wang. Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks. In Proc. of the International Conf. on Machine learning (ICML), 2019.
(3) R. Basri, D. W. Jacobs, Y. Kasten, and S. Kritchman. The convergence rate of neural networks for learned functions of different frequencies. In Advances in Neural Information Processing Systems (NIPS), 2019.
(4) K. Chandrasegaran, N. Tran, and N. Cheung. A closer look at Fourier spectrum discrepancies for CNN-generated images detection. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2021.
(5) Y. Chen, G. Li, C. Jin, S. Liu, and T. Li. SSD-GAN: Measuring the realness in the spatial and spectral domains. In Proc. of the Conf. on Artificial Intelligence (AAAI), 2021.
(6) Y. Chen, S. Liu, and X. Wang. Learning continuous image representation with local implicit image function. arXiv.org, 2012.09161, 2020.
(7) Y. Choi, Y. Uh, J. Yoo, and J.-W. Ha. StarGAN v2: Diverse image synthesis for multiple domains. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020.
(8) R. Durall, M. Keuper, and J. Keuper. Watch your up-convolution: CNN based generative deep neural networks are failing to reproduce spectral distributions. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020.
(9) T. Dzanic, K. Shah, and F. D. Witherden. Fourier spectrum discrepancies in deep network generated images. In Advances in Neural Information Processing Systems (NIPS), 2020.
(10) J. Frank, T. Eisenhofer, L. Schönherr, A. Fischer, D. Kolossa, and T. Holz. Leveraging frequency analysis for deep fake image recognition. In Proc. of the International Conf. on Machine learning (ICML), 2020.
(11) R. Gal, D. Cohen, A. Bermano, and D. Cohen-Or. SWAGAN: A style-based wavelet-driven generative model. arXiv.org, 2102.06108, 2021.
(12) I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. C. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems (NIPS), 2014.
(13) M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems (NIPS), 2017.
(14) A. Jacot, C. Hongler, and F. Gabriel. Neural tangent kernel: Convergence and generalization in neural networks. In Advances in Neural Information Processing Systems (NIPS), 2018.
(15) L. Jiang, B. Dai, W. Wu, and C. C. Loy. Focal frequency loss for generative models. arXiv.org, 2012.12821, 2020.
(16) S. Jung and M. Keuper. Spectral distribution aware image generation. In Proc. of the Conf. on Artificial Intelligence (AAAI), 2021.
(17) T. Karras, T. Aila, S. Laine, and J. Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. In Proc. of the International Conf. on Learning Representations (ICLR), 2018.
(18) T. Karras, M. Aittala, J. Hellsten, S. Laine, J. Lehtinen, and T. Aila. Training generative adversarial networks with limited data. In Advances in Neural Information Processing Systems (NIPS), 2020.
(19) T. Karras, S. Laine, and T. Aila. A style-based generator architecture for generative adversarial networks. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2019.
(20) T. Karras, S. Laine, M. Aittala, J. Hellsten, J. Lehtinen, and T. Aila. Analyzing and improving the image quality of StyleGAN. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020.
(21) M. Khayatkhoei and A. Elgammal. Spatial frequency bias in convolutional generative adversarial networks. arXiv.org, 2010.01473, 2020.
(22) D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In Proc. of the International Conf. on Learning Representations (ICLR), 2015.
(23) Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2015.
(24) Z. Liu, X. Qi, and P. H. S. Torr. Global texture enhancement for fake face detection in the wild. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020.
(25) F. Marra, D. Gragnaniello, D. Cozzolino, and L. Verdoliva. Detection of GAN-generated fake images over social networks. In Proc. IEEE Conf. on Multimedia Information Processing and Retrieval (MIPR), 2018.
(26) L. Mescheder, A. Geiger, and S. Nowozin. Which training methods for GANs do actually converge? In Proc. of the International Conf. on Machine learning (ICML), 2018.
(27) T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida. Spectral normalization for generative adversarial networks. In Proc. of the International Conf. on Learning Representations (ICLR), 2018.
(28) A. Odena, V. Dumoulin, and C. Olah. Deconvolution and checkerboard artifacts. Distill, 2016.
(29) N. Rahaman, A. Baratin, D. Arpit, F. Draxler, M. Lin, F. A. Hamprecht, Y. Bengio, and A. C. Courville. On the spectral bias of neural networks. In Proc. of the International Conf. on Machine learning (ICML), 2019.
(30) A. Rössler, D. Cozzolino, L. Verdoliva, C. Riess, J. Thies, and M. Nießner. FaceForensics++: Learning to detect manipulated facial images. In Proc. of the IEEE International Conf. on Computer Vision (ICCV), 2019.
(31) E. Schönfeld, B. Schiele, and A. Khoreva. A U-Net based discriminator for generative adversarial networks. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020.
(32) W. Shi, J. Caballero, F. Huszar, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2016.
(33) A. Shocher, B. Feinstein, N. Haim, and M. Irani. From discrete to continuous convolution layers. arXiv.org, 2020.
(34) I. Skorokhodov, S. Ignatyev, and M. Elhoseiny. Adversarial generation of continuous images. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2021.
(35) D. Ulyanov, A. Vedaldi, and V. S. Lempitsky. Deep image prior. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2018.
(36) H. Wang, X. Wu, Z. Huang, and E. P. Xing. High-frequency component helps explain the generalization of convolutional neural networks. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020.
(37) S. Wang, O. Wang, R. Zhang, A. Owens, and A. A. Efros. CNN-generated images are surprisingly easy to spot… for now. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2020.
(38) Z. J. Xu, Y. Zhang, T. Luo, Y. Xiao, and Z. Ma. Frequency principle: Fourier analysis sheds light on deep neural networks. arXiv.org, 1901.06523, 2019.
(39) F. Yu, Y. Zhang, S. Song, A. Seff, and J. Xiao. LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv.org, 1506.03365, 2015.
(40) R. Zhang. Making convolutional networks shift-invariant again. In Proc. of the International Conf. on Machine learning (ICML), 2019.
(41) X. Zhang, S. Karaman, and S. Chang. Detecting and simulating artifacts in GAN fake images. In IEEE International Workshop on Information Forensics and Security, 2019.
(42) S. Zhao, Z. Liu, J. Lin, J. Zhu, and S. Han. Differentiable augmentation for data-efficient GAN training. In Advances in Neural Information Processing Systems (NIPS), 2020.
Appendix A Implementation
A.1 Generator
The generator architecture is specified in Table 6 and is the same as the PGAN Karras2018ICLR generator except for a reduced number of channels. We base our implementation on https://github.com/rosinality/progressive-gan-pytorch. Following Karras2018ICLR , we use a slope of 0.2 for the LeakyReLU and apply pixel normalization after every convolution. The first convolution uses a padding of , while the remaining convolutions retain the input resolution. Note that upsampling by reshaping reduces the channel dimension by a factor of . Therefore, in this case, we multiply the output channels, cf. Table 6, of each convolutional layer before the upsampling by . To keep a similar amount of total parameters, we further divide all channel dimensions by .
We train the model end-to-end using the Adam optimizer Kingma2015ICLR with a learning rate of and a batch size of . When training with both an L2-loss in image space and an L2-loss on the reduced spectrum, we weight both losses equally. We train on a single NVIDIA Tesla V100-SXM2-32GB.
Teaser: For the experiment in Fig. 1b of the main paper, we train on a single image of resolution . We use the settings described above but a batch size of .
Layer Type  Kernel Size  Activation  Normalization  Repetitions  

Conv  4  LeakyRelu  PixelNorm  1  
Conv  3  LeakyRelu  PixelNorm  1  
Upsample  –  –  PixelNorm  
Conv  3  LeakyRelu  PixelNorm  
Conv  3  LeakyRelu  PixelNorm  
Upsample  –  –  PixelNorm  
Conv  3  LeakyRelu  PixelNorm  
Conv  3  LeakyRelu  PixelNorm  
Upsample  –  –  PixelNorm  
Conv  3  LeakyRelu  PixelNorm  
Conv  3  LeakyRelu  PixelNorm  
Conv  1  –  PixelNorm  1 
A.2 Discriminator
Layer Type  Kernel Size  Activation  Repetitions  

Conv  1  LeakyRelu  1  
Conv  3  LeakyRelu  1  
Conv  3  LeakyRelu  1  
Downsample  –  –  
Conv  3  LeakyRelu  
Conv  3  LeakyRelu  
Downsample  –  –  
Conv  3  LeakyRelu  
Conv  3  LeakyRelu  
Downsample  –  –  
Conv  3  LeakyRelu  
Conv  4  LeakyRelu  
Linear  –  –  1 
The discriminator architecture is specified in Table 7 and is the same as the PGAN Karras2018ICLR discriminator except for a reduced number of channels. We base our implementation on https://github.com/rosinality/progressive-gan-pytorch. Following Karras2018ICLR , we use a slope of 0.2 for the LeakyReLU and append the minibatch standard deviation after the last downsampling operation, which increases in Table 7 from to . The last convolution uses no padding to reduce the spatial size from to before the linear layer, while the remaining convolutions retain the spatial size.
For the reconstruction task, we train a class-conditional discriminator with a single sample per class. Hence, the linear layer outputs one logit per class, i.e., per sample. In our experiments, we use .
The generator is replaced by learnable tensors which we optimize jointly with the discriminator. We train the model end-to-end using the Adam optimizer Kingma2015ICLR with a learning rate of for both the tensors and the discriminator, and a batch size of .
To stabilize training, we train with R1 regularization using a regularization strength of and use an exponential moving average with decay for the learnable tensors to produce the images. For all of our experiments, we ensure that training is stable by verifying that the discriminator converges to equilibrium, i.e., that the logits of the discriminator approach zero during training.
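As a hedged illustration of R1 regularization Mescheder2018ICML (not our training code, which uses autograd), consider a toy discriminator D(x) = tanh(w · x) whose input gradient is analytic, so the penalty gamma/2 · ||grad_x D(x)||^2 can be computed by hand. The function name and the default regularization strength are assumptions for this sketch; the actual strength used in our experiments is given above.

```python
import math

def r1_penalty(w, x, gamma=10.0):
    """R1 penalty gamma/2 * ||grad_x D(x)||^2 for the toy discriminator
    D(x) = tanh(w . x), whose input gradient is (1 - tanh(w.x)^2) * w."""
    s = sum(wi * xi for wi, xi in zip(w, x))
    g = [(1.0 - math.tanh(s) ** 2) * wi for wi in w]
    return 0.5 * gamma * sum(gi * gi for gi in g)
```

The penalty is largest where the discriminator's output still changes steeply with its input and vanishes where the output saturates, which is the mechanism that pushes the discriminator towards small gradients on real data.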
For our testbed, we build on the framework from https://github.com/LMescheder/GAN_stability.git because it is intuitive and straightforward to work with.
We train on a single NVIDIA Tesla V100-SXM2-32GB.
Teaser:
For the experiment in Fig. 1c of the main paper, we train on a single image of resolution . We use the settings described above but a batch size of .
A.3 GAN
Our code framework for training the full GAN setting is based on the publicly available code for StyleGAN2-ada Karras2020NIPS , https://github.com/NVlabs/stylegan2-ada-pytorch.git, because it is optimized for large-scale multi-GPU GAN training.
We train PGAN on two NVIDIA Tesla V100-SXM2-32GB GPUs with a batch size of and a learning rate of for both the generator and the discriminator. The strength of the regularizer is and we use a minibatch size of to compute the standard deviation within the batches.
We do not train PGAN with adaptive discriminator augmentation and adhere to the training protocol from Karras2020CVPR .
For fine-tuning StyleGAN2, we follow the training process of Karras2020CVPR using four V100-SXM2-32GB or four GeForce GTX 1080 Ti GPUs.
Appendix B Additional Analysis of the Generator Testbed
B.1 Number of Images
We ablate how the number of training images impacts our generator testbed. Fig. 9 shows that the findings from the main paper remain consistent with a varying number of training images.
B.2 Low-Magnitude Errors
In this section, we derive that an L2-loss penalizes errors with low magnitudes in the frequency domain less than errors with high magnitudes. Let us first consider a 1D signal with discrete values $x_n$, $n = 0, \dots, N-1$, and discrete Fourier transform $X_k$. According to Parseval's theorem, for the discrete Fourier transform it holds that

(6) $\sum_{n=0}^{N-1} |x_n|^2 = \frac{1}{N} \sum_{k=0}^{N-1} |X_k|^2$,

where $|X_k|$ is the magnitude at frequency $k$. For an L2-loss on a predicted signal with values $\hat{x}_n$ and a target signal with values $x_n$ we obtain, using the linearity of the Fourier transform,

(7) $\mathcal{L}_2 = \sum_{n=0}^{N-1} |\hat{x}_n - x_n|^2$
(8) $= \frac{1}{N} \sum_{k=0}^{N-1} |\hat{X}_k - X_k|^2$.

Let us now consider some frequency $k$. When the error at $k$ has a low magnitude, then $|\hat{X}_k - X_k|^2$ is small and therefore contributes only slightly to the sum in Eq. (8). Since each contribution is quadratic in the magnitude of the error, low-magnitude errors are penalized considerably less than high-magnitude errors.
For a gray scale image with pixel values $x_{m,n}$, $m = 0, \dots, M-1$, $n = 0, \dots, N-1$, this follows similarly from Parseval's theorem in 2D:

(9) $\sum_{m=0}^{M-1} \sum_{n=0}^{N-1} |x_{m,n}|^2 = \frac{1}{MN} \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} |X_{u,v}|^2$.
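Parseval's identity, and the resulting equality between the spatial L2-loss and the mean squared spectral error, can be verified numerically with a naive DFT (a pure-Python sketch for illustration; the function names are ours, and real code would use an FFT):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a real-valued list."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def spatial_l2(a, b):
    """L2-loss in signal space: sum of squared per-sample errors."""
    return sum((u - v) ** 2 for u, v in zip(a, b))

def spectral_l2(a, b):
    """Mean squared spectral error; equals spatial_l2 by Parseval's
    theorem combined with the linearity of the DFT."""
    n = len(a)
    return sum(abs(u - v) ** 2 for u, v in zip(dft(a), dft(b))) / n
```

Because each frequency contributes quadratically in its error magnitude, small spectral errors, e.g. in under-represented high frequencies, barely move the total loss.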
Appendix C Additional Analysis of the Discriminator Testbed
C.1 Number of Images
We ablate how the number of training images impacts our discriminator testbed. Fig. 9 shows that the findings from the main paper remain consistent with a varying number of training images.
C.2 Penalizing the Spectrum
Jung et al. Jung2021AAAI propose an additional discriminator on the reduced spectrum (SD). However, penalizing the reduced spectrum alone might not be sufficient to improve the training signal because the spectrum computation and the azimuthal integration discard information, see Section 4 of the main paper. Therefore, we investigate whether using the full Fourier transform instead of the reduced spectrum can improve performance.
We compare two different additional discriminators to the version from Jung2021AAAI : The first is an MLP that operates on the flattened 2D Fourier transform of the image and therefore uses neither convolutions nor downsampling (SD-FT). Since this is only feasible for images with a small resolution, we also investigate a convolutional spectral discriminator (SD-CNN-FT) with the same architecture as the spatial discriminator.
We weight the spatial and spectral discriminators equally, as done in Jung2021AAAI . Further, we ensure that the models of all approaches have a similar number of parameters. Note that using an MLP on the flattened 2D Fourier transform increases the number of parameters from k to k. Hence, we increase the channel dimensions of the discriminator for both SD and SD-CNN-FT to obtain models of comparable size.
The PSNR values in Table 9 show a significant improvement for all downsampling techniques with SD-FT. However, with the convolutional variant, SD-CNN-FT, the PSNR does not improve w.r.t. SD.
This is consistent with the qualitative results for BlurPool downsampling in Fig. 10. Even with the increased number of parameters, SD cannot remove the downsampling artifacts. While SD-FT closely reconstructs the ground truth, it can only be used with low-resolution images due to its fully connected architecture. Unfortunately, naïvely applying a convolutional architecture to the full spectrum also suffers from downsampling artifacts in the reconstructions. These observations suggest that integrating frequency-domain supervision effectively and efficiently remains an open question in the GAN setting.
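The reduced spectrum discussed in this section is the azimuthal average of the squared DFT magnitudes over rings of (roughly) constant spatial frequency; the averaging over rings is exactly the step that discards directional information. A minimal pure-Python sketch (the binning scheme and function names are illustrative; practical implementations differ in details and use an FFT):

```python
import cmath
import math

def dft2(img):
    """Naive 2D DFT of a grayscale image given as nested lists."""
    h, w = len(img), len(img[0])
    return [[sum(img[y][x] * cmath.exp(-2j * cmath.pi * (u * y / h + v * x / w))
                 for y in range(h) for x in range(w))
             for v in range(w)]
            for u in range(h)]

def reduced_spectrum(img, nbins):
    """Azimuthal average of |F|^2 over rings of constant radial frequency."""
    h, w = len(img), len(img[0])
    F = dft2(img)
    max_r = math.hypot(h // 2, w // 2)
    sums, counts = [0.0] * nbins, [0] * nbins
    for u in range(h):
        for v in range(w):
            # Map to centered frequency coordinates before computing the radius.
            fu = u if u <= h // 2 else u - h
            fv = v if v <= w // 2 else v - w
            b = min(int(math.hypot(fu, fv) / max_r * nbins), nbins - 1)
            sums[b] += abs(F[u][v]) ** 2
            counts[b] += 1
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]
```

Note that an H x W image yields H*W complex Fourier coefficients but only a handful of radial bins, which illustrates why a discriminator on the reduced spectrum sees far less information than SD-FT, which consumes the full transform.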
Appendix D Additional Analysis of Full GAN Training
D.1 PGAN
Discriminators on Different Input Domains: For our ablation on different input domains for the discriminator, Fig. 10(b) shows spectrum plots corresponding to Table 3 in the main paper. In agreement with the observations in Chen2020ARXIV , F-Mining alters the spectral statistics at the highest frequencies only slightly. Wavelets reduce the peak at the highest frequencies but also predict too few frequencies below . Similar to wavelets, the additional spectral discriminator (SD) reduces the peak at the highest frequencies but matches the spectral statistics below more closely. These observations are consistent across both datasets and support the classification accuracies reported in Table 3 of the main paper.
Qualitative Results for PGAN: In Fig. 12, we include qualitative results for our smaller version of PGAN in the original setting. While not completely photorealistic, the model still reproduces characteristic features of the ground truth data. This is sufficient for our analysis, as the focus of these experiments is not image fidelity but the spectral properties of the generator and the discriminator in combination.
D.2 SNGAN
In this section, we consider different discriminators for SNGAN Miyato2018ICLR to investigate if our analysis from Section 4 in the main paper is consistent across architectures. We base our framework on the official implementation of Chen2021AAAI , https://github.com/cyq373/SSD-GAN.git. Similar to Section 4 in the main paper, we compare an additional spectral discriminator (SD) Jung2021AAAI , hard example mining in the frequency domain (F-Mining) Chen2021AAAI , and training in wavelet space (Wavelet) Gal2021ARXIV .
Consistent with our analysis on PGAN Karras2018ICLR , Fig. 13 shows that the additional spectral discriminator is the most effective at reducing the spectral discrepancies.
Interestingly, however, the corresponding classification accuracies on the reduced spectra in Table 9 are similar for wavelets and the additional spectral discriminator. This suggests that training the spectral classifier on the fitted decay parameters, as proposed in Dzanic2020NIPS , can occasionally produce overly optimistic results with an accuracy closer to chance, i.e., .
However, for all results in the main paper the classification accuracy is consistent with the qualitative alignment of the reduced spectra, see Fig. 10(b), Fig. 14, and Fig. 7 in the main paper.
To adhere to the same metric as Dzanic2020NIPS ; Chandrasegaran2021CVPR , we hence decide to report the classification accuracy on the fitted decay parameters.
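For context, the decay-parameter classification reduces each spectrum to the parameters of a fitted decay curve before classifying. As a simplified, hypothetical stand-in for the fit used by Dzanic2020NIPS , one can fit a power law a * f**(-b) to a reduced spectrum by least squares in log-log space (function name and functional form are ours, for illustration only):

```python
import math

def fit_decay(freqs, power):
    """Least-squares fit of log(power) = log(a) - b * log(f),
    i.e. a power-law decay a * f**(-b); returns (a, b)."""
    xs = [math.log(f) for f in freqs]
    ys = [math.log(p) for p in power]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), -slope
```

Because the entire spectrum is compressed into two parameters, two visibly different spectra can yield similar fits, which is one way such a classifier can report near-chance accuracy despite remaining spectral discrepancies.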
While the FID in Table 9 is mixed for the different approaches, all values are lower than the corresponding values for PGAN in Table 3 of the main paper. This is expected, as our version of PGAN with the reduced channels has significantly fewer parameters than the original SNGAN.
D.3 StyleGAN2
For fine-tuning StyleGAN2 Karras2020CVPR with the spectral discriminator Jung2021AAAI , Fig. 14 shows spectrum plots corresponding to Table 5 in the main paper. While the spectral discriminator largely improves the spectral statistics on AFHQ Dog, it cannot fully resolve the peak at the highest frequencies for the remaining datasets. This is also reflected in the high accuracy of the spectral classifier on these datasets in Table 5 of the main paper.


Fig. 15 shows qualitative results before and after fine-tuning on AFHQ Dog and FFHQ. For AFHQ Dog, fine-tuning predominantly changes the background to correct the spectral statistics, but qualitatively this reduces the image fidelity. This supports the results in Table 5 of the main paper, where the accuracy of the spectral classifier is reduced but the FID becomes worse. The fine-tuned images on FFHQ show no background artifacts and have an FID similar to the original images. However, the spectral statistics improve only slightly, see Fig. 14. This again suggests that the reduced spectrum might not contain enough information to correct both the spectral statistics and image fidelity.
Appendix E Datasets
Toyset: We will include the scripts for generating our Toyset in our code release upon acceptance.
Licenses: The datasets used in this paper, CelebA Liu2015ICCVa , FFHQ Karras2019CVPR (Creative Commons BY-NC-SA 4.0), LSUN Cats Yu2015ARXIV , and AFHQ Choi2020CVPR (Creative Commons BY-NC 4.0), are available for non-commercial research purposes and are therefore suitable for our work.