
Watch your Up-Convolution: CNN Based Generative Deep Neural Networks are Failing to Reproduce Spectral Distributions

by Ricard Durall, et al.

Generative convolutional deep neural networks, e.g. popular GAN architectures, rely on convolution-based up-sampling methods to produce non-scalar outputs like images or video sequences. In this paper, we show that common up-sampling methods, known as up-convolution or transposed convolution, cause the inability of such models to reproduce spectral distributions of natural training data correctly. This effect is independent of the underlying architecture, and we show that it can be used to easily detect generated data like deepfakes with up to 100% accuracy. To overcome this drawback of current generative models, we propose to add a novel spectral regularization term to the training optimization objective. We show that this approach not only allows training spectrally consistent GANs that avoid high-frequency errors; a correct approximation of the frequency spectrum also has positive effects on the training stability and output quality of generative networks.





Code Repositories


Repo for our CVPR Paper: Watch your Up-Convolution: CNN Based Generative Deep Neural Networks are Failing to Reproduce Spectral Distributions


1 Introduction

Figure 1: Common up-convolution methods induce heavy spectral distortions in generated images. Top: statistics (mean and variance) after azimuthal integration over the power spectrum (see Section 2.1) of real and GAN-generated images, evaluated on the CelebA [34] data set; here all GANs (DCGAN [47], DRAGAN [32], LSGAN [37], WGAN-GP [20]) use "transposed convolutions" (see Section 2.2) for up-sampling. Bottom: results of the same experiment as above, adding our proposed spectral loss during GAN training.

Generative convolutional deep neural networks have recently been used in a wide range of computer vision tasks: generation of photo-realistic images [29, 6], image-to-image [45, 26, 61, 9, 42, 30] and text-to-image translation [48, 11, 58, 59], style transfer [27, 60, 61, 25], image inpainting [45, 54, 33, 26, 56], transfer learning [5, 10, 15], or even for training semantic segmentation tasks [35, 53], just to name a few.

The most prominent generative neural network architectures are Generative Adversarial Networks (GANs) [18] and Variational Auto-Encoders (VAEs) [46]. Both basic approaches try to approximate a latent-space model of the underlying (image) distribution from training data samples. Given such a latent-space model, one can draw new (artificial) samples and manipulate their semantic properties in various dimensions. While both GAN and VAE approaches have been published in many different variations, e.g. with different loss functions [18, 4, 20], different latent-space constraints [41, 13, 21, 30] or various deep neural network (DNN) topologies for the generator networks [47, 43], all of these methods follow a basic data generation principle: they have to transform samples from a low-dimensional (often 1D), low-resolution latent space to the high-resolution (2D image) output space. Hence, these generative neural networks must provide some sort of (learnable) up-scaling mechanism.

While all of these generative methods steer the learning of their model parameters by optimizing some loss function, the most commonly used losses focus exclusively on properties of the output image space, e.g. using convolutional neural networks (CNNs) as discriminator networks for the implicit loss in an image-generating GAN. This approach has been shown to be sufficient for generating visually sound outputs and is able to capture the data (image) distribution in image space to some extent. However, it is well known that up-scaling operations notoriously alter the spectral properties of a signal [28], causing high-frequency distortions in the output.

In this paper, we investigate the impact of up-sampling techniques commonly used in generator networks. The top plot of Figure 1 illustrates the results of our initial experiment, backing our working hypothesis that current generative networks fail to reproduce spectral distributions. Figure 1 also shows that this effect is independent of the actual generator network.

1.1 Related Work

1.1.1 Deepfake Detection

We show the practical impact of our findings for the task of deepfake detection. The term deepfake [22, 8] describes the recent phenomenon of people misusing advances in artificial face generation via deep generative neural networks [7] to produce fake image content of celebrities and politicians. Due to the potential social impact of such fakes, deepfake detection has become a vital research topic of its own. Most approaches reported in the literature, like [38, 3, 57], are themselves relying on CNNs and thus require large amounts of annotated training data. Likewise, [24] introduces a deep forgery discriminator with a contrastive loss function, and [19] incorporates temporal domain information by employing Recurrent Neural Networks (RNNs) on top of CNNs.

1.1.2 GAN Stabilization

Regularizing GANs in order to facilitate a more stable training and to avoid mode collapse has recently drawn some attention. While [40] stabilize GAN training by unrolling the optimization of the discriminator, [50] propose regularizations via noise as well as an efficient gradient-based approach. A stabilized GAN training based on octave convolutions has recently been proposed in [16]. None of these approaches consider the frequency spectrum for regularization. Yet, very recently, band limited CNNs have been proposed in [17] for image classification with compressed models. In [55], first observations have been made that hint towards the importance of the power spectra on model robustness, again for image classification. In contrast, we propose to leverage observations on the GAN generated frequency spectra for training stabilization.

1.2 Contributions

The contributions of our work can be summarized as follows:

  • We experimentally show the inability of current generative neural network architectures to correctly approximate the spectral distributions of training data.

  • We exploit these spectral distortions to propose a very simple but highly accurate detector for generated images and videos, i.e. a DeepFake detector that reaches up to 100% accuracy on public benchmarks.

  • Our theoretical analysis and further experiments reveal that commonly used up-sampling units, i.e. up-convolutions, are causing the observed effects.

  • We propose a novel spectral regularization term which is able to compensate spectral distortions.

  • We also show experimentally that using spectral regularization in GAN training leads to more stable models and increases the visual output quality.

The remainder of the paper is organized as follows: Section 2 introduces common up-scaling methods and analyzes their negative effects on the spectral properties of images. In Section 3, we introduce a novel spectral loss that allows training generative networks which compensate the up-scaling errors and generate correct spectral distributions. We evaluate our methods in Section 4, using current architectures on public benchmarks.

2 The Spectral Effects of Up-Convolutions

2.1 Analyzing Spectral Distributions of Images using Azimuthal Integration over the DFT Power Spectrum

In order to analyze effects on spectral distributions, we rely on a simple but characteristic 1D representation of the Fourier power spectrum. We compute this spectral representation from the discrete Fourier transform $\mathcal{F}$ of 2D (image) data $I$ of size $M \times N$,

$$\mathcal{F}(I)(k,\ell) = \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} e^{-2\pi i \frac{mk}{M}}\, e^{-2\pi i \frac{n\ell}{N}}\, I(m,n), \quad k = 0,\dots,M-1,\; \ell = 0,\dots,N-1,$$

via azimuthal integration over radial frequencies $\omega_k$,

$$AI(\omega_k) = \int_0^{2\pi} \left\| \mathcal{F}(I)\big(\omega_k \cos(\phi),\, \omega_k \sin(\phi)\big) \right\|^2 d\phi \quad \text{for } k = 0,\dots,M/2-1,$$

assuming square images ($M = N$)¹. Figure 2 gives a schematic impression of this processing step.

¹We are aware that this notation is abusive, since $\mathcal{F}(I)$ is discrete. However, a fully correct discrete notation would only over-complicate a side aspect of our work. A discrete implementation of AI is provided with the source code.

Figure 2: Example for the azimuthal integral (AI). (Left) 2D Power Spectrum of an image. (Right) 1D Power Spectrum: each frequency component is the radial integral over the 2D spectrum (red and green examples).
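This processing step can be sketched in a few lines of numpy. The discrete radial binning below approximates the azimuthal integral; the function name and the normalization choice are our own:

```python
import numpy as np

def azimuthal_integral(image, eps=1e-8):
    """Approximate the azimuthal integral AI: average the 2D power
    spectrum over concentric rings of radial frequency."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    power = np.abs(spectrum) ** 2
    h, w = power.shape
    cy, cx = h // 2, w // 2
    y, x = np.indices((h, w))
    radius = np.sqrt((y - cy) ** 2 + (x - cx) ** 2).astype(int)
    n_bins = min(cy, cx)
    sums = np.bincount(radius.ravel(), weights=power.ravel())
    counts = np.bincount(radius.ravel())
    ai = sums[:n_bins] / np.maximum(counts[:n_bins], 1)
    return ai / (ai[0] + eps)  # normalize by the 0-th (DC) coefficient

profile = azimuthal_integral(np.random.rand(64, 64))
print(profile.shape)  # (32,)
```

Real images typically show a smoothly decaying profile; the up-sampling artifacts discussed below distort its high-frequency tail.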

2.2 Up-convolutions in generative DNNs

Generative neural architectures like GANs produce high-dimensional outputs, e.g. images, from very low-dimensional latent spaces. Hence, all of these approaches need to use some kind of up-scaling mechanism while propagating data through the network. The two most commonly used up-scaling techniques in the literature and in popular implementation frameworks (like TensorFlow [2] and PyTorch [44]) are illustrated in Figure 3: up-convolution by interpolation (up+conv) and transposed convolution (transconv).

We use a very simple auto-encoder (AE) setup (see Figure 4) for an initial investigation of the effects of up-convolution units on the spectral properties of 2D images after up-sampling. Figure 5 shows the different, but in both cases massive, impact of the two approaches on the frequency spectrum. Figure 6 gives a qualitative result for a reconstructed image and shows that the mistakes in the frequency spectrum are relevant for the visual appearance.
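The two up-sampling schemes can be sketched directly in numpy (function names are ours; in an actual generator, a learnable convolution follows each of these steps):

```python
import numpy as np

def bed_of_nails(x):
    """Zero-insertion ("bed of nails") padding as used inside a
    transposed convolution: factor-2 up-sampling per axis."""
    h, w = x.shape
    up = np.zeros((2 * h, 2 * w), dtype=x.dtype)
    up[::2, ::2] = x
    return up

def nearest_up(x):
    """Nearest-neighbour interpolation as used by up+conv
    (bi-linear interpolation works analogously)."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

x = np.arange(4.0).reshape(2, 2)
print(bed_of_nails(x))  # zeros inserted between the original samples
print(nearest_up(x))    # each sample repeated in a 2x2 block
```

The zero insertions of the first scheme inject high-frequency content, while the sample repetition of the second acts as a crude low-pass filter; both deviations are what the following analysis quantifies.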

Figure 3: Schematic overview of the two most common up-convolution units. Left: low-resolution input image. Center: up-convolution by interpolation (up+conv): the input is scaled via interpolation (bi-linear or nearest neighbor) and then convolved with a standard learnable filter kernel to form the 5x5 output (green). Right: transposed convolution (transconv): the input is padded with a "bed of nails" scheme (gray grid points are zero) and then convolved with a standard filter kernel to form the output (green).
Figure 4: Schematic overview of the simple auto encoder (AE) setup used to demonstrate the effects of up-convolutions in Figure 5, using only a standard MSE reconstruction loss (bottom) to train the AE on real images. We down-scale the input by a factor of two and then use the different up-convolution methods to reconstruct the original image size. In Section 3 we use the additional spectral loss (top) to compensate the spectral distortions (see Figure 7)


Figure 5: Effects of single up-convolution units (setup: see Figure 4) on the frequency spectrum (azimuthal integral) of the output images. Both up-convolution methods have massive effects on the spectral distributions of the outputs: transposed convolutions add large amounts of high-frequency noise, while interpolation-based methods (up+conv) lack high frequencies.
Figure 6: Effects of spectral distortions on the image outputs in our simple AE setting. Left: original image; Center: AE output image; Right: filtered difference image. The top row shows the blurring effect of missing high frequencies in the (up+conv) case; the bottom row shows the high-frequency artifacts induced by (transconv).

2.3 Theoretical Analysis

For the theoretical analysis we consider, without loss of generality, the case of a one-dimensional signal $x(n)$, $n = 0,\dots,N-1$, and its discrete Fourier transform

$$X(k) = \sum_{n=0}^{N-1} x(n)\, e^{-2\pi i \frac{kn}{N}}, \quad k = 0,\dots,N-1. \quad (1)$$

If we want to increase $x$'s spatial resolution by a factor of 2, we obtain an up-sampled signal $x_{up}$ of length $2N$ with

$$X_{up}(k) = \sum_{n=0}^{N-1} x(n)\, e^{-2\pi i \frac{k \cdot 2n}{2N}} + \alpha \sum_{n=0}^{N-1} \hat{x}(n)\, e^{-2\pi i \frac{k(2n+1)}{2N}}, \quad (2)$$

where $\alpha = 0$ for "bed of nails" interpolation (as used by transconv) and $\alpha = 1$ with $\hat{x}(n) = \frac{1}{2}\big(x(n) + x(n+1)\big)$ for bi-linear interpolation (as used by up+conv).

Let us first consider the case of $\alpha = 0$, i.e. "bed of nails" interpolation. There, the second term in Eq. (2) is zero. The first term is similar to the original Fourier transform, yet with the frequency parameter $k/N$ replaced by $k/(2N)$ relative to the doubled signal length. Thus, increasing the spatial resolution by a factor of 2 leads to a scaling of the frequency axis by a factor of 1/2. Let us now consider the effect from a sampling theory based viewpoint. It is

$$x_{up}(n) = x_{nn}(n) \cdot \mathrm{III}_2(n), \quad (3)$$

where $x_{nn}(n) = x(\lfloor n/2 \rfloor)$ denotes nearest-neighbor up-sampling and $\mathrm{III}_2$ the Dirac impulse comb that is one at even and zero at odd positions, since the point-wise multiplication with the Dirac impulse comb only removes values for which $x_{up}(n) = 0$ anyway. Assuming a periodic signal and applying the convolution theorem [31], we get

$$X_{up}(k) = \frac{1}{2}\big(X_{nn}(k) + X_{nn}(k - N)\big), \quad (4)$$

which equals $X(k \bmod N)$ by Eq. (2). Thus, the "bed of nails" up-sampling will create high-frequency replica of the signal in $X_{up}$. To remove these frequency replica, the up-sampled signal needs to be smoothed appropriately. All observed spatial frequencies beyond $N/2$ are potential up-sampling artifacts. While this is obvious from a theoretical point of view, we also demonstrate practically in Figure 8 that the correction of such a large frequency band (assuming medium to high resolution images) is not possible with the commonly used small convolutional filters.

In the case of bi-linear interpolation, we have $\alpha = 1$ in Eq. (2), which corresponds to an average filtering with the values of $x$ adjacent to each inserted position. This is equivalent to a point-wise multiplication of the spectrum with a sinc function, by duality and the convolution theorem, which suppresses artificial high frequencies. Yet, the resulting spectrum is expected to be overly low in the high-frequency domain.
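The replica effect is easy to verify numerically: for factor-2 zero-insertion up-sampling, the DFT of the up-sampled signal is exactly the original spectrum repeated twice, i.e. X_up(k) = X(k mod N). A minimal numpy check on a toy signal of our own:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(16)

# "Bed of nails" factor-2 up-sampling: insert a zero after every sample.
x_up = np.zeros(2 * len(x))
x_up[::2] = x

X = np.fft.fft(x)
X_up = np.fft.fft(x_up)

# The up-sampled spectrum is the base spectrum plus a full replica
# in the upper half of the frequency range.
assert np.allclose(X_up, np.tile(X, 2))
```

Any energy observed beyond the original Nyquist frequency is therefore a pure up-sampling artifact unless a subsequent filter removes it.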

3 Learning to Generate Correct Spectral Distributions

The experimental evaluations of our findings in the previous section, and their application to the detection of generated content (see Section 4.1), raise the question whether it is possible to correct the spectral distortions induced by the up-convolution units used in generative networks. After all, usual network topologies contain learnable convolutional filters following the up-convolutions, which could potentially correct such errors.

3.1 Spectral Regularization

Since common generative network architectures are almost exclusively using image-space based loss functions, it is not possible to capture and correct spectral distortions directly. Hence, we propose to add an additional spectral term $\mathcal{L}_{spectral}$ to the generator loss $\mathcal{L}_G$:

$$\mathcal{L} = \mathcal{L}_G + \lambda \cdot \mathcal{L}_{spectral},$$

where $\lambda$ is the hyper-parameter that weights the influence of the spectral loss. Since we are already measuring spectral distortions using azimuthal integration (see Section 2.1), and $AI$ is differentiable, a simple choice for $\mathcal{L}_{spectral}$ is the binary cross entropy between the normalized $AI$ of the generated output, $AI_{out}$, and the mean normalized $AI$ obtained from real samples, $AI_{real}$:

$$\mathcal{L}_{spectral} = -\frac{2}{M} \sum_{k=0}^{M/2-1} \Big[ AI_{real}(k) \cdot \log AI_{out}(k) + \big(1 - AI_{real}(k)\big) \cdot \log\big(1 - AI_{out}(k)\big) \Big].$$

Notice that $M$ is the image size and we use normalization by the 0-th (DC) coefficient $AI(0)$ in order to scale the values of the azimuthal integral to $[0,1]$.
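A minimal numpy sketch of this spectral term (the function name and toy profiles are ours; during training the loss is computed on differentiable tensors and added to the generator loss with weight λ):

```python
import numpy as np

def spectral_bce(ai_out, ai_real, eps=1e-8):
    """Binary cross entropy between the normalized AI profile of a
    generated image and the mean AI profile of real samples; both
    are assumed scaled to [0, 1] via the DC coefficient."""
    ai_out = np.clip(ai_out, eps, 1 - eps)
    return -np.mean(ai_real * np.log(ai_out)
                    + (1 - ai_real) * np.log(1 - ai_out))

ai_real = np.linspace(1.0, 0.1, 32)  # stand-in mean profile of real data
ai_fake = np.linspace(1.0, 0.6, 32)  # too much high-frequency energy
print(spectral_bce(ai_fake, ai_real) > spectral_bce(ai_real, ai_real))  # True
```

Since cross entropy is minimized when both profiles agree, the term penalizes exactly the kind of high-frequency deviation shown in Figure 5.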

The effects of adding our spectral loss to the AE setup from Section 2.2 for different values of $\lambda$ are shown in Figure 7. As expected from our theoretical analysis in Section 2.3, the observed effects can not be corrected by a single learned filter, even for large values of $\lambda$. We thus need to reconsider the architecture parameters.

Figure 7: Auto-encoder (AE) results with spectral loss for different values of $\lambda$. Even if the spectral loss has a high weight, spectral distortions can not be corrected with a single convolutional layer. This result is in line with the findings from Section 2.3.

3.2 Filter Sizes on Up-Convolutions

In Figure 8, we evaluate our spectral loss on the AE from Section 2.2 with respect to the filter size and the number of convolutional layers following the up-sampling, using 1 or 3 convolutional layers with varying decoder filter sizes. While the spectral distortions from the up-sampling can not be removed with a single convolution, and not even with three small ones, they can be corrected by the proposed loss when more and larger filters are learned.

Figure 8: AE results with spectral loss, by filter size of the convolution following the up-sampling step. The result heavily depends on the chosen filter size and the number of convolutional layers. With three convolutional layers available, the AE can greatly reduce spectral distortions using the proposed spectral loss.

4 Experimental Evaluation

We evaluate the findings of the previous sections in three different experiments, using prominent GAN architectures on public face generation datasets. Section 4.1 shows that common face generation networks produce outputs with strong spectral distortions which can be used to detect artificial or “fake” images. In Section 4.2, we show that our spectral loss is sufficient to compensate artifacts in the frequency domain of the same data. Finally, we empirically show in Section 4.3 that spectral regularization also has positive effects on the training stability of GANs.

4.1 Deepfake Detection

Figure 9: Overview of the processing pipeline of our approach. It contains two main blocks: a feature extraction block using DFT and a training block, where a classifier uses the transformed features to determine whether the face is real or not. Notice that input images are converted to grey-scale before the DFT.

In this section, we show that the spectral distortions caused by the up-convolutions in state of the art GANs can be used to easily identify “fake” image data. Using only a small amount of annotated training data, or even an unsupervised setting, we are able to detect generated faces from public benchmarks with almost perfect accuracy.

4.1.1 Benchmarks

We evaluate our approach on three different data sets of facial images, providing annotated data at different spatial resolutions:

  • FaceForensics++ [49] contains a DeepFake detection data set with 363 original video sequences of 28 paid actors in 16 different scenes, as well as over 3000 videos with face manipulations and their corresponding binary masks. All videos contain a trackable, mostly frontal face without occlusions which enables automated tampering methods to generate realistic forgeries. The resolution of the extracted face images varies, but is usually around pixels.

  • The CelebFaces Attributes (CelebA) dataset [34] consists of 202,599 celebrity face images with 40 variations in facial attributes. The dimensions of the face images are , which can be considered to be a medium resolution in our context.

  • In order to evaluate high-resolution images, we provide the new Faces-HQ² data set, an annotated collection of 40k publicly available images from CelebA-HQ [29], the Flickr-Faces-HQ dataset [30] and the 100K Faces project [1]. ²Faces-HQ data has a size of 19GB. Also refer to [14].

4.1.2 Method

Figure 9 illustrates our simple processing pipeline: we extract spectral features from samples via azimuthal integration (see Figure 2) and then use a basic SVM [51] classifier³ for supervised and K-Means [36] for unsupervised fake detection. For each experiment, we randomly select training sets of different sizes and use the remaining data for testing. In order to handle input images of different sizes, we normalize the 1D power spectrum by the 0-th (DC) coefficient and scale the resulting 1D feature vector to a fixed size.

³SVM hyper-parameters can be found in the source code.
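The unsupervised branch can be sketched with a minimal K-Means (K=2) on toy AI profiles. All names and data here are illustrative; the paper relies on standard SVM [51] and K-Means [36] implementations:

```python
import numpy as np

def two_means(features, n_iter=50):
    """Minimal K-Means (K=2, Lloyd iterations) on spectral feature
    vectors; initialized deterministically with the first/last sample."""
    centers = features[[0, -1]].astype(float)
    for _ in range(n_iter):
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                centers[k] = features[labels == k].mean(axis=0)
    return labels

# Toy AI profiles: "real" decays smoothly, "fake" keeps high frequencies.
rng = np.random.default_rng(1)
real = np.linspace(1, 0.1, 20) + 0.01 * rng.standard_normal((50, 20))
fake = np.linspace(1, 0.6, 20) + 0.01 * rng.standard_normal((50, 20))
labels = two_means(np.vstack([real, fake]))
# With such well-separated profiles, the two clusters come out pure.
```

No label information enters the clustering; the separation stems entirely from the spectral feature itself.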

4.1.3 Results

Figure 10 shows that real and "fake" faces form well-delineated clusters in the high-frequency range of our spectral feature space. The results of the experiments in Table 1 confirm that the distortions of the power spectrum, caused by the up-sampling units, are a common problem and allow an easy detection of generated content. This simple indicator even outperforms complex DNN-based detection methods using large annotated training sets⁴. ⁴Note: results of all other methods as reported by [57]. The direct comparison of methods might be biased since [57] used the same real data but generated the fake data independently with different GANs.

Figure 10: AI (1D power spectrum) statistics (mean and variance) of 1000 samples from each Faces-HQ sub-dataset. Clearly, real and “fake” images can be distinguished by their AI representation.
| data set            | method | # samples | supervised | unsupervised |
|---------------------|--------|-----------|------------|--------------|
| Faces-HQ            | ours   | 1000      | 100%       | 82%          |
| Faces-HQ            | ours   | 100       | 100%       | 81%          |
| Faces-HQ            | ours   | 20        | 100%       | 75%          |
| CelebA              | ours   | 2000      | 100%       | 96%          |
| CelebA              | [57]   | 100000    | 99.43%     | -            |
| CelebA              | [39]   | 100000    | 86.61%     | -            |
| FaceForensics++ (A) | ours   | 2000      | 85%        | -            |
| FaceForensics++ (B) | ours   | 2000      | 90%        | -            |

Table 1: Test accuracy on an 80% (train) / 20% (test) split. Our methods use SVM (supervised) and K-Means (unsupervised) under different data settings. A) Evaluated on single frames. B) Accuracy on full video sequences via majority vote of single-frame detections.
Figure 11: Samples from the different types of GAN ((a) DCGAN, (c) LSGAN, (d) WGAN) and their 1D power spectrum. Top row: samples produced by standard topologies. Bottom row: samples produced by standard topologies together with our spectral regularization technique.
Figure 12: Correlation between FID values and GAN outputs for a DCGAN baseline on CelebA throughout a training run. Low FID scores correspond to diverse but visually sound face image outputs. High FID scores indicate poor quality outputs and "mode collapse" scenarios where all generated images are bound to a very narrow sub-space of the original distribution.

4.2 Applying Spectral Regularization

In this section, we evaluate the effectiveness of our regularization approach on the CelebA benchmark, as in the experiment before. Based on our theoretical analysis (see Section 2.3) and the first AE experiments in Section 3, we extend existing GAN architectures in two ways: first, we add a spectral loss term (see Section 3.1) to the generator loss. We use unannotated real samples from the data set to estimate the mean azimuthal integral of real data, which is needed for the computation of the spectral loss. Second, we change the convolution layers after the last up-convolution unit to three filter layers with larger kernel size. The bottom plot of Figure 1 shows the results of this experiment in direct comparison to the original GAN architectures. Several qualitative results produced without and with our proposed regularization are given in Figure 11.

4.3 Positive Effects of Spectral Regularization

By regularizing the spectrum, we achieve the direct benefit of producing synthetic images that not only look realistic, but also mimic the behaviour of real data in the frequency domain. In this way, we are one step closer to sampling images from the real distribution. Additionally, there is an interesting side-effect of this regularization: during our experiments, we noticed that GANs with a spectral loss term appear to be much more stable, avoiding "mode collapse" [18] and converging better. It is well known that GANs can suffer from challenging and unstable training procedures and that there is little theory explaining this phenomenon. This makes it extremely hard to experiment with new generator variants, or to employ them in new domains, which drastically limits their applicability.

In order to investigate the impact of spectral regularization on GAN training, we conduct a series of experiments. Employing a set of different baseline architectures, we assess the stability of our spectral regularization, providing quantitative results on the CelebA dataset. Our evaluation metric is the Fréchet Inception Distance (FID) [23], which uses the Inception-v3 [52] network pre-trained on ImageNet [12] to extract features from an intermediate layer.
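FID is the Fréchet distance between two Gaussians fitted to these features, FID = ||μ₁ − μ₂||² + Tr(Σ₁ + Σ₂ − 2(Σ₁Σ₂)^{1/2}). Assuming diagonal covariances (a simplification made only to keep this sketch dependency-free; the actual metric uses full covariance matrices of Inception features), it reduces to a closed form:

```python
import numpy as np

def fid_diagonal(mu1, var1, mu2, var2):
    """Frechet distance between Gaussians with diagonal covariances:
    ||mu1 - mu2||^2 + sum(var1 + var2 - 2*sqrt(var1 * var2))."""
    return (np.sum((mu1 - mu2) ** 2)
            + np.sum(var1 + var2 - 2.0 * np.sqrt(var1 * var2)))

mu, var = np.zeros(4), np.ones(4)
print(fid_diagonal(mu, var, mu, var))      # 0.0 for identical Gaussians
print(fid_diagonal(mu, var, mu + 1, var))  # 4.0
```

Lower values indicate that generated and real feature distributions agree; a sudden FID increase during training is the signature of the "collapse" events discussed below.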

Figure 13: FID (lower is better) over training time for DCGAN baselines with and without spectral loss. While the up+conv variant of DCGAN fails to improve the FID score over training time, the transconv version converges but is unstable. Only our spectral loss variant is able to achieve low and stable FID scores.
Figure 14: FID (lower is better) over training time for LSGAN baselines with and without spectral loss. As for DCGAN, the up+conv variant of LSGAN fails to improve the FID score over training time. The transconv version converges but is unstable. Again, only our spectral loss variant is able to achieve low and stable FID scores.

Figures 13 and 14 show the FID evolution along the training epochs, using a baseline GAN implementation with different up-convolution units and a corresponding version with spectral loss. These results show an obvious positive effect in terms of the FID measure: spectral regularization keeps a stable and low FID throughout the training, while unregularized GANs tend to "collapse". Figure 12 visualizes the correlation between high FID values and failing GAN image generations.

5 Discussion and Conclusion

We showed that common "state of the art" convolutional generative networks, like popular GAN image generators, fail to approximate the spectral distributions of real data. This finding has strong practical implications: not only can it be used to easily identify generated samples, it also implies that all approaches towards training data generation or transfer learning are fundamentally flawed, and it can not be expected that current methods will approximate real data distributions correctly. However, we showed that there are simple methods to fix this problem: by adding our proposed spectral regularization to the generator loss function and increasing the filter sizes of the final generator convolutions, we were able to compensate the spectral errors. Experimentally, we found strong indications that spectral regularization has a very positive effect on the training stability of GANs. While this phenomenon needs further theoretical investigation, it intuitively makes sense, as it is known that high-frequency noise can have strong effects on CNN-based discriminator networks, which might cause overfitting of the generator.

Source code available:


  • [1] 100,000 faces generated. Note: Cited by: 3rd item, §6.1.1, Table 2.
  • [2] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng (2015) TensorFlow: large-scale machine learning on heterogeneous systems. Note: Software available from External Links: Link Cited by: §2.2.
  • [3] D. Afchar, V. Nozick, J. Yamagishi, and I. Echizen (2018) Mesonet: a compact facial video forgery detection network. In 2018 IEEE International Workshop on Information Forensics and Security (WIFS), pp. 1–7. Cited by: §1.1.1.
  • [4] M. Arjovsky, S. Chintala, and L. Bottou (2017) Wasserstein gan. arXiv preprint arXiv:1701.07875. Cited by: §1.
  • [5] S. Bartunov and D. Vetrov (2018) Few-shot generative modelling with generative matching networks. In

    International Conference on Artificial Intelligence and Statistics

    pp. 670–678. Cited by: §1.
  • [6] A. Brock, J. Donahue, and K. Simonyan (2018) Large scale gan training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096. Cited by: §1.
  • [7] M. Brundage, S. Avin, J. Clark, H. Toner, P. Eckersley, B. Garfinkel, A. Dafoe, P. Scharre, T. Zeitzoff, B. Filar, et al. (2018) The malicious use of artificial intelligence: forecasting, prevention, and mitigation. arXiv preprint arXiv:1802.07228. Cited by: §1.1.1.
  • [8] R. Chesney and D. Citron (2019) Deepfakes and the new disinformation war: the coming age of post-truth geopolitics. Foreign Aff. 98, pp. 147. Cited by: §1.1.1.
  • [9] Y. Choi, M. Choi, M. Kim, J. Ha, S. Kim, and J. Choo (2018)

    Stargan: unified generative adversarial networks for multi-domain image-to-image translation


    Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition

    pp. 8789–8797. Cited by: §1.
  • [10] L. Clouâtre and M. Demers (2019) FIGR: few-shot image generation with reptile. arXiv preprint arXiv:1901.02199. Cited by: §1.
  • [11] B. Dai, S. Fidler, R. Urtasun, and D. Lin (2017) Towards diverse and natural image descriptions via a conditional gan. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2970–2979. Cited by: §1.
  • [12] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei (2009) Imagenet: a large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248–255. Cited by: §4.3.
  • [13] J. Donahue, P. Krähenbühl, and T. Darrell (2016) Adversarial feature learning. arXiv preprint arXiv:1605.09782. Cited by: §1.
  • [14] R. Durall, M. Keuper, F. Pfreundt, and J. Keuper (2019) Unmasking deepfakes with simple features. arXiv preprint arXiv:1911.00686. Cited by: footnote 2.
  • [15] R. Durall, F. Pfreundt, and J. Keuper (2019) Semi few-shot attribute translation. arXiv preprint arXiv:1910.03240. Cited by: §1.
  • [16] R. Durall, F. Pfreundt, and J. Keuper (2019) Stabilizing gans with octave convolutions. arXiv preprint arXiv:1905.12534. Cited by: §1.1.2.
  • [17] A. Dziedzic, J. Paparrizos, S. Krishnan, A. Elmore, and M. Franklin (2019-09–15 Jun) Band-limited training and inference for convolutional neural networks. In Proceedings of the 36th International Conference on Machine Learning, K. Chaudhuri and R. Salakhutdinov (Eds.), Proceedings of Machine Learning Research, Vol. 97, Long Beach, California, USA, pp. 1745–1754. Cited by: §1.1.2.
  • [18] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680. Cited by: §1, §4.3.
  • [19] D. Güera and E. J. Delp (2018) Deepfake video detection using recurrent neural networks. In 2018 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), pp. 1–6. Cited by: §1.1.1.
  • [20] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville (2017) Improved training of wasserstein gans. In Advances in neural information processing systems, pp. 5767–5777. Cited by: Figure 1, §1, §6.1.2.
  • [21] S. Gurumurthy, R. Kiran Sarvadevabhatla, and R. Venkatesh Babu (2017) Deligan: generative adversarial networks for diverse and limited data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 166–174. Cited by: §1.
  • [22] D. Harris (2018) Deepfakes: false pornography is here and the law cannot protect you. Duke L. & Tech. Rev. 17, pp. 99. Cited by: §1.1.1.
  • [23] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter (2017) Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Advances in Neural Information Processing Systems, pp. 6626–6637. Cited by: §4.3.
  • [24] C. Hsu, C. Lee, and Y. Zhuang (2018) Learning to detect fake face images in the wild. In 2018 International Symposium on Computer, Consumer and Control (IS3C), pp. 388–391. Cited by: §1.1.1.
  • [25] X. Huang, M. Liu, S. Belongie, and J. Kautz (2018) Multimodal unsupervised image-to-image translation. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189. Cited by: §1.
  • [26] S. Iizuka, E. Simo-Serra, and H. Ishikawa (2017) Globally and locally consistent image completion. ACM Transactions on Graphics (ToG) 36 (4), pp. 107. Cited by: §1.
  • [27] P. Isola, J. Zhu, T. Zhou, and A. A. Efros (2017) Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1125–1134. Cited by: §1.
  • [28] A. K. Jain (1989) Fundamentals of digital image processing. Englewood Cliffs, NJ: Prentice Hall. Cited by: §1.
  • [29] T. Karras, T. Aila, S. Laine, and J. Lehtinen (2017) Progressive growing of gans for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196. Cited by: §1, 3rd item, §6.1.1, Table 2.
  • [30] T. Karras, S. Laine, and T. Aila (2019) A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4401–4410. Cited by: §1, §1, 3rd item, §6.1.1, Table 2.
  • [31] Y. Katznelson (2004) An introduction to harmonic analysis. Cambridge University Press. Cited by: §2.3.
  • [32] N. Kodali, J. Abernethy, J. Hays, and Z. Kira (2017) On convergence and stability of gans. arXiv preprint arXiv:1705.07215. Cited by: Figure 1, §6.1.2.
  • [33] Y. Li, S. Liu, J. Yang, and M. Yang (2017) Generative face completion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3911–3919. Cited by: §1.
  • [34] Z. Liu, P. Luo, X. Wang, and X. Tang (2015) Deep learning face attributes in the wild. In Proceedings of the IEEE international conference on computer vision, pp. 3730–3738. Cited by: Figure 1, 2nd item, §6.1.2.
  • [35] P. Luc, C. Couprie, S. Chintala, and J. Verbeek (2016) Semantic segmentation using adversarial networks. arXiv preprint arXiv:1611.08408. Cited by: §1.
  • [36] J. MacQueen et al. (1967) Some methods for classification and analysis of multivariate observations. In Proceedings of the fifth Berkeley symposium on mathematical statistics and probability, Vol. 1, pp. 281–297. Cited by: §4.1.2.
  • [37] X. Mao, Q. Li, H. Xie, R. Y. Lau, Z. Wang, and S. Paul Smolley (2017) Least squares generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2794–2802. Cited by: Figure 1, §6.1.2.
  • [38] F. Marra, D. Gragnaniello, D. Cozzolino, and L. Verdoliva (2018) Detection of gan-generated fake images over social networks. In 2018 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR), pp. 384–389. Cited by: §1.1.1.
  • [39] F. Marra, D. Gragnaniello, L. Verdoliva, and G. Poggi (2018) Do gans leave artificial fingerprints?. CoRR abs/1812.11842. External Links: Link, 1812.11842 Cited by: Table 1.
  • [40] L. Metz, B. Poole, D. Pfau, and J. Sohl-Dickstein (2016) Unrolled generative adversarial networks. External Links: 1611.02163 Cited by: §1.1.2.
  • [41] M. Mirza and S. Osindero (2014) Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784. Cited by: §1.
  • [42] S. Mo, M. Cho, and J. Shin (2019) Instance-aware image-to-image translation. In International Conference on Learning Representations, External Links: Link Cited by: §1.
  • [43] A. Nguyen, A. Dosovitskiy, J. Yosinski, T. Brox, and J. Clune (2016) Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. In Advances in Neural Information Processing Systems, pp. 3387–3395. Cited by: §1.
  • [44] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer (2017) Automatic differentiation in pytorch. Cited by: §2.2.
  • [45] D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros (2016) Context encoders: feature learning by inpainting. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2536–2544. Cited by: §1.
  • [46] Y. Pu, Z. Gan, R. Henao, X. Yuan, C. Li, A. Stevens, and L. Carin (2016) Variational autoencoder for deep learning of images, labels and captions. In Advances in neural information processing systems, pp. 2352–2360. Cited by: §1.
  • [47] A. Radford, L. Metz, and S. Chintala (2015) Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434. Cited by: Figure 1, §1, §6.1.2.
  • [48] S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee (2016) Generative adversarial text to image synthesis. arXiv preprint arXiv:1605.05396. Cited by: §1.
  • [49] A. Rössler, D. Cozzolino, L. Verdoliva, C. Riess, J. Thies, and M. Nießner (2019) FaceForensics++: learning to detect manipulated facial images. In International Conference on Computer Vision (ICCV), Cited by: 1st item, §6.1.3.
  • [50] K. Roth, A. Lucchi, S. Nowozin, and T. Hofmann (2017) Stabilizing training of generative adversarial networks through regularization. In Advances in Neural Information Processing Systems. Cited by: §1.1.2.
  • [51] B. Scholkopf and A. J. Smola (2001) Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT press. Cited by: §4.1.2.
  • [52] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna (2016) Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2818–2826. Cited by: §4.3.
  • [53] Y. Xue, T. Xu, H. Zhang, L. R. Long, and X. Huang (2018) Segan: adversarial network with multi-scale l 1 loss for medical image segmentation. Neuroinformatics 16 (3-4), pp. 383–392. Cited by: §1.
  • [54] R. A. Yeh, C. Chen, T. Yian Lim, A. G. Schwing, M. Hasegawa-Johnson, and M. N. Do (2017) Semantic image inpainting with deep generative models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5485–5493. Cited by: §1.
  • [55] D. Yin, R. G. Lopes, J. Shlens, E. D. Cubuk, and J. Gilmer (2019) A fourier perspective on model robustness in computer vision. CoRR abs/1906.08988. External Links: Link, 1906.08988 Cited by: §1.1.2.
  • [56] J. Yu, Z. Lin, J. Yang, X. Shen, X. Lu, and T. S. Huang (2018) Generative image inpainting with contextual attention. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5505–5514. Cited by: §1.
  • [57] N. Yu, L. Davis, and M. Fritz (2019-10) Attributing fake images to gans: learning and analyzing gan fingerprints. In International Conference on Computer Vision (ICCV), External Links: Link Cited by: §1.1.1, Table 1, footnote 4.
  • [58] H. Zhang, T. Xu, H. Li, S. Zhang, X. Wang, X. Huang, and D. N. Metaxas (2017) Stackgan: text to photo-realistic image synthesis with stacked generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 5907–5915. Cited by: §1.
  • [59] H. Zhang, T. Xu, H. Li, S. Zhang, X. Wang, X. Huang, and D. N. Metaxas (2018) Stackgan++: realistic image synthesis with stacked generative adversarial networks. IEEE transactions on pattern analysis and machine intelligence 41 (8), pp. 1947–1962. Cited by: §1.
  • [60] J. Zhu, T. Park, P. Isola, and A. A. Efros (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2223–2232. Cited by: §1.
  • [61] J. Zhu, R. Zhang, D. Pathak, T. Darrell, A. A. Efros, O. Wang, and E. Shechtman (2017) Toward multimodal image-to-image translation. In Advances in Neural Information Processing Systems, pp. 465–476. Cited by: §1.

Supplemental Material

The supplementary material of our paper contains additional details on the presented experiments, as well as supporting experiments that help to build a better understanding of the spectral properties of up-convolution units.

6 Using Spectral Distortions to Detect Deepfakes

In this section, we provide more detailed results of the experiments presented in section 4.1 of the paper.

6.1 More Details on the used Datasets

6.1.1 Faces-HQ

To the best of our knowledge, no public dataset currently provides high-resolution face images annotated as real or fake. Therefore, we have created our own dataset from established sources, called Faces-HQ (the Faces-HQ data has a size of 19GB). In order to obtain a sufficient variety of faces, we have chosen to download and label the images available from the CelebA-HQ dataset [29], the Flickr-Faces-HQ dataset [30], the 100K Faces project [1] and www.thispersondoesnotexist.com. In total, we have collected 40K high-quality images, half of them real and the other half fake faces. Table 2 contains a summary.
Training Setting: we divide the transformed data into training and testing sets, using 20% for testing and the remaining 80% for training. We then train a classifier on the training data and evaluate its accuracy on the testing set.

source # of samples category label
CelebA-HQ dataset [29] 10000 Real 0
Flickr-Faces-HQ dataset [30] 10000 Real 0
100K Faces project [1] 10000 Fake 1
www.thispersondoesnotexist.com 10000 Fake 1
Table 2: Faces-HQ dataset structure.

6.1.2 CelebA

The CelebFaces Attributes (CelebA) dataset [34] consists of 202,599 celebrity face images annotated with 40 facial attributes. The face images have dimensions of 178x218x3, which we consider medium resolution in our context.
Training Setting: While we can use the real images from the CelebA dataset directly, we need to generate the fake examples ourselves. We therefore use the real data to train one DCGAN [47], one DRAGAN [32], one LSGAN [37] and one WGAN-GP [20] to generate realistic fake images. We split the dataset into 162,770 images for training and 39,829 for testing, and we crop and resize the initial 178x218x3 images to 128x128x3. Once the models are trained, we can conduct the classification experiments at medium resolution.
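The crop-and-resize preprocessing above can be sketched as follows. This is a minimal, dependency-free version using a center crop and nearest-neighbor sampling; the paper does not specify the exact crop region or interpolation method, so both are assumptions:

```python
import numpy as np

def center_crop_resize(arr, size=128):
    """Center-crop an HxWxC image (e.g. a 218x178x3 CelebA image)
    to a square, then resize it to (size, size) by nearest-neighbor
    index sampling."""
    h, w = arr.shape[:2]
    side = min(h, w)
    top, left = (h - side) // 2, (w - side) // 2
    crop = arr[top:top + side, left:left + side]
    # Nearest-neighbor resize: pick every (side/size)-th row and column.
    idx = np.arange(size) * side // size
    return crop[np.ix_(idx, idx)]
```

In practice one would use a proper image library (e.g. PIL or torchvision) with bilinear interpolation; the sketch only illustrates the shape transformation from 178x218x3 to 128x128x3.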

6.1.3 FaceForensics++

FaceForensics++ [49] is a collection of image forensics datasets containing video sequences that have been modified with different automated face manipulation methods. One subset is the DeepFakeDetection dataset, which contains 363 original sequences from 28 paid actors in 16 different scenes, as well as over 3000 videos manipulated with DeepFakes and their corresponding binary masks. All videos contain a trackable, mostly frontal face without occlusions, which enables automated tampering methods to generate realistic forgeries.
Training Setting: the pipeline employed for this dataset is the same as for the Faces-HQ and CelebA datasets, but with one additional block. Since the DeepFakeDetection dataset contains videos, we first need to extract the frames and then crop the inner faces from them. Because the scene content differs between videos, these cropped faces have different sizes. We therefore interpolate the 1D Power Spectrum to a fixed size (300 components) and normalize it by dividing by its zero-frequency component.
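The spectrum extraction step can be sketched in NumPy as follows. This is a minimal version under our assumptions (azimuthal averaging of the shifted 2D power spectrum, linear interpolation to 300 components, normalization by the zero-frequency value); the function names are illustrative, not the authors' code:

```python
import numpy as np

def azimuthal_average(power_2d):
    """Radially average a 2D power spectrum around its center."""
    h, w = power_2d.shape
    cy, cx = h // 2, w // 2
    y, x = np.indices((h, w))
    r = np.hypot(x - cx, y - cy).astype(int)
    # Sum of power values and pixel counts per integer radius.
    total = np.bincount(r.ravel(), weights=power_2d.ravel())
    count = np.bincount(r.ravel())
    return total / np.maximum(count, 1)

def spectral_feature(image, n_points=300):
    """1D power-spectrum feature: FFT -> shift -> azimuthal average
    -> interpolate to a fixed length -> normalize by the DC value."""
    f = np.fft.fftshift(np.fft.fft2(image))
    psd_1d = azimuthal_average(np.abs(f) ** 2)
    xs = np.linspace(0, len(psd_1d) - 1, n_points)
    feat = np.interp(xs, np.arange(len(psd_1d)), psd_1d)
    return feat / feat[0]  # zero-frequency component becomes 1
```

The resulting 300-dimensional vector is what the classifiers in the next section operate on, regardless of the original crop size.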

6.2 Experimental Results

6.2.1 Spectral Distributions

Figures 15, 16 and 17 show the azimuthally integrated (AI) spectral distributions of all datasets. In all three cases, it is evident that a classifier should be able to separate real and fake samples. Moreover, based on our theoretical analysis (see section 2.3 in the paper), one can assume that the generators behind the fakes in the Faces-HQ and FaceForensics++ datasets either used up+conv based up-convolutions or successively blurred the generated images (given the drop in high frequencies), whereas the fakes in the CelebA experiments were generated with transconv based up-convolutions.

Figure 15: Statistics (mean and variance) of the Faces-HQ dataset.
Figure 16: Statistics (mean and variance) of the FaceForensics++, DeepFakeDetection dataset.
Figure 17: Statistics (mean and variance) of the CelebA dataset: average of images generated by the different GAN schemes (DCGAN, DRAGAN, LSGAN and WGAN-GP).

Figure 18 gives some additional data examples and their according spectral properties for the FaceForensics++ data.

Figure 18: FaceForensics++ data. Top: example of one real face (left) and two deepfake faces, fake 1 (center) and fake 2 (right). Notice that the modifications only affect the inner face. Bottom: normalized and interpolated 1D Power Spectrum from the previous images.

6.2.2 T-SNE Evaluation

Figure 19 shows the clustering properties of our AI features. The two classes (real and fake) form well-separated clusters, so a classifier should have no problem telling them apart.

Figure 19: T-SNE visualization of 1D Power Spectrum on a random subset from Faces-HQ data set. We used a perplexity of 4 and 4000 iterations to produce the plot.
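A plot of this kind can be produced with scikit-learn's T-SNE implementation. The sketch below uses random stand-in features in place of the actual 1D Power Spectrum vectors, so only the call pattern (2 output dimensions, perplexity 4) mirrors the figure:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
feats = rng.normal(size=(50, 300))  # stand-in 1D power-spectrum features

# Embed the 300-dimensional features into 2D for visualization.
emb = TSNE(n_components=2, perplexity=4, random_state=0).fit_transform(feats)
```

The 2D embedding `emb` can then be scattered with one color per class label (real/fake) to reproduce the separation seen in the figure.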

6.2.3 Detection Results Depending on the Number of Available Samples

In this section, we show additional results on the DeepFake detection task (table 1 in the paper). In tables 3, 4 and 5, we focus on the effect of the number of data samples available during training. As shown in the paper, our approach works quite well in an unsupervised setting and needs as few as 16 annotated training samples to achieve 100% classification accuracy in a supervised setting.

80% (train) - 20% (test)
# samples SVM Logistic Reg. K-Means
4000 100% 100% 82%
1000 100% 100% 82%
100 100% 100% 81%
20 100% 100% 75%
Table 3: Faces-HQ: Test accuracy using SVM, logistic regression and k-means under different data settings.

80% (train) - 20% (test)
# samples SVM Logistic Reg. K-Means
2000 100% 100% 96%
100 100% 95% 100%
20 100% 85% 100%
Table 4: CelebA: Test accuracy using SVM, logistic regression and k-means.
80% (train) - 20% (test)
# samples SVM Logistic Reg.
2000 85% 78%
1000 82% 76%
200 77% 73%
20 66% 76%
Table 5: FaceForensics++: Test accuracy using SVM classifier and logistic regression classifier under different data settings. Evaluated on single frames.
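The classification stage behind these tables can be sketched with scikit-learn. The features below are random stand-ins for the 1D spectrum features (well-separated by construction), so the resulting accuracies are illustrative only, not the paper's numbers:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical stand-in features: fake spectra shifted relative to real ones.
X_real = rng.normal(0.0, 1.0, size=(200, 300))
X_fake = rng.normal(1.0, 1.0, size=(200, 300))
X = np.vstack([X_real, X_fake])
y = np.array([0] * 200 + [1] * 200)

# Same 80%/20% train/test protocol as in the tables above.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

svm_acc = SVC().fit(X_tr, y_tr).score(X_te, y_te)
log_acc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)

# k-means is unsupervised: cluster the test set, then align cluster ids
# with the true labels (accuracy is invariant to label permutation).
pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_te)
km_acc = max(np.mean(pred == y_te), np.mean(pred != y_te))
```

Varying the number of training samples, as in tables 3-5, amounts to subsampling `X_tr` before fitting.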

7 Spectral Regularization on Auto-Encoder

In this second section, we show some additional results from our AE experiments (see figure 4 of the paper).

7.1 Loss during Training

Figure 20 shows the evolution of the loss (see equations 10 and 11 in the paper) with and without spectral regularization for a decoder with 3 convolutional layers of 3 filters each.

Figure 20: Evolution of the different losses defining the training objective of our AE. Top: Mean Square Error (MSE) during training. Bottom: Binary Cross-Entropy (BCE) loss during training.

These results show that the spectral regularization also has a positive effect on the convergence of the AE and the quality of the generated output images (in terms of MSE).
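A hypothetical sketch of such a spectral regularization term in PyTorch is shown below. This is not the paper's exact implementation: the bin count, the DC normalization, and the BCE formulation over clamped spectra are our assumptions for illustration:

```python
import torch
import torch.nn.functional as F

def radial_spectrum(img, n_bins=32):
    """Azimuthally averaged 2D power spectrum, one 1D profile per image.
    `img` is a batch of grayscale images of shape (B, H, W); `n_bins`
    should not exceed min(H, W) // 2 so every bin receives pixels."""
    f = torch.fft.fftshift(torch.fft.fft2(img), dim=(-2, -1))
    power = (f.abs() ** 2).flatten(1)                       # (B, H*W)
    h, w = img.shape[-2], img.shape[-1]
    y, x = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    r = torch.hypot((x - w // 2).float(), (y - h // 2).float()).long()
    r = r.clamp(max=n_bins - 1).flatten()                   # (H*W,)
    onehot = F.one_hot(r, n_bins).float()                   # (H*W, n_bins)
    return (power @ onehot) / onehot.sum(0)                 # (B, n_bins)

def spectral_loss(fake, real, n_bins=32):
    """Regularizer matching the normalized radial spectra of generated
    and real images with a binary cross-entropy, in the spirit of the
    paper's spectral regularization term."""
    sf = radial_spectrum(fake, n_bins)
    sr = radial_spectrum(real, n_bins)
    sf = sf / sf[:, :1]  # normalize by the zero-frequency component
    sr = sr / sr[:, :1]
    return F.binary_cross_entropy(sf.clamp(0, 1), sr.clamp(0, 1))
```

During training, such a term would simply be added (with a weighting factor) to the AE reconstruction loss or the GAN generator loss.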

7.2 Effect of the Spectral Regularization

Figure 21 shows the impact of the spectral regularization on the AE problem. Both transconv and up+conv exhibit distinct distortions in the frequency spectrum domain, especially in the high-frequency components. Nevertheless, after applying our spectral regularization technique, the results get much closer to the real 1D Power Spectrum distribution, and the generated images move closer to the real distribution.

Figure 21: AE results for the baselines (transconv and up+conv) and for the proposed approach with spectral loss (corrected). The corrected AE has 3 additional convolutional layers after the last transconv layer, each with 32 filters of size 5x5.

7.3 Effect of different Topologies

In this experiment, we evaluate the impact of different topology design choices. Figure 22 shows statistics of the spectral distributions for some topologies:

  • Real: original face images from CelebA

  • DCGAN_v1: a DCGAN topology with spectral regularization and one convolution layer (32 5x5 filters) after the last two up-convolutions.

  • DCGAN_v2: a DCGAN topology with spectral regularization and two convolution layers (32 5x5 filters) after the last up-convolution.

  • DCGAN_v3: a DCGAN topology with spectral regularization and one convolution layer (32 5x5 filters) after every up-convolution.

  • DCGAN_v4: a DCGAN topology with spectral regularization and three convolution layers (32 5x5 filters) after the last up-convolution.

Figure 22: AE results for different topologies applied to DCGAN. Each version adds a different number of convolutional layers to the DCGAN structure.

Following the theoretical analysis, and after a rough topology search for verification, we conclude that adding three 5x5 convolutional layers after the last up-convolution is sufficient to make use of the spectral regularization.
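This topology change can be sketched in PyTorch as a small head appended after the generator's final up-convolution. Channel counts, activations and padding below are illustrative assumptions, not the authors' exact configuration:

```python
import torch
import torch.nn as nn

class CorrectedHead(nn.Module):
    """Three 5x5 convolutions (32 filters each, padding 2 to preserve
    resolution) appended after the last up-convolution of a DCGAN-style
    generator, matching the conclusion above."""

    def __init__(self, in_ch=3, mid_ch=32):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, in_ch, kernel_size=5, padding=2),
            nn.Tanh(),  # keep outputs in the usual [-1, 1] image range
        )

    def forward(self, x):
        return self.head(x)
```

Because padding preserves the spatial size, the head can be dropped behind any existing generator output without changing the image resolution, e.g. `CorrectedHead()(torch.randn(1, 3, 128, 128))` returns a tensor of the same shape.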