With the rapid advancement of deep learning and computer vision techniques, e.g., generative adversarial networks (GANs), it has become possible to generate highly realistic fake images and videos. These techniques enable an attacker to manipulate an image or video by swapping its content with alternative content, thereby synthesizing a new image or video. For instance, DeepFake and FaceSwap can generate manipulated videos of real people performing fictional actions, and even human eyes have difficulty distinguishing these forgeries from real ones [7, 12]. Such forged images and videos can be further shared on social media for malicious purposes, such as spreading fake news about celebrities, influencing elections, or manipulating stock prices, and thus might cause serious negative impacts on our society. To help mitigate these adverse effects, it is essential to develop methods to detect the manipulated forgeries.
Current manipulated forgery detection approaches roughly fall into two branches: CNN-based methods and artifact-based methods. Methods of the first category are usually formulated as a binary classification problem: they take either the whole or part of an image as input and classify it as fake or real using various convolutional network architectures [22, 21, 1]. The second category relies on hypotheses about artifacts or inconsistencies in a video or image, such as a lack of realistic eye blinking, face warping artifacts, or a lack of self-consistency.
Although abundant effort has been devoted to forensics, developing detection methods that are readily applicable to real-world applications remains challenging. The main challenge lies in the limited generalization capability on previously unseen forgeries. Firstly, as evidenced by recent work [6, 15], although the detection accuracy of current methods on the hold-out test set can reach 99% for most tasks, it drops to around 50% on previously unseen forgery images/videos. Secondly, in our preliminary experiments, using heatmaps produced by local interpretation methods [27, 32], we observed that these models fail to focus on the forgery regions when making detections. Instead, they concentrate on non-forgery parts and learn spurious correlations to separate real and fake images. Due to the independent and identically distributed (i.i.d.) training-test split of the data, these spurious patterns happen to be predictive on the hold-out test set, whereas forgeries generated by alternative methods may not contain them. This can explain the high accuracy on the hold-out test set and the low accuracy on alternative test sets. Thirdly, to some extent these methods have "solved" the training dataset, but that is still far from solving the forgery detection problem. As new types of forgery manipulations emerge quickly, forensic methods without sufficient generalization capability cannot be readily applied to real-world data [30, 15].
To bridge the generalization gap, we propose to exploit two distinct characteristics of manipulated forgeries: their fine-grained nature and their spatial locality. Firstly, forgery detection is a fine-grained classification task. The differences between real and fake images are so subtle that even human eyes struggle to distinguish them. Secondly, the forgery occupies only a certain portion of the whole input image. For instance, DeepFake videos use GAN-based technology to replace one person's face with another's. This manipulation changes the human face while leaving the background unchanged. Considering these two properties, a desirable detection model should be able to concentrate on the forgery region to learn effective representations. As such, the detection model needs to possess local interpretability, which indicates the region the model attends to when making decisions. The benefit is that we can control the local interpretation explicitly by imposing extra supervision on instance interpretations during learning, in order to enforce the model to focus on the forgery region when learning representations.
In this work, based on the aforementioned observations, we introduce the Locality-aware AutoEncoder (LAE) for better generalization of forgery detection. LAE combines fine-grained representation learning and enforcement of locality in a single framework for image forensics. To guarantee fine-grained representation learning, our work builds upon an autoencoder, which employs reconstruction losses and a latent space loss to capture the distribution of the training images. To guard against spurious correlations learned by the autoencoder, we augment the autoencoder with local interpretability and use extra pixel-wise forgery ground truth to regularize the local interpretation. As such, LAE is enforced to capture discriminative representations from the forgery region. We further employ an active learning framework to reduce the effort of creating pixel-wise forgery masks. We evaluate and compare our approach with existing methods on three challenging forgery detection tasks. Our proposed model not only achieves state-of-the-art generalization performance on all tasks, but also shows improved interpretability. The major contributions of this paper are summarized as follows:
We propose a manipulated forgery detection method, called LAE, which makes predictions relying on correct evidence in order to boost generalization accuracy.
We present an active learning framework to reduce the annotation efforts, where less than 1% annotations are needed to regularize LAE during training.
Experimental results on three forgery detection tasks validate that LAE could achieve high generalization accuracy on previously unseen forgeries generated by alternative manipulation methods.
In this section, we first introduce the basic notations used in this paper. Then we present the generalizable forgery detection problem that we aim to tackle.
Notations: The source dataset contains both real images and fake images generated by a forgery method, and is split into a training set, a validation set, and a test set, with fake and real class labels. A detection model is learned from the training set. Beyond the source dataset, there is also a target dataset, which is used to test the generalization ability of the model on unseen forgery manipulations. Fake images in the source and target datasets belong to the same kind of forgery task, but are not generated by the same forgery methods. For example, in the DeepFake-alike human face manipulation detection task, the source dataset contains forgery images created by FaceSwap, while fake images in the target dataset are created by an alternative forgery method, such as Face2Face. Besides, the target dataset only serves the testing purpose, and none of its images are used to train the model or tune hyperparameters.
Generalizable Forgery Detection: Our objective is to train a model that generalizes well to forgeries generated by other methods, as long as they belong to the same detection task. For instance, for the face manipulation detection task, we expect a model trained on FaceSwap to generalize to alternative manipulation methods, such as Face2Face. This is significant in real-world scenarios, since new manipulation methods may emerge day by day, and retraining the detector is difficult or even impractical due to the lack of sufficient labeled data from the new manipulation methods.
Autoencoder for Forgery Detection
In this section, we introduce the autoencoder model for manipulated forgery detection. A key characteristic of forgery detection lies in its fine-grained nature; thus, effective representations are needed for both real and fake images to ensure high detection accuracy. We therefore use an autoencoder to learn more distinguishable representations that separate real and fake images in the latent space.
The autoencoder consists of two sub-networks, an encoder and a decoder, each with its own parameters. The encoder maps the input image to a low-dimensional latent vector, and the decoder remaps the latent vector back to the input space. To enforce our model to learn more meaningful and intrinsic features, we introduce a latent space loss as well as a reconstruction loss; these two losses are elaborated in the following sections.
Latent Space Loss
We make use of the latent space representation to distinguish forgery images from real ones. The latent vector is first split into two parts, one corresponding to the true class and one to the fake class. The total activation of each part is obtained by summing over its units.
The final latent space loss is defined as follows:
The key idea of this loss is to enforce the true partition to be maximally activated when the ground-truth label of the input image is real, and similarly to increase the fake partition's activation values for fake inputs. At the testing stage, forgery detection is based on the activation values of the two latent space partitions: the input image is classified as real if the total activation of the true partition exceeds that of the fake partition, and vice versa.
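The test-time decision rule above can be sketched as follows, assuming (hypothetically) that the first half of the latent vector is the "true" partition and the second half the "fake" partition:

```python
import numpy as np

def latent_prediction(z):
    """Classify an input as real or fake from its latent code.

    The latent vector z is split into a 'true' half and a 'fake' half;
    the class whose partition has the larger total activation wins.
    (The split point being exactly the midpoint is an assumption.)
    """
    half = len(z) // 2
    a_true = np.sum(z[:half])   # total activation of the 'true' partition
    a_fake = np.sum(z[half:])   # total activation of the 'fake' partition
    return "true" if a_true > a_fake else "fake"
```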
Reconstruction Loss
To make fake and real images more distinguishable in the latent space, it is essential to learn effective representations. Specifically, we use a reconstruction loss that contains three parts, a pixel-wise loss, a perceptual loss, and an adversarial loss, to learn intrinsic representations for all training samples. The overall reconstruction loss is defined as follows:
The pixel-wise loss is measured using the mean absolute error (MAE) between the original input image pixels and the reconstructed image pixels. For the perceptual loss, a pretrained comparator (e.g., VGGNet) maps images to a feature space, and the MAE in that feature space is calculated, representing the high-level semantic difference between the input and its reconstruction. For the adversarial loss, a discriminator is introduced that aims to distinguish the generated images from real ones. This subnetwork is the standard discriminator network introduced in DCGAN, and it is trained concurrently with our autoencoder. The autoencoder is trained to trick the discriminator into classifying the generated images as real. The discriminator is trained using the following objective:
Weighting parameters are employed to adjust the impact of the individual losses. The three losses respectively ensure that the reconstructed image: 1) is sound in pixel space, 2) is reliable in the high-level feature space, and 3) looks realistic. The implicit effect is to force the latent vector to learn an intrinsic representation that better separates fake and real images. Besides, using all three losses instead of only the pixel-wise loss helps stabilize training in fewer epochs.
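The weighted combination of the three reconstruction terms can be sketched as below. The helper names, the default weight values, and the plain `-log` adversarial term are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def reconstruction_loss(x, x_rec, feat, d_score,
                        lam_pix=1.0, lam_per=1.0, lam_adv=0.01):
    """Weighted sum of the three reconstruction terms (a sketch).

    x, x_rec : input image and its reconstruction (arrays)
    feat     : callable mapping an image to comparator features
    d_score  : discriminator's probability that x_rec is real
    """
    l_pix = np.mean(np.abs(x - x_rec))              # pixel-wise MAE
    l_per = np.mean(np.abs(feat(x) - feat(x_rec)))  # perceptual MAE
    l_adv = -np.log(d_score + 1e-8)                 # fool the discriminator
    return lam_pix * l_pix + lam_per * l_per + lam_adv * l_adv
```

A perfect reconstruction that fully fools the discriminator drives all three terms towards zero.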
Locality-aware AutoEncoder (LAE)
The key idea of LAE is that the model should focus on correct regions and exploit reasonable evidence, rather than capture dataset biases, to make predictions. Due to the purely data-driven training paradigm, the autoencoder developed in the last section is not guaranteed to focus on the forgery region. Instead, it may capture spurious correlations that happen to be predictive in the current dataset, leading to decreased generalization performance on unseen data generated by alternative forgery methods. In LAE (as illustrated in Fig. 1), we explicitly enforce the model to rely on the forgery region when making detection predictions, by augmenting the model with local interpretability and regularizing the interpretation with extra supervision. Besides, we design an active learning framework to select challenging candidates for regularizing LAE.
Augmenting Local Interpretability
The goal of local interpretation is to identify the contribution of each pixel in the input image towards a specific model prediction. The interpretation is illustrated in the form of a heatmap (or attention map). Inspired by the CNN local interpretation method Class Activation Map (CAM), we use a global average pooling (GAP) layer in the encoder, as illustrated in Fig. 1. This enables the encoder to output an attention map for each input. Each channel of the last convolutional layer of the encoder produces an activation matrix for the input image, and each channel has a weight towards each unit of the latent vector. The CAM attention map for a latent unit is the weighted sum of the channel activation maps:
We then upsample the attention map to the same dimensions as the input image using bilinear interpolation. Each entry in the map directly indicates the importance of the corresponding spatial grid of the image in producing the latent activation. The final attention map for an input image is obtained by aggregating the per-unit maps over the units of the latent vector.
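A minimal sketch of the CAM computation for one latent unit, before upsampling, might look like the following (the array shapes and variable names are assumptions):

```python
import numpy as np

def cam_attention(activations, weights, unit):
    """Class Activation Map for one latent unit (sketch).

    activations : (C, H, W) feature maps of the last conv layer
    weights     : (C, K) weights from channel averages to latent units
    unit        : index of the latent unit to explain

    Returns an (H, W) map; bilinear upsampling to image size is omitted.
    """
    # Weighted sum of the C channel maps, contracted over the channel axis.
    cam = np.tensordot(weights[:, unit], activations, axes=(0, 0))
    return cam  # larger values = more influential spatial regions
```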
Regularizing Local Interpretation
To enforce the network to focus on the correct forgery region when making detections, a straightforward way is to use instance-level forgery ground truth to regularize the local interpretation. Specifically, the regularization is achieved by minimizing the distance between each interpretation map and the extra supervision for all forgery images. The attention loss is defined as follows:
The extra supervision is annotated ground truth for the forgery, given in the format of a pixel-wise binary segmentation mask (see Fig. 1 for an illustrative example). The attention loss is end-to-end trainable and can be used to update the model parameters. Ultimately, the trained model can focus on the manipulated regions when making decisions.
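As a sketch, the attention loss over the annotated images could be computed as below; since the exact distance is not spelled out here, the L1 distance is an assumption:

```python
import numpy as np

def attention_loss(att_maps, masks):
    """Average distance between attention maps and the annotated
    pixel-wise forgery masks (a sketch; L1 distance is an assumption).

    att_maps, masks : sequences of (H, W) arrays of matching shapes
    """
    return float(np.mean([np.mean(np.abs(a - m))
                          for a, m in zip(att_maps, masks)]))
```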
Active Learning to Regularize Local Interpretation
However, generating pixel-wise segmentation masks is extremely time consuming, especially if we were to label all forgery images in a dataset. We are therefore interested in using only a small fraction of the data with extra supervision. In this section, we propose an active learning framework to select challenging candidates for annotation. We describe below how the active learning works in three steps.
Channel concept ranking. Due to the hierarchical structure of the encoder, the last convolutional layer is most likely to capture high-level semantic concepts. In our case, this layer has 512 channels. A desirable detector would possess channels that are responsive to specific, semantically meaningful natural parts (e.g., face, mouth, or eyes), while other channels capture concepts related to the forgery (e.g., warping artifacts or contextual inconsistency). In practice, however, the detector may rely on spurious patterns that only exist in the training set to make forgery predictions. The samples that drive such spurious concepts are considered the most challenging cases, since they cause the model to overfit to dataset-specific biases and artifacts.
We intend to select a subset of channels in the last convolutional layer deemed most influential to the forgery classification decision. The contribution of a channel towards a decision is defined as the channel's average activation score for an image. We then learn a linear model over these per-channel contributions to predict the probability that an image is fake. The loss function is defined as follows:
After training, we select the 10 largest components of the optimized linear weight vector; the corresponding channels are considered most relevant to the forgery decision.
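Selecting the top-weighted channels from the trained linear model can be sketched as:

```python
import numpy as np

def top_forgery_channels(w, k=10):
    """Pick the k channels whose linear weights contribute most to the
    'fake' prediction (sketch: the k largest weight components).

    w : (C,) weight vector of the trained linear model
    """
    return np.argsort(w)[::-1][:k]  # indices of the k largest weights
```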
Active candidate selection. After locating the channels most relevant to the forgery prediction, we feed all fake images into the LAE model. The images with the highest activation values over these top 10 channels are deemed the challenging cases. The key idea behind this choice is that these highest-activation images are most likely to contain easy patterns that the model can exploit to separate real and fake images, patterns that do not generalize beyond the training and hold-out test sets. We therefore request pixel-wise forgery masks for these images and regularize the model on them. Based on this criterion, we select a set of images as active candidates. The number of candidates is less than 1% of all images, which empirically yields significant improvement in generalization accuracy. Compared to the total number of training samples (more than 10k), this dramatically reduces the labelling effort.
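The candidate selection step can be sketched as follows, where the per-image average channel activations are assumed to be precomputed:

```python
import numpy as np

def select_candidates(channel_means, top_channels, n):
    """Rank fake images by their total activation over the top forgery
    channels and return the indices of the n highest (sketch).

    channel_means : (num_images, C) per-image average channel activations
    top_channels  : indices of the channels most relevant to forgery
    n             : number of candidates to annotate
    """
    scores = channel_means[:, top_channels].sum(axis=1)
    return np.argsort(scores)[::-1][:n]  # highest-scoring images first
```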
Local interpretation loss. Equipped with the active candidates, we request pixel-wise forgery masks for those images. The attention loss is calculated as the distance between the interpretation map and the annotated forgery mask for all candidate images, and is combined with the latent space loss to update the model parameters.
The overall learning algorithm of LAE is presented in Algorithm 1. We apply a two-stage optimization to derive a generalizable forgery detector. In the first stage, we use the loss in Eq.(2) to learn an effective representation. In the second stage, we need the model to focus on forgery regions to learn better representations, so we exploit the active learning framework to select challenging candidates and obtain their pixel-wise forgery masks. We then reduce the learning rate to one-tenth every 3 epochs and fine-tune the parameters of the encoder using the loss in Eq.(11). After training, at the testing stage we use the latent space activation in Eq.(3) to distinguish forgeries from real images: a test image is classified as real if the total activation of the true partition exceeds that of the fake partition, and vice versa.
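The fine-tuning learning-rate schedule described above (one-tenth every 3 epochs) can be written as a small helper:

```python
def finetune_lr(base_lr, epoch):
    """Learning rate for the fine-tuning stage: multiply by 0.1
    every 3 epochs (sketch of the stated schedule)."""
    return base_lr * (0.1 ** (epoch // 3))
```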
We conduct experiments to answer the following research questions. (1) Does LAE promote the generalization accuracy when processing unseen instances, especially for those produced by alternative methods? (2) Does LAE provide better attention maps after augmenting extra supervision in the training process? (3) How do different components and hyperparameters affect the performance of LAE?
In this section, we introduce the overall experimental setup, including tasks, datasets, baseline methods, network architectures, and implementation details.
Tasks and datasets. The overall empirical evaluation is performed on three types of forgery detection tasks. For each task, we use two datasets: a source dataset and a target dataset. The source dataset is split into training, validation, and test sets, which are used to train the model, tune the hyperparameters, and test model accuracy respectively. In contrast, the target dataset contains forgery images generated by an alternative method and is only utilized to assess the true generalization ability of the detection models. The corresponding dataset statistics are given in Tab. 1. All subsets of the three tasks are balanced, with a 1:1 ratio of real to fake images.
DeepFake-alike Face Manipulation This task explores human face manipulations, where the face of one person in a video is swapped with that of another. We use videos from FaceForensics++. The source dataset is generated using the graphics-based manipulation method Face2Face, while the target one is obtained via the manipulation method FaceSwap. The videos are compressed using H.264 (https://www.h264encoder.com/) with the quantization parameter set to 23. There are 1000 videos for each of the real, source, and target datasets. Each dataset is split into 720, 140, and 140 videos for training, validation, and testing respectively. Finally, we use 200 frames per video for training and 10 frames per video for testing.
GAN-based Attribute Modification In this task, we test the detection of GAN-based attribute modification images. Real images from the CelebA dataset are modified with two methods, StarGAN and Glow, which serve as the source and target datasets respectively. All images are 256×256 pixels. The modified attributes include hair color, smile, etc.
Baseline methods. We evaluate LAE by comparing it with six baselines (see more details in supplemental file).
SuppressNet: A generic manipulation detector . An architecture is specifically designed to adaptively suppress the high-level content of the image.
ResidualNet: Residual-based descriptors are used for forgery detection .
ForensicTransfer: AutoEncoder-based detector is designed to adapt well to novel manipulation methods . For a fair comparison with others, we use their version that is not fine-tuned on target dataset.
| Encoder layer | Output shape | Decoder layer | Output shape |
|---|---|---|---|
| Conv2d | [64, 128, 128] | ConvTranspose2d | [256, 4, 4] |
| Relu | [64, 128, 128] | BatchNorm2d & Relu | [256, 4, 4] |
| BatchNorm2d | [128, 64, 64] | BatchNorm2d & Relu | [128, 8, 8] |
| Conv2d | [256, 32, 32] | BatchNorm2d & Relu | [64, 16, 16] |
| Relu | [256, 32, 32] | BatchNorm2d & Relu | [32, 32, 32] |
| BatchNorm2d | [512, 16, 16] | BatchNorm2d & Relu | [16, 64, 64] |
| Conv2d | [512, 16, 16] | BatchNorm2d & Relu | [8, 128, 128] |
Network architectures. For the encoder and decoder, we use a structure similar to U-net. Details of the layers and corresponding output shapes are given in Tab. 2. AvgPool2d denotes the global average pooling layer, which transforms the [512, 16, 16] activation map into a 512-dimensional vector. A Linear layer then turns it into the 128-dimensional latent vector (see Fig. 1). For the comparator, we use the 16-layer VGGNet, and the activation after the 10th convolutional layer, with output shape [512, 28, 28], is used to calculate the perceptual loss. For the discriminator, we use the standard discriminator network introduced in DCGAN.
Implementation details. The Adam optimizer is used to optimize the models with betas of 0.9 and 0.999, epsilon set to , and batch size set to 64. For all tasks, the learning rate is fixed at 0.001 for the first stage in Algorithm 1. During fine-tuning, we reduce the learning rate to one-tenth every 3 epochs. We tuned the hyperparameters as described in the supplemental material, and the following values work well for all three tasks: =1.0, =1.0, =1.0, =1.0, =0.01, =0.5,
=1.0. We apply normalization with mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225). Besides, the target dataset only serves testing purposes, and none of its images are used to train the model or tune hyperparameters.
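The input normalization with the stated per-channel mean and standard deviation can be sketched as:

```python
import numpy as np

def normalize(img):
    """Per-channel normalization used in the experiments.

    img : (H, W, 3) array with values in [0, 1], RGB channel order.
    """
    mean = np.array([0.485, 0.456, 0.406])
    std = np.array([0.229, 0.224, 0.225])
    return (img - mean) / std  # broadcasts over the channel axis
```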
Generalization Accuracy Evaluation
For all three tasks, detection accuracy on the hold-out test set (source) and on data generated by alternative methods (target) is given in Tab. 3. There are three interesting observations.
Generalization gap. There is a dramatic accuracy gap between the source and target datasets. All baseline methods achieve relatively high accuracy on the source test set (most over 90%), while performing at chance level (around 50%) on the target dataset. Usually, the detection performance of a model is calculated as the prediction accuracy on the source test set. Due to the independent and identically distributed (i.i.d.) training-test split of the data, especially in the presence of strong priors, a detection model can succeed simply by recognizing patterns that happen to be predictive on the source test set. This is problematic, as the source test set may fail to adequately measure how well detectors perform on previously unseen inputs.
LAE reduces the generalization gap. LAE reduces the generalization gap using a small amount of extra supervision. LAE_100 and LAE_400 denote LAE trained with 100 and 400 annotations respectively. When using 400 annotations (less than 1% of the total training data in Tab. 1), we achieve state-of-the-art performance on the face manipulation and attribute modification tasks, outperforming the best baselines by 1.61% and 6.93% respectively on the target datasets. Compared to 100 annotations, using 400 annotations boosts the detection accuracy on the target sets by 2.98%, 0.84%, and 0.53% respectively. This indicates that LAE has the potential to further improve generalization accuracy with more annotations.
LAE can be further improved. Despite the accuracy increase on the target dataset, a generalization gap remains. We assume that the source and target distributions are similar for a given forgery task, but in practice the difference between the distributions can be large, and the achievable improvement of LAE is bounded by it. Towards this end, using a small amount of target data to fine-tune the model could possibly further reduce the generalization gap; we will explore this direction in future research.
Interpretability Evaluation
For all three forgery detection tasks, we provide case studies that qualitatively illustrate the effectiveness of the generated explanations using the attention maps shown in Fig. 2.
Comparison with baselines. LAE attention maps are compared with two baselines, MesoInception and XceptionNet, whose heatmaps are generated using Grad-CAM. The visualization indicates that LAE has truly grasped the intrinsic patterns encoded in the forgery region, instead of picking up spurious and undesirable correlations during training. In the first two rows (face manipulation), LAE focuses its attention on eyes, noses, mouths, and beards, whereas the two baselines mistakenly highlight background regions, e.g., the collar and forehead. In the third and fourth rows, LAE correctly focuses on the inpainted eagle neck and the modified hair region respectively, while the baselines depend more on non-forgery parts, e.g., wings and eyes, to make detections.
Source and target difference. Through attention map visualizations, we can observe the distribution difference between the source and target datasets. For example, in the face manipulation detection task (see Fig. 3), Face2Face mainly changes the lips and eyebrows, while FaceSwap mostly changes the nose and eyes. This validates the distribution difference between the source and target datasets and explains the challenge to generalization accuracy.
Ablation and Hyperparameters Analysis
We use models trained on the DeepFake-alike face manipulation task to conduct ablation and hyperparameter analyses, studying the contribution of the different components of LAE.
Ablation analysis. We compare LAE with its ablations to identify the contributions of the different components. The four ablations are: LAE_rec, trained only with the reconstruction loss of Eq.(5); LAE_latent, using only the latent space loss in Eq.(3); LAE_latent_pixel, using both the latent space loss and the pixel loss in Eq.(5); and LAE_latent_rec, using the latent space loss and the whole reconstruction loss. Note that no attention loss is used in the ablations. The comparison results are given in Tab. 4, with several key findings. Firstly, the latent space loss is the most important part; without it, even the source test set accuracy drops to 50.39%. Secondly, the pixel-wise, perceptual, and adversarial losses all contribute to performance on the source test set, yet no significant increase is observed on the target dataset with any combination of these losses. Thirdly, the attention loss based on candidates selected via active learning significantly increases generalization accuracy on the target dataset (around 10%).
Hyperparameter analysis. We evaluate the effect of different hyperparameters on model performance by altering the loss weights in Eq.(5) and Eq.(11). The corresponding results are reported in Tab. 5 (without attention loss and active learning) and Tab. 6 (with attention loss and active learning) respectively. The results indicate that increasing the weights of the pixel and perceptual losses enhances model performance on the source test set, whereas a small weight for the adversarial loss is beneficial for accuracy. Fixing the other weights and reducing the attention loss weight from 1.0 to 0.5 and then to 0.1 significantly decreases accuracy on the target dataset, confirming the significance of the attention loss in improving generalization accuracy.
Random vs. active learning. For challenging candidate selection, we compared random selection with active-learning-based selection. The generalization results on the target dataset (FaceSwap) are illustrated in Fig. 4. There is a dramatic gap between random selection and active learning: for instance, active learning increases target dataset accuracy by 9.81% when the annotation number is 100 (well under 1% of the training data). This indicates that active learning is effective at selecting challenging candidates.
Forgery ground truth number analysis. We study the effect of attention regularization by altering the number of challenging candidates selected by active learning (see Fig. 4). There are two interesting observations. First, increasing the number of annotations typically improves model generalization, indicating the benefit of extra supervision. Second, using forgery masks for less than 1% of the training data increases accuracy by 10%. Considering the annotation effort of pixel-wise masks, requiring only a small fraction of forgery mask annotations is a significant advantage.
Conclusions and Future Work
We propose a new forgery detection method, called Locality-aware AutoEncoder (LAE), which boosts generalization accuracy by making predictions that rely on correct forgery evidence. A key characteristic of LAE is its augmented local interpretability, which can be regularized using extra pixel-wise forgery masks in order to learn intrinsic and meaningful forgery representations. We also present an active learning framework that reduces the effort of obtaining forgery masks (to less than 1% of the training data). Extensive experiments have been conducted on three types of forgery detection tasks, including DeepFake-alike face manipulation, GAN-based attribute modification, and image inpainting, to evaluate the performance of LAE. Experimental results show that the resulting models are more likely to look at the forgery region, rather than unwanted biases and artifacts, to make predictions. Empirical analysis further demonstrates that LAE has superior generalization performance on data generated by alternative forgery methods.
Due to the inherent difficulty of the detection problem, we can still observe a generalization gap between the source test set and the target dataset generated by alternative methods. Although the two are related and belong to the same task, slight distribution differences remain between them. Using transfer learning and other techniques to further reduce this generalization gap will be explored in our future research.
-  (2018) Mesonet: a compact facial video forgery detection network. In 2018 IEEE International Workshop on Information Forensics and Security (WIFS), Cited by: Introduction, 4th item.
-  (2016) A deep learning approach to universal image manipulation detection using a new convolutional layer. In Proceedings of the 4th ACM Workshop on Information Hiding and Multimedia Security, Cited by: 1st item.
-  Stargan: unified generative adversarial networks for multi-domain image-to-image translation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: 2nd item.
-  (2017) Xception: deep learning with depthwise separable convolutions. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: 5th item.
-  Recasting residual-based local descriptors as convolutional neural networks: an application to image forgery detection. In Proceedings of the 5th ACM Workshop on Information Hiding and Multimedia Security, Cited by: 2nd item.
-  (2018) ForensicTransfer: weakly-supervised domain adaptation for forgery detection. arXiv preprint arXiv:1812.02510. Cited by: Introduction, Introduction, 2nd item, 6th item.
-  (2019) https://github.com/iperov/deepfacelab. Cited by: Introduction, Introduction.
-  (2016) Generating images with perceptual similarity metrics based on deep networks. In Advances in neural information processing systems (NIPS), Cited by: Reconstruction Loss.
-  Techniques for interpretable machine learning. Communications of the ACM (CACM). Cited by: Introduction.
-  (2018) Towards explanation of dnn-based prediction with guided feature inversion. ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD). Cited by: Augmenting Local Interpretability.
-  (2019) Learning credible deep neural networks with rationale regularization. In IEEE International Conference on Data Mining (ICDM), Cited by: Generalization Accuracy Evaluation.
-  (2019) https://github.com/marekkowalski/faceswap/. Cited by: Introduction, Problem Statement, 1st item.
-  (2018) Fighting fake news: image splice detection via learned self-consistency. In European Conference on Computer Vision (ECCV), Cited by: Introduction.
-  (2017) Globally and locally consistent image completion. ACM Transactions on Graphics (ToG). Cited by: 3rd item.
-  Fake face detection methods: can they be generalized? In 2018 International Conference of the Biometrics Special Interest Group (BIOSIG), Cited by: Introduction.
-  (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: Experimental Setup.
-  (2018) Glow: generative flow with invertible 1x1 convolutions. In Advances in neural information processing systems (NeurIPS), Cited by: 2nd item.
-  (2018) In ictu oculi: exposing ai generated fake face videos by detecting eye blinking. IEEE Workshop on Information Forensics and Security (WIFS). Cited by: Introduction.
-  (2019) Exposing deepfake videos by detecting face warping artifacts. Workshop on Media Forensics (in conjuction with CVPR). Cited by: Introduction.
-  (2015) Deep learning face attributes in the wild. In International Conference on Computer Vision (ICCV), Cited by: 2nd item.
-  (2018) Modular convolutional neural network for discriminating between computer-generated images and photographic images. In Proceedings of the 13th International Conference on Availability, Reliability and Security, Cited by: Introduction.
-  (2019) Capsule-forensics: using capsule networks to detect forged images and videos. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Cited by: Introduction.
-  (2016) Unsupervised representation learning with deep convolutional generative adversarial networks. International Conference on Learning Representations (ICLR). Cited by: Reconstruction Loss, Experimental Setup.
-  (2017) Distinguishing computer graphics from natural images using convolution neural networks. In 2017 IEEE Workshop on Information Forensics and Security (WIFS), Cited by: 3rd item.
-  (2015) U-net: convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, Cited by: Experimental Setup.
-  (2019) Faceforensics++: learning to detect manipulated facial images. IEEE International Conference on Computer Vision (ICCV). Cited by: 1st item.
-  (2017) Grad-cam: visual explanations from deep networks via gradient-based localization. In IEEE International Conference on Computer Vision (ICCV), Cited by: Introduction, Interpretability Evaluation.
-  (2015) Very deep convolutional networks for large-scale image recognition. International Conference on Learning Representations (ICLR). Cited by: Reconstruction Loss, Experimental Setup.
-  (2016) Face2face: real-time face capture and reenactment of rgb videos. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: Problem Statement, 1st item.
-  (2019) On the generalization of gan image forensics. arXiv preprint arXiv:1902.11153. Cited by: Introduction.
-  Generative image inpainting with contextual attention. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: 3rd item.
-  Learning deep features for discriminative localization. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: Introduction, Augmenting Local Interpretability.