Deepfake Forensics via An Adversarial Game

03/25/2021 ∙ by Zhi Wang, et al.

With the progress in AI-based facial forgery (i.e., deepfake), people are increasingly concerned about its abuse. Although efforts have been made to train classification (also known as deepfake detection) models to recognize such forgeries, existing models suffer from poor generalization to unseen forgery technologies and high sensitivity to changes in image/video quality. In this paper, we advocate adversarial training for improving the generalization ability to both unseen facial forgeries and unseen image/video qualities. We believe that training with samples that are adversarially crafted to attack the classification models improves the generalization ability considerably. Considering that AI-based face manipulation often leads to high-frequency artifacts that can be easily spotted by models yet are difficult to generalize, we further propose a new adversarial training method that attempts to blur out these specific artifacts by introducing pixel-wise Gaussian blurring models. With adversarial training, the classification models are forced to learn more discriminative and generalizable features, and the effectiveness of our method is verified by extensive empirical evidence. Our code will be made publicly available.


I Introduction

The rapid development of deep learning and generative modeling has promoted the progress of face manipulation and forgery technologies (i.e., deepfake), which largely lowers the bar for creating photo-realistic facial images/videos. At present, there exists a variety of AI-based methods that can be used for identity swapping, expression editing, attribute manipulation, etc. Face2Face [47], for instance, is a method that can edit the expression of a person to make it identical to that of another person. Face replacement methods like FaceSwap [17] are further capable of replacing the facial region of somebody with that of another person in a video.

The counterfeit products of AI-based facial manipulation or forgery make it difficult for human beings and conventional facial classification systems to distinguish between real and fake. Deepfake technologies are thus at high risk of malicious abuse, posing threats to face recognition applications, e.g., face-recognition-based payment and access control. Malicious facial manipulation and forgery may also infringe on individual privacy and reputation. Hence, in order to protect public safety and individual privacy, it is essential to develop effective methods for detecting deepfakes.

In this work, we focus on the problem of detecting facial forgeries, that is, automatically detecting whether an image (or a video) of a human face has been manipulated or forged by AI-based technologies. We adopt the term “deepfake detection” to describe the task, following prior art.

A variety of methods have been developed for deepfake detection. Early work relies on handcrafted features, using for example noise variance analysis [36] and steganalysis features [10, 19], to discover differences between real and synthetic images/videos. Recently, learning-based deepfake detection methods have been proposed and discussed, which advocate using convolutional neural networks (CNNs) to achieve the goal. Although significant progress has been made over the last few years, existing methods normally fail to generalize to images/videos generated by unseen technologies or even unseen models [51]. They can achieve a very high accuracy for a manipulation technology whose output images/videos have been seen during training of the detection model, yet fail to generalize to images/videos generated by unseen technologies. Furthermore, existing deepfake detection models are also very sensitive to image quality, as will be shown in our experiments; e.g., the model performance degrades significantly when the compression rate of the test images/videos differs from that of the training samples.

There are already a few methods trying to resolve this problem and improve the performance of deepfake detection, but most of them [28, 15] rely on modifying the architecture of the classification CNNs. In this paper, we attempt to address the problem from a different perspective, by innovating the training mechanism of deepfake detection models and adopting adversarial training. Adversarial training aims at discovering challenging samples that are not easily predicted by the current classification model. We believe training with these samples encourages the model to focus on more essential and generalizable features that could be used to distinguish the evolving fake images/videos from the real ones, thereby improving the model’s generalization ability to unseen forgeries.

We evaluate several different types of adversarial examples in adversarial training, including additive [22] and spatially-transformed adversarial examples [50]. We further propose adversarially blurred examples, which are more suitable for the task and lead to improved generalization to both unseen forgery technologies and unseen image/video qualities at test time. In addition to the input-gradient-based strategy for crafting adversarial examples [22, 33], a generator-based strategy is also advocated, which not only controls the computational cost of obtaining adversarial examples but also improves their flexibility. Extensive experimental results verify the effectiveness of our method. In summary, our contributions are:

  • We introduce adversarial training into the training process of deepfake detection models. We show that it improves the generalization ability and robustness of the models notably.

  • A novel method of generating adversarial examples based on image blurring is proposed, and it is shown to be more suitable to the adversarial training framework of deepfake detection.

  • Since our method focuses on innovating the training strategy, our proposed adversarial training framework can be used together with many existing methods that modify the network structure, to further improve the performance of deepfake detection models.

  • Extensive experiments show that the performance of several popular deepfake detection models can be improved by our method, in the sense of better generalization to unseen forgery technologies and image/video qualities.

II Related Work

Deepfake Generation. Research on AI-based face manipulation and forgery technologies has a long history. Although early work showed pleasing results only in very restricted scenarios [6, 7], over the last decade, with the rapid development of computer graphics and computer vision, facial manipulation techniques have advanced dramatically. For instance, Dale et al. [12] managed to swap faces in videos by reconstructing 3D face models of different people. 3D-based methods were also used by Garrido et al. [20] and Thies et al. [47], sometimes in combination with neural rendering technologies [46]. In addition to the graphics-based methods, there are also many vision-based methods. In particular, the recent upsurge of deep learning has made these methods (e.g., DeepFakes [13], FaceSwap [17], and ZAO [52]) popular for synthesizing photo-realistic facial images, and the term “deepfake” also comes from this trend. Generative adversarial nets (GANs) [23] can also be used for facial attribute editing [32, 8, 25], face swapping [34, 4, 35], etc. In addition, GANs [23] have also been used to directly synthesize whole facial images from noise.

Deepfake Forensics. Maliciously forged deepfake images/videos are apparently harmful to individual privacy, and they can pose a grave threat to society. It is therefore of great importance to develop effective deepfake detection solutions. While early attempts [36, 19, 21, 18] focused on the internal statistics or hand-crafted features of images/videos to discover the difference between real and fake, most recent methods were designed based on deep learning features [53, 11, 38, 5, 29] or end-to-end trained deep binary classifiers [1, 39, 24, 42]. As has been criticized, most of these methods suffer from overfitting to the training dataset and cannot be effectively applied to many practical scenarios. There are methods trying to cope with the generalization issue of deepfake detection models. For instance, Li et al. [28] proposed a more generalizable deep face representation by introducing an auxiliary task of predicting the face x-ray. Stehouwer et al. [42] modified the network architecture and introduced an attention module for the task. Auto-encoders were also considered [15]. Unlike these methods with innovated network architectures, our work in this paper focuses solely on the training mechanism of deepfake detection models; it is thus orthogonal to these efforts and can be naturally combined with them to achieve even better results.

Adversarial Training. Adversarial training refers to the utilization of adversarial examples for augmenting the training set of models, which constitutes the main basis of defense against adversarial attacks [22, 27, 33, 48]. The development of adversarial training can be traced back to [22], in which the fast gradient sign method (FGSM) was proposed for improving adversarial robustness. Madry et al. [33] further proposed a multi-step scheme called projected gradient descent (PGD), enabling stronger robustness than that obtained with FGSM and many of its contemporary defense methods [3]. Hussain et al. [26] revealed the vulnerability of existing deepfake detection models to adversarial examples. Ruiz et al. [40] adopted the method of generating adversarial examples to prevent photos from being used for generating deepfakes. In contrast to their methods, we advocate adversarial training for improving the performance of classification-based deepfake detection models. In Sec. III, we will discuss how adversarial training can help improve the performance of deepfake detection, and we will also introduce a new type of adversarial examples that is more suitable for the deepfake detection task.

III Proposed Approach

This section introduces our proposed framework (based on adversarial training) for deepfake detection. First, we explain why adversarial training is advocated, and we will also revisit the basic concept of adversarial training, mostly on the basis of commonly used additive adversarial examples in Sec. III-A. Then, in Sec. III-B, we introduce a new and dedicated method for performing adversarial attacks and adversarial training, which is based on pixel-level Gaussian blurring. Finally, we introduce how generator-based adversarial training can be performed in Sec. III-C.

Fig. 1: Visualization of various adversarial examples. From the top row to the last, we show the original images, our adversarially blurred examples, the FGSM examples [22], and the spatially-transformed adversarial examples [50]. On the left half, we show adversarial examples crafted on real face images, and on the right half those crafted on forged or manipulated faces.

III-A Adversarial Training Framework

Deepfake detection is normally cast as a binary classification task. Predictions can be made on the basis of a single image (as model input) or a sequence of images from a video. For simplicity, here we consider the case where only one image is fed to the model, and we note that our method naturally generalizes to models whose inputs are a sequence of images. Suppose that we are given a training set $\mathcal{D}$ that includes a large number of images and their corresponding labels. The problem with many existing deepfake detection models is that normal training on $\mathcal{D}$ barely guarantees generalization to fake images generated by unseen technologies or compressed with different quality factors. A plausible solution to the problem is to introduce an “adversary” that keeps refining the training fake images and removing obvious artifacts that could easily be spotted by the deepfake detection model, such that the detection model learns to correctly classify more advanced fake images. This is in accordance with the spirit of adversarial learning.

Let us revisit the traditional adversarial learning problem first. Assume that a classification model (e.g., the deepfake detection model) attempts to minimize the prediction loss $\mathcal{L}(x, y; \theta)$ for any given data pair (i.e., an image $x$ paired with its label $y$), in which $\theta$ collects all learnable parameters of the classification model, $x \in \mathbb{R}^{H \times W \times C}$ ($H$, $W$, and $C$ represent the height, width, and number of channels of $x$, respectively), and $y \in \{0, 1\}$. The goal of the normal training mechanism is to find an appropriate set of parameters $\theta$ to minimize the empirical risk $\mathbb{E}_{(x, y) \sim \mathcal{D}}\,\mathcal{L}(x, y; \theta)$, while, to address the adversarial vulnerability of deep models, adversarial training aims at strengthening the models by generating adversarial examples and injecting them into the training set.

Over the past few years, a variety of adversarial examples have been proposed. The most popular method of generating adversarial examples is to add pixel-level perturbations to clean images; e.g., FGSM [22] obtains each adversarial example $x^{\mathrm{adv}}$ by adding a scaled input-gradient sign to each original image $x$. That is:

$$x^{\mathrm{adv}} = x + \epsilon \cdot \mathrm{sign}\big(\nabla_x \mathcal{L}(x, y; \theta)\big). \qquad (1)$$

The above equation is derived from an optimization problem maximizing the prediction loss of an input obtained by adding a perturbation (whose $\ell_\infty$ norm is no greater than $\epsilon$) to a clean image $x$ [22].

Adversarial training plays a zero-sum game: an auxiliary process generates adversarial examples that maximize the classification loss, while the model is trained to classify them correctly. The generated adversarial examples can be used instead of the original benign examples or in combination with them for training. For the former, we have

$$\min_{\theta}\; \mathbb{E}_{(x, y) \sim \mathcal{D}}\; \max_{x^{\mathrm{adv}} \in S(x)} \mathcal{L}(x^{\mathrm{adv}}, y; \theta), \qquad (2)$$

while for the latter, we can use the following optimization problem instead of (2):

$$\min_{\theta}\; \mathbb{E}_{(x, y) \sim \mathcal{D}}\; \Big[\mathcal{L}(x, y; \theta) + \max_{x^{\mathrm{adv}} \in S(x)} \mathcal{L}(x^{\mathrm{adv}}, y; \theta)\Big]. \qquad (3)$$

Note that we introduce a set $S(x)$ to constrain the allowable disturbance from each adversarial example to its corresponding “clean” image. For FGSM as introduced in Eq. (1), we have $S(x) = \{x' : \|x' - x\|_\infty \le \epsilon\}$. In general, $S(x)$ guarantees the visual similarity between the adversarial example and the original image.
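
As a concrete illustration of Eqs. (1)–(3), the sketch below crafts FGSM examples for a binary deepfake classifier and mixes them with the clean batch during a training step, in the spirit of Eq. (3). It is a minimal PyTorch sketch of our own; the function names, the value of `epsilon`, and the unweighted sum of clean and adversarial losses are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=8 / 255):
    """Craft additive adversarial examples with FGSM, cf. Eq. (1)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    # one-step ascent along the sign of the input gradient, clamped to the valid range
    return (x_adv + epsilon * grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=8 / 255):
    """One update combining clean and adversarial losses, in the spirit of Eq. (3)."""
    model.eval()                       # freeze BN statistics while crafting the attack
    x_adv = fgsm_example(model, x, y, epsilon)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```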

Fig. 2: An overview of our two-generator-based blurring adversarial training (Two-Gen-BAT). The right panel shows how the adversarial blurring is performed. For each pixel $x_{i,j}$ of the original image, we use the corresponding $\sigma_{i,j}$ to generate a Gaussian kernel $k_{i,j}$, and then use $k_{i,j}$ to perform pixel-wise Gaussian blurring around $x_{i,j}$ by considering its surrounding pixels, obtaining the corresponding pixel $x^{\mathrm{adv}}_{i,j}$ of the adversarial example.

Recently, some other kinds of adversarial examples have also been proposed. For instance, instead of imposing pixel-level additive perturbations, Xiao et al. [50] proposed to calculate an adversarial optical flow to spatially transform each pixel of the clean image accordingly. Let us use $x_{i,j}$ to represent the pixel on the $i$-th row and $j$-th column of the original image; an adversarial optical flow vector is learned in the method to transform $x_{i,j}$ to the corresponding position on the adversarial image $x^{\mathrm{adv}}$. The magnitude of the adversarial optical flow is encouraged to be small while leading to a large prediction loss on the adversarial example. Training with this sort of examples will be called spatially-transformed adversarial training (SAT), and training with the result of Eq. (1) will be called additive adversarial training (AAT) in this paper. Besides the introduced ones, there exist other types of adversarial examples, e.g., those utilizing white balance [2].

III-B Blurring Adversarial Training

Although adversarial training based on the aforementioned examples has achieved improved robustness under adversarial attacks, its benefit for the generalization ability of deepfake detection models is unclear. In fact, on natural image classification tasks (e.g., on ImageNet [14]), it has been demonstrated that adversarial training barely contributes to the generalization ability on normal test data, on account of the distribution drift between these adversarial examples and normal test samples, and the same problem might also exist in the task of deepfake detection. Here we propose a new type of adversarial examples that is shown to be more effective in the adversarial training framework for deepfake detection.

We know from prior work [49] that some high-frequency components of fake images/videos are very easily spotted by models yet also very specific and difficult to generalize. That is, introducing Gaussian blur and JPEG compression [49] augmentations probably improves the deep classification CNNs, and it might be even more effective to introduce a blurring-based adversarial training mechanism. Specifically, given an input image $x$ whose height, width, and number of channels are $H$, $W$, and $C$, respectively, we obtain $x^{\mathrm{adv}}$ by performing pixel-wise Gaussian blur on $x$. We use $x_{i,j}$ to represent the $(i, j)$-th pixel of $x$, and we attempt to learn a single-channel map $\sigma$ with a size of $H \times W$, each of whose entries (e.g., $\sigma_{i,j}$) represents the standard deviation of a Gaussian kernel to be applied to the region centered at the corresponding pixel of image $x$, i.e., $x_{i,j}$. In detail, for obtaining the value of $x^{\mathrm{adv}}_{i,j}$, we first collect $\sigma_{i,j}$, then use it to calculate the kernel $k_{i,j}$ for performing Gaussian blur around $x_{i,j}$. Suppose that the kernel size is chosen as $(2r+1) \times (2r+1)$; then we calculate the inner product between $k_{i,j}$ and $N_{i,j}(x)$ (i.e., a region of $(2r+1) \times (2r+1)$ pixels centered at the pixel $x_{i,j}$ with a radius of $r$). That is:

$$k_{i,j}(m, n) = \frac{1}{Z_{i,j}} \exp\Big(-\frac{m^2 + n^2}{2\sigma_{i,j}^2}\Big), \quad Z_{i,j} = \sum_{m=-r}^{r}\sum_{n=-r}^{r} \exp\Big(-\frac{m^2 + n^2}{2\sigma_{i,j}^2}\Big), \qquad (4)$$

$$x^{\mathrm{adv}}_{i,j} = \sum_{m=-r}^{r}\sum_{n=-r}^{r} k_{i,j}(m, n)\, x_{i+m,\, j+n}, \qquad (5)$$

in which $m$ and $n$ represent the relative coordinates to the centre pixel in $N_{i,j}(x)$. Such a pixel-wise Gaussian blur can easily be implemented as a vectorized operation and is thus computationally very efficient. The map $\sigma$ basically controls how much blurring is performed on the original training image. Larger entries of $\sigma$ lead to more blurry images and leave less obvious artifacts from the deepfake generator, while, on the contrary, smaller entries of $\sigma$ leave more obvious artifacts for the classification model to learn. We have $x^{\mathrm{adv}} \to x$ as all the entries of $\sigma$ approach zero.
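
The pixel-wise blur of Eqs. (4)–(5) can be vectorized, for instance, with `torch.nn.functional.unfold`, as in the minimal sketch below. This is our own illustrative implementation rather than the authors' code; the tensor layout (batched `(B, C, H, W)` images and a `(B, 1, H, W)` sigma map) is an assumption.

```python
import torch
import torch.nn.functional as F

def pixelwise_gaussian_blur(x, sigma, kernel_size=9):
    """Pixel-wise Gaussian blur, cf. Eqs. (4)-(5): each output pixel is a Gaussian-weighted
    average of its kernel_size x kernel_size neighbourhood, with a per-pixel std sigma.

    x:     (B, C, H, W) images
    sigma: (B, 1, H, W) per-pixel standard deviations (positive)
    """
    b, c, h, w = x.shape
    r = kernel_size // 2
    # relative coordinates (m, n) inside the kernel window
    coords = torch.arange(-r, r + 1, device=x.device, dtype=x.dtype)
    m, n = torch.meshgrid(coords, coords, indexing="ij")
    dist2 = (m ** 2 + n ** 2).reshape(1, 1, kernel_size ** 2, 1, 1)

    # per-pixel normalized Gaussian kernel, cf. Eq. (4)
    sig2 = (sigma ** 2).clamp_min(1e-12).unsqueeze(2)            # (B, 1, 1, H, W)
    kernel = torch.exp(-dist2 / (2.0 * sig2))                    # (B, 1, K*K, H, W)
    kernel = kernel / kernel.sum(dim=2, keepdim=True)

    # gather each pixel's neighbourhood and take the inner product, cf. Eq. (5)
    patches = F.unfold(x, kernel_size, padding=r)                # (B, C*K*K, H*W)
    patches = patches.view(b, c, kernel_size ** 2, h, w)
    return (patches * kernel).sum(dim=2)                         # (B, C, H, W)
```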

We aim at learning a reasonable map $\sigma$ for each training image. Since the adversarial blurring is performed in a pixel-wise manner, we are able to blur more on image regions with less generalizable features. Similar to other adversarial examples, the blurring-based adversarial examples are suggested to have limited distortion from the original images, and we introduce a simple one-step scheme to achieve the goal, just like FGSM (except for the sign function):

$$\sigma = \sigma_0 + \alpha \cdot \nabla_{\sigma}\, \mathcal{L}\big(x^{\mathrm{adv}}(\sigma_0), y; \theta\big), \qquad (6)$$

in which $x^{\mathrm{adv}}(\sigma_0)$ is obtained by Eq. (5) and $\sigma_0$ is an initialization of $\sigma$. In practice, we let $\sigma_0$ be a matrix whose entries share a common value (e.g., 1). One can further extend the simple one-step scheme in Eq. (6) to a multi-step scheme, just like going from FGSM [22] to PGD [33]. This requires iteratively performing Eq. (6), and it can indeed help achieve higher attack success rates. However, it also leads to longer training time in our adversarial training framework. Adversarial training can then be performed on the crafted examples, and we will call this method blurring adversarial training (BAT).
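
A one-step, input-gradient-based version of the blurring attack (our reading of Eq. (6)) could then be sketched as follows, reusing `pixelwise_gaussian_blur` from the previous sketch; the step size `alpha`, the initial value `sigma_init`, and the clamping floor are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def grad_bat_example(model, x, y, alpha=0.1, sigma_init=1.0, kernel_size=9):
    """One-step blurring attack: move the per-pixel sigma map along the gradient
    that increases the detection loss (our reading of Eq. (6))."""
    b, _, h, w = x.shape
    sigma0 = torch.full((b, 1, h, w), sigma_init,
                        device=x.device, dtype=x.dtype, requires_grad=True)
    x_blur = pixelwise_gaussian_blur(x, sigma0, kernel_size)
    loss = F.cross_entropy(model(x_blur), y)
    grad = torch.autograd.grad(loss, sigma0)[0]
    sigma = (sigma0 + alpha * grad).clamp_min(1e-3)   # keep the standard deviation positive
    return pixelwise_gaussian_blur(x, sigma.detach(), kernel_size)
```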

III-C Generator-based Methods

Most adversarial training methods craft adversarial examples using the input gradient of the loss function $\mathcal{L}$. As has been mentioned, powerful adversarial examples are normally designed with a multi-step scheme, so the computational cost increases with the number of steps. We propose an alternative way of generating adversarial examples, by introducing a CNN-based generator to control the training complexity, somewhat similar to [41]. We consider a CycleGAN [54] generator for generating $\sigma$. We emphasize that, unlike the work of Rusak et al. [41], we generate a specific $\sigma$ map for each original training image $x$, considering the fact that the most transferable features of different images can reside in different spatial regions. Denoting by $\theta$ and $\phi$ the sets of learnable parameters for the deepfake detection model and for the generator, respectively, we opt to play the following min-max game:

$$\min_{\theta}\, \max_{\phi}\; \mathbb{E}_{(x, y) \sim \mathcal{D}}\; \mathcal{L}\big(x^{\mathrm{adv}}(G(x; \phi)), y; \theta\big), \qquad (7)$$

where $G(x; \phi)$ outputs the $\sigma$ map used for the pixel-wise blurring of $x$.

The introduced generator can also be considered as an enhancement model for the original deepfake generator(s). The optimization problem in Eq. (7) trains a generator whose goal is opposite to that of the deepfake detection model, e.g., to remove obvious artifacts and synthesize more realistic deepfakes that can invalidate the deepfake detection model. If the generator indeed learns to synthesize more realistic fake images, then the classification model can learn more about deepfakes and thus becomes more generalizable; the generator in turn learns to further improve its generation ability. More importantly, the generator-based method can be more flexible, and in combination with BAT it suffices to learn to conceal the more generalizable features residing in different images, as will be shown in our experiments. In practice, our generator is used to “enhance” both fake and real training images, to balance the training data from both classes.

Our generator-based BAT is also related to GANs [23], which likewise contain a generator-discriminator pair. What makes our method significantly different is that our goal is to improve the deepfake detection model (i.e., our discriminator), while a GAN aims to improve its generator. In our case, the generator is used to craft adversarial examples, while in a GAN, the goal of its generator is to capture the distribution of natural images. Moreover, our discriminator is used to distinguish fake from real images (all “enhanced” by the generator), while the GAN discriminator is used to distinguish whether an image is a synthesized one (produced by the GAN generator) or a natural one (directly collected from the training set).

Two Generators. Since the distribution of real images and that of fake images are different, we might need to introduce a very large generator to handle both classes. Furthermore, such a generator would have to first predict whether its input is real or fake and then process it to make it more like a fake or a real one, to achieve the aforementioned goal. To alleviate this problem, we propose to use two generators, one for each class: a generator that only processes real images and a generator that only processes fake images. With two different generators, each is responsible for images from a single class only, and we can expect them to learn more specific adversarial strategies for the two classes. Experimental results in Sec. IV-A will show the empirical effectiveness of introducing the extra generator.
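
To make the two-generator game concrete, the following sketch shows one possible training iteration: the two sigma-map generators are updated to increase the detection loss on blurred images, and the detector is then updated on freshly blurred images. This is a simplified reading of the method under our own assumptions (generator outputs passed through a softplus to keep sigma positive, a single optimizer `opt_gen` covering both generators, and no clean-image loss term); it reuses `pixelwise_gaussian_blur` from Sec. III-B.

```python
import torch
import torch.nn.functional as F

def two_gen_bat_step(detector, gen_real, gen_fake, opt_det, opt_gen, x, y, kernel_size=9):
    """One iteration of the min-max game in Eq. (7) with two sigma-map generators:
    gen_real blurs real images (y == 0) and gen_fake blurs fake images (y == 1)."""
    real, fake = (y == 0), (y == 1)

    def blurred():
        sigma = torch.zeros_like(x[:, :1])
        if real.any():
            sigma[real] = F.softplus(gen_real(x[real]))   # (n, 1, H, W), positive
        if fake.any():
            sigma[fake] = F.softplus(gen_fake(x[fake]))
        return pixelwise_gaussian_blur(x, sigma, kernel_size)

    # generator update: maximize the detection loss on blurred images
    opt_gen.zero_grad()
    (-F.cross_entropy(detector(blurred()), y)).backward()
    opt_gen.step()

    # detector update: minimize the loss on freshly blurred images
    opt_det.zero_grad()
    loss = F.cross_entropy(detector(blurred().detach()), y)
    loss.backward()
    opt_det.step()
    return loss.item()
```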

IV Experiments

In this section, we first demonstrate the superiority of our method on the task of deepfake detection through a large number of experiments, and then we illustrate the effectiveness of our Gaussian blurring adversarial attack through white-box attack experiments on ImageNet [14].

Method | NT (AUC, ACC) | DF (AUC, ACC) | F2F (AUC, ACC) | FS (AUC, ACC) | DFD (AUC) | Celeb-DF (AUC, ACC) | Avg (AUC, ACC) — all values in %
EfficientNet [45] 98.75 95.40 83.75 60.42 61.15 51.64 43.58 48.74 74.34 57.48 57.32 69.84 62.70
+ Grad-AAT 98.00 93.69 85.91 59.13 71.12 52.54 44.97 49.33 75.38 62.17 61.00 72.93 63.14
+ Grad-SAT 97.83 93.40 85.03 59.13 67.44 51.99 44.60 49.01 75.19 60.05 59.07 71.69 62.52
+ Grad-BAT 98.38 94.60 87.09 60.62 72.50 53.91 47.00 50.21 77.02 63.13 65.05 74.19 64.88
+ Gen-BAT 98.12 94.96 87.50 67.52 69.82 54.66 47.12 50.04 77.02 66.05 66.90 74.27 66.82
+ Two-Gen-BAT 98.72 95.26 87.51 69.40 74.65 56.19 48.99 50.43 76.60 66.84 67.91 75.55 67.84
+ Combined AT 98.40 94.95 88.90 71.08 76.13 57.90 50.13 51.14 77.74 68.45 69.00 76.63 68.81

TABLE I: Comparison between different adversarial training settings in improving the generalization to unseen forgery technologies. All models were trained on the NT C23 data in FF++ [39] and tested on data generated using different technologies. DFD is extremely imbalanced, so we only report the AUC scores on it, since the overall accuracy makes less sense on imbalanced data.
Method | Trained on Raw: tested on Raw / C23 / C40 | Trained on C23: tested on Raw / C23 / C40 | Trained on C40: tested on Raw / C23 / C40 — each test column reports AUC, ACC in %
EfficientNet [45] 99.51 99.24 66.35 51.05 56.03 50.60 98.41 94.09 98.75 95.40 69.40 54.85 86.27 78.36 90.28 80.85 89.95 81.65
+ Grad-AAT 99.11 99.04 66.49 53.02 56.42 50.83 97.34 93.12 98.00 93.69 69.01 53.93 86.33 78.42 87.35 79.00 88.69 80.15
+ Grad-SAT 99.02 98.99 65.44 50.70 55.33 50.45 97.21 92.80 97.83 93.40 67.46 51.33 86.00 76.49 86.66 78.44 87.84 79.93
+ Grad-BAT 99.05 99.10 92.90 68.84 60.51 54.62 98.80 94.80 98.38 94.60 73.55 61.13 89.75 79.70 90.67 81.95 89.05 80.52
+ Gen-BAT 98.80 97.75 95.52 80.71 68.53 56.71 98.77 94.95 98.12 94.96 73.71 61.84 89.81 80.93 91.54 83.48 89.00 81.05
+ Two-Gen-BAT 99.46 99.04 95.93 84.19 72.18 61.87 98.92 95.52 98.72 95.26 74.73 62.53 90.58 82.91 93.51 85.36 94.19 87.02
+ Combined AT 99.41 98.95 95.80 82.99 71.02 59.82 98.73 94.80 98.40 94.95 75.43 62.93 89.99 82.00 92.80 84.95 92.00 85.84
TABLE II: Comparison between different adversarial training settings in improving the generalization to unseen image/video qualities. The train and test data were split from the NT data in FF++ [39], with possibly different qualities. Models were trained on a specific image/video quality and were expected to generalize to the other image/video qualities.

Dataset. FaceForensics++ (FF++) [39] is a recently released large-scale deepfake video detection dataset containing 1,000 real videos, in which 720 videos were used for training, 140 videos were reserved for validation, and 140 videos were used for testing. Each real video in the dataset was manipulated using four advanced methods, namely DeepFakes (DF) [13], Face2Face (F2F) [47], FaceSwap (FS) [17], and NeuralTextures (NT) [46], to generate four fake videos. We followed the official split of the training, validation, and test sets in our experiments. Each video in the dataset was processed to have three video qualities, namely RAW, C23 (which is compressed from the raw data but has relatively high quality), and C40 (which is compressed to have relatively low quality). For each quality, there are 5,000 (real and fake) videos in total, and we extracted 270 frames from each video following the official implementation of face detection and alignment in [39]. In order to evaluate the generalization ability of the models, we trained models on videos from one specific method and evaluated them on those generated by a variety of manipulation methods and image/video qualities.

To make the evaluation more comprehensive, we introduced two more deepfake datasets: DFD [16] and Celeb-DF [30]. DFD [16] contains 3,068 deepfake videos, which were forged based on 363 real videos. We used all the real and fake videos and randomly selected 10 frames from each of them for testing. Celeb-DF contains 590 real videos and 5,639 fake videos, and we used its official test set.

Implementation Details. We apply our adversarial training mechanism to existing deepfake detection models to verify its effectiveness. We first considered EfficientNet [45] (a common choice of many winning solutions to the Deepfake Detection Challenge, https://www.kaggle.com/c/deepfake-detection-challenge). EfficientNet was originally designed for image classification, and it was transferred to the deepfake detection task by replacing its final fully-connected layer with one that outputs two-dimensional logits. This layer was randomly initialized, and the other layers of the model were all pre-trained on ImageNet [14]. We used the RAdam [31] optimizer with weight decay to train the model, and the learning rate was initialized and then cut by a fixed factor every few epochs. For models trained along with generators, the generators used their own initial learning rate. For experiments involving Gaussian blur, we set the blur kernel size to 9, unless otherwise clarified. To ensure numerical stability, in practice we optimize or generate a re-parameterized version of $\sigma$ rather than $\sigma$ itself for BAT. All experiments were performed in a PyTorch [37] environment, running with an Intel Xeon Gold 6130 CPU and an Nvidia Tesla V100 GPU. Besides EfficientNet, we also considered an Xception [9] model following [39] and a recent model proposed by Stehouwer et al. [42]. We used the prediction accuracy (ACC) and the area under the receiver operating characteristic curve (AUC) as evaluation metrics. Following prior art, we report the AUC scores in percentage for comparison in the paper.
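
For reference, a minimal version of this setup might look like the snippet below; the specific EfficientNet variant, learning rate, weight decay, and schedule are placeholders (the exact values are not recoverable from the text above), and `timm` is just one convenient source of an ImageNet-pretrained backbone.

```python
import timm
import torch

# ImageNet-pretrained EfficientNet backbone with a freshly initialized 2-way head
# (real vs. fake); the variant and hyper-parameters below are placeholders.
model = timm.create_model("efficientnet_b4", pretrained=True, num_classes=2).cuda()

optimizer = torch.optim.RAdam(model.parameters(), lr=1e-3, weight_decay=1e-5)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1)
```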

Metric EfficientNet [45] + Grad-AAT + Grad-SAT + Grad-BAT + Gen-BAT + Two-Gen-BAT + Combined AT
AUC (%) 88.81 88.08 87.47 89.95 90.20 90.36 90.45
ACC (%) 81.35 80.20 80.11 83.25 83.67 84.10 85.02

TABLE III: Results when training was performed on NT data of all qualities (i.e., RAW, C23, and C40) and testing was performed on the same qualities.
Method | NT (AUC, ACC) | DF (AUC, ACC) | F2F (AUC, ACC) | FS (AUC, ACC) | DFD (AUC) | Celeb-DF (AUC, ACC) | Avg (AUC, ACC) — all values in %
EfficientNet [45] 98.75 95.40 83.75 60.42 61.15 51.64 43.58 48.74 74.34 57.48 57.32 69.84 62.70
+ Gaussian Noise 98.66 95.19 83.78 60.49 61.19 51.76 43.68 48.85 74.44 58.06 57.96 69.97 62.85
+ Gaussian Blur 98.78 95.95 83.84 60.48 61.17 51.81 43.64 48.86 74.45 58.43 58.24 70.05 63.07
+ JPEG Compression 98.64 95.14 83.78 60.45 61.17 51.76 43.62 48.82 74.45 58.19 58.01 69.98 62.84
+ Combined Traditional 98.85 95.98 83.86 60.48 61.26 51.83 43.67 48.89 74.56 58.55 58.49 70.13 63.13
+ Two-Gen-BAT (ours) 98.72 95.26 87.51 69.40 74.65 56.19 48.99 50.43 76.60 66.84 67.91 75.55 67.84
+ Combined AT (ours) 98.40 94.95 88.90 71.08 76.13 57.90 50.13 51.14 77.74 68.45 69.00 76.63 68.81

TABLE IV: Comparison between adversarial training and traditional data augmentation in improving generalization to unseen forgery technologies. The models were trained on the NT C23 data and tested on data generated using various technologies.
Method | Trained on Raw: tested on Raw / C23 / C40 | Trained on C23: tested on Raw / C23 / C40 | Trained on C40: tested on Raw / C23 / C40 — each test column reports AUC, ACC in %
EfficientNet [45] 99.51 99.24 66.35 51.05 56.03 50.60 98.41 94.09 98.75 95.40 69.40 54.85 86.27 78.36 90.28 80.85 89.95 81.65
+ Gaussian Noise 99.38 99.17 69.66 54.75 57.24 50.97 98.47 94.25 98.66 95.19 70.04 56.18 87.50 78.98 90.78 81.13 89.78 80.55
+ Gaussian Blur 99.47 99.24 77.63 57.46 59.24 51.29 98.88 94.65 98.78 95.95 71.08 57.89 89.54 79.83 90.92 82.03 90.75 81.83
+ JPEG Compression 99.39 99.17 95.07 80.19 67.95 56.18 98.25 94.30 98.64 95.14 73.19 61.21 89.33 80.46 91.19 83.20 91.95 82.66
+ Combined Traditional 99.46 99.26 83.57 73.52 68.11 57.29 98.98 94.90 98.85 95.98 73.26 61.58 89.69 80.78 91.75 83.36 92.07 83.75
+ Two-Gen-BAT (ours) 99.46 99.04 95.93 84.19 72.18 61.87 98.92 95.52 98.72 95.26 74.73 62.53 90.58 82.91 93.51 85.36 94.19 87.02
+ Combined AT (ours) 99.41 98.95 95.80 82.99 71.02 59.82 98.73 94.80 98.40 94.95 75.43 62.93 89.99 82.00 92.80 84.95 92.00 85.84

TABLE V: Comparison between adversarial training and traditional data augmentation in improving generalization to unseen image/video qualities. The train and test data were split from the NT data in FF++ [39], with possibly different qualities. Models were trained on a specific image/video quality and were expected to generalize to the other image/video qualities.

IV-A Different Settings for Adversarial Training

Fig. 3: Our combined adversarial training. First, we generate the $\sigma$ map for performing adversarial blurring, then we craft the blurred example following the right panel of Figure 2, and finally we incorporate the additive perturbation to obtain the final adversarial example.

Since several different ways of generating adversarial examples and performing adversarial training have been introduced, we compare them first, including: (i) Grad-AAT: input-gradient-based additive adversarial training, (ii) Grad-SAT: input-gradient-based spatially-transformed adversarial training, (iii) Grad-BAT: input-gradient-based blurring adversarial training, (iv) Gen-BAT: generator-based blurring adversarial training, (v) Two-Gen-BAT: BAT with two generators, and, partially inspired by a general image degeneration process, (vi) Combined AT: combining Two-Gen-BAT with Grad-AAT (see Figure 3 for an overview). All competing models were trained on the NT data only and were tested also on the other fake data. Note that generator-based AAT and generator-based SAT did not show any improvement over the baseline solution in our experiments, thus their results are not shown. The numbers of real and fake videos in DFD [16] are seriously imbalanced, so we do not report the accuracy on the DFD dataset.

It can be seen from Table I that all the considered types of adversarial training improve the generalization ability of the obtained models to unseen forgery types (i.e., DF, F2F, FS, DFD, and Celeb-DF). Table II further shows that, equipped with the operation of pixel-wise blurring, our Grad-BAT, Gen-BAT, and the combined AT also improve the generalization performance of the obtained models across different image/video qualities, though the other adversarial training methods hardly contribute under such circumstances. The results also demonstrate that the performance of our BAT equipped with two generators is superior to that equipped with only a single generator. Moreover, although adversarial training leads to slightly degraded performance when the training and test data come from the same deepfake forgery technology and share the same image/video quality (cf. the first column of Tables I and II), introducing two generators mitigates such performance degradation to some extent. We also considered training on data of all image/video qualities together, with testing performed on all the qualities. Somewhat surprisingly, we found that the performance degradation (observed in Table II) was well mitigated with our adversarial training (see Table III).

The experiments in this section were mostly performed based on EfficientNet [45], a common choice of many winning deepfake detection solutions. Similar observations can be made on other deepfake detection models as well, e.g., Xception [9]. See Appendix A for detailed results on Xception.

IV-B Comparison to Other Methods

Adversarial training vs. data augmentation. Adversarial training can be regarded as an advanced way of performing data augmentation. On this point, we further compare the proposed method to some traditional data augmentation strategies, including traditional Gaussian noise, traditional Gaussian blur, and JPEG compression [49]. Tables IV and V compare adversarial training (i.e., Two-Gen-BAT and combined AT) to these strategies and their combination (named “Combined Traditional” in the tables). It can be seen that these traditional data augmentation strategies and their combination hardly improve model generalization to unseen forgery technologies, despite some unsurprising improvement in generalization to unseen image/video qualities. In addition to the results on EfficientNet, see Tables X and XI in the appendices for similar results based on Xception.

Incorporating other deepfake detection models. We would also like to see whether our adversarial training could similarly improve more advanced models (than Xception [9] and EfficientNet [45]). We tested the model proposed by Stehouwer et al. [42], applying our combined AT and Two-Gen-BAT to it. The whole FF++ training set (including DF, F2F, FS, and NT data of all qualities) was used for training to better suit the model setting in [42], and we tested the model performance on DFD, Celeb-DF, and the test set of FF++. It can be seen from Table VI that the adversarially trained models generalize considerably better to unseen forgery technologies, i.e., DFD and Celeb-DF.

Method | FF++ (AUC, ACC) | DFD (AUC, ACC) | Celeb-DF (AUC, ACC) — all values in %
Xception [9] 96.77 92.51 84.85 - 55.11 54.88
+ Two-Gen-BAT (ours) 97.45 93.25 86.49 - 69.28 63.17
+ Combined AT (ours) 97.62 93.38 87.10 - 70.61 64.85
EfficientNet [45] 97.58 93.49 85.67 - 56.87 55.89
+ Two-Gen-BAT (ours) 98.46 94.00 87.58 - 72.69 66.46
+ Combined AT (ours) 98.62 94.12 87.95 - 73.46 67.52
Stehouwer et al. [42] 96.79 94.17 86.84 - 59.74 59.36
+ Two-Gen-BAT (ours) 97.57 95.16 88.84 - 74.56 69.50
+ Combined AT (ours) 97.59 95.11 89.33 - 76.03 70.45
TABLE VI: Evaluation of our adversarial training on different recent deepfake detection baseline models. All these models were trained on the FF++ data of all qualities (i.e., RAW, C23, and C40) [39] and tested on FF++ [39], DFD [16], and Celeb-DF [30].

IV-C Adversarial Blurring as an Attack

We also tested how adversarial blurring performs as an attack. Following prior work, we used ImageNet [14] data and models for this purpose. We selected 50,000 images from the official ImageNet validation set and crafted adversarial examples on the basis of these benign images. We adopted the adversarial accuracy (i.e., the prediction accuracy of the victim model on adversarial examples) as an evaluation metric, and we chose three ImageNet models for the experiment, namely Inception v3 (Inc-v3) [44] (top-1 accuracy: 77.21%), Inception v4 (Inc-v4) [43] (top-1 accuracy: 80.12%), and Inception-ResNet v2 (IncRes-v2) [43] (top-1 accuracy: 80.33%). We tested our (single-step) adversarial blurring and FGSM [22] in this experiment (to save space, their iterative versions for attack are provided in the appendix), and we specifically tested the prediction accuracy of a model on adversarial examples crafted using other models, as an evaluation of the adversarial transferability (or, say, generalization). Table VII summarizes these results, and it can be seen that the adversarially blurred examples transfer reasonably well across ImageNet models. For our adversarial blurring, we let the kernel size be 5 and the step size be 0.1, and all entries of $\sigma_0$ in Eq. (6) were set to 1; for FGSM, we used a fixed perturbation budget $\epsilon$. More results can be found in Appendix B.
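
The transfer evaluation itself is straightforward; a hedged sketch is given below, with torchvision's Inception-v3 as a stand-in for both source and target models (Inception-v4 and Inception-ResNet-v2 are not part of torchvision), and with data loading omitted. `grad_bat_example` is the illustrative helper from Sec. III-B.

```python
import torch
from torchvision import models

@torch.no_grad()
def adversarial_accuracy(target_model, adv_images, labels):
    """Fraction of adversarial examples that the target model still classifies correctly."""
    preds = target_model(adv_images).argmax(dim=1)
    return (preds == labels).float().mean().item()

# Illustrative transfer test: craft on a source model, evaluate on a (different) target model.
source = models.inception_v3(weights="IMAGENET1K_V1").eval().cuda()
target = models.inception_v3(weights="IMAGENET1K_V1").eval().cuda()
# x, y: a batch of ImageNet validation images and labels (loading omitted)
# x_adv = grad_bat_example(source, x, y, alpha=0.1, sigma_init=1.0, kernel_size=5)
# print(adversarial_accuracy(target, x_adv, y))
```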

Source | Target: Inc-v3 (FGSM, BAT) | Inc-v4 (FGSM, BAT) | IncRes-v2 (FGSM, BAT)
Inc-v3 25.02% 21.48% 72.46% 71.04% 72.74% 71.18%
Inc-v4 69.36% 65.86% 36.92% 30.84% 72.36% 71.18%
IncRes-v2 69.60% 65.84% 71.62% 71.04% 44.38% 33.20%
TABLE VII: Adversarial accuracy of ImageNet models on our adversarial examples and the FGSM examples.

V Conclusion

We aim at improving the generalization ability of deepfake detection models by introducing adversarial training. Towards this goal, we have developed a new type of adversarial attack based on image blurring, whose examples can be crafted effectively and efficiently by introducing two generators when training the deepfake detection models. Our method encourages models to learn essential and generalizable features to distinguish fake from real, rather than obvious artifacts that are too specific. The proposed adversarial blurring-based method can further be combined with other adversarial methods (e.g., FGSM) to achieve further improvement in generalization. Extensive experiments have shown that our adversarial training improves the generalization ability of deepfake detection models to unseen image/video qualities and deepfake technologies. The performance of the proposed adversarial examples in attacking ImageNet models has also been tested.

Appendix A Using Xception As A Baseline

Here we report the results of models that use Xception [9] as the backbone.

Table VIII and Table IX compare a variety of plausible settings of adversarial training. Consistent with the results in our main paper, here we still compare: (i) Grad-AAT: input-gradient-based additive adversarial training, (ii) Grad-SAT: input-gradient-based spatially-transformed adversarial training, (iii) Grad-BAT: input-gradient-based blurring adversarial training, (iv) Gen-BAT: generator-based blurring adversarial training, (v) Two-Gen-BAT: our BAT with two generators, and, partially inspired by a general image degeneration process, (vi) Combined AT: combining Two-Gen-BAT with Grad-AAT. It can be seen that, similar to the observations on EfficientNet [45], our BAT with two generators (i.e., Two-Gen-BAT) achieves significantly superior performance in comparison with many other settings, and, when combined with Grad-AAT, further improvement can be obtained.

Table X and Table XI compare our adversarial training (i.e., Two-Gen-BAT and combined AT) to the traditional data augmentation strategies and their combination (i.e., traditional Gaussian noise, traditional Gaussian blur, JPEG compression [49], and “Combined Traditional”).

Appendix B More Results for Attack

In addition to the results reported in Section IV, we further report results for performing attacks with different settings of the blurring kernel. We also provide iterative versions of BAT and FGSM (i.e., PGD) for attack (see Table XIV).

Fig. 4: Adversarial blurred examples crafted with different kernel sizes. Obviously, the image becomes more and more blurry as the kernel size increases. Zoom in for better observation.
Method | NT (AUC, ACC) | DF (AUC, ACC) | F2F (AUC, ACC) | FS (AUC, ACC) | DFD (AUC) | Celeb-DF (AUC, ACC) | Avg (AUC, ACC) — all values in %
Xception [9] 98.06 94.26 82.66 60.54 60.96 51.07 42.94 48.37 69.82 53.90 53.75 68.06 61.60
+ Grad-AAT 97.71 93.42 84.10 60.91 65.68 51.85 43.55 48.72 71.09 56.88 55.98 69.84 62.18
+ Grad-SAT 97.56 93.33 82.89 60.53 63.21 51.43 43.06 48.40 70.34 56.29 54.70 68.89 61.68
+ Grad-BAT 98.40 93.80 85.57 62.10 68.45 52.96 44.61 49.93 72.15 58.44 57.42 71.27 63.57
+ Gen-BAT 98.44 93.91 86.09 64.13 69.42 53.81 45.10 49.94 73.10 59.33 59.05 71.91 64.17
+ Two-Gen-BAT 98.61 94.15 86.18 64.54 72.02 55.35 45.87 50.47 73.15 59.69 60.29 72.59 65.16
+ Combined AT 98.68 93.99 86.58 65.42 72.75 55.58 46.16 50.80 74.45 61.47 62.06 73.35 65.57
TABLE VIII: Comparison between different adversarial training settings in improving generalization to unseen face forgery technologies (based on Xception [9]). All models were trained on the NT C23 data in FF++ [39] and tested on data generated using different forgery technologies. The DFD dataset is extremely imbalanced, so we only report the AUC scores on it, since the overall accuracy makes less sense on imbalanced data.
Method | Trained on Raw: tested on Raw / C23 / C40 | Trained on C23: tested on Raw / C23 / C40 | Trained on C40: tested on Raw / C23 / C40 — each test column reports AUC, ACC in %
Xception [9] 99.21 99.16 65.70 51.11 55.17 50.04 98.05 92.65 98.06 94.26 68.42 54.30 85.90 74.26 86.11 77.14 86.59 78.35
+ Grad-AAT 98.91 98.10 79.47 58.48 58.74 52.96 98.07 92.74 97.71 93.42 70.86 57.33 87.25 76.22 86.07 77.95 86.10 78.08
+ Grad-SAT 98.72 98.03 73.59 56.93 57.77 52.09 97.98 92.61 97.56 93.33 69.68 55.48 86.14 75.15 86.08 77.20 85.87 77.96
+ Grad-BAT 99.27 98.36 90.40 67.45 59.72 54.10 98.38 93.13 98.40 93.80 72.41 58.54 87.62 76.53 86.39 78.46 86.18 78.44
+ Gen-BAT 97.75 97.31 94.63 79.49 68.28 56.41 98.46 93.34 98.44 93.91 72.85 58.99 88.41 77.96 87.32 79.93 86.15 78.32
+ Two-Gen-BAT 97.86 97.66 95.28 81.34 69.97 59.27 98.42 93.35 98.61 94.15 73.40 59.40 88.78 79.45 90.77 83.39 89.50 81.80
+ Combined AT 98.56 98.20 95.14 81.30 69.66 58.61 98.40 93.46 98.68 93.99 72.87 59.76 88.91 79.42 90.77 83.45 88.52 80.75
TABLE IX: Comparison between different adversarial training settings for improving generalization to unseen image/video qualities (based on Xception [9]). All models were trained on NT data in FF++ [39] and tested on NT data of possibly different qualities. Models were trained on a specific image/video quality and were expected to generalize to the other image/video qualities.
Method | NT (AUC, ACC) | DF (AUC, ACC) | F2F (AUC, ACC) | FS (AUC, ACC) | DFD (AUC) | Celeb-DF (AUC, ACC) | Avg (AUC, ACC) — all values in %
Xception [9] 98.06 94.26 82.66 60.54 60.96 51.07 42.94 48.37 69.82 53.90 53.75 68.06 61.60
+ Gaussian Noise 98.53 94.58 82.75 61.60 69.96 51.16 42.99 48.46 69.75 55.01 53.73 69.85 61.91
+ Gaussian Blur 98.87 95.05 82.75 61.64 70.02 51.20 43.02 48.52 69.95 54.98 54.68 69.93 62.22
+ JPEG Compression 98.44 94.53 82.69 61.57 70.00 51.14 42.98 48.42 69.88 55.10 53.93 69.85 61.92
+ Combined Traditional 98.99 95.08 82.77 61.70 70.07 51.28 43.10 48.53 70.03 55.16 54.79 70.02 62.28
+ Two-Gen-BAT (ours) 98.61 94.15 86.18 64.54 72.02 55.35 45.87 50.47 73.15 59.69 60.29 72.59 65.16
+ Combined AT (ours) 98.68 93.99 86.58 65.42 72.75 55.58 46.16 50.80 74.45 61.47 62.06 73.35 65.57
TABLE X: Comparison between adversarial training and traditional data augmentation in improving generalization to unseen face forgery technologies (based on Xception [9]). The models were trained using the NT C23 data in FF++ and tested on data generated using different forgery technologies.
Method | Trained on Raw: tested on Raw / C23 / C40 | Trained on C23: tested on Raw / C23 / C40 | Trained on C40: tested on Raw / C23 / C40 — each test column reports AUC, ACC in %
Xception [9] 99.21 99.16 65.70 51.11 55.17 50.04 98.05 92.65 98.06 94.26 68.42 54.30 85.90 74.26 86.11 77.14 86.59 78.35
+ Gaussian Noise 98.79 98.77 74.09 56.18 56.95 51.86 97.82 92.59 98.53 94.58 70.68 59.82 87.38 75.45 86.44 78.53 86.39 78.35
+ Gaussian Blur 99.26 99.14 78.42 59.65 57.59 52.34 98.15 92.98 98.87 95.05 70.89 59.98 87.58 75.69 86.83 78.76 86.44 78.45
+ JPEG Compression 98.92 98.88 83.86 67.76 66.24 54.35 97.90 92.69 98.44 94.53 71.96 61.27 87.45 76.33 87.01 79.39 87.53 79.56
+ Combined Traditional 99.14 99.18 84.85 68.90 66.75 55.38 98.51 92.97 98.99 95.08 72.16 62.03 87.57 76.52 87.45 80.40 87.57 79.06
+ Two-Gen-BAT (ours) 97.86 97.66 95.28 81.34 69.97 59.27 98.42 93.35 98.61 94.15 73.40 59.40 88.78 79.45 90.77 83.39 89.50 81.80
+ Combined AT (ours) 98.56 98.20 95.14 81.30 69.66 58.61 98.40 93.46 98.68 93.99 72.87 59.76 88.91 79.42 90.77 83.45 88.52 80.75
TABLE XI: Comparison between adversarial training and traditional data augmentation in improving generalization to unseen image/video qualities (based on Xception [9]). The train and test data were split from the NT data in FF++ [39], with possibly different qualities. Models were trained on a specific image/video quality and were expected to generalize to the other image/video qualities.
Kernel Size | Source | Target: Inc-v3 / Inc-v4 / IncRes-v2
3 Inc-v3 29.08% 75.60% 76.68%
Inc-v4 72.30% 38.78% 76.28%
IncRes-v2 72.30% 75.60% 39.92%
5 Inc-v3 21.48% 71.04% 71.18%
Inc-v4 65.86% 30.84% 71.18%
IncRes-v2 65.84% 71.04% 33.20%
7 Inc-v3 16.80% 65.68% 65.98%
Inc-v4 60.12% 23.04% 65.98%
IncRes-v2 60.12% 65.52% 27.28%
9 Inc-v3 15.88% 62.30% 61.92%
Inc-v4 56.14% 20.92% 62.02%
IncRes-v2 56.22% 62.00% 25.62%
TABLE XII: Adversarial accuracy under different kernel sizes.

Table XII summarizes how the kernel size of adversarial blurring affects the attack performance. We keep using the adversarial accuracy (i.e., the prediction accuracy of the victim model on adversarial examples) as an evaluation metric. Figure 4 demonstrates the adversarial examples generated using different kernel sizes. We let the step size be 0.1 and set $\sigma_0$ to a map of all ones for Table XII and Figure 4. As expected, as the kernel size increases, we obtain more blurry images and thus achieve higher attack success rates.

Table XIII summarizes the influence of the initialization of the $\sigma_0$ map (i.e., by setting different constant values) on the adversarial blurring attack. We let the step size be 0.1 and fixed the kernel size to 5 for obtaining the results in Table XIII.

Initial value | Source | Target: Inc-v3 / Inc-v4 / IncRes-v2
0.1 Inc-v3 49.08% 76.06% 77.28%
Inc-v4 75.10% 54.68% 78.28%
IncRes-v2 75.10% 78.06% 58.38%
1 Inc-v3 21.48% 71.04% 71.18%
Inc-v4 65.86% 30.84% 71.18%
IncRes-v2 65.84% 71.04% 33.20%
10 Inc-v3 18.12% 69.96% 70.24%
Inc-v4 63.76% 27.58% 69.24%
IncRes-v2 63.78% 69.98% 29.30%
100 Inc-v3 17.20% 68.78% 69.24%
Inc-v4 62.86% 25.64% 68.22%
IncRes-v2 62.84% 68.18% 28.70%

TABLE XIII: Adversarial accuracy of ImageNet [14] models under different initializations of the $\sigma_0$ map.
Source | Target: Inc-v3 (PGD, BAT) | Inc-v4 (PGD, BAT) | IncRes-v2 (PGD, BAT)
Inc-v3 22.06% 18.53% 69.73% 69.02% 69.39% 68.26%
Inc-v4 67.63% 62.95% 33.84% 28.89% 68.84% 68.06%
IncRes-v2 65.92% 63.13% 68.29% 67.72% 41.84% 31.74%
TABLE XIV: Adversarial accuracy of ImageNet models on our adversarial examples and the PGD examples. The number of iterations was set to 3.

Appendix C Multi-step Scheme in Adversarial Training

As mentioned in the main paper, multi-step adversarial examples can be introduced in input-gradient-based adversarial training, though the training complexity increases drastically. Here we report some experimental results with the multi-step scheme. We tested the performance of using multi-step FGSM (also called iterative FGSM [27]) and multi-step (input-gradient-based) adversarial blurring in training deepfake detection models. We considered three steps in this experiment, which was performed with the EfficientNet backbone. Our results showed that the three-step FGSM in general did not lead to superior test-set AUCs (NT: 98.61%, DF: 85.12%, F2F: 68.05%, FS: 44.58%) in comparison to the single-step FGSM (i.e., Grad-AAT; NT: 98.00%, DF: 85.91%, F2F: 71.12%, FS: 44.97%). For our adversarial blurring, the three-step scheme seems slightly more effective in generalizing to the DF and FS data (NT: 98.75%, DF: 86.48%, F2F: 69.67%, FS: 46.72%) in comparison to the single-step input-gradient-based scheme, yet further increasing the number of steps resulted in similar or worse AUCs.

The training complexity of our two-generator-based BAT is similar to that of the three-step input-gradient-based BAT, showing that the generator-based method trades off test-set accuracy against training complexity well.

Appendix D The Role of Kernel Size in Adversarial Training

In order to better explore the impact of the kernel size on our approach, we conducted a set of experiments differing only in the kernel size and compared the performance of the obtained models. It can be seen from Tables XV and XVI that as the kernel size is increased, the generalization performance of the model is also improved. Future work can consider even larger kernel sizes.

Kernel Size | NT (AUC, ACC) | DF (AUC, ACC) | F2F (AUC, ACC) | FS (AUC, ACC) | DFD (AUC) | Celeb-DF (AUC, ACC) | Avg (AUC, ACC) — all values in %
3 98.11 95.00 86.50 66.92 71.31 55.05 46.98 49.34 76.05 65.43 66.94 74.06 66.65
5 98.37 95.14 87.04 68.01 72.83 55.48 47.72 49.87 76.26 66.29 67.39 74.75 67.18
7 98.54 95.21 87.38 68.93 73.77 55.86 48.52 50.12 76.43 66.58 67.74 75.20 67.57
9 98.72 95.26 87.51 69.40 74.65 56.19 48.99 50.43 76.60 66.84 67.91 75.55 67.84

TABLE XV: Comparison between Two-Gen-BAT settings with different kernel sizes in improving the generalization to unseen forgery technologies. Except for the kernel size, all configurations are consistent with Table I.
Kernel Size | Trained on Raw: tested on Raw / C23 / C40 | Trained on C23: tested on Raw / C23 / C40 | Trained on C40: tested on Raw / C23 / C40 — each test column reports AUC, ACC in %
3 99.11 98.20 95.75 82.13 69.79 59.14 98.64 94.70 98.11 95.00 73.42 61.91 90.02 82.01 92.48 84.43 89.02 80.93
5 99.39 98.64 95.84 83.07 70.74 60.26 98.82 94.98 98.37 95.14 73.86 62.20 90.32 82.43 92.81 84.77 91.13 83.04
7 99.43 98.89 95.89 83.81 71.55 61.05 98.95 95.36 98.54 95.21 74.21 62.45 90.46 82.74 93.29 85.04 92.84 85.46
9 99.46 99.04 95.93 84.19 72.18 61.87 98.92 95.52 98.72 95.26 74.73 62.53 90.58 82.91 93.51 85.36 94.19 87.02
TABLE XVI: Comparison between Two-Gen-BAT settings with different kernel sizes in improving the generalization to unseen image/video qualities. Except for the kernel size, all configurations are consistent with Table II.

Appendix E Hyper-parameter Tuning

In order to compare the traditional data augmentation methods and our method fairly, we fine-tuned each hyper-parameter carefully on the validation set and report test results using the settings that obtained the best validation accuracies. For data augmentation using traditional Gaussian noise, we fine-tuned the probability of adding noise as well as the mean and variance of the noise within pre-defined ranges. We saw the best validation performance when the variance was sampled uniformly at random within a range for each training image. For the traditional Gaussian blur augmentation, we set the kernel size to 9, and we fine-tuned the variance of each Gaussian kernel and the probability of applying the blur. In this setting, the best validation performance was obtained when each training image took a uniformly random variance within a range. For JPEG compression, we let the compression quality for each image be randomly sampled between a lower bound and an upper bound, with the two bounds chosen according to empirical results on the validation set. For the combination of traditional augmentations, we took the same values for all these hyper-parameters.
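
For completeness, the traditional augmentations compared above could be implemented roughly as follows; this is a hedged sketch, with the JPEG quality bounds, noise probability and standard-deviation range, blur probability, and blur sigma range all placeholders rather than the tuned values (which are not recoverable from the text).

```python
import io
import random
import torch
from PIL import Image
from torchvision import transforms

def random_jpeg(img, q_low=30, q_high=100):
    """Re-encode a PIL image with a random JPEG quality (bounds are placeholders)."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=random.randint(q_low, q_high))
    buf.seek(0)
    return Image.open(buf).convert("RGB")

def add_gaussian_noise(x, p=0.5, std_max=0.05):
    """With probability p, add zero-mean Gaussian noise with a random std (placeholder range)."""
    if random.random() < p:
        x = (x + random.uniform(0.0, std_max) * torch.randn_like(x)).clamp(0.0, 1.0)
    return x

# "Combined Traditional": JPEG compression + Gaussian blur + Gaussian noise.
traditional_aug = transforms.Compose([
    transforms.Lambda(random_jpeg),
    transforms.RandomApply([transforms.GaussianBlur(kernel_size=9, sigma=(0.1, 2.0))], p=0.5),
    transforms.ToTensor(),
    transforms.Lambda(add_gaussian_noise),
])
```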

References

  • [1] D. Afchar, V. Nozick, J. Yamagishi, and I. Echizen (2018) Mesonet: a compact facial video forgery detection network. In 2018 IEEE International Workshop on Information Forensics and Security (WIFS), pp. 1–7. Cited by: §II.
  • [2] M. Afifi and M. S. Brown (2019) What else can fool deep learning? Addressing color constancy errors on deep neural network performance. In Proceedings of the IEEE International Conference on Computer Vision, pp. 243–252. Cited by: §III-A.
  • [3] A. Athalye, N. Carlini, and D. Wagner (2018) Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420. Cited by: §II.
  • [4] J. Bao, D. Chen, F. Wen, H. Li, and G. Hua (2018) Towards open-set identity preserving face synthesis. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 6713–6722. Cited by: §II.
  • [5] B. Bayar and M. C. Stamm (2016) A deep learning approach to universal image manipulation detection using a new convolutional layer. In Proceedings of the 4th ACM Workshop on Information Hiding and Multimedia Security, pp. 5–10. Cited by: §II.
  • [6] D. Bitouk, N. Kumar, S. Dhillon, P. Belhumeur, and S. K. Nayar (2008) Face swapping: automatically replacing faces in photographs. In ACM SIGGRAPH 2008 papers, pp. 1–8. Cited by: §II.
  • [7] C. Bregler, M. Covell, and M. Slaney (1997) Video rewrite: driving visual speech with audio. In Proceedings of the 24th annual conference on Computer graphics and interactive techniques, pp. 353–360. Cited by: §II.
  • [8] Y. Choi, M. Choi, M. Kim, J. Ha, S. Kim, and J. Choo (2018) StarGAN: unified generative adversarial networks for multi-domain image-to-image translation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 8789–8797. Cited by: §II.
  • [9] F. Chollet (2017) Xception: deep learning with depthwise separable convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1251–1258. Cited by: Appendix A, TABLE X, TABLE XI, TABLE VIII, TABLE IX, §IV-A, §IV-B, TABLE VI, §IV.
  • [10] D. Cozzolino, D. Gragnaniello, and L. Verdoliva (2014) Image forgery localization through the fusion of camera-based, feature-based and pixel-based techniques. In 2014 IEEE International Conference on Image Processing (ICIP), pp. 5302–5306. Cited by: §I.
  • [11] D. Cozzolino, G. Poggi, and L. Verdoliva (2017) Recasting residual-based local descriptors as convolutional neural networks: an application to image forgery detection. In Proceedings of the 5th ACM Workshop on Information Hiding and Multimedia Security, pp. 159–164. Cited by: §II.
  • [12] K. Dale, K. Sunkavalli, M. K. Johnson, D. Vlasic, W. Matusik, and H. Pfister (2011) Video face replacement. In Proceedings of the 2011 SIGGRAPH Asia Conference, pp. 1–10. Cited by: §II.
  • [13] DeepFakes. Note: www.github.com/deepfakes/faceswap. Accessed: 2019-09-18. Cited by: §II, §IV.
  • [14] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei (2009) Imagenet: a large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248–255. Cited by: TABLE XIII, §III-B, §IV-C, §IV, §IV.
  • [15] M. Du, S. Pentyala, Y. Li, and X. Hu (2019) Towards generalizable forgery detection with locality-aware autoencoder. arXiv preprint arXiv:1909.05999. Cited by: §I, §II.
  • [16] N. Dufour and A. Gully (2019) Contributing data to deepfake detection research. Google AI Blog. Cited by: §IV-A, TABLE VI, §IV.
  • [17] FaceSwap. Note: www.github.com/MarekKowalski/FaceSwap. Accessed: 2019-09-30. Cited by: §I, §II, §IV.
  • [18] P. Ferrara, T. Bianchi, A. De Rosa, and A. Piva (2012) Image forgery localization via fine-grained analysis of cfa artifacts. IEEE Transactions on Information Forensics and Security 7 (5), pp. 1566–1577. Cited by: §II.
  • [19] J. Fridrich and J. Kodovsky (2012) Rich models for steganalysis of digital images. IEEE Transactions on Information Forensics and Security 7 (3), pp. 868–882. Cited by: §I, §II.
  • [20] P. Garrido, L. Valgaerts, O. Rehmsen, T. Thormahlen, P. Perez, and C. Theobalt (2014) Automatic face reenactment. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4217–4224. Cited by: §II.
  • [21] M. Goljan and J. Fridrich (2015) CFA-aware features for steganalysis of color images. In Media Watermarking, Security, and Forensics 2015, Vol. 9409, pp. 94090V. Cited by: §II.
  • [22] I. J. Goodfellow, J. Shlens, and C. Szegedy (2014) Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572. Cited by: §I, §II, Fig. 1, §III-A, §III-B, §IV-C.
  • [23] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680. Cited by: §II, §III-C.
  • [24] D. Güera and E. J. Delp (2018) Deepfake video detection using recurrent neural networks. In 2018 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), pp. 1–6. Cited by: §II.
  • [25] Z. He, W. Zuo, M. Kan, S. Shan, and X. Chen (2019) Attgan: facial attribute editing by only changing what you want. IEEE Transactions on Image Processing 28 (11), pp. 5464–5478. Cited by: §II.
  • [26] S. Hussain, P. Neekhara, M. Jere, F. Koushanfar, and J. McAuley (2021) Adversarial deepfakes: evaluating vulnerability of deepfake detectors to adversarial examples. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3348–3357. Cited by: §II.
  • [27] A. Kurakin, I. Goodfellow, and S. Bengio (2016) Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236. Cited by: Appendix C, §II.
  • [28] L. Li, J. Bao, T. Zhang, H. Yang, D. Chen, F. Wen, and B. Guo (2020) Face x-ray for more general face forgery detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5001–5010. Cited by: §I, §II.
  • [29] Y. Li and S. Lyu (2018) Exposing deepfake videos by detecting face warping artifacts. arXiv preprint arXiv:1811.00656. Cited by: §II.
  • [30] Y. Li, X. Yang, P. Sun, H. Qi, and S. Lyu (2020) Celeb-df: a large-scale challenging dataset for deepfake forensics. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3207–3216. Cited by: TABLE VI, §IV.
  • [31] L. Liu, H. Jiang, P. He, W. Chen, X. Liu, J. Gao, and J. Han (2019) On the variance of the adaptive learning rate and beyond. arXiv preprint arXiv:1908.03265. Cited by: §IV.
  • [32] M. Liu, Y. Ding, M. Xia, X. Liu, E. Ding, W. Zuo, and S. Wen (2019) Stgan: a unified selective transfer network for arbitrary image attribute editing. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3673–3682. Cited by: §II.
  • [33] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu (2017) Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083. Cited by: §I, §II, §III-B.
  • [34] Y. Nirkin, Y. Keller, and T. Hassner (2019) FSGAN: subject agnostic face swapping and reenactment. In Proceedings of the IEEE international conference on computer vision, pp. 7184–7193. Cited by: §II.
  • [35] K. Olszewski, Z. Li, C. Yang, Y. Zhou, R. Yu, Z. Huang, S. Xiang, S. Saito, P. Kohli, and H. Li (2017) Realistic dynamic facial textures from a single image using gans. In Proceedings of the IEEE International Conference on Computer Vision, pp. 5429–5438. Cited by: §II.
  • [36] X. Pan, X. Zhang, and S. Lyu (2012) Exposing image splicing with inconsistent local noise variances. In 2012 IEEE International Conference on Computational Photography (ICCP), pp. 1–10. Cited by: §I, §II.
  • [37] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala (2019) PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, Vol. 32, pp. 8026–8037. Cited by: §IV.
  • [38] N. Rahmouni, V. Nozick, J. Yamagishi, and I. Echizen (2017) Distinguishing computer graphics from natural images using convolution neural networks. In 2017 IEEE Workshop on Information Forensics and Security (WIFS), pp. 1–6. Cited by: §II.
  • [39] A. Rössler, D. Cozzolino, L. Verdoliva, C. Riess, J. Thies, and M. Nießner (2019) Faceforensics++: learning to detect manipulated facial images. arXiv preprint arXiv:1901.08971. Cited by: TABLE XI, TABLE VIII, TABLE IX, §II, TABLE I, TABLE II, TABLE V, TABLE VI, §IV, §IV.
  • [40] N. Ruiz, S. A. Bargal, and S. Sclaroff (2020) Disrupting deepfakes: adversarial attacks against conditional image translation networks and facial manipulation systems. In European Conference on Computer Vision, pp. 236–251. Cited by: §II.
  • [41] E. Rusak, L. Schott, R. S. Zimmermann, J. Bitterwolf, O. Bringmann, M. Bethge, and W. Brendel (2020) A simple way to make neural networks robust against diverse image corruptions. arXiv preprint arXiv:2001.06057. Cited by: §III-C.
  • [42] J. Stehouwer, H. Dang, F. Liu, X. Liu, and A. Jain (2019) On the detection of digital face manipulation. arXiv preprint arXiv:1910.01717. Cited by: §II, §IV-B, TABLE VI, §IV.
  • [43] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. Alemi (2016) Inception-v4, inception-resnet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261. Cited by: §IV-C.
  • [44] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna (2016) Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2818–2826. Cited by: §IV-C.
  • [45] M. Tan and Q. V. Le (2019) Efficientnet: rethinking model scaling for convolutional neural networks. arXiv preprint arXiv:1905.11946. Cited by: Appendix A, §IV-A, §IV-B, TABLE I, TABLE II, TABLE III, TABLE IV, TABLE V, TABLE VI, §IV.
  • [46] J. Thies, M. Zollhöfer, and M. Nießner (2019) Deferred neural rendering: image synthesis using neural textures. arXiv preprint arXiv:1904.12356. Cited by: §II, §IV.
  • [47] J. Thies, M. Zollhofer, M. Stamminger, C. Theobalt, and M. Nießner (2016) Face2face: real-time face capture and reenactment of rgb videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2387–2395. Cited by: §I, §II, §IV.
  • [48] J. Uesato, J. Alayrac, P. Huang, R. Stanforth, A. Fawzi, and P. Kohli (2019) Are labels required for improving adversarial robustness?. arXiv preprint arXiv:1905.13725. Cited by: §II.
  • [49] S. Wang, O. Wang, R. Zhang, A. Owens, and A. A. Efros (2020) CNN-generated images are surprisingly easy to spot… for now. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Vol. 7. Cited by: Appendix A, §III-B, §IV-B.
  • [50] C. Xiao, J. Zhu, B. Li, W. He, M. Liu, and D. Song (2018) Spatially transformed adversarial examples. arXiv preprint arXiv:1801.02612. Cited by: §I, Fig. 1, §III-A.
  • [51] N. Yu, L. S. Davis, and M. Fritz (2019) Attributing fake images to gans: learning and analyzing gan fingerprints. In Proceedings of the IEEE International Conference on Computer Vision, pp. 7556–7566. Cited by: §I.
  • [52] ZAO. Note: https://apps.apple.com/cn/app/zao/id1465199127. Accessed: 2019-10-13. Cited by: §II.
  • [53] P. Zhou, X. Han, V. I. Morariu, and L. S. Davis (2017) Two-stream neural networks for tampered face detection. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1831–1839. Cited by: §II.
  • [54] J. Zhu, T. Park, P. Isola, and A. A. Efros (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE international conference on computer vision, pp. 2223–2232. Cited by: §III-C.