In recent years, Deepfakes (manipulated videos) have become an increasing threat to social security, privacy, and democracy. As a result, research into the detection of such Deepfake videos has taken many different approaches. Initial methods tried to exploit discrepancies introduced by the fake video generation process; more recently, research has moved toward deep learning approaches for this task.
From a larger perspective, Deepfake detection can be considered a binary classification problem: distinguishing between ‘real’ and ‘fake’ videos. Many architectures have achieved remarkable results on it, and these are covered in the following section. However, most of these methods rely only on the information present in a single image, performing analysis frame by frame, and fail to leverage the temporal information in the videos. An area of research that has delved deeper into using information across frames is ‘Action Recognition’.
In this paper, we aim to apply techniques used for video classification, which take advantage of 3D input, to the Deepfake classification problem at hand. All the convolutional networks we tested (apart from RCN) were pre-trained on the Kinetics dataset , a large-scale video classification dataset. The methods were analysed using fixed-length video clips.
For this work we have chosen the Celeb-DF (v2) dataset , which contains 590 real videos collected from YouTube and 5,639 corresponding high-quality synthesised videos. Celeb-DF was selected because it has proved to be a challenging dataset for existing Deepfake detection methods, as its synthesised videos have fewer noticeable visual artifacts.
2 Related Work
In the last few decades, many methods for generating realistic synthetic faces have surfaced [11, 15, 36, 35, 22, 9, 34, 33, 8, 32, 30]. Most of the early methods fell out of use and were replaced by generative adversarial networks and style transfer techniques [2, 1, 3, 6]. Li et al.  proposed the Deepfake maker method, which detects faces in an input video, extracts facial landmarks, aligns the faces to a standard configuration, crops them, and feeds them to an encoder-decoder to generate the face of one person bearing the target’s expression.
The data used in these algorithms has come from several large-scale DeepFake video datasets, such as FaceForensics++ , DFDC , DF-TIMIT , and UADFV . The authors of the Celeb-DF dataset  claimed it is more realistic than the other datasets because their synthesis method produces fewer visual artifacts. Celeb-DF has 590 real videos and 5,639 fake ones, with an average length of 13 seconds at a frame rate of 30 fps. The real videos were gathered from YouTube and correspond to interviews of 59 celebrities. The synthetic videos in Celeb-DF were made with the DeepFake maker algorithm , and the resolution of the videos is 256 × 256 pixels.
The importance of the task of detecting Deepfakes has led to the development of numerous methods. Rossler et al.  proposed the XceptionNet model that was trained on the Faceforensics++ dataset. Other popular Deepfake detection approaches include Two-Stream , MesoNet , Headpose , FWA , VA , Multi-task , capsule  and DSP-FWA .
Li et al.  tested all these methods on Celeb-DF but did not train them on it. Kumar and Bhavsar , on the other hand, both trained and tested on Celeb-DF using the XceptionNet architecture with metric learning.
Even though video classification methods using spatio-temporal features have not matched the success of deep-learning-based image classification, several have performed well, such as C3D , Recurrent CNN , ResNet 3D , ResNet Mixed 3D-2D , ResNet (2+1)D , and I3D .
3 Methods
In this section we outline the different network architectures we used to perform DeepFake classification. Random cropping and temporal jittering are applied for all methods.
3.1 Pre-processing
The dataset we used was Celeb-DF (v2), which contains 590 real and 5,639 fake videos. When pre-processing the videos, we decided it would be best to remove information that might distract the network from learning what is important.
In a synthesised video, the only manipulated part of each frame is the region over the face, so, like the frame-based Deepfake detection methods, we decided to use face cropping. This means taking a crop over the face in every frame and restacking the crops back into a video file. It also means that all frames must be the same size and that we never stretch the source between frames to match that size, so that no video contains any distortion.
To accomplish this we tried Haar cascades, then BlazeFace, before settling on RetinaFace . Haar cascades were unacceptable for our purposes because they produce many false positives. BlazeFace was unacceptable because it drew its bounding boxes inconsistently between frames, causing a lot of jitter. RetinaFace has a slower forward pass than BlazeFace, but its speed is still acceptable.
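The fixed-size, distortion-free cropping step described above can be sketched as follows. This is a minimal illustration, assuming a `detect_face` callable (e.g. a RetinaFace wrapper, not shown) that returns one `(x1, y1, x2, y2)` bounding box per frame; the crop is a fixed-size square centred on the box and padded rather than stretched.

```python
import numpy as np

def crop_face(frame, box, out_size=224):
    """Crop a fixed-size square around a face box without distortion.

    The crop is centred on the box, clipped to the frame, then
    zero-padded so every output has the same shape and no frame is
    ever stretched.
    """
    h, w = frame.shape[:2]
    cx = (box[0] + box[2]) // 2
    cy = (box[1] + box[3]) // 2
    half = out_size // 2
    x1, x2 = max(0, cx - half), min(w, cx + half)
    y1, y2 = max(0, cy - half), min(h, cy + half)
    crop = frame[y1:y2, x1:x2]
    # Pad with zeros instead of resizing, so faces keep their aspect ratio.
    out = np.zeros((out_size, out_size, frame.shape[2]), dtype=frame.dtype)
    out[: crop.shape[0], : crop.shape[1]] = crop
    return out

def crop_video(frames, detect_face):
    """Restack per-frame face crops into one fixed-size clip."""
    return np.stack([crop_face(f, detect_face(f)) for f in frames])
```

In a real pipeline `detect_face` would wrap the RetinaFace forward pass; smoothing the boxes across frames further reduces jitter.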
3.2 Unmasking Deepfakes with Simple Features
One non-temporal classification method was included as a basis of comparison. This method, Unmasking Deepfakes with Simple Features , relies on detecting statistical artifacts that GANs introduce into images. The discrete Fourier transform of the image is taken, and the 2D amplitude spectrum is compressed into a 1D feature vector by azimuthal averaging. These feature vectors can then be classified with a simple binary classifier, such as logistic regression. The same features can also be used with k-means clustering to effectively classify unlabelled datasets.
3.3 Recurrent Convolutional Network
Deepfake videos lack temporal coherence: frame-by-frame video manipulation produces low-level artifacts that manifest as inconsistent temporal artifacts across frames. Such artifacts can be found using an RCN . The network architecture pipeline consists of a CNN for feature extraction and an LSTM for temporal sequence analysis; a fully connected layer uses the temporal sequence descriptors output by the LSTM to classify videos as fake or real. Our model consists of two parts: an encoder and a decoder. The encoder comprises a pre-trained VGG-11 network with batch normalization for extracting features, while the decoder is composed of an LSTM with 3 hidden layers followed by 2 fully connected layers. Each video from the dataset was converted into 10 random sequential frames and fed to the network. For each 3D tensor corresponding to a video, per-frame features were extracted by iterating over the time dimension and stacked into a tensor, which was then fed as input to the decoder.
3.4 ResNet 3D
3D CNNs  are able to capture both spatial and temporal information by extracting motion features encoded in adjacent frames of a video. Models like the 3D CNN can be relatively shallow compared to 2D image-based CNNs. The R3D network implemented in this paper consists of a sequence of residual blocks which introduce shortcut connections bypassing signals between layers. The only difference with respect to traditional residual networks is that the network performs 3D convolutions and 3D pooling. The tensor computed by the i-th convolutional block is 4-dimensional and has size N_i × L × H × W, where N_i is the number of filters used in the block. Just like C3D , the kernels have a size of 3 × 3 × 3, and the temporal stride of conv1 is 1. The input size is 3 × 16 × 112 × 112, where 3 corresponds to the RGB channels and 16 to the number of consecutive frames buffered. Our implementation follows the 18-layer version of the original paper and has a total of 33.17 million weights pretrained on the Kinetics dataset .
3.5 ResNet Mixed 3D-2D Convolution
MC3 builds on the R3D implementation . To address the argument that temporal modelling may not be required over all the layers in the network, the Mixed Convolution architecture  starts with 3D convolutions and switches to 2D convolutions in the top layers. There are multiple variants of this architecture, formed by replacing different groups of 3D convolutions in R3D with 2D convolutions. The specific model used in this paper is MC3 (meaning that layer 3 and deeper are all 2D). Our implementation has 18 layers and 11.49 million weights pretrained on the Kinetics dataset. The network takes clips of 16 consecutive RGB frames of size 112 × 112 as input. A stride of 1 × 2 × 2 is used in conv1 to downsample spatially, and a stride of 2 × 2 × 2 is used to downsample at conv3_1, conv4_1, and conv5_1.
3.6 ResNet (2+1)D
A different approach approximates a 3D convolution using a 2D convolution followed by a 1D convolution. In the R(2+1)D network , the 3D convolutional filters of size t × d × d are replaced with M_i 2D filters of size 1 × d × d followed by temporal convolutional filters of size t × 1 × 1. M_i is a hyperparameter which determines the dimension of the subspace into which the signal is projected between the spatial and temporal convolutions. By separating the 2D and 1D convolutions, more non-linearities are introduced into the network, thereby increasing the complexity of the functions that can be represented. In addition, factorising the convolutions makes optimisation easier, resulting in a lower training error. The striding and structure of the network are similar to MC3. The model has 18 layers and 31.30 million parameters pretrained on the Kinetics dataset.
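As a sketch of how the subspace dimension is fixed in practice: the R(2+1)D paper chooses M_i so that the factorised block has roughly the same number of parameters as the full 3D convolution it replaces, with N_{i-1} input channels and N_i output filters in block i,

```latex
M_i = \left\lfloor \frac{t\, d^{2}\, N_{i-1}\, N_i}{d^{2}\, N_{i-1} + t\, N_i} \right\rfloor
```

so that any accuracy gain comes from the extra non-linearity and easier optimisation rather than from added capacity.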
3.7 I3D
One of the highest performing network architectures for spatiotemporal learning is I3D . I3D is an Inflated 3D ConvNet based on 2D ConvNet inflation: the network inflates the filters and pooling kernels of deep classification ConvNets to 3D, allowing spatiotemporal features to be learnt with existing successful 2D architectures pretrained on ImageNet. In our implementation the RGB data is passed to the single-stream I3D network, with Inception-V1 as the base network. Every convolutional layer is followed by batch normalization  and a ReLU activation function, except for the last convolutional layer. The cropped face images are resized to 256 × 256 pixels and then randomly cropped to 224 × 224. The network has 12.29 million weights pretrained on the Charades dataset .
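The inflation step itself is simple to sketch: a 2D kernel is repeated along a new temporal axis and rescaled, so that a temporally constant ("boring") video produces the same activations as the original 2D filter on a single frame. A minimal NumPy illustration:

```python
import numpy as np

def inflate_2d_kernel(w2d, t):
    """Inflate a 2D conv kernel (C_out, C_in, k, k) to 3D (C_out, C_in, t, k, k).

    Weights are tiled t times along the new temporal axis and divided by t,
    preserving the response to a temporally constant input.
    """
    return np.repeat(w2d[:, :, None], t, axis=2) / t
```

Applying this to every filter and pooling kernel of Inception-V1 yields the I3D initialisation before fine-tuning on video data.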
4 Experiments
We evaluate all our methods by measuring the top test accuracy and the top ROC-AUC score. To avoid errors related to numerical imprecision, the scores are rounded to 4 decimal places.
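For reference, ROC-AUC can be computed without tracing the ROC curve, via the rank (Mann-Whitney) statistic: the probability that a randomly chosen fake sample scores higher than a randomly chosen real one, with ties counting half. A small NumPy sketch (an illustration, not our evaluation code, which used standard library routines):

```python
import numpy as np

def roc_auc(labels, scores):
    """ROC-AUC as the probability that a random positive (fake) sample
    outranks a random negative (real) one; ties get half credit."""
    labels, scores = np.asarray(labels), np.asarray(scores)
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

Rounding the result to 4 decimal places gives the figures reported in our tables.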
4.1 Baselines
We compare the performance of our video-based methods against a selection of methods that operate only at the frame level and do not learn from temporal information (Table 1). Li et al.  benchmarked the ROC-AUC scores of frame-based methods tested on Celeb-DF but trained on other Deepfake datasets.
More recently, Kumar and Bhavsar  applied metric learning with Xception in a triplet network architecture to make the detections. This method was trained and tested on Celeb-DF, making it a fairer comparison for our spatio-temporal methods. Additionally, Durall et al.  proposed a method based on running a linear classifier on simple features derived from the discrete Fourier transform of the frames of the videos. We re-implemented that method and trained it on Celeb-DF.
4.2 Results and analysis
We tested some of the most popular networks that take advantage of temporal features. All the networks were trained on the Celeb-DF dataset starting from the published pretrained weights; no layers were frozen during training. Each method was trained for over 25 epochs, and the best ROC-AUC score and test accuracy were recorded in Table 2.
The learning rate started at 0.001 and was divided by 10 every 10 epochs. The optimiser was stochastic gradient descent with a momentum of 0.9 and a weight decay of 0.0005, and the criterion was cross-entropy loss. To account for the imbalance between positive and negative samples in the training set, the criterion weighted each class inversely proportionally to its number of training samples.
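The class weighting and step schedule above can be sketched as follows. The weight normalisation shown (total samples divided by the number of classes times each class count) is one common convention for "inversely proportional" weighting, assumed here for illustration:

```python
import numpy as np

def class_weights(labels):
    """Per-class loss weights inversely proportional to class frequency;
    a perfectly balanced dataset yields weight 1.0 for every class."""
    counts = np.bincount(labels)
    return counts.sum() / (len(counts) * counts)

def lr_at(epoch, base_lr=0.001):
    """Step schedule: divide the base learning rate by 10 every 10 epochs."""
    return base_lr / (10 ** (epoch // 10))
```

With Celeb-DF's roughly 1:10 real-to-fake ratio, the real class receives a correspondingly larger weight in the cross-entropy loss.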
The classical frame-based method tested achieved an accuracy of only 66.8%. This is likely due to the reduced statistical discrepancy between the real and fake images in Celeb-DF compared with FaceForensics++. As shown in the plots in Figure 5, for the cropped Celeb-DF the relative power at each frequency remains within one standard deviation of the mean for both fake and real videos, unlike the FaceForensics++ results. This lack of differentiation between the real and fake statistics can explain the classifier’s lower performance on this dataset.
| Method | ROC-AUC % | Accuracy % |
5 Conclusion
In this paper, we describe and evaluate the efficacy of action recognition methods at detecting AI-generated Deepfakes. Our methods differ from previously explored ones because the networks make decisions while incorporating temporal information. This extra information helped several of these networks beat the state-of-the-art baseline frame-based methods. In particular, R3D outperformed the other networks, even I3D , which performs better at action recognition. We hope that this paper will help future researchers discover effective ways of detecting Deepfakes.
References
- Dfaker/df: larger resolution face masked, weirdly warped, deepfake. https://github.com/dfaker/df (Accessed 04/20/2020).
- FakeApp 2.2.0 - download for PC free. https://www.malavida.com/en/soft/fakeapp/ (Accessed 04/20/2020).
- iperov/DeepFaceLab: DeepFaceLab is the leading software for creating deepfakes. https://github.com/iperov/DeepFaceLab (Accessed 04/20/2020).
- Kinetics — DeepMind. https://deepmind.com/research/open-source/kinetics (Accessed 04/24/2020).
- Perceptual Reasoning and Interaction Research - Charades. https://prior.allenai.org/projects/charades (Accessed 04/24/2020).
- shaoanlu/faceswap-GAN: a denoising autoencoder + adversarial losses and attention mechanisms for face swapping. https://github.com/shaoanlu/faceswap-GAN (Accessed 04/20/2020).
- (2018) MesoNet: a compact facial video forgery detection network. In 2018 IEEE International Workshop on Information Forensics and Security (WIFS), pp. 1–7.
- (2017) Bringing portraits to life. ACM Transactions on Graphics (TOG) 36 (6), pp. 196.
- (1997) Video Rewrite: driving visual speech with audio. In Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, pp. 353–360.
- (2017) Quo vadis, action recognition? A new model and the Kinetics dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6299–6308.
- (2011) Video face replacement. In Proceedings of the 2011 SIGGRAPH Asia Conference, pp. 1–10.
- (2019) RetinaFace: single-stage dense face localisation in the wild. arXiv preprint arXiv:1905.00641.
- (2019) The Deepfake Detection Challenge (DFDC) preview dataset. arXiv preprint arXiv:1910.08854.
- (2019) Unmasking deepfakes with simple features. arXiv preprint arXiv:1911.00686.
- (2014) Automatic face reenactment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4217–4224.
- (2018) Deepfake video detection using recurrent neural networks. In 2018 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), pp. 1–6.
- (2017) Learning spatio-temporal features with 3D residual networks for action recognition. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 3154–3160.
- (2015) Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 37 (9), pp. 1904–1916.
- (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167.
- (2017) The Kinetics human action video dataset. arXiv preprint arXiv:1705.06950.
- (2018) Deepfakes: a new threat to face recognition? Assessment and detection. arXiv preprint arXiv:1812.08685.
- (2017) Fast face-swap using convolutional neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3677–3685.
- (2020) Detecting deepfakes with metric learning. arXiv preprint arXiv:2003.08645.
- (2018) Exposing deepfake videos by detecting face warping artifacts. arXiv preprint arXiv:1811.00656.
- (2019) Celeb-DF: a new dataset for Deepfake forensics. arXiv preprint arXiv:1909.12962.
- (2019) Exploiting visual artifacts to expose deepfakes and face manipulations. In 2019 IEEE Winter Applications of Computer Vision Workshops (WACVW), pp. 83–92.
- (2019) Multi-task learning for detecting and segmenting manipulated facial images and videos. arXiv preprint arXiv:1906.06876.
- (2019) Use of a capsule network to detect fake images and videos. arXiv preprint arXiv:1910.12467.
- (2019) Deep learning for deepfakes creation and detection. arXiv preprint arXiv:1909.11573.
- (2018) Generative adversarial talking head: bringing portraits to life with a weakly supervised neural network. arXiv preprint arXiv:1803.07716.
- (2019) FaceForensics++: learning to detect manipulated facial images. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1–11.
- (2015) What makes Tom Hanks look like Tom Hanks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3952–3960.
- (2017) Synthesizing Obama: learning lip sync from audio. ACM Transactions on Graphics (TOG) 36 (4), pp. 1–13.
- (2015) Real-time expression transfer for facial reenactment. ACM Transactions on Graphics (TOG) 34 (6), pp. 183.
- (2016) Face2Face: real-time face capture and reenactment of RGB videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2387–2395.
- (2018) FaceVR: real-time gaze-aware facial reenactment in virtual reality. ACM Transactions on Graphics (TOG) 37 (2), pp. 1–15.
- (2018) A closer look at spatiotemporal convolutions for action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6450–6459.
- (2019) Exposing deep fakes using inconsistent head poses. In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 8261–8265.
- (2017) Two-stream neural networks for tampered face detection. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1831–1839.