Practical Deepfake Detection: Vulnerabilities in Global Contexts

06/20/2022
by Yang A. Chuming, et al.

Recent advances in deep learning have enabled realistic digital alterations to videos, known as deepfakes. This technology raises important societal concerns regarding disinformation and authenticity, galvanizing the development of numerous deepfake detection algorithms. At the same time, significant differences between training data and in-the-wild video data may undermine the practical efficacy of these algorithms. We simulate data corruption techniques and examine the performance of a state-of-the-art deepfake detection algorithm on corrupted variants of the FaceForensics++ dataset. While deepfake detection models are robust against video corruptions that align with training-time augmentations, we find that they remain vulnerable to video corruptions that simulate decreases in video quality. Indeed, in the controversial case of the video of Gabonese President Bongo's new year address, the algorithm, which confidently authenticates the original video, judges highly corrupted variants of the video to be fake. Our work opens up both technical and ethical avenues of exploration into practical deepfake detection in global contexts.
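The abstract describes corrupting videos to simulate decreases in video quality before running detection. As a rough illustration only, the minimal sketch below applies one plausible corruption of this kind (downscaling plus heavy JPEG re-compression) to each frame of a video with OpenCV; the paper's actual corruption set and parameters are not given in this abstract, and the function names and default values here are hypothetical.

```python
# Minimal sketch: simulate low-quality, re-compressed video (hypothetical parameters;
# not the paper's exact corruption pipeline).
import cv2
import numpy as np


def corrupt_frame(frame: np.ndarray, jpeg_quality: int = 20, scale: float = 0.5) -> np.ndarray:
    """Degrade one frame by downscaling and aggressive JPEG re-compression."""
    h, w = frame.shape[:2]
    # Downscale and upscale back to simulate loss of spatial resolution.
    small = cv2.resize(frame, (int(w * scale), int(h * scale)), interpolation=cv2.INTER_AREA)
    restored = cv2.resize(small, (w, h), interpolation=cv2.INTER_LINEAR)
    # Re-encode with low JPEG quality to mimic heavy compression artifacts.
    ok, buf = cv2.imencode(".jpg", restored, [int(cv2.IMWRITE_JPEG_QUALITY), jpeg_quality])
    return cv2.imdecode(buf, cv2.IMREAD_COLOR) if ok else restored


def corrupt_video(in_path: str, out_path: str, jpeg_quality: int = 20, scale: float = 0.5) -> None:
    """Write a corrupted copy of the input video, frame by frame."""
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(corrupt_frame(frame, jpeg_quality, scale))
    cap.release()
    writer.release()
```

Corrupted variants produced this way could then be fed to a pretrained detector to compare its predictions against those on the original video, in the spirit of the evaluation the abstract describes.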
