Identifying Invariant Texture Violation for Robust Deepfake Detection

12/19/2020
by Xinwei Sun, et al.

Existing deepfake detection methods have reported promising in-distribution results by training on published large-scale datasets. However, due to non-smooth synthesis methods, the fake samples in these datasets often expose obvious artifacts (e.g., stark visual contrast, non-smooth boundaries), on which most frame-level detection methods heavily rely. As such artifacts do not appear in real-world media forgeries, these methods can suffer a large performance degradation when applied to fake images that are close to reality. To improve robustness on high-realism fake data, we propose the Invariant Texture Learning (InTeLe) framework, which accesses only the published datasets with low visual quality. Our method is based on the prior that the microscopic facial texture of the source face is inevitably violated by the texture transferred from the target person, which can hence be regarded as an invariant characterization shared among all fake images. To learn such an invariance for deepfake detection, InTeLe introduces an auto-encoder framework with separate decoders for pristine and fake images, each appended with a shallow classifier, in order to separate out the obvious artifact effect. Equipped with this separation, the embedding extracted by the encoder captures the texture violation in fake images and is fed to a classifier for the final pristine/fake prediction. As a theoretical guarantee, we prove the identifiability of this invariant texture violation, i.e., that it can be precisely inferred from observational data. The effectiveness and utility of our method are demonstrated by its promising generalization from low-quality images with obvious artifacts to fake images with high realism.
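The abstract describes the architecture only at a high level. Below is a minimal PyTorch sketch of how such a shared-encoder, dual-decoder design with a shallow artifact classifier could be wired up; all module names, layer sizes, and the routing scheme are illustrative assumptions for exposition, not the authors' released implementation.

```python
# Minimal sketch of an InTeLe-style model, assuming 64x64 RGB inputs.
# Everything here (names, sizes, routing) is an illustrative assumption.
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Shared encoder: maps an image to an embedding intended to capture
    microscopic facial texture (and, for fakes, its violation)."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, dim),
        )

    def forward(self, x):
        return self.net(x)


class Decoder(nn.Module):
    """One decoder per domain (pristine / fake), so obvious synthesis
    artifacts can be absorbed by the fake decoder instead of the embedding."""
    def __init__(self, dim=128):
        super().__init__()
        self.fc = nn.Linear(dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))


class InTeLeSketch(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.encoder = Encoder(dim)
        self.dec_pristine = Decoder(dim)  # reconstructs pristine images
        self.dec_fake = Decoder(dim)      # reconstructs fakes, absorbing artifacts
        # Shallow classifier appended after the decoders to separate out
        # the obvious artifact effect from the reconstruction.
        self.artifact_head = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
        )
        # Final pristine/fake classifier on the texture embedding.
        self.pred_head = nn.Linear(dim, 2)

    def forward(self, x, is_fake):
        z = self.encoder(x)
        # Route each sample through its domain-specific decoder
        # (both decoders are evaluated here for simplicity).
        recon = torch.where(
            is_fake.view(-1, 1, 1, 1).bool(),
            self.dec_fake(z),
            self.dec_pristine(z),
        )
        return recon, self.artifact_head(recon), self.pred_head(z)
```

During training one would combine a pixel reconstruction loss on recon with cross-entropy losses on both heads, the idea being that the fake decoder and the shallow artifact head soak up the dataset-specific artifacts, leaving the embedding z to encode only the invariant texture violation; at test time only the encoder and pred_head are needed.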

Related research

12/01/2020 | CycleGAN without checkerboard artifacts for counter-forensics of fake-image detection
In this paper, we propose a novel CycleGAN without checkerboard artifact...

04/08/2022 | On Improving Cross-dataset Generalization of Deepfake Detectors
Facial manipulation by deep fake has caused major security risks and rai...

07/12/2022 | FAD: A Chinese Dataset for Fake Audio Detection
Fake audio detection is a growing concern and some relevant datasets hav...

08/24/2020 | What makes fake images detectable? Understanding properties that generalize
The quality of image generation and manipulation is reaching impressive ...

02/03/2023 | A geometrically aware auto-encoder for multi-texture synthesis
We propose an auto-encoder architecture for multi-texture synthesis. The...

12/23/2017 | Texture Object Segmentation Based on Affine Invariant Texture Detection
To solve the issue of segmenting rich texture images, a novel detection ...

06/01/2022 | Deepfake Caricatures: Amplifying attention to artifacts increases deepfake detection by humans and machines
Deepfakes pose a serious threat to our digital society by fueling the sp...
