Interpretable Face Manipulation Detection via Feature Whitening

06/21/2021
by   Yingying Hua, et al.

Why should we trust the detections of deep neural networks for manipulated faces? Understanding the reasons behind a detection is important for improving the fairness, reliability, privacy, and trustworthiness of detection models. In this work, we propose an interpretable face manipulation detection approach that achieves trustworthy and accurate inference. The approach makes the detection process transparent by embedding a feature whitening module, which exposes the internal working mechanism of the deep network through feature decorrelation and feature constraint. Experimental results demonstrate that the proposed approach strikes a balance between detection accuracy and model interpretability.
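The paper does not include code for the feature whitening module, but the feature decorrelation step it describes is commonly realized with ZCA whitening, which linearly transforms a batch of features so their empirical covariance becomes the identity. The sketch below is an illustrative assumption, not the authors' implementation; the function name `whiten_features` and the `eps` stabilizer are hypothetical choices.

```python
import numpy as np

def whiten_features(feats, eps=1e-5):
    """Decorrelate a batch of feature vectors with ZCA whitening.

    feats: (n_samples, n_features) array.
    Returns features whose empirical covariance is close to identity,
    so individual feature dimensions no longer co-vary.
    """
    # Center each feature dimension
    centered = feats - feats.mean(axis=0, keepdims=True)
    # Empirical covariance of the batch
    cov = centered.T @ centered / (len(feats) - 1)
    # Eigendecomposition of the symmetric covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    # ZCA whitening matrix: rotate, rescale by 1/sqrt(eigenvalue), rotate back
    w = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return centered @ w

# Demo: whiten strongly correlated synthetic features
rng = np.random.default_rng(0)
x = rng.normal(size=(256, 8)) @ rng.normal(size=(8, 8))  # correlated features
z = whiten_features(x)
cov_z = (z - z.mean(0)).T @ (z - z.mean(0)) / (len(z) - 1)
```

After whitening, `cov_z` is approximately the 8x8 identity matrix, meaning each feature dimension carries decorrelated information, which is the property that makes the detector's internal representation easier to inspect.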


Related research

Exploiting Human Social Cognition for the Detection of Fake and Fraudulent Faces via Memory Networks (11/17/2019): Advances in computer vision have brought us to the point where we have t...

EfficientFace: An Efficient Deep Network with Feature Enhancement for Accurate Face Detection (02/23/2023): In recent years, deep convolutional neural networks (CNN) have significa...

Towards A Robust Deepfake Detector: Common Artifact Deepfake Detection Model (10/26/2022): Existing deepfake detection methods perform poorly on face forgeries gen...

DAFE-FD: Density Aware Feature Enrichment for Face Detection (01/16/2019): Recent research on face detection, which is focused primarily on improvi...

Face Detection with Effective Feature Extraction (09/29/2010): There is an abundant literature on face detection due to its important r...

Crowd-powered Face Manipulation Detection: Fusing Human Examiner Decisions (01/31/2022): We investigate the potential of fusing human examiner decisions for the ...
