Quantitative Metrics for Evaluating Explanations of Video DeepFake Detectors

10/07/2022
by   Federico Baldassarre, et al.

The proliferation of DeepFake technology is a rising challenge in today's society, owing to increasingly powerful and accessible generation methods. To counter this, the research community has developed detectors of ever-increasing accuracy. However, the ability to explain the decisions of such models to users lags behind and is treated as an accessory in large-scale benchmarks, despite being a crucial requirement for the correct deployment of automated content-moderation tools. We attribute this to the reliance on qualitative comparisons and the lack of established metrics. We describe a simple set of metrics to evaluate the visual quality and informativeness of explanations of video DeepFake classifiers from a human-centric perspective. With these metrics, we compare common approaches for improving explanation quality and discuss their effect on both classification and explanation performance on the recent DFDC and DFD datasets.
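The abstract does not spell out the metrics themselves, so as a rough illustration only, here is a minimal sketch of how localization-based informativeness scores for saliency explanations are commonly computed. The function names (`saliency_concentration`, `pointing_game_hit`) and the use of a ground-truth manipulation mask are assumptions for illustration, not the paper's actual metrics:

```python
import numpy as np

def saliency_concentration(saliency: np.ndarray, region_mask: np.ndarray) -> float:
    """Fraction of total saliency mass that falls inside a region of
    interest (e.g., the manipulated face area). Higher means the
    explanation concentrates on the forgery. Illustrative metric, not
    the paper's."""
    saliency = np.abs(saliency)
    total = saliency.sum()
    if total == 0:
        return 0.0
    return float(saliency[region_mask.astype(bool)].sum() / total)

def pointing_game_hit(saliency: np.ndarray, region_mask: np.ndarray) -> float:
    """Pointing-game style check: 1.0 if the single most salient pixel
    lies inside the region of interest, else 0.0."""
    idx = np.unravel_index(np.argmax(np.abs(saliency)), saliency.shape)
    return float(bool(region_mask[idx]))

# Toy example: a 4x4 saliency map whose strongest response lies inside
# a hypothetical manipulated region (top-left 2x2 block).
saliency = np.zeros((4, 4))
saliency[1, 1] = 0.8
saliency[3, 3] = 0.2
mask = np.zeros((4, 4))
mask[:2, :2] = 1.0

print(saliency_concentration(saliency, mask))  # 0.8
print(pointing_game_hit(saliency, mask))       # 1.0
```

Averaging such scores over a labeled test set gives a quantitative, human-interpretable measure of how well explanations localize manipulations, which is the kind of comparison the abstract argues should replace purely qualitative inspection.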


Related research:

- 11/11/2022: REVEL Framework to measure Local Linear Explanations for black-box models: Deep Learning Image Classification case of study. "Explainable artificial intelligence is proposed to provide explanations ..."
- 12/21/2022: DExT: Detector Explanation Toolkit. "State-of-the-art object detectors are treated as black boxes due to thei..."
- 08/28/2023: Goodhart's Law Applies to NLP's Explanation Benchmarks. "Despite the rising popularity of saliency-based explanations, the resear..."
- 08/12/2022: The Weighting Game: Evaluating Quality of Explainability Methods. "The objective of this paper is to assess the quality of explanation heat..."
- 05/28/2023: Evaluating GPT-3 Generated Explanations for Hateful Content Moderation. "Recent research has focused on using large language models (LLMs) to gen..."
- 03/14/2020: Measuring and improving the quality of visual explanations. "The ability to explain neural network decisions goes hand in hand wit..."
- 04/30/2022: ExSum: From Local Explanations to Model Understanding. "Interpretability methods are developed to understand the working mechani..."
