Explain To Me: Salience-Based Explainability for Synthetic Face Detection Models

03/21/2023
by Colton Crum, et al.

The performance of convolutional neural networks has continued to improve over the last decade. At the same time, as model complexity grows, it becomes increasingly difficult to explain model decisions. Such explanations may be of critical importance for the reliable operation of human-machine pairing setups, or for model selection when the "best" model among many equally accurate models must be established. Salience maps represent one popular way of explaining model decisions by highlighting the image regions models deem important when making a prediction. However, examining salience maps at scale is not practical. In this paper, we propose five novel methods of leveraging model salience to explain model behavior at scale. These methods ask: (a) what is the average entropy of a model's salience maps, (b) how does model salience change when the model is fed out-of-set samples, (c) how closely does model salience follow geometric transformations, (d) how stable is model salience across independent training runs, and (e) how does model salience react to salience-guided image degradations. To assess the proposed measures on a concrete and topical problem, we conducted a series of experiments for the task of synthetic face detection with two types of models: those trained traditionally with cross-entropy loss, and those guided by human salience during training to increase generalizability. These two types of models are characterized by different, interpretable properties of their salience maps, which allows for evaluating the correctness of the proposed measures. We offer source code for each measure along with this paper.
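As a rough illustration of measure (a), the average salience entropy can be computed by treating each salience map as a probability distribution over image locations. The snippet below is a minimal sketch, not the authors' released code; it assumes the salience maps have already been extracted (e.g., as non-negative NumPy heatmaps), and the normalization scheme is an assumption for illustration.

    # Hedged sketch: average Shannon entropy over a set of salience maps.
    # The heatmap source and normalization are assumptions, not the paper's code.
    import numpy as np

    def salience_entropy(heatmap: np.ndarray, eps: float = 1e-12) -> float:
        """Entropy (in bits) of one salience map treated as a distribution."""
        h = np.clip(heatmap, 0, None).astype(np.float64)
        p = h / (h.sum() + eps)               # normalize so the map sums to 1
        p = p[p > 0]                          # drop zeros (0 * log 0 := 0)
        return float(-(p * np.log2(p)).sum())

    def average_salience_entropy(heatmaps) -> float:
        """Mean entropy over a collection of salience maps (one per image)."""
        return float(np.mean([salience_entropy(h) for h in heatmaps]))

    # A tightly focused map has low entropy; a diffuse map has high entropy.
    focused = np.zeros((7, 7)); focused[3, 3] = 1.0
    diffuse = np.ones((7, 7))
    print(average_salience_entropy([focused]))  # ~0 bits
    print(average_salience_entropy([diffuse]))  # ~5.6 bits (log2 of 49 pixels)

Under these assumptions, lower average entropy indicates salience concentrated on fewer image regions, while higher entropy indicates salience spread diffusely across the image.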


Related research

08/22/2022: The Value of AI Guidance in Human Examination of Synthetically-Generated Faces
Face image synthesis has progressed beyond the point at which humans can...

03/01/2023: Improving Model's Focus Improves Performance of Deep Learning-Based Synthetic Face Detectors
Deep learning-based models generalize better to unknown data samples aft...

08/12/2022: The Weighting Game: Evaluating Quality of Explainability Methods
The objective of this paper is to assess the quality of explanation heat...

12/01/2021: CYBORG: Blending Human Saliency Into the Loss Improves Deep Learning
Can deep learning models achieve greater generalization if their trainin...

10/13/2022: Constructing Natural Language Explanations via Saliency Map Verbalization
Saliency maps can explain a neural model's prediction by identifying imp...

11/13/2020: Structured Attention Graphs for Understanding Deep Image Classifications
Attention maps are a popular way of explaining the decisions of convolut...

04/12/2022: Maximum Entropy Baseline for Integrated Gradients
Integrated Gradients (IG), one of the most popular explainability method...
