The Value of AI Guidance in Human Examination of Synthetically-Generated Faces

08/22/2022
by Aidan Boyd, et al.

Face image synthesis has progressed beyond the point at which humans can effectively distinguish authentic faces from synthetically generated ones. Recently developed synthetic face image detectors boast "better-than-human" discriminative ability, especially those guided by human perceptual intelligence during the model's training process. In this paper, we investigate whether these human-guided synthetic face detectors can assist non-expert human operators in the task of synthetic image detection better than models trained without human guidance. We conducted a large-scale experiment in which more than 1,560 subjects classified whether an image shows an authentic or a synthetically generated face and annotated the regions that supported their decisions. In total, 56,015 annotations across 3,780 unique face images were collected. All subjects first examined samples without any AI support, followed by samples accompanied by (a) the AI's decision ("synthetic" or "authentic"), (b) class activation maps illustrating the regions the model deems salient for its decision, or (c) both the AI's decision and the AI's saliency map. Synthetic faces were generated with six modern Generative Adversarial Networks. Interesting observations from this experiment include: (1) models trained with human guidance offer better support to human examination of face images than models trained traditionally with cross-entropy loss; (2) binary decisions presented to humans offer better support than saliency maps; (3) understanding the AI's accuracy helps humans increase their trust in a given model and thus their overall accuracy. This work demonstrates that although humans supported by machines achieve better-than-random accuracy in synthetic face detection, the manner in which AI support is supplied to humans and in which trust is built are key factors determining the effectiveness of the human-AI tandem.
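The saliency maps shown to subjects in condition (b) are class activation maps. As a rough illustration of how such a map can be produced for a binary synthetic-vs-authentic detector, the sketch below applies Grad-CAM to a hypothetical ResNet-50 head; the backbone, target layer, and label ordering are assumptions for the example, not details taken from the paper.

```python
# Minimal Grad-CAM sketch for a binary synthetic-vs-authentic face classifier.
# Assumptions (not from the paper): a torchvision ResNet-50 backbone with a
# 2-class head, saliency taken from the last convolutional block (layer4),
# and label index 1 meaning "synthetic".
import torch
import torch.nn.functional as F
from torchvision import models


def grad_cam(model, image, target_class):
    """Return an [H, W] saliency map in [0, 1] for `target_class`.

    `image` is a (1, 3, H, W) float tensor, already normalized.
    """
    activations, gradients = [], []

    def fwd_hook(_, __, output):
        activations.append(output)

    def bwd_hook(_, grad_in, grad_out):
        gradients.append(grad_out[0])

    target_layer = model.layer4  # last conv block of the ResNet backbone
    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)

    try:
        logits = model(image)                 # (1, 2): [authentic, synthetic]
        model.zero_grad()
        logits[0, target_class].backward()

        acts = activations[0]                 # (1, C, h, w) feature maps
        grads = gradients[0]                  # (1, C, h, w) gradients w.r.t. them
        weights = grads.mean(dim=(2, 3), keepdim=True)            # channel importance
        cam = F.relu((weights * acts).sum(dim=1, keepdim=True))   # (1, 1, h, w)
        cam = F.interpolate(cam, size=image.shape[2:],
                            mode="bilinear", align_corners=False)
        cam = cam - cam.min()
        cam = cam / (cam.max() + 1e-8)        # normalize to [0, 1]
        return cam[0, 0].detach()
    finally:
        h1.remove()
        h2.remove()


if __name__ == "__main__":
    # Hypothetical 2-class detector; in practice, trained weights would be loaded.
    model = models.resnet50(weights=None)
    model.fc = torch.nn.Linear(model.fc.in_features, 2)
    model.eval()

    face = torch.randn(1, 3, 224, 224)        # stand-in for a preprocessed face crop
    saliency = grad_cam(model, face, target_class=1)  # 1 = "synthetic" (assumed)
    print(saliency.shape)                     # torch.Size([224, 224])
```

In the experiment, a map like this would be overlaid on the face image so the subject can see which regions drove the detector's decision.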
