An Examination of Fairness of AI Models for Deepfake Detection

05/02/2021
by Loc Trinh, et al.

Recent studies have demonstrated that deep learning models can discriminate based on protected classes like race and gender. In this work, we evaluate bias present in deepfake datasets and detection models across protected subgroups. Using facial datasets balanced by race and gender, we examine three popular deepfake detectors and find large disparities in predictive performance across races, with up to a 10.7% difference in error rate between subgroups. A closer look reveals that the widely used FaceForensics++ dataset is overwhelmingly composed of Caucasian subjects, the majority of whom are female Caucasians. Our investigation of the racial distribution of deepfakes reveals that the methods used to create deepfakes as positive training signals tend to produce "irregular" faces when a person's face is swapped onto a person of a different race or gender. This causes detectors to learn spurious correlations between the foreground faces and fakeness. Moreover, when detectors are trained with the Blended Image (BI) dataset from Face X-Rays, we find that those detectors develop systematic discrimination towards certain racial subgroups, primarily female Asians.
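The kind of subgroup audit described above can be illustrated with a small sketch. The helper names and sample data below are hypothetical, not from the paper; the sketch only shows how per-subgroup error rates and the largest gap between subgroups might be computed from a detector's binary predictions.

```python
# Hypothetical fairness-audit sketch: per-subgroup error rates for a
# binary deepfake detector, and the maximum disparity between groups.
# Function names and data are illustrative assumptions.

def subgroup_error_rates(labels, preds, groups):
    """Return {group: error_rate} given binary labels, predictions,
    and a protected-attribute group for each sample."""
    stats = {}
    for y, p, g in zip(labels, preds, groups):
        correct, total = stats.get(g, (0, 0))
        stats[g] = (correct + (y == p), total + 1)
    return {g: 1 - correct / total for g, (correct, total) in stats.items()}

def max_disparity(rates):
    """Largest gap in error rate between any two subgroups."""
    return max(rates.values()) - min(rates.values())

# Toy usage: group "A" is misclassified half the time, group "B" never.
rates = subgroup_error_rates(
    labels=[1, 0, 1, 0],
    preds=[1, 1, 1, 0],
    groups=["A", "A", "B", "B"],
)
```

A disparity of this kind, computed over race- and gender-balanced evaluation sets, is what the paper reports as reaching 10.7% between subgroups.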


Related research

- GBDF: Gender Balanced DeepFake Dataset Towards Fair DeepFake Detection (07/21/2022)
  Facial forgery by deepfakes has raised severe societal concerns. Several...

- Benchmarking Algorithmic Bias in Face Recognition: An Experimental Approach Using Synthetic Faces and Human Evaluation (08/10/2023)
  We propose an experimental method for measuring bias in face recognition...

- Addressing Bias in Face Detectors using Decentralised Data collection with incentives (10/28/2022)
  Recent developments in machine learning have shown that successful model...

- Improving Smiling Detection with Race and Gender Diversity (12/01/2017)
  Recent progress in deep learning has been accompanied by a growing conce...

- Multi-dimensional discrimination in Law and Machine Learning – A comparative overview (02/12/2023)
  AI-driven decision-making can lead to discrimination against certain ind...

- Fairness in Cardiac MR Image Analysis: An Investigation of Bias Due to Data Imbalance in Deep Learning Based Segmentation (06/23/2021)
  The subject of "fairness" in artificial intelligence (AI) refers to asse...

- Can you tell where in India I am from? Comparing humans and computers on fine-grained race face classification (03/22/2017)
  Faces form the basis for a rich variety of judgments in humans, yet the ...
