A Comprehensive Analysis of AI Biases in DeepFake Detection With Massively Annotated Databases

08/11/2022
by Ying Xu, et al.

In recent years, image and video manipulations with DeepFake have become a severe concern for security and society. Therefore, many detection models and databases have been proposed to detect DeepFake data reliably. However, there is an increased concern that these models and training databases might be biased and thus cause DeepFake detectors to fail. In this work, we tackle these issues by (a) providing large-scale demographic and non-demographic attribute annotations of 41 different attributes for five popular DeepFake datasets and (b) comprehensively analysing the AI bias of multiple state-of-the-art DeepFake detection models on these databases. The investigation analyses the influence of a large variety of distinctive attributes (from over 65M labels) on the detection performance, including demographic (age, gender, ethnicity) and non-demographic (hair, skin, accessories, etc.) information. The results indicate that the investigated databases lack diversity and, more importantly, show that the utilised DeepFake detection models are strongly biased towards many of the investigated attributes. Moreover, the results show that the models' decision-making might be based on several questionable (biased) assumptions, such as whether a person is smiling or wearing a hat. Depending on the application of such DeepFake detection methods, these biases can lead to generalizability, fairness, and security issues. We hope that the findings of this study and the annotation databases will help to evaluate and mitigate bias in future DeepFake detection techniques. Our annotation datasets are made publicly available.
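To illustrate the kind of per-attribute bias analysis the abstract describes, below is a minimal sketch (not the authors' code) that groups detector predictions by one annotated attribute and compares error rates across groups. The file name, column names, and attribute name ("smiling") are assumptions for illustration only.

```python
# Hypothetical sketch of a per-attribute bias analysis for a DeepFake detector.
# Assumes a merged table of detector predictions and attribute annotations.
import pandas as pd


def error_rates_by_attribute(df: pd.DataFrame, attribute: str) -> pd.DataFrame:
    """Compute false-positive / false-negative rates per value of `attribute`.

    Expected columns (assumed, not from the paper):
      - label: ground truth, 1 = fake, 0 = real
      - pred:  detector decision, 1 = fake, 0 = real
      - one column per annotated attribute (e.g. 'smiling', 'wearing_hat')
    """
    rows = []
    for value, group in df.groupby(attribute):
        real = group[group.label == 0]
        fake = group[group.label == 1]
        rows.append({
            attribute: value,
            "n": len(group),
            # rate at which real images in this group are wrongly flagged as fake
            "false_positive_rate": (real.pred == 1).mean() if len(real) else float("nan"),
            # rate at which fakes depicting this group slip through undetected
            "false_negative_rate": (fake.pred == 0).mean() if len(fake) else float("nan"),
        })
    return pd.DataFrame(rows)


if __name__ == "__main__":
    # hypothetical file combining detector outputs with attribute annotations
    data = pd.read_csv("predictions_with_annotations.csv")
    print(error_rates_by_attribute(data, "smiling"))
```

Large gaps in the false-positive or false-negative rate between attribute values (e.g. smiling vs. not smiling) would indicate the kind of questionable decision-making the study reports.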


