Adversarial Threats to DeepFake Detection: A Practical Perspective

11/19/2020
by Paarth Neekhara, et al.

Facially manipulated images and videos, or DeepFakes, can be used maliciously to fuel misinformation or defame individuals. Detecting DeepFakes is therefore crucial to preserving the credibility of social media platforms and other media-sharing websites. State-of-the-art DeepFake detection techniques rely on neural-network-based classification models, which are known to be vulnerable to adversarial examples. In this work, we study the vulnerabilities of state-of-the-art DeepFake detection methods from a practical standpoint. We perform adversarial attacks on DeepFake detectors in a black-box setting, where the adversary does not have complete knowledge of the classification models. We study the extent to which adversarial perturbations transfer across different models and propose techniques to improve the transferability of adversarial examples. We also create more accessible attacks using Universal Adversarial Perturbations, which pose a highly feasible attack scenario since they can be easily shared amongst attackers. We evaluate our attacks on the winning entries of the DeepFake Detection Challenge (DFDC) and demonstrate that these detectors can be bypassed in a practical attack scenario by designing transferable and accessible adversarial attacks.
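To give a flavor of the attack family the abstract names, the sketch below shows how a Universal Adversarial Perturbation can be crafted: a single noise pattern optimized to fool a classifier on many inputs at once, which is what makes such attacks easy to share amongst attackers. This is a minimal white-box illustration only, not the authors' method (the paper operates in a black-box setting); the `model`, `loader`, and the `eps`/`alpha` budget values are hypothetical placeholders.

```python
# Minimal sketch of a Universal Adversarial Perturbation (UAP), assuming a
# generic PyTorch image classifier. NOT the authors' black-box attack: this
# toy version uses white-box gradients for clarity.
import torch
import torch.nn.functional as F

def universal_perturbation(model, loader, eps=8 / 255, alpha=1 / 255, epochs=5):
    """Optimize one shared L-infinity-bounded pattern `delta` that raises the
    classifier's loss on the true labels for every batch in `loader`."""
    x0, _ = next(iter(loader))
    delta = torch.zeros_like(x0[:1])  # one pattern shared across all inputs
    for _ in range(epochs):
        for x, y in loader:
            d = delta.clone().requires_grad_(True)
            loss = F.cross_entropy(model(torch.clamp(x + d, 0.0, 1.0)), y)
            grad = torch.autograd.grad(loss, d)[0]
            # Gradient ascent on the loss, then project back into the eps-ball.
            delta = torch.clamp(delta + alpha * grad.sign(), -eps, eps)
    return delta.detach()
```

Once computed, the same `delta` is simply added to any input (`x_adv = torch.clamp(x + delta, 0, 1)`) to flip the detector's decision on many samples, with no further optimization per image; this reusability is what makes UAPs an especially accessible attack.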
