Restricted Black-box Adversarial Attack Against DeepFake Face Swapping

04/26/2022
by   Junhao Dong, et al.

DeepFake face swapping, which replaces the source face in an arbitrary photo or video with the target face of an entirely different person, presents a significant threat to online security and social media. To prevent this fraud, some researchers have begun to study adversarial methods against DeepFake and face manipulation. However, existing works focus either on the white-box setting or on a black-box setting that requires abundant queries, which severely limits their practical application. To tackle this problem, we introduce a practical adversarial attack that does not require any queries to the facial image forgery model. Our method trains a substitute model for face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models. Specifically, we propose the Transferable Cycle Adversary Generative Adversarial Network (TCA-GAN) to construct the adversarial perturbation that disrupts unknown DeepFake systems. We also present a novel post-regularization module that enhances the transferability of the generated adversarial examples. To comprehensively measure the effectiveness of our approach, we construct a challenging benchmark of DeepFake adversarial attacks for future development. Extensive experiments show that the proposed adversarial attack sharply degrades the visual quality of DeepFake face images, making them easier for both humans and detection algorithms to identify. Moreover, we demonstrate that the proposed algorithm generalizes to protect face images against various face translation methods.
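The core idea the abstract describes is a transfer-based, zero-query attack: a perturbation generator is trained only against a white-box substitute model that performs face reconstruction, and the resulting perturbed images are then deployed against unseen black-box DeepFake systems. Below is a minimal sketch of that setup, assuming a PyTorch-style API; the names (PerturbationGenerator, substitute, protect_image, the epsilon budget) are illustrative assumptions for exposition, not the authors' released TCA-GAN code.

```python
# Hypothetical sketch of a transfer-based attack against an unseen
# DeepFake model: the generator is optimized against a substitute
# face-reconstruction network only, so no queries to the target
# forgery model are ever needed. All names are assumptions.
import torch
import torch.nn as nn

def protect_image(face, generator, substitute_free=True, epsilon=8 / 255, generator_out=None):
    """Craft a bounded adversarial perturbation for a batch of faces.

    face:      (B, 3, H, W) tensor with values in [0, 1]
    epsilon:   L_inf budget keeping the perturbation imperceptible
    """
    delta = generator_out if generator_out is not None else None
    raise NotImplementedError

def craft_adversarial(face, generator, epsilon=8 / 255):
    # Generator maps a clean face to a raw perturbation field
    # (a TCA-GAN-style generator would stand in here).
    delta = generator(face)
    # Squash into the L_inf ball of radius epsilon, then clamp the
    # perturbed image back to the valid pixel range.
    delta = epsilon * torch.tanh(delta)
    return torch.clamp(face + delta, 0.0, 1.0)

def generator_loss(adv, face, substitute):
    # Train the generator to maximize the substitute model's
    # reconstruction error on the perturbed face, so the perturbation
    # disrupts face synthesis itself rather than a single classifier.
    recon_clean = substitute(face)
    recon_adv = substitute(adv)
    # Negative sign: minimizing this loss *increases* the distortion
    # that reconstruction suffers under the perturbation.
    return -nn.functional.mse_loss(recon_adv, recon_clean)
```

At deployment, craft_adversarial is applied once to each image before it is shared; because only the substitute was used during training, attacking a new black-box face-swapping model costs zero queries, relying entirely on the transferability of the perturbation.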


Related research

- 01/28/2023: Semantic Adversarial Attacks on Face Recognition through Significant Attributes
- 08/24/2022: Unrestricted Black-box Adversarial Attack Using GAN with Limited Queries
- 12/09/2019: Amora: Black-box Adversarial Morphing Attack
- 03/03/2020: Disrupting DeepFakes: Adversarial Attacks Against Conditional Image Translation Networks and Facial Manipulation Systems
- 06/26/2023: 3D-Aware Adversarial Makeup Generation for Facial Privacy Protection
- 02/15/2021: CAP-GAN: Towards Adversarial Robustness with Cycle-consistent Attentional Purification
- 05/08/2020: Projection Probability-Driven Black-Box Attack
