CMUA-Watermark: A Cross-Model Universal Adversarial Watermark for Combating Deepfakes

05/23/2021
by Hao Huang, et al.

Malicious applications of deepfakes (i.e., technologies that can generate or manipulate target faces or facial attributes) pose a serious threat to society. Fake multimedia content generated by deepfake models can damage the reputation, and even threaten the property, of the person being impersonated. Fortunately, adversarial watermarks can be used to combat deepfake models by causing them to generate visibly distorted images. Existing methods, however, require a separate training process for every facial image to generate an adversarial watermark against a specific deepfake model, which is extremely inefficient. To address this problem, we propose a universal adversarial attack on deepfake models that generates a single Cross-Model Universal Adversarial Watermark (CMUA-Watermark) capable of protecting thousands of facial images from multiple deepfake models. Specifically, we first propose a cross-model universal attack pipeline that attacks multiple deepfake models iteratively and combines the gradients from these models. We then introduce a batch-based method to alleviate conflicts among the adversarial watermarks induced by different facial images. Finally, we design a more reasonable and comprehensive evaluation protocol for measuring the effectiveness of an adversarial watermark. Experimental results demonstrate that the proposed CMUA-Watermark effectively distorts the fake facial images generated by deepfake models and successfully protects facial images from deepfakes in real-world scenarios.
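The pipeline described above (accumulate gradients across several models and a batch of images, then take a sign-based update on one shared watermark inside an L-infinity budget) can be illustrated with a minimal sketch. This is a hedged toy, not the authors' code: real deepfake generators are replaced by linear maps `W` so the gradient of the distortion loss 0.5*||W(x + delta)||^2 is analytic, and all names and hyperparameters (`eps`, `alpha`, `steps`) are illustrative assumptions.

```python
import numpy as np

def make_toy_model(seed, dim=8):
    """Stand-in for a deepfake model: a random linear map W.
    For loss(x) = 0.5 * ||W x||^2, the gradient w.r.t. x is W.T @ W @ x."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((dim, dim)) / np.sqrt(dim)

def cmua_sketch(images, models, eps=0.05, alpha=0.01, steps=50):
    """Craft ONE watermark `delta` shared by all images and all models.

    Each iteration combines gradients across every (model, image) pair,
    then applies a PGD-style sign step, clipped to the L-inf ball of
    radius `eps` (the invisibility budget of the watermark)."""
    delta = np.zeros_like(images[0])
    for _ in range(steps):
        grad = np.zeros_like(delta)
        for W in models:              # combine gradients across models...
            for x in images:          # ...and across the image batch
                grad += W.T @ (W @ (x + delta))
        delta = delta + alpha * np.sign(grad)  # ascend: maximize distortion
        delta = np.clip(delta, -eps, eps)      # stay within the budget
    return delta
```

Because the update averages (implicitly, by summation) over the whole batch before each step, no single image's gradient dominates, which is the spirit of the batch-based conflict alleviation described in the abstract; the double loop over `models` is the cross-model gradient combination.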


Related research

- 03/21/2023 · Information-containing Adversarial Perturbation for Combating Facial Manipulation Systems
  With the development of deep learning technology, the facial manipulatio...
- 03/01/2023 · Feature Extraction Matters More: Universal Deepfake Disruption through Attacking Ensemble Feature Extractors
  Adversarial example is a rising way of protecting facial privacy securit...
- 01/09/2021 · Exploring Adversarial Fake Images on Face Manifold
  Images synthesized by powerful generative adversarial network (GAN) base...
- 09/24/2018 · Learning to Detect Fake Face Images in the Wild
  Although Generative Adversarial Network (GAN) can be used to generate th...
- 08/19/2023 · DUAW: Data-free Universal Adversarial Watermark against Stable Diffusion Customization
  Stable Diffusion (SD) customization approaches enable users to personali...
- 03/05/2023 · Cyber Vaccine for Deepfake Immunity
  Deepfakes pose an evolving threat to cybersecurity, which calls for the ...
- 10/20/2020 · Preventing Personal Data Theft in Images with Adversarial ML
  Facial recognition tools are becoming exceptionally accurate in identify...
