Information-containing Adversarial Perturbation for Combating Facial Manipulation Systems

03/21/2023
by Yao Zhu, et al.

With the development of deep learning, facial manipulation systems have become powerful and easy to use. Such systems can modify the attributes of a given facial image, such as hair color, gender, and age, and their malicious application poses a serious threat to individuals' privacy and reputation. Existing studies have proposed various approaches to protect images against facial manipulation. Passive defense methods detect whether a face is real or fake, which works for posterior forensics but cannot prevent malicious manipulation. Initiative defense methods protect images upfront by injecting adversarial perturbations that disrupt facial manipulation systems, but they cannot identify whether an image is fake. To address these limitations, we propose a novel two-tier protection method named Information-containing Adversarial Perturbation (IAP), which provides more comprehensive protection for facial images. An encoder maps a facial image and its identity message to a cross-model adversarial example that can disrupt multiple facial manipulation systems, achieving initiative protection. Recovering the message from the adversarial example with a decoder serves as passive protection, contributing to provenance tracking and fake-image detection. We introduce a feature-level correlation measurement that is better suited than the commonly used mean squared error to measuring differences between facial images. Moreover, we propose a spectral diffusion method that spreads the message across different frequency channels, improving its robustness against facial manipulation. Extensive experimental results demonstrate that IAP recovers messages from adversarial examples with high average accuracy and effectively disrupts facial manipulation systems.
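To make the two-tier idea concrete, here is a minimal PyTorch sketch of one training step combining the three ingredients the abstract names: a message-carrying adversarial perturbation, a feature-level correlation loss in place of pixel MSE, and a frequency-domain spreading of the message. The encoder/decoder architectures, the `feat_net` feature extractor, the message length, the epsilon bound, and the loss weights are all illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of the IAP training objective (not the paper's code).
import torch
import torch.nn.functional as F

def spectral_diffusion(msg_bits, out_shape):
    """Spread a binary message (B, L) across frequency channels (illustrative):
    tile the bits over a coefficient grid, then inverse-FFT so every spatial
    location of the carrier depends on every bit."""
    b, length = msg_bits.shape
    h, w = out_shape
    reps = (h * w) // length + 1
    coeffs = msg_bits.repeat(1, reps)[:, : h * w].view(b, 1, h, w)
    return torch.fft.ifft2(coeffs.to(torch.complex64)).real  # (B, 1, H, W)

def feature_correlation_loss(fa, fb):
    """Feature-level correlation measure: 1 - cosine similarity between
    flattened feature maps, used instead of pixel-wise MSE."""
    return 1.0 - F.cosine_similarity(fa.flatten(1), fb.flatten(1), dim=1).mean()

def iap_train_step(encoder, decoder, manip_models, feat_net, x, msg, eps=0.05):
    """x: clean faces (B, 3, H, W); msg: identity bits (B, L) in {0, 1}."""
    carrier = spectral_diffusion(msg, x.shape[-2:])
    # Encoder sees the image plus the diffused message and emits a bounded
    # perturbation (assumes a 4-channel-in, 3-channel-out encoder).
    delta = eps * torch.tanh(encoder(torch.cat([x, carrier], dim=1)))
    x_adv = (x + delta).clamp(0.0, 1.0)

    # Passive protection: the decoder must recover the embedded message.
    msg_loss = F.binary_cross_entropy_with_logits(decoder(x_adv), msg)

    # Fidelity: stay close to the clean image in feature space, not pixel space.
    fid_loss = feature_correlation_loss(feat_net(x_adv), feat_net(x).detach())

    # Initiative protection: push each manipulation model's output on x_adv
    # away from its output on the clean input (cross-model disruption).
    disrupt = sum(-F.mse_loss(g(x_adv), g(x).detach()) for g in manip_models)

    return msg_loss + fid_loss + 0.1 * disrupt  # weights are illustrative
```

Minimizing this combined loss trains a single perturbation that simultaneously carries a recoverable identity message and degrades the outputs of several surrogate manipulation models, which is what the abstract means by serving both passive and initiative protection at once.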
