Exploring Frequency Adversarial Attacks for Face Forgery Detection

03/29/2022
by Shuai Jia, et al.

Various facial manipulation techniques have raised serious public concerns regarding morality, security, and privacy. Although existing face forgery classifiers achieve promising performance in detecting fake images, they are vulnerable to adversarial examples crafted by injecting imperceptible perturbations into the pixels. Meanwhile, many face forgery detectors rely on the frequency discrepancy between real and fake faces as a crucial clue. In this paper, instead of injecting adversarial perturbations in the spatial domain, we propose a frequency adversarial attack against face forgery detectors. Concretely, we apply the discrete cosine transform (DCT) to the input images and introduce a fusion module to capture the salient regions of the adversary in the frequency domain. Compared with existing adversarial attacks in the spatial domain (e.g., FGSM, PGD), our method is more imperceptible to human observers and does not degrade the visual quality of the original images. Moreover, inspired by the idea of meta-learning, we also propose a hybrid adversarial attack that performs attacks in both the spatial and frequency domains. Extensive experiments indicate that the proposed method effectively fools not only spatial-based detectors but also state-of-the-art frequency-based detectors. In addition, the proposed frequency attack enhances transferability across face forgery detectors in the black-box setting.
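To make the idea of attacking in the frequency domain concrete, the sketch below shows a minimal one-step DCT-domain attack. This is not the authors' implementation: the fusion module and the meta-learning-based hybrid attack are omitted, and a plain FGSM-style sign step on the DCT coefficients is used instead. The `detector` model, the `eps` step size, and the [0, 1] pixel range are assumptions for illustration.

```python
# Minimal sketch of a frequency-domain adversarial attack against a
# face forgery classifier (NOT the paper's full method).
# Assumes `detector` is a PyTorch model mapping a (1, 3, H, W) image
# in [0, 1] to logits over {real, fake}.
import numpy as np
import torch
import torch.nn.functional as F
from scipy.fft import dctn, idctn


def dct_fgsm(detector, image, label, eps=0.05):
    """One FGSM-like step applied to the 2D DCT coefficients of `image`.

    Because the orthonormal DCT is linear, the gradient of the loss with
    respect to the DCT coefficients equals the DCT of the pixel-space
    gradient, so the classifier's ordinary backward pass can be reused.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(detector(image), label)
    loss.backward()

    img_np = image.detach().cpu().numpy()
    grad_np = image.grad.detach().cpu().numpy()

    # Move the image and its gradient to the frequency domain
    # (per-channel 2D DCT over the spatial axes).
    freq = dctn(img_np, axes=(-2, -1), norm="ortho")
    freq_grad = dctn(grad_np, axes=(-2, -1), norm="ortho")

    # Sign step on the DCT coefficients, then back to pixel space.
    adv_freq = freq + eps * np.sign(freq_grad)
    adv = idctn(adv_freq, axes=(-2, -1), norm="ortho")
    adv = np.clip(adv, 0.0, 1.0)
    return torch.from_numpy(adv).to(image.device, dtype=image.dtype)
```

In practice one would iterate this step (as in PGD) and, following the paper's hybrid attack, alternate or combine it with spatial-domain updates; the single-step version above is only meant to illustrate where the perturbation lives.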


