Turn Fake into Real: Adversarial Head Turn Attacks Against Deepfake Detection

09/03/2023
by Weijie Wang, et al.

Malicious use of deepfakes leads to serious public concerns and reduces people's trust in digital media. Although effective deepfake detectors have been proposed, they are substantially vulnerable to adversarial attacks. To evaluate the detectors' robustness, recent studies have explored various attacks. However, all existing attacks are limited to 2D image perturbations, which are hard to translate into real-world facial changes. In this paper, we propose adversarial head turn (AdvHeat), the first attempt at 3D adversarial face views against deepfake detectors, based on face view synthesis from a single-view fake image. Extensive experiments validate the vulnerability of various detectors to AdvHeat in realistic, black-box scenarios. For example, AdvHeat based on a simple random search yields a high attack success rate of 96.8%. When additional query access is allowed, the step budget can be further reduced to 50. Additional analyses demonstrate that AdvHeat outperforms conventional attacks in both cross-detector transferability and robustness to defenses. The adversarial images generated by AdvHeat are also shown to look natural. Our code, including that for generating a multi-view dataset consisting of 360 synthetic views for each of 1000 IDs from FaceForensics++, is available at https://github.com/twowwj/AdvHeaT.
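The attack described above searches over 3D head poses rather than pixel-level perturbations. The following is a minimal sketch of that random-search idea, not the authors' released implementation; `synthesize_view` and `detector_score` are hypothetical placeholders for a single-image face-view-synthesis model and a black-box deepfake detector that returns the probability of "fake".

```python
# Hedged sketch: random search over head-turn (yaw) angles until the
# synthesized view of a fake face is classified as real by the detector.
import random

def random_search_head_turn(fake_image, synthesize_view, detector_score,
                            max_steps=360, yaw_range=(-90.0, 90.0),
                            threshold=0.5):
    """Return (adversarial_view, yaw, score) if a view fools the detector,
    otherwise (None, best_yaw, best_score) after the step budget is spent."""
    best_yaw, best_score = None, float("inf")
    for _ in range(max_steps):
        yaw = random.uniform(*yaw_range)          # sample a candidate 3D head pose
        view = synthesize_view(fake_image, yaw)   # render the fake face at that pose
        score = detector_score(view)              # query the black-box detector
        if score < best_score:
            best_yaw, best_score = yaw, score
        if score < threshold:                     # detector now judges the image "real"
            return view, yaw, score
    return None, best_yaw, best_score
```

With query access to detector scores, the same loop can be replaced by a gradient-free optimizer over the pose parameter to cut the number of detector queries, which is the spirit of the reduced 50-step budget mentioned above.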

research
03/29/2022

Exploring Frequency Adversarial Attacks for Face Forgery Detection

Various facial manipulation techniques have drawn serious public concern...
research
09/25/2021

Two Souls in an Adversarial Image: Towards Universal Adversarial Example Detection using Multi-view Inconsistency

In the evasion attacks against deep neural networks (DNN), the attacker ...
research
11/07/2022

Black-Box Attack against GAN-Generated Image Detector with Contrastive Perturbation

Visually realistic GAN-generated facial images raise obvious concerns on...
research
10/29/2020

Perception Matters: Exploring Imperceptible and Transferable Anti-forensics for GAN-generated Fake Face Imagery Detection

Recently, generative adversarial networks (GANs) can generate photo-real...
research
08/25/2018

Analysis of adversarial attacks against CNN-based image forgery detectors

With the ubiquitous diffusion of social networks, images are becoming a ...
research
03/22/2022

Making DeepFakes more spurious: evading deep face forgery detection via trace removal attack

DeepFakes are raising significant social concerns. Although various Deep...
research
10/14/2019

Real-world attack on MTCNN face detection system

Recent studies proved that deep learning approaches achieve remarkable r...
