Security of Facial Forensics Models Against Adversarial Attacks

11/02/2019
by Rong Huang, et al.

Deep neural networks (DNNs) have been used in forensics to identify fake facial images. We investigated several DNN-based forgery forensics models (FFMs) to determine whether they are secure against adversarial attacks. We experimentally demonstrated the existence of individual adversarial perturbations (IAPs) and universal adversarial perturbations (UAPs) that can lead a well-performing FFM to misbehave. Based on an iterative procedure, gradient information is used to generate two kinds of IAPs that can fabricate classification and segmentation outputs. In contrast, UAPs are generated on the basis of over-firing. We designed a new objective function that encourages neurons to over-fire, which makes UAP generation feasible even without training data. Experiments demonstrated the transferability of UAPs across unseen datasets and unseen FFMs. Moreover, we are the first to conduct a subjective assessment of the imperceptibility of adversarial perturbations, revealing that the crafted UAPs are visually negligible. These findings provide a baseline for evaluating the adversarial security of FFMs.
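The abstract does not spell out the attack procedures, but the two ideas it names (an iterative gradient-based IAP and a data-free "over-firing" objective for UAPs) can be sketched on a toy linear classifier. Everything below is a hypothetical illustration, not the paper's implementation: the function names, step sizes, and the linear stand-in model are all assumptions.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def iterative_iap(W, b, x, y_true, eps=8 / 255, alpha=1 / 255, steps=10):
    """I-FGSM-style individual adversarial perturbation (IAP) against a
    toy linear classifier with logits = W @ x + b, standing in for a DNN
    forensics model. Each step ascends the cross-entropy gradient and
    clips the perturbation to an L-infinity ball of radius eps."""
    x_adv = x.copy()
    for _ in range(steps):
        p = softmax(W @ x_adv + b)
        onehot = np.zeros_like(p)
        onehot[y_true] = 1.0
        grad = W.T @ (p - onehot)        # d(CE loss)/d(input) for this model
        x_adv = x_adv + alpha * np.sign(grad)
        x_adv = x + np.clip(x_adv - x, -eps, eps)  # stay inside the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)           # stay a valid image
    return x_adv

def over_firing_uap(W, eps=10 / 255, alpha=1 / 255, steps=50, seed=0):
    """Data-free UAP sketch: grow the total first-layer activation
    magnitude sum(|W @ v|) ("over-firing") without any training images,
    keeping the universal perturbation v inside an L-infinity ball."""
    rng = np.random.default_rng(seed)
    v = rng.uniform(-alpha, alpha, W.shape[1])     # small random start
    for _ in range(steps):
        grad = W.T @ np.sign(W @ v)      # subgradient of sum(|W @ v|)
        v = np.clip(v + alpha * np.sign(grad), -eps, eps)
    return v
```

The real attacks would backpropagate through a deep network; the linear stand-in only makes the two update rules explicit, including why the over-firing variant needs no data at all.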
