Amora: Black-box Adversarial Morphing Attack

12/09/2019
by Run Wang et al.

Digital facial content manipulation has become ubiquitous and highly realistic with the unprecedented success of generative adversarial networks (GANs) in image synthesis. Unfortunately, such manipulations raise severe security concerns for face recognition (FR) systems. In this paper, we investigate and introduce a new type of adversarial attack that evades FR systems by manipulating facial content, called the adversarial morphing attack (a.k.a. Amora). In contrast to adversarial noise attacks, which perturb pixel intensity values by adding human-imperceptible noise, our adversarial morphing attack is a semantic attack that perturbs pixels spatially in a coherent manner. To tackle the black-box setting, we devise a simple yet effective learning pipeline that obtains a proprietary optical flow field for each attack. We quantitatively and qualitatively demonstrate the effectiveness of the adversarial morphing attack at various levels of morphing intensity on two popular FR systems, using smiling facial expression manipulations. Experimental results indicate that a novel black-box adversarial attack based on local deformation is possible, and that it is vastly different from additive noise-based attacks. The findings of this work may pave a new research direction towards a more thorough understanding of image-based adversarial attacks and defenses.
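The abstract does not spell out the warping operation, but the core idea, displacing pixels along a dense flow field rather than adding noise to them, can be sketched. The snippet below is a minimal illustration under stated assumptions, not the paper's learned pipeline: the function name morph_with_flow, the (H, W, 2) flow layout, and the intensity parameter are hypothetical choices for demonstration, and a random flow stands in for the proprietary flow field the paper learns per attack.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def morph_with_flow(image, flow, intensity=1.0):
    """Warp an image spatially with a dense optical flow field.

    image: (H, W) grayscale or (H, W, C) color array in [0, 1].
    flow:  (H, W, 2) array of per-pixel (dy, dx) displacements.
    intensity: scalar scaling the morphing strength.
    """
    h, w = image.shape[:2]
    # Base sampling grid: one (row, col) coordinate per output pixel.
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Shift each sampling location by the scaled flow vector.
    sample_rows = rows + intensity * flow[..., 0]
    sample_cols = cols + intensity * flow[..., 1]
    coords = np.stack([sample_rows, sample_cols])

    if image.ndim == 2:
        return map_coordinates(image, coords, order=1, mode="nearest")
    # Warp each channel independently for color images.
    return np.stack(
        [map_coordinates(image[..., c], coords, order=1, mode="nearest")
         for c in range(image.shape[2])],
        axis=-1,
    )

# Smoke test with a random image and a random flow field (illustrative only).
rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
flow = rng.normal(scale=0.5, size=(64, 64, 2))
morphed = morph_with_flow(img, flow, intensity=1.0)
print(morphed.shape)  # (64, 64, 3)
```

Note that bilinear resampling (order=1) only rearranges existing intensities, so the morphed image stays within the original pixel range; this is what distinguishes a spatial (semantic) perturbation from an additive-noise one.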

Related research

Semantic Adversarial Attacks on Face Recognition through Significant Attributes (01/28/2023)
Face recognition is known to be vulnerable to adversarial face images. E...

Restricted Black-box Adversarial Attack Against DeepFake Face Swapping (04/26/2022)
DeepFake face swapping presents a significant threat to online security ...

Pixle: a fast and effective black-box attack based on rearranging pixels (02/04/2022)
Recent research has found that neural networks are vulnerable to several...

Accurate and Robust Neural Networks for Security Related Applications Exampled by Face Morphing Attacks (06/11/2018)
Artificial neural networks tend to learn only what they need for a task....

Back in Black: A Comparative Evaluation of Recent State-Of-The-Art Black-Box Attacks (09/29/2021)
The field of adversarial machine learning has experienced a near exponen...

Optical Adversarial Attack (08/13/2021)
We introduce OPtical ADversarial attack (OPAD). OPAD is an adversarial a...

Adversarial Attack on Facial Recognition using Visible Light (11/25/2020)
The use of deep learning for human identification and object detection i...
