Universal Adversarial Spoofing Attacks against Face Recognition

by Takuma Amada, et al. (NEC)

We assess the vulnerability of deep face recognition systems to images that falsify (spoof) multiple identities simultaneously. We demonstrate that, by manipulating the deep feature representation extracted from a face image via imperceptibly small pixel-level perturbations, our proposed Universal Adversarial Spoofing Examples (UAXs) can fool a face verification system into matching a single face image against multiple different identities with a high success rate. A key characteristic of UAXs crafted with our method is that they are universal (identity-agnostic): they succeed even against identities not known in advance. For one deep neural network we tested, we spoof almost all tested identities (99%), including identities not seen during training. Our results indicate that multiple-identity attacks are a real threat and should be taken into account when deploying face recognition systems.
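The core idea can be sketched in a few lines: optimize a single perturbation, shared across face images, that pulls the perturbed embeddings toward the reference embeddings of several target identities at once, while keeping the perturbation small. The following is a minimal toy sketch of that optimization loop; the linear "encoder", the image and embedding dimensions, and the `craft_uax` helper are all illustrative assumptions, not the paper's actual model or algorithm (which targets a deep face-recognition network).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in "face encoder": a fixed random linear map (an assumption
# for illustration; a real UAX would target a deep face-recognition model).
D_PIX, D_EMB = 64, 16
W = rng.standard_normal((D_EMB, D_PIX)) / np.sqrt(D_PIX)

def embed(x):
    e = W @ x
    return e / np.linalg.norm(e)

# A batch of hypothetical "face images" (flattened) and reference
# embeddings for several target identities.
faces = rng.standard_normal((8, D_PIX))
targets = np.stack([embed(rng.standard_normal(D_PIX)) for _ in range(5)])

def craft_uax(faces, targets, eps=0.5, lr=0.1, steps=300):
    """Projected gradient ascent on the mean cosine similarity between
    the perturbed faces' embeddings and *all* target identities, with
    the universal perturbation clipped to an L_inf ball so it stays
    small at the pixel level."""
    delta = np.zeros(faces.shape[1])
    for _ in range(steps):
        grad = np.zeros_like(delta)
        for x in faces:
            e = W @ (x + delta)
            n = np.linalg.norm(e)
            for t in targets:
                # gradient of cos(e, t) w.r.t. e, for unit-norm t
                dcos = t / n - (e @ t) * e / n**3
                grad += W.T @ dcos
        delta += lr * grad / (len(faces) * len(targets))
        delta = np.clip(delta, -eps, eps)  # keep the perturbation small
    return delta

delta = craft_uax(faces, targets)
before = np.mean([[embed(x) @ t for t in targets] for x in faces])
after = np.mean([[embed(x + delta) @ t for t in targets] for x in faces])
print(f"mean cosine to targets: before={before:.3f} after={after:.3f}")
```

Because one `delta` is optimized over a whole batch of faces and a whole set of targets, the same perturbation raises similarity to many identities at once, which is the "universal, multi-identity" property the abstract describes.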


Related papers:

- Contextual Face Recognition with a Nested-Hierarchical Nonparametric Identity Model
- Harnessing Unrecognizable Faces for Face Recognition
- FD-GAN: Face-Demorphing Generative Adversarial Network for Restoring Accomplice's Facial Image
- Pairwise Relational Networks for Face Recognition
- SDeMorph: Towards Better Facial De-morphing from Single Morph
- Liveness Detection Using Implicit 3D Features
