Sibling-Attack: Rethinking Transferable Adversarial Attacks against Face Recognition

03/22/2023
by Zexin Li, et al.

A hard challenge in developing practical face recognition (FR) attacks is the black-box nature of the target FR model, i.e., its gradients and parameters are inaccessible to attackers. While recent research has taken an important step towards attacking black-box FR models by leveraging transferability, performance remains limited, especially against online commercial FR systems, where results can be pessimistic (e.g., an attack success rate below 50% on average). Motivated by this, we present Sibling-Attack, a new FR attack technique that, for the first time, explores a multi-task perspective, i.e., leveraging extra information from correlated tasks to boost attack transferability. Intuitively, Sibling-Attack considers a set of tasks correlated with FR and, based on theoretical and quantitative analysis, picks Attribute Recognition (AR) as the sibling task. Sibling-Attack then develops an optimization framework that fuses adversarial gradient information through (1) constraining the cross-task features to lie in the same space, (2) a joint-task meta optimization framework that enhances gradient compatibility across tasks, and (3) a cross-task gradient stabilization method that mitigates oscillation during the attack. Extensive experiments demonstrate that Sibling-Attack outperforms state-of-the-art FR attack techniques by a non-trivial margin, boosting ASR by 12.61% on average on state-of-the-art pre-trained FR models and two well-known, widely used commercial FR systems.
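At its core, the cross-task fusion amounts to optimizing an adversarial example against a joint FR+AR objective while keeping the combined update direction stable. The PyTorch sketch below illustrates that idea only in its simplest form; the model handles (fr_model, ar_model), the equal-weight loss fusion, and the momentum-style stabilization are illustrative assumptions, not the paper's actual feature-space constraint, meta optimization, or stabilization scheme.

```python
import torch
import torch.nn.functional as F

def cross_task_impersonation_attack(x, x_target, fr_model, ar_model,
                                    eps=8/255, alpha=1/255, steps=50, mu=0.9):
    """Sketch of a cross-task transfer attack: fuse face-recognition (FR) and
    attribute-recognition (AR) gradients, with momentum to stabilize the update
    direction. All model and parameter names are placeholders; x and x_target
    are float images in [0, 1]."""
    x_adv = x.clone().detach()
    g = torch.zeros_like(x_adv)                      # momentum buffer (gradient stabilization)
    fr_target = fr_model(x_target).detach()          # identity embedding of the target face
    ar_target = ar_model(x_target).detach()          # attribute features of the target face

    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Task 1 (FR): pull the adversarial identity embedding toward the target's
        fr_loss = 1 - F.cosine_similarity(fr_model(x_adv), fr_target, dim=-1).mean()
        # Task 2 (AR): pull the adversarial attribute features toward the target's
        ar_loss = F.mse_loss(ar_model(x_adv), ar_target)
        loss = fr_loss + ar_loss                     # naive equal-weight fusion of the two tasks
        grad = torch.autograd.grad(loss, x_adv)[0]

        # Momentum accumulation (MI-FGSM style) damps oscillation between task gradients
        g = mu * g + grad / grad.abs().mean().clamp_min(1e-12)
        x_adv = (x_adv - alpha * g.sign()).detach()  # descend the joint loss
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project into the eps-ball
        x_adv = torch.clamp(x_adv, 0.0, 1.0).detach()

    return x_adv
```

In a black-box setting, the resulting x_adv would then be evaluated against an unseen FR model; the intuition is that the AR gradient contributes complementary information, so the perturbation transfers better than one crafted against the FR surrogate alone.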

research
06/25/2022

RSTAM: An Effective Black-Box Impersonation Attack on Face Recognition using a Mobile and Compact Printer

Face recognition has achieved considerable progress in recent years than...
research
05/07/2021

Adv-Makeup: A New Imperceptible and Transferable Attack on Face Recognition

Deep neural networks, particularly face recognition models, have been sh...
research
12/10/2021

Cross-Modal Transferable Adversarial Attacks from Images to Videos

Recent studies have shown that adversarial examples hand-crafted on one ...
research
03/28/2023

Towards Effective Adversarial Textured 3D Meshes on Physical Face Recognition

Face recognition is a prevailing authentication solution in numerous bio...
research
07/27/2022

Look Closer to Your Enemy: Learning to Attack via Teacher-student Mimicking

This paper aims to generate realistic attack samples of person re-identi...
research
02/13/2018

Query-Free Attacks on Industry-Grade Face Recognition Systems under Resource Constraints

To attack a deep neural network (DNN) based Face Recognition (FR) system...
research
06/18/2021

BinarizedAttack: Structural Poisoning Attacks to Graph-based Anomaly Detection

Graph-based Anomaly Detection (GAD) is becoming prevalent due to the pow...
