Geometrically Adaptive Dictionary Attack on Face Recognition

11/08/2021
by Junyoung Byun, et al.

CNN-based face recognition models have brought remarkable performance improvements, but they are vulnerable to adversarial perturbations. Recent studies have shown that adversaries can fool these models even when they can only access the models' hard-label output. However, since many queries are needed to find imperceptible adversarial noise, reducing the number of queries is crucial for such attacks. In this paper, we point out two limitations of existing decision-based black-box attacks: they waste queries on optimizing noise in the background, and they do not take advantage of adversarial perturbations generated for other images. We exploit 3D face alignment to overcome these limitations and propose a general strategy for query-efficient black-box attacks on face recognition, named Geometrically Adaptive Dictionary Attack (GADA). Our core idea is to create an adversarial perturbation in the UV texture map and project it onto the face in the image. This greatly improves query efficiency by limiting the perturbation search space to the facial area and by effectively recycling previous perturbations. We apply the GADA strategy to two existing attack methods and show substantial performance improvements in experiments on the LFW and CPLFW datasets. Furthermore, we present a novel attack strategy that can circumvent query similarity-based stateful detection, which identifies the process of query-based black-box attacks.
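As a rough illustration of the projection step described above, the sketch below samples a UV-space perturbation onto the facial pixels of an image using per-pixel UV coordinates produced by a 3D face alignment model. The function name, array shapes, and nearest-neighbor sampling are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch of the GADA core idea: the perturbation lives in a fixed-size UV
# texture map, and a per-pixel UV lookup from a 3D face alignment model (not shown)
# projects it onto the facial region of the image. All names/shapes are assumptions.
import numpy as np

def project_uv_perturbation(image, uv_perturbation, uv_coords, face_mask):
    """Add a UV-space perturbation onto the facial area of an image.

    image:            (H, W, 3) float array in [0, 1].
    uv_perturbation:  (Hu, Wu, 3) float array, the adversarial noise in UV space.
    uv_coords:        (H, W, 2) float array of (u, v) coordinates in [0, 1] per pixel,
                      produced by a 3D face alignment model (hypothetical input here).
    face_mask:        (H, W) boolean array, True on facial pixels.
    """
    Hu, Wu, _ = uv_perturbation.shape
    # Nearest-neighbor lookup from image pixels into the UV texture map.
    u = np.clip((uv_coords[..., 0] * (Wu - 1)).round().astype(int), 0, Wu - 1)
    v = np.clip((uv_coords[..., 1] * (Hu - 1)).round().astype(int), 0, Hu - 1)
    noise = uv_perturbation[v, u]     # (H, W, 3) perturbation sampled per pixel
    noise[~face_mask] = 0.0           # restrict the perturbation to the face
    return np.clip(image + noise, 0.0, 1.0)
```

Because the optimization variable lives in the fixed UV space rather than in image space, the same perturbation can be re-projected onto faces with different poses, which is what allows previously found perturbations to be recycled across images.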

Related research

04/09/2019 · Efficient Decision-based Black-box Adversarial Attacks on Face Recognition
07/12/2021 · EvoBA: An Evolution Strategy as a Strong Baseline for Black-Box Adversarial Attacks
06/27/2021 · Darker than Black-Box: Face Reconstruction from Similarity Queries
12/03/2020 · Essential Features: Reducing the Attack Surface of Adversarial Perturbations with Robust Content-Aware Image Preprocessing
03/19/2021 · LSDAT: Low-Rank and Sparse Decomposition for Decision-based Adversarial Attack
12/26/2022 · Simultaneously Optimizing Perturbations and Positions for Black-box Adversarial Patch Attacks
07/04/2022 · RAF: Recursive Adversarial Attacks on Face Recognition Using Extremely Limited Queries
