Improving Transferability of Adversarial Examples on Face Recognition with Beneficial Perturbation Feature Augmentation

10/28/2022
by Fengfan Zhou, et al.

Face recognition (FR) models can be easily fooled by adversarial examples, which are crafted by adding imperceptible perturbations to benign face images. To improve the transferability of adversarial examples across FR models, we propose a novel attack method called Beneficial Perturbation Feature Augmentation Attack (BPFA), which reduces the overfitting of adversarial examples to the surrogate FR model through an adversarial strategy. Specifically, in each backpropagation step, BPFA records the gradients on pre-selected features and uses the gradient on the input image to craft the adversarial perturbation added to the input image. In the next forward propagation step, BPFA leverages the recorded gradients to add perturbations (i.e., beneficial perturbations) to the corresponding features, pitted against the adversarial perturbation added to the input image. These two steps are repeated until the last backpropagation step before the maximum number of iterations is reached. The optimization of the adversarial perturbation on the input image and the optimization of the beneficial perturbations on the features together form a minimax two-player game. Extensive experiments demonstrate that BPFA outperforms state-of-the-art gradient-based adversarial attacks on FR.
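The following is a minimal sketch of the attack loop described above, based only on the abstract: in each iteration, gradients are recorded on pre-selected features during backpropagation, the input-image gradient drives the adversarial update, and in the next forward pass the recorded feature gradients are used to inject beneficial perturbations that oppose the attack. It assumes a PyTorch surrogate FR model that maps images to identity embeddings; the function name, parameters (e.g., beta, feature_layers), and the cosine-similarity objective are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def bpfa_attack(model, feature_layers, x_benign, x_target,
                eps=8/255, alpha=1/255, beta=0.01, n_iter=10):
    """Hypothetical BPFA-style attack sketch (impersonation setting).

    model:          surrogate FR model mapping images to identity embeddings
    feature_layers: pre-selected nn.Module layers that receive beneficial perturbations
    x_benign:       benign face images in [0, 1]
    x_target:       images of the target identity
    """
    # Target embedding, computed before the hooks are registered.
    with torch.no_grad():
        emb_target = model(x_target)

    x_adv = x_benign.clone().detach()
    feat_perturb = {layer: None for layer in feature_layers}  # beneficial perturbations
    feat_out = {}                                             # feature tensors of the current pass

    def make_hook(layer):
        def hook(module, inputs, output):
            # Forward pass: inject the beneficial perturbation recorded in the previous step.
            if feat_perturb[layer] is not None:
                output = output + feat_perturb[layer]
            output.retain_grad()          # record the gradient on this feature at backprop
            feat_out[layer] = output
            return output
        return hook

    handles = [l.register_forward_hook(make_hook(l)) for l in feature_layers]

    for _ in range(n_iter):
        x_adv.requires_grad_(True)
        emb_adv = model(x_adv)
        # Attack objective: pull the adversarial embedding toward the target identity.
        loss = 1 - F.cosine_similarity(emb_adv, emb_target).mean()
        model.zero_grad()
        loss.backward()

        # Player 1 (input image): gradient-descent step, projected to the eps-ball.
        x_adv = x_adv - alpha * x_adv.grad.sign()
        x_adv = (x_benign + (x_adv - x_benign).clamp(-eps, eps)).clamp(0, 1).detach()

        # Player 2 (features): gradient-ascent step, so the beneficial perturbations
        # oppose the adversarial perturbation added to the input image.
        for l in feature_layers:
            feat_perturb[l] = (beta * feat_out[l].grad.sign()).detach()

    for h in handles:
        h.remove()
    return x_adv
```

The feature step ascends the same loss that the input step descends, which is one way to realize the minimax two-player game between the adversarial perturbation on the image and the beneficial perturbations on the features.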

Related research

04/23/2023
StyLess: Boosting the Transferability of Adversarial Examples
Adversarial attacks can mislead deep neural networks (DNNs) by adding im...

06/02/2023
Adversarial Attack Based on Prediction-Correction
Deep neural networks (DNNs) are vulnerable to adversarial examples obtai...

06/09/2022
ReFace: Real-time Adversarial Attacks on Face Recognition Systems
Deep neural network based face recognition models have been shown to be ...

10/12/2022
Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation
Deep neural networks (DNNs) have been shown to be vulnerable to adversar...

03/28/2019
Smooth Adversarial Examples
This paper investigates the visual quality of the adversarial examples. ...

04/05/2021
Adversarial Attack in the Context of Self-driving
In this paper, we propose a model that can attack segmentation models wi...

09/04/2023
Improving Visual Quality and Transferability of Adversarial Attacks on Face Recognition Simultaneously with Adversarial Restoration
Adversarial face examples possess two critical properties: Visual Qualit...
