JPEG Compression-Resistant Low-Mid Adversarial Perturbation against Unauthorized Face Recognition System

06/19/2022
by Jiaming Zhang, et al.

It has been observed that the unauthorized use of face recognition systems raises privacy concerns. Adversarial perturbations offer one possible countermeasure. A critical obstacle to deploying adversarial perturbations against unauthorized face recognition systems is that images uploaded to the web are typically processed by JPEG compression, which weakens the effectiveness of the perturbation. Existing JPEG compression-resistant methods fail to strike a balance among compression resistance, transferability, and attack effectiveness. To this end, we propose a more natural solution called low-frequency adversarial perturbation (LFAP). Instead of restricting the adversarial perturbations themselves, we regularize the source model to employ more low-frequency features through adversarial training. Moreover, to better exploit different frequency components, we propose the refined low-mid frequency adversarial perturbation (LMFAP), which treats the mid-frequency components as a productive complement. We design a variety of settings to simulate real-world application scenarios, including cross backbones, supervisory heads, training datasets, and testing datasets. Quantitative and qualitative experimental results validate the effectiveness of the proposed solutions.
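
The abstract contrasts low-, mid-, and high-frequency image components but does not specify a decomposition. The sketch below is an illustrative way (not the paper's code) to split an aligned face crop into such bands with a 2D DCT; the function name frequency_bands and the band boundaries low_cut and mid_cut are hypothetical choices for demonstration only.

```python
# Illustrative sketch (not the authors' implementation): decompose an
# image into low-, mid-, and high-frequency parts with a 2D DCT.
import numpy as np
from scipy.fft import dctn, idctn

def frequency_bands(image, low_cut=0.125, mid_cut=0.5):
    """Split a 2D grayscale image into low/mid/high frequency components.

    low_cut and mid_cut are fractions of the normalized spectrum radius
    that separate the bands; the values here are arbitrary assumptions.
    """
    h, w = image.shape
    coeffs = dctn(image, norm="ortho")

    # Normalized radial index of each DCT coefficient (roughly 0..1).
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    radius = np.sqrt((yy / h) ** 2 + (xx / w) ** 2) / np.sqrt(2)

    low_mask = radius <= low_cut
    mid_mask = (radius > low_cut) & (radius <= mid_cut)
    high_mask = radius > mid_cut

    low = idctn(coeffs * low_mask, norm="ortho")
    mid = idctn(coeffs * mid_mask, norm="ortho")
    high = idctn(coeffs * high_mask, norm="ortho")
    return low, mid, high

if __name__ == "__main__":
    img = np.random.rand(112, 112)      # stand-in for an aligned face crop
    low, mid, high = frequency_bands(img)
    print(np.allclose(img, low + mid + high))  # the bands sum back to the image
```

Because the three masks partition the DCT spectrum and the transform is linear, the bands reconstruct the original image exactly; restricting a perturbation (or a model's sensitivity) to the low and mid bands is the general idea the LFAP/LMFAP naming alludes to.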
