Seeing is Living? Rethinking the Security of Facial Liveness Verification in the Deepfake Era

02/22/2022
by Changjiang Li, et al.

Facial Liveness Verification (FLV) is widely used for identity authentication in many security-sensitive domains and is offered as Platform-as-a-Service (PaaS) by leading cloud vendors. Yet, with the rapid advances in synthetic media techniques (e.g., deepfakes), the security of FLV faces unprecedented challenges, about which little is known thus far. To bridge this gap, in this paper we conduct the first systematic study on the security of FLV in real-world settings. Specifically, we present LiveBugger, a new deepfake-powered attack framework that enables customizable, automated security evaluation of FLV. Leveraging LiveBugger, we perform a comprehensive empirical assessment of representative FLV platforms, leading to a set of interesting findings. For instance, most FLV APIs do not deploy anti-deepfake detection; even for those with such defenses, their effectiveness is concerning (e.g., a defense may detect high-quality synthesized videos but fail to detect low-quality ones). We then conduct an in-depth analysis of the factors affecting the attack performance of LiveBugger: a) the bias (e.g., gender or race) in FLV can be exploited to select victims; b) adversarial training makes deepfakes more effective at bypassing FLV; c) input quality influences different deepfake techniques' ability to bypass FLV to varying degrees. Based on these findings, we propose a customized, two-stage approach that can boost the attack success rate by up to 70%. Further, we examine representative applications of FLV (i.e., the clients of FLV APIs) to illustrate the practical implications: due to the vulnerability of the APIs, many downstream applications are vulnerable to deepfakes. Finally, we discuss potential countermeasures to improve the security of FLV. Our findings have been confirmed by the corresponding vendors.
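The paper does not reproduce LiveBugger's implementation in this abstract, but the "customizable, automated security evaluation" it describes implies a cross-evaluation loop: each deepfake technique is tested against each FLV API and bypass rates are recorded. As an illustration only, here is a minimal sketch of such a loop; all names (`evaluate`, `verify`, the method and API labels) are hypothetical and do not correspond to the actual framework:

```python
from dataclasses import dataclass


@dataclass
class EvalResult:
    """Bypass statistics for one (deepfake method, FLV API) pair."""
    method: str
    api: str
    attempts: int = 0
    bypasses: int = 0

    @property
    def success_rate(self) -> float:
        return self.bypasses / self.attempts if self.attempts else 0.0


def evaluate(methods, apis, videos, verify):
    """Cross-evaluate every deepfake method against every FLV API.

    `videos` maps a method name to its synthesized test videos;
    `verify(api, video)` returns True if the API accepts the video as live.
    """
    results = []
    for method in methods:
        for api in apis:
            result = EvalResult(method, api)
            for video in videos[method]:
                result.attempts += 1
                if verify(api, video):
                    result.bypasses += 1
            results.append(result)
    return results


# Stubbed usage: two methods, one API that rejects only one video.
videos = {"faceswap": ["v1", "v2"], "reenactment": ["v3"]}
results = evaluate(["faceswap", "reenactment"], ["api_a"],
                   videos, lambda api, v: v != "v2")
for r in results:
    print(r.method, r.api, r.success_rate)
```

A real harness would replace the `verify` stub with calls to the vendors' verification endpoints and could additionally condition on the factors the paper analyzes (victim demographics, adversarial training, input quality).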

