On the Adversarial Inversion of Deep Biometric Representations

04/12/2023
by Gioacchino Tangari, et al.

Biometric authentication service providers often claim that it is not possible to reverse-engineer a user's raw biometric sample, such as a fingerprint or a face image, from its mathematical (feature-space) representation. In this paper, we investigate this claim on the specific example of deep neural network (DNN) embeddings. Inversion of DNN embeddings has previously been investigated for explaining deep image representations or for synthesizing normalized images; existing studies, however, assume full access to all layers of the original model as well as complete information about the original dataset. For the biometric authentication use case, we instead investigate inversion under adversarial settings, where an attacker has access to a feature-space representation but no direct access to the exact original dataset or the original learned model. We assume varying degrees of the attacker's background knowledge about the distribution of the dataset and about the original learned model (architecture and training process). In these settings, we show that the attacker can exploit off-the-shelf DNN models and public datasets to mimic the behaviour of the original learned model with varying degrees of success, based only on the obtained representation and the attacker's prior knowledge. We propose a two-pronged attack that first infers the original DNN by exploiting the model's footprint on the embedding, and then reconstructs the raw data using the inferred model. We demonstrate the practicality of the attack on popular DNNs trained for two prominent biometric modalities, face and fingerprint recognition. The attack can effectively infer the original recognition model (mean accuracy 83% for faces, 86% for fingerprints) and can craft effective biometric reconstructions that are successfully authenticated, with 1-vs-1 authentication accuracy of up to 92% for some models.
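The reconstruction prong described above can be illustrated with a minimal, hypothetical sketch: here the "recognition model" is a toy single-layer tanh network (not a real face or fingerprint DNN), and the attacker, who observes only the leaked embedding, recovers an input by gradient descent on an embedding-matching loss. All names and dimensions below are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

# Toy stand-in for the victim's embedding network: f(x) = tanh(Wx).
# The real attack targets deep face/fingerprint CNNs; this only
# illustrates why matching the embedding suffices for a 1-vs-1 match.
rng = np.random.default_rng(0)
D_IN, D_EMB = 32, 16                         # hypothetical dimensions
W = rng.normal(size=(D_EMB, D_IN)) / np.sqrt(D_IN)

def embed(x):
    """Surrogate embedding network."""
    return np.tanh(W @ x)

# The only thing the attacker observes: a leaked feature-space template.
x_true = 0.5 * rng.normal(size=D_IN)
e_target = embed(x_true)

# Invert by minimizing ||f(x) - e_target||^2 over the raw input x.
x = np.zeros(D_IN)
lr = 0.1
for _ in range(5000):
    z = np.tanh(W @ x)
    residual = z - e_target
    # Analytic gradient of the squared error through the tanh layer.
    grad = 2.0 * W.T @ (residual * (1.0 - z ** 2))
    x -= lr * grad

loss = float(np.sum((embed(x) - e_target) ** 2))
```

In the paper's adversarial setting, the surrogate model itself must first be inferred from the embedding's footprint (the first prong), and reconstruction is done against that inferred model rather than against a known f; the sketch only shows the core optimization idea.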
