Biometric Backdoors: A Poisoning Attack Against Unsupervised Template Updating

05/22/2019
by Giulio Lovisotto, et al.

In this work, we investigate the concept of biometric backdoors: a template poisoning attack on biometric systems that allows adversaries to stealthily and effortlessly impersonate users in the long term by exploiting the template update procedure. We show that such attacks can be carried out even by attackers with physical limitations (no digital access to the sensor) and zero knowledge of training data (they know neither decision boundaries nor the user template). Based on the adversaries' own templates, they craft several intermediate samples that incrementally bridge the distance between their own template and the legitimate user's. As these adversarial samples are added to the template, the attacker is eventually accepted alongside the legitimate user. To avoid detection, we design the attack to minimize the number of rejected samples. We design our method to cope with the weak assumptions for the attacker and we evaluate the effectiveness of this approach on state-of-the-art face recognition pipelines based on deep neural networks. We find that in scenarios where the deep network is known, adversaries can successfully carry out the attack in over 70% of cases. In black-box scenarios, we find that exploiting the transferability of adversarial samples from surrogate models can lead to successful attacks in around 15% of cases. Finally, we design a poisoning detection technique that leverages the consistent directionality of template updates in feature space to discriminate between legitimate and malicious updates. We evaluate such a countermeasure with a set of intra-user variability factors which may present the same directionality characteristics, obtaining equal error rates for the detection between 7-14%.
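The two core ideas of the abstract, incremental template poisoning and directionality-based detection, can be illustrated with a toy simulation. The sketch below is a simplified stand-in, not the paper's method: the embedding dimension, acceptance threshold, step size, and FIFO template-update policy are all hypothetical choices made for illustration, and distances in feature space replace the actual deep-network pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 128        # embedding dimension (hypothetical)
THRESHOLD = 0.9  # acceptance radius around the template centroid (hypothetical)

def accepts(template, sample, threshold=THRESHOLD):
    """A sample is accepted when it is close enough to the template centroid."""
    return np.linalg.norm(template.mean(axis=0) - sample) < threshold

# Legitimate user template: a tight cluster of enrollment embeddings.
template = rng.normal(0.0, 0.05, size=(5, DIM))
# Attacker's own embedding, well outside the acceptance region.
attacker = np.full(DIM, 0.3)

centroids = [template.mean(axis=0)]
injected = 0
while not accepts(template, attacker) and injected < 500:
    centroid = template.mean(axis=0)
    direction = attacker - centroid
    # Intermediate sample: step just inside the acceptance boundary,
    # toward the attacker, so it is never rejected.
    candidate = centroid + 0.9 * THRESHOLD * direction / np.linalg.norm(direction)
    # Unsupervised self-update: accept the sample, drop the oldest one.
    template = np.vstack([template[1:], candidate])
    centroids.append(template.mean(axis=0))
    injected += 1

print(f"attacker accepted after {injected} injections")

# Countermeasure sketch: poisoning drags the centroid in a consistent
# direction, while legitimate drift does not.
def direction_consistency(points):
    steps = np.diff(np.asarray(points), axis=0)
    steps = steps / np.linalg.norm(steps, axis=1, keepdims=True)
    # Mean cosine similarity between consecutive update directions.
    return float(np.sum(steps[1:] * steps[:-1], axis=1).mean())

attack_consistency = direction_consistency(centroids)
benign_consistency = direction_consistency(
    [rng.normal(0.0, 0.05, DIM) for _ in range(injected + 1)]
)
print(f"attack direction consistency: {attack_consistency:.2f}")
print(f"benign direction consistency: {benign_consistency:.2f}")
```

Because every injected sample pulls the centroid along roughly the same feature-space direction, the attack's consecutive update directions have cosine similarity near 1, whereas updates from ordinary intra-user variation point in uncorrelated directions; thresholding this consistency score is the intuition behind the detection technique described above.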
