Privacy Attacks Against Biometric Models with Fewer Samples: Incorporating the Output of Multiple Models

09/22/2022
by Sohaib Ahmad, et al.

Authentication systems are vulnerable to model inversion attacks, in which an adversary approximates the inverse of a target machine learning model. Biometric models are a prime candidate for this type of attack, because inverting a biometric model allows the attacker to produce realistic biometric inputs that spoof biometric authentication systems. One of the main constraints in conducting a successful model inversion attack is the amount of training data required. In this work, we focus on iris and facial biometric systems and propose a new technique that drastically reduces the amount of training data necessary. By leveraging the output of multiple models, we are able to conduct model inversion attacks with 1/10th the training set size of Ahmad and Fuller (IJCB 2020) for iris data and 1/1000th the training set size of Mai et al. (Pattern Analysis and Machine Intelligence 2019) for facial data. We denote our new attack technique as structured random with alignment loss. Our attacks are black-box, requiring no knowledge of the weights of the target neural network, only the dimension and values of the output vector. To show the versatility of the alignment loss, we apply our attack framework to the task of membership inference (Shokri et al., IEEE S&P 2017) on biometric data. For the iris, membership inference attack accuracy against classification networks improves from 52% ...
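The abstract does not describe the structured-random or alignment-loss training procedure in detail, so the sketch below is only a rough illustration of the general black-box inversion setting it refers to: an inversion network is trained on (image, embedding) pairs obtained by querying the target model on a small auxiliary set, and an additional alignment term, computed with a second, attacker-controlled feature extractor, pulls the reconstruction toward the original in embedding space. The class and function names (`InversionNet`, `train_inverter`, `surrogate`), the network shape, and the exact loss form are assumptions for illustration, not the paper's method.

```python
# Hypothetical sketch of black-box inversion of a biometric feature extractor,
# trained on a small auxiliary set, with an illustrative "alignment" term.
# This is NOT the paper's implementation; names and loss form are assumptions.
import torch
import torch.nn as nn

class InversionNet(nn.Module):
    """Maps a fixed-length embedding back to a 64x64 grayscale image."""
    def __init__(self, embed_dim=512):
        super().__init__()
        self.fc = nn.Linear(embed_dim, 128 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 128, 8, 8)
        return self.deconv(h)

def train_inverter(aux_loader, surrogate, epochs=20, lam=0.1):
    """aux_loader yields (image, embedding) pairs, where each embedding was
    obtained by querying the black-box target model; `surrogate` is a local,
    differentiable feature extractor used only for the alignment term."""
    inverter = InversionNet()
    opt = torch.optim.Adam(inverter.parameters(), lr=1e-3)
    mse = nn.MSELoss()
    for _ in range(epochs):
        for x, z in aux_loader:
            x_hat = inverter(z)        # candidate reconstruction from embedding
            l_rec = mse(x_hat, x)      # pixel-space reconstruction loss
            # Alignment term: push the reconstruction toward the original
            # image in the surrogate's embedding space (cosine distance).
            l_align = 1 - nn.functional.cosine_similarity(
                surrogate(x_hat), surrogate(x)).mean()
            loss = l_rec + lam * l_align
            opt.zero_grad()
            loss.backward()
            opt.step()
    return inverter
```

In this hedged setup only the inversion network is trained; the black-box target model is queried once per auxiliary sample to form the (image, embedding) pairs, and gradients flow only through the attacker-controlled networks, which is what keeps the attack black-box.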


