
Privacy Attacks Against Biometric Models with Fewer Samples: Incorporating the Output of Multiple Models

09/22/2022
by   Sohaib Ahmad, et al.
University of Connecticut

Authentication systems are vulnerable to model inversion attacks where an adversary is able to approximate the inverse of a target machine learning model. Biometric models are a prime candidate for this type of attack. This is because inverting a biometric model allows the attacker to produce a realistic biometric input to spoof biometric authentication systems. One of the main constraints in conducting a successful model inversion attack is the amount of training data required. In this work, we focus on iris and facial biometric systems and propose a new technique that drastically reduces the amount of training data necessary. By leveraging the output of multiple models, we are able to conduct model inversion attacks with 1/10th the training set size of Ahmad and Fuller (IJCB 2020) for iris data and 1/1000th the training set size of Mai et al. (Pattern Analysis and Machine Intelligence 2019) for facial data. We denote our new attack technique as structured random with alignment loss. Our attacks are black-box, requiring no knowledge of the weights of the target neural network, only the dimension and values of the output vector. To show the versatility of the alignment loss, we apply our attack framework to the task of membership inference (Shokri et al., IEEE S&P 2017) on biometric data. For the iris, the membership inference attack against classification networks improves from 52% to 62% accuracy.


1 Introduction

Biometric identification is widely used; most devices contain multiple biometric modalities. Most prior research focuses on the accuracy of identification models, while the privacy implications of these models are not clearly understood. Apple’s documentation on FaceID (https://support.apple.com/en-us/HT208108) states that devices store “mathematical representations of your face.” The goal of this work is to understand the privacy risks of leaking the model output (the mathematical representations).

Many authentication systems are based on biometric identification [1, 2]. Two widely adopted biometrics are iris and facial recognition. Despite the prevalence of these biometric-based authentication systems, they remain vulnerable to a type of attack called model inversion [3]. In a model inversion attack, an adversary trains an attack model that approximates the inverse of the target biometric model used in the authentication system. Once the adversary succeeds in training this attack model, they are able to produce realistic looking biometrics. These realistic looking biometrics can be used for spoofing attacks [4], where an attacker creates a “fake” version of a user’s biometric.

Deep learning models are increasingly being used for biometrics [5, 6, 7, 8, 9, 10]. Fredrikson et al. initiated model inversion attacks on such networks, targeting the facial biometric [3]. Recent model inversion attacks use generative adversarial networks, or GANs [11], and use auxiliary information such as blurred faces.

To set notation, denote the trained biometric identification system as f to indicate it is the model being targeted in the attack, with x a biometric input and y = f(x) the corresponding output. The attack proceeds in stages:

Training

The attacker receives samples of the form (x, f(x)).

At the end of this stage the attacker outputs a model A. It should be the case that for unseen pairs (x, f(x)) the value A(f(x)) is similar to x.

Test/Attack

The attacker receives values y = f(x) and inverts them to produce realistic biometric values A(y).
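
To make the black-box setting concrete, the following is a minimal sketch (in Python) of how the attacker could assemble these training pairs; the names query_target and attack_images are illustrative stand-ins and do not come from the paper.

```python
# A minimal sketch of the black-box data-collection step described above.
# query_target represents black-box query access to the deployed biometric
# model f, and attack_images is the adversary's small auxiliary dataset drawn
# from the biometric distribution (both names are illustrative).
def collect_inversion_pairs(query_target, attack_images):
    pairs = []
    for x in attack_images:
        y = query_target(x)      # observe f(x); no access to the model weights
        pairs.append((y, x))     # the attack model A is trained to map y -> x
    return pairs

# At Test/Attack time, a leaked output y is inverted as x_hat = A(y), where A
# is the trained attack model (the GAN generator of Section 1.1).
```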

A limitation of prior work is the need for a large number of training samples. Mai et al. [12] require a training set roughly 1,000 times larger than ours in their attack on the facial biometric. Ahmad and Fuller [13] require 20,000 training samples in their attack on the iris biometric. While large facial and iris datasets exist, model inversion also targets smaller applications. It is thus crucial to determine if model inversion is possible with fewer training points.

We investigate whether the adversary can substitute the output of multiple models in Training in place of more training samples. Salem et al. [14] study the difference a model undergoes when it is updated in an online fashion. Their work considers small updates while we explore larger changes when the target model’s dataset undergoes deletion or addition of classes. We consider the following new attack setup (for a parameter k, the number of models):

Training

Let f_1, ..., f_k be models used in training a final model f = f_k. The attacker receives samples of the form (x, f_1(x), ..., f_k(x)).

At the end of this stage the attacker outputs a model A. It should be the case that for unseen pairs (x, f(x)) the value A(f_1(x), ..., f_k(x)) is similar to x.

Test/Attack

The attacker receives output values from the final model (and, in some settings, from all k models) and inverts them to produce realistic biometric values.

Multiple works have considered attack avenues to steal models [15, 16, 17]. We review three settings where multiple models are available in Section 2.1. We ask whether an attacker who sees the output of multiple models when training the attack model is able to invert more effectively. The research question of this work is:

How can multiple models be used effectively to reduce the training set size?

We consider training set sizes of 2,000 for the iris and 1,500 for the face (see Tables 1 and 2). Ahmad and Fuller [13] used 20,000 samples, and Mai et al. [12] used a training set roughly 1,000 times larger than ours.

Our attacks are performed on raw templates, which are output from biometric networks and stored insecurely. There are two relevant lines of work on securing biometric models. One line shows how to encrypt the output of biometric networks [1, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31] in a way that authentication systems still work. These methods have constraints where the provided security (in bits) is small or authentication is slow. A second line shows how to securely train models and allow these models to be evaluated privately [32, 33, 34]. Our attacks are black box but do need the ability to observe the output f_i(x) for multiple models f_i.

1.1 Attack Approach

The high-level architecture of our inversion attack is a generative adversarial network, or GAN [35], as in prior work on biometric model inversion [12, 13]. A GAN is a pair of algorithms, a generator and a discriminator. In usual image applications, the generator takes random noise as input. The generator’s goal is to produce images that the discriminator cannot distinguish from true training samples. As with previous work [12, 13], we modify this paradigm, making the GAN generator take the output of the biometric transform as input. The discriminator is then given either real biometrics or those created by the generator. By fooling the discriminator, the generator works as our attack model and an inverter for the biometric transform. Yang et al. [36] proposed a simple mechanism for incorporating multiple models:


Random

During attack model training, a random model f_i is selected and the pair (x, f_i(x)) is provided as ground truth for the GAN.

We show visual reconstructions of irises in Figure 1, deferring discussion of results and visual reconstructions of faces until Section 5. The Random method does recover the high-level shape of the iris but is missing crucial details such as 1) a crisp boundary between the iris and the pupil and 2) iris texture.

Figure 1: Visual improvement for our proposed alignment method on feature extractors. Rows represent different irises. Columns indicate the method used, as described in the Introduction and Section 4.2. All multiple-model results use the attack setting described in Section 2.1.

1.2 Our Contribution

Let n denote the output dimension of f. Yang et al. [36] consider a GAN with input length n. All of our new methods consider a GAN that takes inputs of length k·n. We call these networks input-augmented GANs. We introduce the following input-augmented GANs:


Concatenation

In this approach the GAN’s training samples are the entire tuple (x, f_1(x), ..., f_k(x)).

Structured Random

Sample a random f_i(x) as above and set the other components of the length-k·n vector to 0. That is, the input is (0, ..., 0, f_i(x), 0, ..., 0).

Structured Random w/ Alignment Loss

This approach follows the structured random approach above, but also asks the GAN to predict the index i.

Going forward we refer to these three methods of incorporation as concatenation, structured random, and structured random with alignment, respectively. We consider two types of target models: feature extractors and classifiers (see Section 4.3). Figure 1 shows examples of iris reconstruction from the output of a feature extractor using the different incorporation methods.

Structured random with alignment provides the best results (see visual reconstructions in Figure 1 and detailed results in Section 6). This is interesting in comparison to structured random because the only difference is that structured random with alignment asks the model to remember which location is nonzero. Even though the value is “easy” to predict, forcing the GAN to predict this value improves overall performance. We believe that the GAN is better able to distinguish between inputs from different models, which leads to better inversion on the final model f. Our accuracy results are in Tables 1 and 2.

1.3 Application to Membership Inference

We consider a secondary application to membership inference [37]. There are many different attacker postures for such attacks (see Papernot et al. [38] for an overview); in this work we make the standard assumption that the adversary has query access to the target model and part of the dataset that was used in training the target model. The goal of membership inference is to determine whether a particular sample was used in the training of a network. That is, given (x, f(x)), decide if x was used to train the model f.

Inferring whether a sample was used in training a network has privacy implications, allowing an attacker to infer a user’s race, gender, or their ability to access a system.

Membership inference attacks are usually conducted on classification networks. Mitigation strategies include regularization methods such as dropout and weight regularization [37]. Nasr et al. [39] study membership inference attacks in detail under many settings; they also consider model updates. Leino et al. [40] look at white-box membership inference attacks and conclude that a small generalization error does not guarantee safety against attacks. Chen et al. [41] study membership inference attacks on GANs. Melis et al. [42] explore membership inference attacks with model updates.

Our attack model for inferring membership is a simple neural network with 3 layers, with the last layer predicting membership of the input vector. Even with this simple network, membership inference against iris feature extractors is accurate (Tables 3 and 4). Membership inference is substantially harder on classification networks than on feature extractors.

As before, we compare the different incorporation methods. Membership inference on classification networks using a single model is 52% accurate for the iris, but using structured random with alignment raises accuracy up to 62%.

Organization

The rest of this work is organized as follows: Section 2 describes the system architecture, Section 3 reviews how feature extractors and classifiers are used in biometrics, Section 4 describes our attack model, Sections 5, 6 present evaluation methodology and results respectively. Section 7 presents both methods and results for membership inference. Section 8 concludes.

2 Adversarial Model

This section describes the adversarial model and the goals of model inversion and membership inference. We defer discussion of measuring attacker success until Section 4.3. Recall that we use x to denote the input to the target network and y = f(x) to indicate the resulting output. The goal of the attack is to train a network A that on input f(x) can predict x. As mentioned in the Introduction, we assume that the adversary has access to the output of multiple related models. That is, in the Training stage they receive tuples of the form (x, f_1(x), ..., f_k(x));

the goal of the training stage is to produce a model A such that A(f_1(x), ..., f_k(x)) is similar to x.

We use subscripts to index the models, with f = f_k the final target model. The parameter k controls how many models the adversary has access to. The second stage of the attack is denoted as Test, where we assume outputs y = f(x) leak and the attacker will reconstruct x.

2.1 Accessing Multiple Models

We consider three types of related models that may be available to an attacker, which we call the intermediate, update, and unlearning settings.

In the first setting, the intermediate setting, we consider the intermediate models that are created when a model is first trained. Due to the complexity of modern models, training is a computationally intensive process and is done in epochs. Since training is a complex, error-prone, and often hand-tuned process, these intermediate models and their performance on training data are stored for debugging purposes; we assume the adversary has access to these models. Salem et al. [14] considered the related question of whether the difference between two models, when an individual item is added, leaks information about that item. The target iris and face recognition models converge in 100 epochs (see Section 5.1).

We utilize five different models saved during training in this attack: the models after 0% (pre-trained on ImageNet), 25%, 50%, 75%, and 100% of training. At attack time, we consider two settings: when only the final model’s output is available and when all models are available. The setting where all models are available at test time is used to compare the different methods for incorporating multiple vectors and is not intended to be realistic.

For the update attack, we assume the attacker has the ability to insert a new class that will be incorporated into training of either a classification or feature extractor network. Biometric identification systems may need to be retrained when a new user is added.

The update setting corresponds to the addition of a new user into the system that needs to be learned by the model. In this case the attacker may be able to prepare the images used in training the model on the new user. That is, the images need not come from the honest biometric distribution. A natural setting in which an attacker could perform the update and unlearning attacks is federated learning [33, 43]. In this setting, a model is trained but the adversary asks the model to learn on new images.

Images in the update attack are crafted by the adversary. The updates are crafted by taking a normal biometric image and applying a Gaussian blur with a 3x3 kernel to all images of the new user being added. Blurring makes images from different classes appear similar. Fredrikson et al. [44] perform a similar attack where they recover original faces from blurred-out images; however, we apply a smaller amount of blur. Since models are only being fine-tuned for an update, we consider the following training regime for the target model: we retrain the target model for 10 epochs, and the attacker has access to the original model and the models after the 5th and 10th epochs.
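
As a rough illustration, a blurred update image could be produced as in the sketch below (using OpenCV). The 3x3 kernel comes from the text; the Gaussian sigma is not specified in the source, so the value here is only a placeholder.

```python
import cv2

def craft_update_images(user_images, sigma=1.0):
    # Apply a Gaussian blur with a 3x3 kernel to every image of the new user.
    # sigma is a placeholder; the source does not state the value used.
    return [cv2.GaussianBlur(img, (3, 3), sigma) for img in user_images]
```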

The unlearning setting corresponds to the removal of a person from the system. Such a removal may occur due to right-to-be-forgotten legislation, which has resulted in the field of machine unlearning [45, 46, 47]. In this setting, the adversary requests that an individual be removed but has no control over how this removal is processed. Recent laws and regulations have taken privacy risks into account. The General Data Protection Regulation (GDPR) [48] in the European Union and the California Consumer Privacy Act (CCPA) [49] in the United States call for more action to protect personal data and control how and where data is stored. In addition to simplifying rules on data storage and privacy, this legislation grants a person control over their personal data; consequently, a person can ask a company to remove their data.

This right to be forgotten is hard to implement in terms of machine learning models. One naive solution is to retrain the model on the dataset after removing the required data. However, some machine learning models take months of computation time to train, and the dataset in use may also be large. This problem has given rise to a subfield of machine learning called machine unlearning [45, 46, 47]. Some works perform unlearning by updating model parameters rather than retraining [47].

Consider a biometric system where a model is trained on a set number of people. When an individual requests that their data be deleted from the training dataset and no longer be part of the model, the biometric system must perform machine unlearning to remove the individual’s data. We assume machine unlearning is performed naively: completely retraining the model after deleting the required data from the training dataset. Attacking more sophisticated unlearning strategies is an important piece of future work.

We retrain the target model (to perform unlearning) for 100 epochs on the new training dataset. We utilize models after 0, 25%, 50%, 75% and 100% of re-training has been completed for a total of 5 models. We assume the adversary removes multiple people/classes from the training set of a model. We remove 10 classes from our iris application and 5 classes from our face dataset and then retrain the model.

For all attacks except for the update attack, the adversary can passively receive normal biometric images and their corresponding outputs. As mentioned above, for the update attack these images are prepared specially and differ from the normal biometric distribution.

In our attacks we only assume that the output of the model, either a template or a classification vector, is revealed. This is in contrast to attacks that assume knowledge of the internal weights of the model, such as Fredrikson et al. [44].

3 Review of Types of Biometric Target Models

Feature extractors Feature extraction networks [50] output an n-dimensional feature vector. Feature extractors are trained to generate embeddings from given biometric inputs. As an example, given a biometric input x, a network generates an embedding/feature vector y = f(x). This embedding is known as a template and is stored in a database (in mobile devices this storage is inside of a secure enclave). At a subsequent reading of the same biometric input (with noise), another embedding y′ is produced. A distance metric such as Euclidean distance is used to compare the two vectors y and y′. The biometric system will authenticate an individual successfully if the distance is below a precomputed threshold, denoted t. Training feature extraction networks generally does not require a set number of classes, only labeling which samples should be grouped together or pushed apart. Feature extractors are used in applications where not all users are known when the model is trained.
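
As a small sketch of this matching step (a simplified illustration, not code from the paper), authentication with a feature extractor reduces to a distance check against the stored template:

```python
import numpy as np

def authenticate(extract, enrolled_template, new_reading, t):
    # extract() stands in for the feature extraction network f.
    new_template = extract(new_reading)
    distance = np.linalg.norm(enrolled_template - new_template)  # Euclidean distance
    return distance <= t  # accept if within the precomputed threshold t
```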

Classifiers Classification networks output a c-dimensional classification vector, where c is the (known) number of classes. The objective is to learn a classification vector such that every input belonging to one of the c classes is assigned to its class in the classification vector. Usually, the output layer of a classification network is a softmax-based layer which takes the preceding (feature vector) layer and maps it to a classification vector. When all users are known at training time, classifier use for biometric identification is straightforward. A biometric is deemed to belong to class i if the classification output indicates membership in class i with high enough confidence (which depends on the application).
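
A corresponding sketch for classifier-based identification (again a simplified illustration; the confidence threshold is application dependent):

```python
import numpy as np

def accept_as_class(classification_vector, i, confidence_threshold):
    # Accept the input as class i only if the softmax output assigns class i
    # the highest confidence and that confidence clears the threshold.
    probs = np.asarray(classification_vector)
    return int(np.argmax(probs)) == i and probs[i] >= confidence_threshold
```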

While classifiers and feature extractors have similar network architectures, feature extractors are expected to identify new individuals that were not seen during training.

3.1 Attack Goal

As a reminder, for a target network f the goal is to learn a transform A such that, for y = f(x), the values A(y) and x are similar.

We briefly review the goals for the two settings of feature extractors and classifiers. For feature extraction networks the goal is, given y = f(x), to produce an x′ such that f(x′) is close to y. In classification networks, for class i, the goal is to produce an x′ that will be classified as class i with the highest confidence possible [37]. As we are using a GAN to produce these images, there is a secondary goal that x′ appears similar to a valid biometric. This may not be the case if x′ were simply the class average [36].

The reason for the difference in goal is the difference in how these network types are used in identification systems. Feature extractors are used to extract templates that are compared with a stored value; thus, the goal is to recreate the stored template as accurately as possible. Classifiers judge an input to be in a class if the input has “high enough” confidence of being assigned to that class, so the goal is simply to maximize that confidence. The attacker’s goal in both settings is to produce an image that will authenticate with the highest probability. In the literature the feature extractor inversion task is called reconstruction [10] while the classifier inversion task is called model inversion [12]. We do both in this work. To summarize, the goals of model inversion are as follows:

Feature Extractor

Given y = f(x), find x′ such that f(x′) is similar to the original y.

Classifier

For class i, find x′ that is labeled as class i with high confidence and cannot be distinguished from a real image.

In membership inference, the goal is the same in both cases: given (x, f(x)), determine if x was part of the training set.

4 Attack Model Design

Since we utilize models at different stages of training, their accuracy on the training and testing dataset will be different. This difference will show in the vectors obtained by querying the target models. The goal is to boost membership inference and model inversion accuracy by utilizing these additional vectors per input instead of just a single vector.

We assume black-box access to the target network. The attack setting defined by Shokri et al. [37] for membership inference attacks uses shadow models trained on a shadow dataset to mimic the target model. These shadow models are then used to generate training data for attack models ultimately used to perform inference attacks.

The adversary is assumed to have an attack dataset that comes from the same distribution as the target model’s training data [51]. We utilize the same assumptions as the original membership inference attack [37] except that we attack the target model directly (having black-box access) without generating shadow models. We also assume the system training the target model saves models at each epoch. In all three attack settings (intermediate, update, and unlearning), new models are generated which can be coupled with old models so that more accurate attacks can be conducted. Since we do not train shadow models, we relax the assumption of the adversary having a disjoint training dataset.

Figure 2: Inversion attack network. A biometric recognizing model is queried with biometric images to generate vectors. These vectors are fed to the generator which reconstructs an image from the vector.

4.1 GAN design

Our attack network is a GAN [35]. A GAN architecture has two sub-models, a generator which generates images and a discriminator which judges how good the generated images are. This is shown visually in Figure 2. Usually, the input of the generator is a noise vector sampled from a multivariate normal distribution. In our case the generator of the inversion attack model is an autoencoder which takes as input a feature vector (or a prediction vector) and tries to reconstruct the corresponding image by minimizing multiple loss functions. The core of prior biometric model inversion attacks is also a GAN [12, 13].

The discriminator and generator of a GAN model can be summarized in two loss equations, following the standard formulation of [35]:

L_D = -E[log D(x)] - E[log(1 - D(x̂))]   (1)
L_G = -E[log D(x̂)]   (2)

where L_D is the discriminator loss and L_G is the generator loss. The variables x and x̂ correspond to the original and inverted image respectively, and D(·) is the discriminator’s output. The discriminator’s loss function measures its classification performance on original versus inverted images. The generator’s loss function measures how well the discriminator does in identifying the fake images it produces.

For feature extractor target networks, the GAN model takes as input a feature vector. For classification networks, the GAN takes as input a classification vector. Recall that these two attacks have different goals: the feature extractor GAN is trying to reproduce x as accurately as possible, while the classifier GAN is trying to find an instance that will be assigned to the appropriate class label with the highest confidence possible. We do not explicitly train our models to generate samples which will have a high inversion attack accuracy. Instead, minimization of visual difference between original and reproduced samples is a proxy for inversion attack accuracy.

The generator loss functions include the L1 loss, the SSIM [52] loss, and the perceptual loss [53] between the inverted and actual image. We minimize: 1) the L1 loss, 2) the perceptual loss, and 3) the structural dissimilarity (or equivalently, maximize structural similarity). Finally, since a GAN consists of a generator and a discriminator, the generator is fine-tuned by the output of the discriminator. Our final objective for the generator, including the discriminator loss, is:

L_total = L_1 + L_perceptual + L_DSSIM + L_G   (3)

where L_G is the discriminator output affecting the generator, combined with the other reconstruction loss functions as in [13].
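
A minimal sketch of this combined generator objective in TensorFlow (the framework the paper uses for its target networks) appears below. The loss weights and the choice of perceptual feature network are assumptions for illustration; they are not specified by the paper.

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy()

def generator_loss(x_real, x_fake, d_fake, perceptual_net,
                   w_l1=1.0, w_perc=1.0, w_ssim=1.0, w_adv=1.0):
    # x_real, x_fake: original and inverted images in [0, 1]; d_fake: the
    # discriminator's output on x_fake; perceptual_net: a fixed feature
    # network used for the perceptual loss (an assumption, e.g. a pretrained CNN).
    l1 = tf.reduce_mean(tf.abs(x_real - x_fake))
    perc = tf.reduce_mean(tf.square(perceptual_net(x_real) - perceptual_net(x_fake)))
    dssim = 1.0 - tf.reduce_mean(tf.image.ssim(x_real, x_fake, max_val=1.0))
    adv = bce(tf.ones_like(d_fake), d_fake)  # reward fooling the discriminator
    return w_l1 * l1 + w_perc * perc + w_ssim * dssim + w_adv * adv
```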

4.2 Incorporating multiple vectors

There are multiple ways the additional vectors (for both attack types) per image can be used to better train our attack models.

In the random approach, for every update to our attack models we sample a vector randomly (with probability 1/k) from the outputs of one model among the k models. The inversion model then learns to invert these feature vectors to their corresponding images. The inference attack model learns to differentiate between training and non-training samples.

Augmented GAN mechanisms

Merging the k vectors to form a long vector is another way of feeding additional information to our attack models.

The concatenation method takes k vectors of size n to form an input vector of size k·n. The attack now learns from multiple models in one training step. In the structured random approach, we randomly sample a vector as in our random approach, but instead of an n-sized vector we form a k·n-sized vector with all zeros except the randomly sampled vector f_i(x) placed at index i: (0, ..., 0, f_i(x), 0, ..., 0).

Intuitively, we force the inversion model to differentiate between vectors gathered from multiple models. This enables the inversion model to learn how the output of a target model changed as it trained (or untrained) to convergence. The attack model now learns from a single vector in a single learning step while having the context of multiple vectors across multiple learning steps.

The structured random with alignment loss method additionally forces the attack model to predict the index i which holds the non-zero vector. In this method we add to the GAN an additional loss term, a softmax cross-entropy loss applied to an intermediary layer in the generator model. This layer predicts the index of the randomly chosen vector. This prediction forces the generator model to implicitly learn features from multiple vectors extracted from multiple models.
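
The sketch below illustrates the structured random input construction and the alignment loss (a simplified illustration; the variable names are ours, not the paper's). Here outputs holds the k vectors f_1(x), ..., f_k(x), each of length n.

```python
import numpy as np
import tensorflow as tf

def structured_random_input(outputs):
    # outputs: list of k vectors of length n from the k models.
    k, n = len(outputs), len(outputs[0])
    i = np.random.randint(k)                  # randomly chosen model index
    v = np.zeros(k * n, dtype=np.float32)
    v[i * n:(i + 1) * n] = outputs[i]         # place f_i(x) at slot i, rest zero
    return v, i

# Alignment loss: cross-entropy between the true index i and the generator's
# index prediction, produced by an intermediary softmax layer over k positions.
align_loss = tf.keras.losses.SparseCategoricalCrossentropy()

def alignment_loss(true_index, predicted_index_probs):
    return align_loss(tf.constant([true_index]),
                      tf.expand_dims(predicted_index_probs, axis=0))
```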

This forces the model to further differentiate between vectors from multiple models by forcing the attack model to pass index information across its weights. The alignment loss allows the attack model to better understand how a target model was trained. We show the setup for structured random with alignment in Figure 3.

Figure 3: Vector alignment process for the structured random with alignment method of incorporating multiple vectors. When reconstructing from k vectors, only a single feature vector is used while the rest are set to zero. The inversion model implicitly learns from multiple vectors over the entire training process.

4.3 Measuring Success

We use two standard accuracy metrics for feature extractors [12, 13].


Rank-1

How frequently the inverted biometric is closest to a biometric from the same class, excluding the reading used to invert. A true positive for Rank-1 accuracy is when the reconstructed image’s extracted feature vector is closest to a feature vector belonging to a member of the same class as the target image. That is, for a set of different biometrics consisting of pairs (x_j, f(x_j)), a true positive occurs when the template f(x_j) closest to f(x′) (excluding f(x) itself) belongs to the same class as x, where x′ = A(f(x)).

Type1

Type1 considers the quality of the reconstructed biometric with respect to a specific distance threshold t. That is, we first compute t as the maximum value such that the false accept rate (FAR) of an image of a different biometric (in the underlying dataset) is at most a fixed rate on the target model’s training dataset. It then considers how frequently the reconstructed image produces a feature vector that would be accepted by a system with threshold t. Mathematically, a true positive occurs when d(f(x′), f(x)) ≤ t, where x′ = A(f(x)). A sketch of both metrics appears after the comparison below.

Rank-1 accuracy is more instructive for applications with all-to-all matching, while Type1 accuracy is more important for a spoofing application where one wishes to break into a biometric authentication system.
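
A sketch of both metrics follows (a simplified illustration with our own variable names): templates[j] = f(x_j) are the original templates, recon[j] = f(A(f(x_j))) are the templates extracted from the reconstructed images, labels[j] is the class of x_j, and t is the threshold chosen from the FAR computation above.

```python
import numpy as np

def rank1_accuracy(recon, templates, labels):
    hits = 0
    for j in range(len(recon)):
        d = np.linalg.norm(templates - recon[j], axis=1)
        d[j] = np.inf                     # exclude the reading used to invert
        hits += int(labels[int(np.argmin(d))] == labels[j])
    return hits / len(recon)

def type1_accuracy(recon, templates, t):
    # Accept if the reconstruction's template is within threshold t of the
    # template of the image it was inverted from.
    d = np.linalg.norm(templates - recon, axis=1)
    return float(np.mean(d <= t))
```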

For classification networks, we consider traditional accuracy:

Accuracy

For x′ = A(f(x)), how frequently x′ is labeled with the same class as x.

In all attacks, we do not use any images used to train the target model in the attack. Instead, we probe the target model with a probe dataset [37] that is smaller than the training dataset. This probe dataset is class disjoint from the training dataset for the feature extraction setting.

5 Evaluation

This section details the datasets used in training both the target models and the attack, specifies the training methodology for the target models, and describes accuracy metrics used for the attacks.

We utilize two datasets in our work, one for the iris and one for faces. The ND-IRIS-0405 [54, 55] dataset contains 64,980 grayscale iris images from 356 subjects. The classes are highly imbalanced; some classes have many more images than others. Left and right irises of an individual are treated as different classes [56], resulting in 712 classes.

The Labeled Faces in the Wild (LFW) [57] face recognition dataset contains 13,233 images of 5,749 people downloaded from various websites, with 1,680 people having 2 or more images. For our evaluation we only consider people with more than 15 images, yielding 89 classes.

5.1 Target models training

Our target models use the DenseNet-169 architecture from the original DenseNet paper [58]. We use the loss function from SphereFace [6] coupled with the Adam optimizer [59] to train the target networks using TensorFlow [60]. Dropout [61] has been studied in the literature as a defense against membership inference attacks [51]. We train our networks with dropout applied to the fully connected layer, which is the second-to-last layer of our target classification network; the dropout ratio used is 0.5. DenseNets provide near state-of-the-art recognition accuracy when coupled with dropout. Our target networks are thus well generalized and possess some defense through the use of dropout. Mai et al. [12] and Ahmad and Fuller [13] do not use any dropout in their target networks.
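
A sketch of the target classifier setup in TensorFlow is shown below. The 512-unit fully connected layer, the input size, and the use of a standard cross-entropy loss in place of the SphereFace loss are simplifications/assumptions for illustration, not details from the paper.

```python
import tensorflow as tf

def build_target_classifier(num_classes, input_shape=(128, 128, 3)):
    backbone = tf.keras.applications.DenseNet169(
        include_top=False, weights="imagenet",
        input_shape=input_shape, pooling="avg")
    model = tf.keras.Sequential([
        backbone,
        tf.keras.layers.Dense(512, activation="relu"),  # fully connected layer (size assumed)
        tf.keras.layers.Dropout(0.5),                   # dropout ratio 0.5 as a defense
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss="sparse_categorical_crossentropy")  # stand-in for the SphereFace loss
    return model
```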

We now discuss the training and probe dataset splits for our feature extraction and classification networks. For our attacks against feature extraction networks, the target model’s training data is class disjoint from the probe attack dataset. This is not the case for classification networks, which are designed for a predefined set of classes.

Iris - Feature Extraction

The target model for the iris dataset is trained on left iris images of all (356) subjects forming a private training set of roughly 10000 images. We assume a probe dataset of 2000 images from right irises of 40 subjects.

Iris - Classification

We train our target model on left iris images of all subjects. The total number of images is 10000 with training done on 7000. The remaining 3000 left iris images from these subjects are used to form the probe dataset.

Face - Feature Extraction

Of the classes with more than 15 images, a subset of classes and their images are used as probe images. The target model for the face dataset is trained on the remaining classes and images.

Face - Classification

The target model is trained on all 89 classes, leaving out 15% of the images from each class to form the probe dataset.

The iris images are segmented [62] so that they do not include any additional texture besides that of the iris. The reconstruction attack model is therefore forced to learn the texture information stored in the output feature vector. We utilize deep-funneled images [63] for the LFW dataset and crop the images to a size of 128x128 to include the face area only.

6 Results

We perform two attacks in multiple configurations on two biometric datasets to test the efficacy of our proposed pipelines.

6.1 Feature Extraction Networks

We show Type-1 and Rank-1 accuracy for our attacks. An overview of results is given in Tables 1 and 2. Figure 1 showed visual results for the iris; visual results for the facial biometric are in Figure 4.

Single Model Results

Type-1 attack accuracy when inverting feature vectors using access to a single target model is 59% and 85% for the iris and face datasets respectively. In the Type-1 setting a reconstructed biometric is matched with its original counterpart; we obtain a Type-1 attack accuracy of 59% compared to Ahmad and Fuller [13], who achieve 75% while using 10 times the training set size. These results are shown in Table 1.

Dataset  Type of Models  # Models  Training set size  Type1  Rank-1  Classification Acc.
ND       Single          1         2000               59%    35%     81%
ND       Intermediate    5         2000               65%    45%     82%
ND       Update          3         2000               61%    38%     81%
ND       Unlearn         5         2000               60%    44%     82%
ND       [13]            1         20000              75%    96%     -
LFW      Single          1         1500               85%    82%     74%
LFW      Intermediate    5         1500               89%    84%     78%
LFW      Update          3         1500               87%    84%     75%
LFW      Unlearn         5         1500               87%    83%     73%
LFW      [12]            1         -                  99%    -       -

Table 1: Comparison of accuracy when using multiple models with the random method of incorporation, for both feature extraction networks and classification networks, per dataset and attack setting. Accuracy for classification networks is how frequently an image is assigned the correct class label.
Dataset  Incorporation Method            # Models  Training set size  Models for Test  Type1  Rank-1  Classification Acc.
ND       Random                          5         2000               Final            65%    45%     82%
ND       Concatenation                   5         2000               Final            48%    27%     78%
ND       Concatenation                   5         2000               All              50%    30%     86%
ND       Structured Random               5         2000               Final            66%    46%     81%
ND       Structured Random w/ Alignment  5         2000               Final            72%    53%     83%
ND       Structured Random w/ Alignment  5         2000               All              65%    45%     81%
ND       [13]                            1         20000              -                75%    96%     -
LFW      Random                          5         1500               Final            89%    84%     78%
LFW      Concatenation                   5         1500               Final            78%    75%     74%
LFW      Concatenation                   5         1500               All              80%    78%     81%
LFW      Structured Random               5         1500               Final            89%    84%     79%
LFW      Structured Random w/ Alignment  5         1500               Final            91%    86%     79%
LFW      Structured Random w/ Alignment  5         1500               All              89%    84%     79%
LFW      [12]                            1         -                  -                99%    -       -

Table 2: Comparison of methods for incorporating multiple models. All multiple-model data uses 5 models. Results cover both feature extraction networks and classification networks, per dataset and attack type. Accuracy for classification networks is how frequently an image is assigned the correct class label. The Models for Test column indicates whether all models or just the final model were used during testing.

Rank-1 accuracy measures the probability of a reconstructed biometric being matched with an original biometric of the same class (and not itself). Our Rank-1 accuracy is lower. Our inversion network seems to do better at the specific task of inverting a template to a particular image and does not generalize well. We attribute this to 1) slight overfitting of the inversion network due to our small training dataset and 2) differences in the underlying target network in comparison to the target network of Ahmad and Fuller [13]. These differences include a more modern loss function and the use of dropout. Additionally, our target network has a lower Rank-1 accuracy on the test set; Ahmad and Fuller used a target network with an accuracy of 99.5%. This accuracy changes the threshold distance used to accept or reject biometric comparisons; this change affects Type1 attack accuracy but not Rank-1. This larger distance threshold may explain the relatively high Type1 accuracy.

Figure 4: Alignment helps with reconstructing face features such as facial hair and correcting skin tone.

This split between Type1 and Rank-1 is not observed for the facial task. Using a single model to obtain the training dataset for the inversion attack network, we achieve approximately 85% accuracy in the Type-1 setting and approximately 82% in the Rank-1 setting. Face images have many facial features, in addition to some background in the LFW images, making them easier to invert and harder for the target model to achieve high test accuracy. Iris images have only the iris texture, while other features such as the skin and eyelid are segmented out.

Incorporating Multiple Models

Turning to the setting of multiple models, we present the gain in using multiple models with the random technique in Table 1. In all settings, multiple models improve the accuracy of the inversion network. Because all attack settings perform similarly, in comparing how to effectively incorporate multiple models we focus on the intermediate setting.

Results for different incorporation techniques are presented in Table 2. The largest gain comes from the structured random with alignment technique. For the iris, this technique boosts Rank-1 accuracy from 45% (with the random method) to 53%. Input-augmented GANs boost attack accuracy in most settings.

The structured random technique performs nearly identically to the random technique. This is of particular interest compared to the structured random with alignment technique, whose only addition is forcing the model to learn the provided index, which is easy to predict.

Discussion The concatenation technique can hurt performance when only a single model output is available at testing. The most natural explanation is that feature vectors from models which have not converged hold information that is hard for our inversion models to use. However, if one assumes that the adversary sees the output of all models at test time, that is, the adversary sees (f_1(x), ..., f_k(x)) at test time, this accuracy improves. This indicates that the problem may be the mismatch between the format of the training and testing data. We note that this phenomenon is reversed for structured random with alignment: providing all vectors at test time is harmful. This supports the hypothesis that structured random with alignment loss is superior for natural attack scenarios.

The accuracy gain when using multiple models is less pronounced for the LFW face dataset. In this setting, we believe that the small amount of training data results in the attack model overfitting the training data. However, our attack achieves close to state-of-the-art inversion accuracy while using orders of magnitude less training data.

6.2 Classification Networks

Model inversion attacks on classification networks output the average of a certain class (see discussion in Section 3.1). The attack is successful if the reconstructed biometric images are classified to their correct class by the target model.

Single Model Results

Our inversion attack models perform at 81% and 74% attack accuracy for the iris and face datasets respectively. Results are displayed in Table 1.

Incorporating Multiple Models

Structured random with alignment loss bumps the accuracy to 83% and 79% respectively. We do not see a proportional increase in inversion accuracy as we saw with feature extraction networks. Classification networks output prediction vectors which are simple and do not hold much information. Previous works have even truncated prediction vectors [36] for better inversion.

If all models’ outputs are available at test time, the concatenation method improves but structured random with alignment does not. This same phenomenon was observed for feature extraction networks.

Recall that for classification networks the traditional goal is to output the class average, a value that will be assigned to the class with as high probability as possible. Prior works have not considered that this average may not appear similar to a real biometric (such as Fredrikson et al. [3]). When training and testing with concatenated prediction vectors, our inverted images vary across a class instead of being the same class-average image. An example of different images for the same iris biometric is shown in Figure 5. We attribute this to the additional information in the multiple vectors which form the concatenated vector. A similar phenomenon is seen in the work of Yang et al. [36], where classes unknown to the target model are inverted by a method called alignment (which differs from our alignment loss).

Figure 5: The alignment process enables inversions to vary. Each row shows a different iris, with multiple inversions of the same biometric. For classification networks, a single model always inverts to the class average. However, the alignment method can invert to distinct images that better match the stored template.

6.3 Which attack types perform the best?

We perform a simple experiment to validate which models contribute the most to inversion attack accuracy. With access to the final trained target model and using random sampling for training, an adversary’s reconstructed iris images have a Rank-1 attack accuracy of 35% when provided access to the 1st, 25th, 50th, 75th, and 100th epoch models. This accuracy drops to 15% if the adversary only has access to the 25th and 50th epoch models. Attack accuracy jumps to 36% if the first and 100th epoch models are used by the adversary.

The target model is trained using an off-the-shelf network architecture which was pre-trained on the ImageNet dataset [64], and thus already generates somewhat accurate feature vectors [65]. Of course, the last model generates accurate feature vectors, which allow the adversary to generate good reconstructions. Since inversion accuracy is low when intermediary models have yet to converge, our proposed alignment loss forces the inversion model to differentiate between vectors from multiple models.

7 Membership Inference

7.1 Membership inference attack

In membership inference the goal is the same for feature extractors and classifiers: given (x, f(x)), determine if x was part of the training set. This is a binary classification task. More formally, given a target model f and an input x, we query the target model with x to obtain an output f(x). This will be a feature vector or a classification vector. We assume the attacker has a small dataset from the same distribution as the training data, and this dataset has members from both the target model’s training and testing datasets.

Our attack model is a simple 3-layer fully connected neural network with layer sizes 64, 64, and 1. The last layer infers training set membership. In the special case of structured random with alignment, there are two output layers, predicting membership and the index of the input vector. Our target models are trained as described in Section 5.
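
A sketch of this attack network in TensorFlow (a simplified illustration; input_dim is n for a single feature or prediction vector, or k·n for the augmented inputs, and the second index-prediction head of the alignment variant is omitted):

```python
import tensorflow as tf

def build_membership_attack_model(input_dim):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(input_dim,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # membership probability
    ])
```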

7.2 Results

Dataset  Type of Models  # Models  Training set size  Feat. Extractor Acc.  Classifier Acc.
ND       Single          1         2000               99%                   52%
ND       Intermediate    5         2000               99%                   60%
ND       Update          3         2000               99%                   59%
ND       Unlearn         5         2000               99%                   59%
LFW      Single          1         2000               80%                   68%
LFW      Intermediate    5         2000               76%                   78%
LFW      Update          3         2000               78%                   75%
LFW      Unlearn         5         2000               77%                   76%

Table 3: Membership inference attack accuracy when using multiple models with the random method of incorporation. Accuracy is how frequently the attack predicts correctly whether the item was part of the training set.

Our membership inference attack accuracy numbers are shown in Tables 3 and 4. Feature extractor networks for the iris are easy to attack in all settings, with accuracy of 99%. More classes increase attack accuracy [37] for prediction vectors. For the iris classification network, Table 3 shows that accuracy using a single model is 52%, which is improved to 60% by using multiple models and further to 62% by using structured random with alignment (Table 4). Running this attack on feature vectors should be easier since they contain more information than prediction vectors.

Dataset  Incorporation Method            # Models  Training set size  Models for Test  Feat. Extractor Acc.  Classifier Acc.
ND       Random                          5         2000               Final            99%                   60%
ND       Concatenation                   5         2000               Final            99%                   60%
ND       Concatenation                   5         2000               All              99%                   66%
ND       Structured Random               5         2000               Final            99%                   60%
ND       Structured Random w/ Alignment  5         2000               Final            99%                   62%
ND       Structured Random w/ Alignment  5         2000               All              99%                   70%
LFW      Random                          5         2000               Final            76%                   78%
LFW      Concatenation                   5         2000               Final            82%                   74%
LFW      Concatenation                   5         2000               All              82%                   84%
LFW      Structured Random               5         2000               Final            82%                   75%
LFW      Structured Random w/ Alignment  5         2000               Final            81%                   76%
LFW      Structured Random w/ Alignment  5         2000               All              82%                   86%

Table 4: Membership inference attack accuracy on both classification networks and feature extraction networks for different incorporation methods. All results consider the intermediate attack setting.

For the face, feature extraction networks have lower accuracy, which is actually hurt by using the random method to incorporate multiple models; however, the input-augmented methods slightly outperform a single model (Table 4). As with the iris, using multiple models to attack classification networks has a more pronounced effect on accuracy. For the face, a single model has accuracy of 68%, multiple models improve this to 78%, and structured random with alignment with all models available at test time improves it further to 86%. The LFW dataset is a harder dataset than the iris dataset, taking a longer time to converge. Models at the beginning of training do not output useful feature vectors for membership inference attacks.

Recall that in the model inversion task, providing all models as input at test time improved the performance of a model trained with concatenation but hurt the performance of a model trained using structured random with alignment. As shown in Table 4, for membership inference providing all models improves the performance of both methods, with a strong effect for both the iris and face. We attribute this difference to a difference in the two tasks: model inversion is trying to reconstruct a full image, while membership inference is only classifying an input. As such, it seems the attack model for membership inference is better able to use information from multiple models.

7.2.1 Which attack types perform the best?

As with model inversion, we perform a simple experiment to validate which models contribute the most to inference accuracy. We train our attack model on a single training model and use it to attack the final model produced during the 100th epoch. As expected, accuracy is lowest when using the model output after the 1st epoch to attack the model output after the 100th epoch, and it increases steadily as models from later epochs are used, reaching its highest value when the model from the 100th epoch is used (the same model for training and attack).

8 Conclusion

An adversary can perform model inversion attacks to gain unauthorized access to biometric authentication systems through biometric spoofing. We explore an adversary’s access to deep learning models trained and stored, models generated after a model is updated, and finally models generated after an unlearning request.

In this work we show when multiple models are accessible by an adversary model inversion attacks can be performed with fewer training samples with high attack accuracy. We explore different methods of incorporating multiple models into the attack model training process.

An interesting finding of our work is that while incorporating multiple models using the random method is universally helpful (across biometrics and types of biometric transforms), results using input-augmented GANs are mixed. If only the last model is available at Test time, the concatenation technique can actually hurt performance: for the iris, Type1 accuracy drops from 59% to 48% and is much lower than the 65% achieved by the random method. However, our proposed method of structured random with alignment always improves performance compared to the concatenation technique, improving Type1 accuracy to 72% compared to the 48% of concatenation.

To show the promise of our augmented GAN techniques, we apply them to a secondary application of membership inference. As with model inversion, structured random with alignment performs best when only the final model is available at attack time.

Acknowledgements

The authors thank the reviewers for their valuable help in improving the manuscript. This work was supported in part by NSF Grants # 1849904 and 2141033. This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Air Force Contract No. FA8702-15-D-0001. Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of DARPA.

References

  • [1] N. K. Ratha, J. H. Connell, and R. M. Bolle, “Enhancing security and privacy in biometrics-based authentication systems,” IBM systems Journal, vol. 40, no. 3, pp. 614–634, 2001.
  • [2] A. K. Jain, P. Flynn, and A. A. Ross, Handbook of biometrics.   Springer Science & Business Media, 2007.
  • [3] M. Fredrikson, S. Jha, and T. Ristenpart, “Model inversion attacks that exploit confidence information and basic countermeasures,” in Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, 2015, pp. 1322–1333.
  • [4] S. Marcel, M. S. Nixon, and S. Z. Li, Handbook of biometric anti-spoofing.   Springer, 2014, vol. 1.
  • [5] J. Deng, J. Guo, N. Xue, and S. Zafeiriou, “Arcface: Additive angular margin loss for deep face recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 4690–4699.
  • [6] W. Liu, Y. Wen, Z. Yu, M. Li, B. Raj, and L. Song, “Sphereface: Deep hypersphere embedding for face recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 212–220.
  • [7] H. Wang, Y. Wang, Z. Zhou, X. Ji, D. Gong, J. Zhou, Z. Li, and W. Liu, “Cosface: Large margin cosine loss for deep face recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 5265–5274.
  • [8] K. Wang and A. Kumar, “Cross-spectral iris recognition using cnn and supervised discrete hashing,” Pattern Recognition, vol. 86, pp. 85–98, 2019.
  • [9] Z. Zhao and A. Kumar, “Towards more accurate iris recognition using deeply learned spatially corresponding features,” in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 3809–3818.
  • [10] S. Ahmad and B. Fuller, “Thirdeye: Triplet-based iris recognition without normalization,” in IEEE International Conference on Biometrics: Theory, Applications and Systems, 2019.
  • [11] Y. Zhang, R. Jia, H. Pei, W. Wang, B. Li, and D. Song, “The secret revealer: generative model-inversion attacks against deep neural networks,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 253–261.
  • [12] G. Mai, K. Cao, P. C. Yuen, and A. K. Jain, “On the reconstruction of face images from deep face templates,” IEEE transactions on pattern analysis and machine intelligence, vol. 41, no. 5, pp. 1188–1202, 2018.
  • [13] S. Ahmad and B. Fuller, “Resist: Reconstruction of irises from templates,” in International Joint Conference on Biometrics, 2020.
  • [14] A. Salem, A. Bhattacharya, M. Backes, M. Fritz, and Y. Zhang, “Updates-leak: Data set inference and reconstruction attacks in online learning,” in 29th USENIX Security Symposium, 2020, pp. 1291–1308.
  • [15] F. Tramèr, F. Zhang, A. Juels, M. K. Reiter, and T. Ristenpart, “Stealing machine learning models via prediction APIs,” in 25th USENIX Security Symposium), 2016, pp. 601–618.
  • [16] B. Wang and N. Z. Gong, “Stealing hyperparameters in machine learning,” in 2018 IEEE Symposium on Security and Privacy (SP).   IEEE, 2018, pp. 36–52.
  • [17] T. Orekondy, B. Schiele, and M. Fritz, “Knockoff nets: Stealing functionality of black-box models,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 4954–4963.
  • [18] J. Zuo, N. K. Ratha, and J. H. Connell, “Cancelable iris biometric,” in 2008 19th International Conference on Pattern Recognition.   IEEE, 2008, pp. 1–4.
  • [19] M. Gomez-Barrero, C. Rathgeb, J. Galbally, C. Busch, and J. Fierrez, “Unlinkable and irreversible biometric template protection based on bloom filters,” Information Sciences, vol. 370, pp. 18–32, 2016.
  • [20] J. Bringer, C. Morel, and C. Rathgeb, “Security analysis of bloom filter-based iris biometric template protection,” in 2015 international conference on biometrics (ICB).   IEEE, 2015, pp. 527–534.
  • [21] M. Stokkenes, R. Ramachandra, M. K. Sigaard, K. Raja, M. Gomez-Barrero, and C. Busch, “Multi-biometric template protection—a security analysis of binarized statistical features for bloom filters on smartphones,” in 2016 Sixth International Conference on Image Processing Theory, Tools and Applications (IPTA).   IEEE, 2016, pp. 1–6.
  • [22] Y. Dodis, R. Ostrovsky, L. Reyzin, and A. Smith, “Fuzzy extractors: How to generate strong keys from biometrics and other noisy data,” SIAM journal on computing, vol. 38, no. 1, pp. 97–139, 2008.
  • [23] A. Juels and M. Wattenberg, “A fuzzy commitment scheme,” in Proceedings of the 6th ACM conference on Computer and communications security, 1999, pp. 28–36.
  • [24] A. Juels and M. Sudan, “A fuzzy vault scheme,” Designs, Codes and Cryptography, vol. 38, no. 2, pp. 237–257, 2006.
  • [25] Z. Jin, J. Y. Hwang, Y.-L. Lai, S. Kim, and A. B. J. Teoh, “Ranking-based locality sensitive hashing-enabled cancelable biometrics: Index-of-max hashing,” IEEE Transactions on Information Forensics and Security, vol. 13, no. 2, pp. 393–407, 2017.
  • [26] X. Boyen, “Reusable cryptographic fuzzy extractors,” in Proceedings of the 11th ACM conference on Computer and Communications Security, 2004, pp. 82–91.
  • [27] B. Fuller, X. Meng, and L. Reyzin, “Computational fuzzy extractors,” in International Conference on the Theory and Application of Cryptology and Information Security.   Springer, 2013, pp. 174–193.
  • [28] F. Hernández Álvarez, L. Hernández Encinas, and C. Sánchez Ávila, “Biometric fuzzy extractor scheme for iris templates,” 2009.
  • [29] J. Bringer, H. Chabanne, G. Cohen, B. Kindarji, and G. Zémor, “Optimal iris fuzzy sketches,” in 2007 First IEEE International Conference on Biometrics: Theory, Applications, and Systems.   IEEE, 2007, pp. 1–6.
  • [30] D. Keller, M. Osadchy, and O. Dunkelman, “Fuzzy commitments offer insufficient protection to biometric templates produced by deep learning,” arXiv preprint arXiv:2012.13293, 2020.
  • [31] R. Canetti, B. Fuller, O. Paneth, L. Reyzin, and A. Smith, “Reusable fuzzy extractors for low-entropy distributions,” Journal of Cryptology, vol. 34, no. 1, pp. 1–33, 2021.
  • [32] P. Mohassel and Y. Zhang, “Secureml: A system for scalable privacy-preserving machine learning,” in 2017 IEEE Symposium on Security and Privacy (SP).   IEEE, 2017, pp. 19–38.
  • [33] Q. Yang, Y. Liu, Y. Cheng, Y. Kang, T. Chen, and H. Yu, “Federated learning,” Synthesis Lectures on Artificial Intelligence and Machine Learning, vol. 13, no. 3, pp. 1–207, 2019.
  • [34] B. D. Rouhani, M. S. Riazi, and F. Koushanfar, “Deepsecure: Scalable provably-secure deep learning,” in Proceedings of the 55th Annual Design Automation Conference, 2018, pp. 1–6.
  • [35] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in neural information processing systems, 2014, pp. 2672–2680.
  • [36] Z. Yang, J. Zhang, E.-C. Chang, and Z. Liang, “Neural network inversion in adversarial setting via background knowledge alignment,” in Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, 2019, pp. 225–240.
  • [37] R. Shokri, M. Stronati, C. Song, and V. Shmatikov, “Membership inference attacks against machine learning models,” in 2017 IEEE Symposium on Security and Privacy (SP).   IEEE, 2017, pp. 3–18.
  • [38] N. Papernot, P. McDaniel, A. Sinha, and M. P. Wellman, “Sok: Security and privacy in machine learning,” in 2018 IEEE European Symposium on Security and Privacy (EuroS&P).   IEEE, 2018, pp. 399–414.
  • [39] M. Nasr, R. Shokri, and A. Houmansadr, “Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning,” in 2019 IEEE Symposium on Security and Privacy (SP).   IEEE, 2019, pp. 739–753.
  • [40] K. Leino and M. Fredrikson, “Stolen memories: Leveraging model memorization for calibrated white-box membership inference,” in 29th USENIX Security Symposium (USENIX Security 20), 2020, pp. 1605–1622.
  • [41] D. Chen, N. Yu, Y. Zhang, and M. Fritz, “Gan-leaks: A taxonomy of membership inference attacks against generative models,” in Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security, 2020, pp. 343–362.
  • [42] L. Melis, C. Song, E. De Cristofaro, and V. Shmatikov, “Exploiting unintended feature leakage in collaborative learning,” in 2019 IEEE Symposium on Security and Privacy (SP).   IEEE, 2019, pp. 691–706.
  • [43] K. Bonawitz, H. Eichner, W. Grieskamp, D. Huba, A. Ingerman, V. Ivanov, C. Kiddon, J. Konečnỳ, S. Mazzocchi, B. McMahan et al., “Towards federated learning at scale: System design,” Proceedings of Machine Learning and Systems, vol. 1, pp. 374–388, 2019.
  • [44] M. Fredrikson, E. Lantz, S. Jha, S. Lin, D. Page, and T. Ristenpart, “Privacy in pharmacogenetics: An end-to-end case study of personalized warfarin dosing,” in 23rd USENIX Security Symposium (USENIX Security 14), 2014, pp. 17–32.
  • [45] Y. Cao and J. Yang, “Towards making systems forget with machine unlearning,” in 2015 IEEE Symposium on Security and Privacy.   IEEE, 2015, pp. 463–480.
  • [46] L. Bourtoule, V. Chandrasekaran, C. Choquette-Choo, H. Jia, A. Travers, B. Zhang, D. Lie, and N. Papernot, “Machine unlearning,” arXiv preprint arXiv:1912.03817, 2019.
  • [47] A. Ginart, M. Guan, G. Valiant, and J. Y. Zou, “Making AI forget you: Data deletion in machine learning,” in Advances in Neural Information Processing Systems, 2019, pp. 3518–3531.
  • [48] A. Mantelero, “The eu proposal for a general data protection regulation and the roots of the ‘right to be forgotten’,” Computer Law & Security Review, vol. 29, no. 3, pp. 229–235, 2013.
  • [49] N. F. Palmieri III, “Who should regulate data: An analysis of the california consumer privacy act and its effects on nationwide data protection laws,” Hastings Sci. & Tech. LJ, vol. 11, p. 37, 2020.
  • [50] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning.   MIT Press, 2016, http://www.deeplearningbook.org.
  • [51] A. Salem, Y. Zhang, M. Humbert, P. Berrang, M. Fritz, and M. Backes, “Ml-leaks: Model and data independent membership inference attacks and defenses on machine learning models,” arXiv preprint arXiv:1806.01246, 2018.
  • [52] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE transactions on image processing, vol. 13, no. 4, pp. 600–612, 2004.
  • [53] J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual losses for real-time style transfer and super-resolution,” in European Conference on Computer Vision.   Springer, 2016, pp. 694–711.
  • [54] K. W. Bowyer and P. J. Flynn, “The ND-IRIS-0405 iris image dataset,” arXiv preprint arXiv:1606.04853, 2016.
  • [55] P. J. Phillips, K. W. Bowyer, P. J. Flynn, X. Liu, and W. T. Scruggs, “The iris challenge evaluation 2005,” in Biometrics: Theory, Applications and Systems, 2008. BTAS 2008. 2nd IEEE International Conference  on.   IEEE, 2008, pp. 1–8.
  • [56] J. Daugman, “Iris recognition border-crossing system in the uae,” International Airport Review, vol. 8, no. 2, 2004.
  • [57] G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller, “Labeled faces in the wild: A database for studying face recognition in unconstrained environments,” University of Massachusetts, Amherst, Tech. Rep. 07-49, October 2007.
  • [58] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 4700–4708.
  • [59] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
  • [60] M. Abadi, “Tensorflow: learning functions at scale,” in Proceedings of the 21st ACM SIGPLAN International Conference on Functional Programming, 2016, pp. 1–1.
  • [61] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: a simple way to prevent neural networks from overfitting,” The journal of machine learning research, vol. 15, no. 1, pp. 1929–1958, 2014.
  • [62] S. Ahmad and B. Fuller, “Unconstrained iris segmentation using convolutional neural networks,” in Asian Conference on Computer Vision.   Springer, 2018, pp. 450–466.
  • [63] G. B. Huang, M. Mattar, H. Lee, and E. Learned-Miller, “Learning to align from scratch,” in NIPS, 2012.
  • [64] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in 2009 IEEE conference on computer vision and pattern recognition.   Ieee, 2009, pp. 248–255.
  • [65] A. Boyd, A. Czajka, and K. Bowyer, “Deep learning-based feature extraction in iris recognition: Use existing models, fine-tune or train from scratch?” in 2019 IEEE 10th International Conference on Biometrics Theory, Applications and Systems (BTAS).   IEEE, 2019, pp. 1–9.
  • [66] Y. Ji, X. Zhang, S. Ji, X. Luo, and T. Wang, “Model-reuse attacks on deep learning systems,” in Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, 2018, pp. 349–363.