
Super-resolution Guided Pore Detection for Fingerprint Recognition

12/10/2020
by   Syeda Nyma Ferdous, et al.

The performance of fingerprint recognition algorithms relies substantially on the fine features extracted from fingerprints. Apart from minutiae and ridge patterns, pore features have proven to be usable for fingerprint recognition. Although minutiae and ridge-pattern features are readily attainable from low-resolution images, using pore features is practical only if the fingerprint image is of high resolution, which necessitates a model that enhances the image quality of conventional 500 ppi legacy fingerprints while preserving the fine details. To recover pore information from low-resolution fingerprints, we adopt a joint learning-based approach that combines super-resolution and pore detection networks. Our modified single-image Super-Resolution Generative Adversarial Network (SRGAN) framework reliably reconstructs high-resolution fingerprint samples from low-resolution ones, assisting the pore detection network in identifying pores with high accuracy. The network jointly learns a distinctive feature representation from a real low-resolution fingerprint sample and successfully synthesizes a high-resolution sample from it. To add discriminative information and uniqueness for all the subjects, we have integrated features extracted from a deep fingerprint verifier with the SRGAN quality discriminator. We also add a ridge reconstruction loss, utilizing ridge patterns to make the best use of the extracted features. Our proposed method addresses the recognition problem by improving the quality of fingerprint images. The high recognition accuracy of the synthesized samples, close to the accuracy achieved using the original high-resolution images, validates the effectiveness of our proposed model.


I Introduction

The fingerprint is one of the principal biometric traits and has been widely adopted for person recognition due to its highly distinctive nature and simplicity of acquisition [25]. The success of using fingerprints in different biometric applications, such as forensics and access control, has led to the intensive development of fingerprint recognition systems. Feature extraction is a major component of fingerprint recognition that directly impacts matching performance. Fingerprint features can be broadly categorized into three levels. Level-1 features capture the coarse, global structure of the ridge pattern and therefore carry little discriminative information. Level-2 features, called minutiae, are the most widely used type of fingerprint features. Minutiae are specific patterns within the ridges, mainly formed by ridge endings and ridge bifurcations. Level-3 features represent fine attributes, most prominently the sweat pores on the finger, that are rich in quantitative information and can be leveraged for high-accuracy identification.

Several studies have demonstrated that level-3 features can significantly improve the matching performance of fingerprint recognition systems [14, 39, 2]. However, employing level-3 features imposes challenges that can limit the efficiency of the matching system. Level-1 and level-2 features can be reliably extracted from low-resolution images, generally captured at 500 ppi or less, while level-3 features demand capturing high-resolution fingerprints at resolutions above 700 ppi [46]. This increases the cost of the device, since the sensor must be able to capture detailed fingerprints. Furthermore, finger pores are small at low resolution and thus convey much less information for a feature extractor to ensure effective recognition performance. In addition, pore information cannot be effectively extracted from legacy fingerprints captured at the conventional 500 ppi resolution.

To overcome these challenges, we evaluate the feasibility of extracting level-3 features from low-resolution fingerprints using a learning-based framework guided by Super-Resolution (SR), a technique in which a High-Resolution (HR) fingerprint image is generated from a Low-Resolution (LR) fingerprint image. Reconstructing high-resolution fingerprints from such limited detail makes this scheme challenging. Recently, the relationship between super-resolution and object recognition has been studied in several works [10, 34, 8] delineating the effect of super-resolution on object recognition performance. All of these studies report improved recognition performance, as SR techniques allow a more rigorous analysis of the features used by a detector.

Fig. 1: Visual representation of pore detection performance at varying resolution. Red circles represent the detected pores. An increase in the number of identified pores is apparent with the increase in resolution.

Pore detection is a crucial step in designing a fingerprint recognition system and greatly impacts overall system performance. Remarkable pore detection accuracy is achievable with an increase in resolution, as shown in Fig. 1. Therefore, we propose a modified single-image SR algorithm using the Super-Resolution Generative Adversarial Network (SRGAN) [20], tailored for fingerprint pore detection. We use features from all three levels, namely minutiae, ridge patterns, and pores, for better recognition performance. The main contributions of this paper are as follows:

  • We develop a deep fingerprint SR model which employs SRGAN to reliably reconstruct high-resolution fingerprint samples from their corresponding low-resolution samples.

  • We adopt a pore detection scheme that helps the SRGAN model to focus on level-3 features while synthesizing HR fingerprint samples. A jointly trained deep SR and pore detection framework is proposed.

  • To better utilize the ridge information of fingerprint samples in combination with pores, we have incorporated a ridge reconstruction loss making use of level-2 and level-3 features in our overall objective function, which helps to improve the fingerprint recognition accuracy of our model.

  • In addition, to make sure that the framework retains class identity, we have used an auxiliary deep verifier module combined with a quality discriminator to conduct fusion at the feature level.

II Related Work

II-A Fingerprint Recognition Using Pores

In recent years, extensive research has been conducted on fingerprint matching utilizing pore information. Early studies [38, 18] mainly followed skeleton-tracing approaches for pore detection. Stosz et al. [38] first used a multi-level fingerprint matching approach employing pore positions and ridge feature information in skeletonized fingerprint samples. Apart from the skeletonization approach, filtering-based methods have also been applied for pore extraction. Jain et al. [14] developed a fingerprint matching algorithm making use of a Mexican hat wavelet transform and Gabor filters for automatic extraction of pores and ridge contours from fingerprints. This work followed an isotropic model, not accounting for the adaptive nature of real-life fingerprint samples. To address this problem, Zhao et al. [48] proposed an adaptive pore extraction model following a dynamic anisotropic model that estimates the orientation and scale of pores dynamically.

In [49], a direct pore matching approach is implemented, making the process of pore matching independent of minutia matching. The authors used pairwise pore comparison based on a pore descriptor that uses local features of pores. Afterwards, pore correspondences are refined using the RANSAC (Random Sample Consensus) algorithm to obtain the final pore matching result. A modified direct pore matching method is presented in the work of Liu et al. [23], considering a sparse representation of finger pores that calculates the difference between pores to establish pore correspondences; the result is then refined by a weighted RANSAC algorithm that removes false correspondences. Teixeira and Leite [41] proposed a method for pore extraction using spatial pore analysis that can accommodate varying ridge widths and pore sizes. Xu et al. [43] proposed an approach that uses the size of the connected region of closed pores and the skeleton of valleys to detect open pores.

Due to the excellent feature extraction capability of convolutional neural networks (CNNs), they have emerged in recent years as a promising approach to pore extraction. Su et al. [39] first proposed a CNN-based approach for pore extraction, showing results comparable to traditional approaches. Another pore extraction framework, DeepPore, was designed by Jang et al. [16]; it uses pore intensity refinement to identify pores with a higher true detection rate. The work in [42] used the U-Net architecture [31] to extract ridges and pores present in fingerprints. Labati et al. [19] proposed a CNN-based pore extraction model that can handle heterogeneous fingerprint samples such as touch-based, touchless, and latent fingerprints. Dahia et al. [6] designed a fully convolutional network for pore detection from high-resolution fingerprint images with a minimal number of model parameters.

Recently, Shen et al. [33] proposed a fully convolutional network incorporating the focal loss [21], which addresses the class imbalance problem. This method used an edge blur and shortcut structure that helps to utilize contextual information for pore detection. Nguyen et al. [27] proposed a method for end-to-end pore extraction for latent fingerprint matching, where pore matching uses the ranked score information of minutia matching. Anand et al. [2] proposed a residual CNN-based learning framework employing two models, DeepResPore and PoreNet, for high-resolution fingerprint images, surpassing the results of state-of-the-art methods. In this paper, the DeepResPore network is therefore used to detect pores via a pore intensity map, while PoreNet is trained to learn feature descriptors from pore patches by generating a deep feature embedding for the corresponding fingerprint images.

II-B Application of Super-Resolution on Fingerprint Images

The impact of super-resolution (SR) on object recognition has recently received much attention from the research community; however, the impact of super-resolution on fingerprint recognition has not yet been thoroughly explored. Yuan et al. [45] considered SR as a pre-processing step for fingerprint image enhancement. This method applied early stopping, a regularization technique, for improved image quality. A fingerprint SR image reconstruction approach applied in [3] used sparse representation with a ridge-pattern prior. This method classified fingerprint patches considering ridge orientations and sample quality, and learned coupled dictionaries for fingerprint classification. Ridge orientation-based clustering followed by sparse SR is adopted in the work of [37]. This approach employs dominant orientation-based sub-dictionaries for sparse modeling of fingerprint data, which significantly improved fingerprint recognition performance. Results reported in these studies demonstrate SR as a promising scheme for fingerprint recognition, which is further investigated in this paper.

Fig. 2: Complete diagram of our proposed framework including the generator, quality discriminator, pore detector, pore discriminator and SR feature extractor.

III Proposed Framework

In this section, we present the details of our framework that is designed to convert a low-resolution (LR) fingerprint image into its high-resolution (HR) equivalent using a conditional Generative Adversarial Network (cGAN) architecture followed by a pore detector.

III-A Conditional Generative Adversarial Network

A GAN [9] is a generative model that has outperformed other models in the task of synthetic image generation. It has also been explored in other representation learning tasks such as image super-resolution and image translation. The conventional GAN model uses two sub-networks: a Generator, $G$, and a Discriminator, $D$. The generator tries to produce realistic samples by learning a mapping from a random noise $z$ to an output $y$, such that $G: z \rightarrow y$. Simultaneously, the discriminator learns to classify real and synthesized samples by distinguishing between them. This system can be considered a two-player min-max game, where $G$ tries to fool $D$ by producing samples indistinguishable from the real ones, while $D$ gradually improves at detecting the fake samples generated by $G$. The Conditional GAN (cGAN) [13] differs from the conventional GAN [9] in that a conditioning sample $x$ is fed to the network along with the noise, such that $G: \{x, z\} \rightarrow y$, allowing targeted sample generation. The objective function of the cGAN can be represented by the following equation:

$\mathcal{L}_{cGAN}(G, D) = \mathbb{E}_{x,y}\left[\log D(x, y)\right] + \mathbb{E}_{x,z}\left[\log\left(1 - D(x, G(x, z))\right)\right]$   (1)

Here, the generator $G$ constantly attempts to minimize Eq. 1 while the discriminator $D$ tries to maximize it.
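
To make the min-max game concrete, the following is a minimal PyTorch sketch of Eq. 1 using the common binary cross-entropy formulation; the function and variable names are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def discriminator_loss(D, x, y_real, y_fake):
    """D maximizes Eq. 1: push real pairs (x, y) toward 1, generated pairs toward 0."""
    real_logits = D(x, y_real)
    fake_logits = D(x, y_fake.detach())  # do not backpropagate into G here
    loss_real = F.binary_cross_entropy_with_logits(
        real_logits, torch.ones_like(real_logits))
    loss_fake = F.binary_cross_entropy_with_logits(
        fake_logits, torch.zeros_like(fake_logits))
    return loss_real + loss_fake

def generator_loss(D, x, y_fake):
    """G minimizes Eq. 1 by pushing D(x, G(x, z)) toward 1 (non-saturating form)."""
    fake_logits = D(x, y_fake)
    return F.binary_cross_entropy_with_logits(
        fake_logits, torch.ones_like(fake_logits))
```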

III-B Training Objective

The goal of this paper is to design an efficient fingerprint SR model that is guided by a finger-pore detection model in order to improve the overall fingerprint recognition accuracy. The network is trained in an end-to-end fashion such that the two individual models, the super-resolution network and the pore detector, benefit from each other. To achieve stable convergence of the model, we incorporate three losses from the SRGAN model: the Mean Squared Error (MSE) loss, the adversarial loss [9], and the perceptual loss [17]. In addition, to preserve the class identification details embedded in the ridge patterns and pores of fingerprints, we add two more losses to the model: a ridge reconstruction loss that considers ridge pattern variations, and a pore detection loss, which uses a pore location map to identify pore map differences between the ground truth and the super-resolved fingerprint samples.

III-B1 MSE Loss

Most state-of-the-art SR methods [35, 7] use the MSE loss, as it helps to achieve a high peak signal-to-noise ratio. Similar to those methods, we also use the MSE loss in our model. This loss estimates content-wise dissimilarity as the mean of the squared pixel-wise differences between the generated image and the ground truth, as given by:

$\mathcal{L}_{MSE} = \frac{1}{N} \sum_{n=1}^{N} \frac{1}{WH} \sum_{x=1}^{W} \sum_{y=1}^{H} \left( I^{HR}_{n,x,y} - G(I^{LR}_{n})_{x,y} \right)^2$   (2)

where $W$ and $H$ represent the width and height of a fingerprint sample and $N$ is the number of training samples. From Eq. 2, we see that the loss is simply the pixel-wise difference between the ground-truth HR image $I^{HR}$ and the super-resolved image $G(I^{LR})$ generated from the LR image $I^{LR}$.
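
In PyTorch, this content loss reduces to a single call; a minimal sketch, assuming grayscale batches `sr` and `hr` of shape (N, 1, H, W):

```python
import torch.nn.functional as F

def mse_loss(sr, hr):
    # Mean over all samples and pixels, matching Eq. 2.
    return F.mse_loss(sr, hr, reduction='mean')
```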

III-B2 Adversarial Loss

The generator in our model uses an adversarial loss that aids the generation of natural-looking images. The idea is that the discriminator tries to maximize the probability $D(I^{HR})$ assigned to genuine images, calculated from real images, while pushing down the probability $D(G(I^{LR}))$ assigned to fake ones; the generator, in turn, tries to minimize $\log(1 - D(G(I^{LR})))$, i.e., to make its fake samples indistinguishable from the real ones. We can mathematically formulate this loss as follows:

$\mathcal{L}_{adv} = \mathbb{E}_{I^{HR} \sim p_{train}(I^{HR})}\left[\log D(I^{HR})\right] + \mathbb{E}_{I^{LR} \sim p_{G}(I^{LR})}\left[\log\left(1 - D(G(I^{LR}))\right)\right]$   (3)

where $p_{train}(I^{HR})$ and $p_{G}(I^{LR})$ denote the probability distributions of the real high-resolution fingerprint images and the corresponding low-resolution fingerprint images, respectively.
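
Written out explicitly, Eq. 3 yields two alternating updates; a hedged sketch assuming the discriminator ends in a sigmoid (the non-saturating generator surrogate is a standard substitution, not stated in the paper):

```python
import torch

def adversarial_terms(D, hr, sr):
    eps = 1e-8                 # numerical stability inside the logs
    d_real = D(hr)             # D(I_HR), a probability in (0, 1)
    d_fake = D(sr)             # D(G(I_LR))
    # The discriminator ascends Eq. 3, i.e., minimizes its negative.
    d_loss = -(torch.log(d_real + eps) + torch.log(1 - d_fake + eps)).mean()
    # The generator uses the non-saturating surrogate: maximize log D(G(I_LR)).
    g_loss = -torch.log(d_fake + eps).mean()
    return d_loss, g_loss
```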

III-B3 Perceptual Loss

To preserve the inherent details of the ground truth in the generated fingerprint image, we use the perceptual loss proposed by Johnson et al. [17]. A pretrained 19-layer VGG network [36] is used to extract abstract features from the images, preserving their discriminative information in a lower-dimensional sub-space. The perceptual loss is the L2 distance between the ground truth and the super-resolved image in this feature space, measured as:

$\mathcal{L}_{VGG} = \frac{1}{W_{i,j} H_{i,j}} \sum_{x=1}^{W_{i,j}} \sum_{y=1}^{H_{i,j}} \left( \phi_{i,j}(I^{HR})_{x,y} - \phi_{i,j}(G(I^{LR}))_{x,y} \right)^2$   (4)

where $\phi_{i,j}$ represents the feature map obtained from the $j$-th convolution before the $i$-th max-pooling layer of the VGG network, and $W_{i,j}$ and $H_{i,j}$ denote the dimensions of that layer.
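
A minimal sketch of Eq. 4 with torchvision; the choice of cut-off layer and the channel replication for grayscale fingerprints are our assumptions:

```python
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

# Truncate a pretrained VGG-19 at an intermediate feature layer; the exact
# cut-off index (here up to conv5_4 + ReLU) is an assumption, not the paper's choice.
vgg_features = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features[:36].eval()
for p in vgg_features.parameters():
    p.requires_grad = False  # the feature extractor stays frozen

def perceptual_loss(sr, hr):
    # VGG expects 3-channel input; repeat the grayscale fingerprint channel.
    phi_sr = vgg_features(sr.repeat(1, 3, 1, 1))
    phi_hr = vgg_features(hr.repeat(1, 3, 1, 1))
    return F.mse_loss(phi_sr, phi_hr)
```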

III-B4 Ridge Reconstruction Loss

Similar to [17], the ridge reconstruction loss is computed as the squared Frobenius norm of the difference between the Gram matrices of the output and target images. The Gram matrix captures the style of the ground-truth image by correlating feature activations over the entire feature map. Mathematically, it is the multiplication of the feature activation matrix with its transpose, whose elements are obtained as:

$G^{HR}_{ij} = \sum_{k} \phi_{ik}(I^{HR}) \, \phi_{jk}(I^{HR})$   (5)

where $\phi_{ik}$ represents the (flattened) activations of the chosen convolutional layer. For the generated image $G(I^{LR})$, we calculate the Gram matrix in a similar fashion:

$G^{SR}_{ij} = \sum_{k} \phi_{ik}(G(I^{LR})) \, \phi_{jk}(G(I^{LR}))$   (6)

Finally, the ridge reconstruction loss of the network is given by:

$\mathcal{L}_{ridge} = \left\| G^{HR} - G^{SR} \right\|_{F}^{2}$   (7)
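
A compact sketch of Eqs. 5–7 on batched feature maps; the normalization by $CHW$ follows Johnson et al. [17] and is our assumption:

```python
import torch

def gram_matrix(features):
    """Gram matrix of a (N, C, H, W) feature tensor, per Eqs. 5-6."""
    n, c, h, w = features.shape
    f = features.view(n, c, h * w)  # flatten the spatial dimensions
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def ridge_loss(phi_sr, phi_hr):
    """Eq. 7: squared Frobenius norm of the Gram-matrix difference."""
    diff = gram_matrix(phi_sr) - gram_matrix(phi_hr)
    return (diff ** 2).sum()
```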

III-B5 Pore Detection Loss

Let $I^{LR}$ be an LR fingerprint that is to be translated into the HR space as $I^{SR} = G(I^{LR})$, which should be close to the original high-resolution fingerprint $I^{HR}$. The pore detection module $P$ takes the generated $I^{SR}$ and produces a pore intensity map, which is compared with the original pore map $P(I^{HR})$. To minimize the error in each iteration, the L1 error between the two intensity maps is back-propagated. The loss due to the pore detector can be expressed as:

$\mathcal{L}_{pore} = \left\| P(I^{HR}) - P(I^{SR}) \right\|_{1}$   (8)

where $P(\cdot)$ estimates the pore locations, denoted by marked pores, for the input fingerprint.
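
Under the assumption that the detector `P` returns a pore intensity map of the same spatial size as its input, Eq. 8 is a single L1 call:

```python
import torch.nn.functional as F

def pore_loss(P, sr, hr):
    # L1 distance between the pore intensity maps of the
    # super-resolved and ground-truth fingerprints (Eq. 8).
    return F.l1_loss(P(sr), P(hr))
```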

III-B6 Final Loss Function

The overall loss for the model can be written as a combination of all the above-mentioned losses with appropriate weighting, formulated as:

$\mathcal{L}_{total} = \lambda_{1} \mathcal{L}_{MSE} + \lambda_{2} \mathcal{L}_{adv} + \lambda_{3} \mathcal{L}_{VGG} + \lambda_{4} \mathcal{L}_{ridge} + \lambda_{5} \mathcal{L}_{pore}$   (9)

where $\lambda_{1}$, $\lambda_{2}$, $\lambda_{3}$, $\lambda_{4}$, and $\lambda_{5}$ are used as weights to balance the associated losses. The combination of the $\mathcal{L}_{MSE}$, $\mathcal{L}_{adv}$, and $\mathcal{L}_{VGG}$ losses leads to the generation of realistic fingerprint images, and the weighting factors $\lambda_{1}$, $\lambda_{2}$, and $\lambda_{3}$ are chosen so that these losses converge faster. The ridge reconstruction loss $\mathcal{L}_{ridge}$ enables the model to transfer the correct ridge patterns to the generated fingerprint sample, and the pore detection loss $\mathcal{L}_{pore}$ adds pore details to the generated samples to ensure a high-performance fingerprint matcher. The values of $\lambda_{4}$ and $\lambda_{5}$ were set empirically to achieve the best-performing model.
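
Reusing the loss sketches above, the joint objective assembles as follows; the λ values here are illustrative placeholders, since the paper's empirically chosen weights are not reproduced:

```python
# Illustrative placeholder weights; the paper chooses these empirically.
lambdas = {"mse": 1.0, "adv": 1e-3, "vgg": 6e-3, "ridge": 1.0, "pore": 1.0}

def total_loss(D, P, sr, hr):
    phi_sr = vgg_features(sr.repeat(1, 3, 1, 1))
    phi_hr = vgg_features(hr.repeat(1, 3, 1, 1))
    _, g_adv = adversarial_terms(D, hr, sr)
    return (lambdas["mse"] * mse_loss(sr, hr)
            + lambdas["adv"] * g_adv
            + lambdas["vgg"] * perceptual_loss(sr, hr)
            + lambdas["ridge"] * ridge_loss(phi_sr, phi_hr)
            + lambdas["pore"] * pore_loss(P, sr, hr))
```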

III-C Network Architecture

We have combined three separate models into one to create a joint model that is able to produce a high-resolution fingerprint from a low-resolution one, as shown in Fig. 2.

III-C1 Super-Resolution Model

The first model in our network is the super-resolution model. Its task is to predict a high-resolution fingerprint image from its low-resolution version. We have designed this model with inspiration from [20]. Our generator network has seven residual blocks with an identical layout. Each residual block has two convolutional layers with 3×3 kernels, stride 1, and 64 feature maps, each followed by a batch-normalization layer [12] and a ParametricReLU [11] activation function. Two sub-pixel convolutional layers [35] are attached to the network to produce a high-resolution image from its low-resolution version. Our quality discriminator adopts a design similar to the guidelines of Radford et al. [30]. It has seven convolutional layers, each with 3×3 filter kernels. As the network deepens, the image resolution is decreased by strided convolutions while the number of feature maps increases. The LeakyReLU activation function is used in this network. Two dense layers and a sigmoid function are added so that the network can distinguish between real and generated samples.
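
A hedged PyTorch sketch of the generator building blocks just described; padding, exact channel counts beyond those stated, and the ×2 upscaling per sub-pixel stage are our assumptions:

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3, stride-1 convolutions with 64 feature maps,
    batch normalization, and PReLU, as described above."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(channels),
            nn.PReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return x + self.body(x)  # identity shortcut

# One sub-pixel (PixelShuffle) upsampling stage, as in SRGAN-style models.
upsample = nn.Sequential(
    nn.Conv2d(64, 256, kernel_size=3, padding=1),
    nn.PixelShuffle(2),  # rearranges (N, 4C, H, W) -> (N, C, 2H, 2W)
    nn.PReLU(),
)
```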

III-C2 Deep ID Extractor

In order to preserve class identity information in our model, we employ a deep Siamese verifier as a feature extractor [5]. First, we train the verifier with low-resolution fingerprint samples using the contrastive loss [4]. Then, we extract features from the generated samples using this pre-trained module. To make sure the discriminator considers the identity information, feature maps from the first, second, and third layers of the verifier (of sizes 40×30×64, 20×15×128, and 10×8×256) are concatenated depth-wise with the features from the quality discriminator, making the final output feature maps of sizes 40×30×128, 20×15×256, and 10×8×512, respectively. All of the layers consist of convolutions followed by LeakyReLU activation functions. The kernel size for all convolutional layers is 3×3 and the stride is set to two.
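
A sketch of this depth-wise fusion for the first scale; the tensor names and the assumption that the discriminator produces a feature map of matching spatial size are ours:

```python
import torch

# verifier_feat: (N, 64, 40, 30) from the Siamese verifier's first layer,
# disc_feat:     (N, 64, 40, 30) from the quality discriminator at the same scale.
def fuse(verifier_feat, disc_feat):
    # Depth-wise concatenation -> (N, 128, 40, 30), matching the
    # 40x30x128 output size stated above.
    return torch.cat([verifier_feat, disc_feat], dim=1)
```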

III-C3 Pore Detection Model

The pore detection model [1] follows a residual structure. The network has eight residual blocks with eight shortcut connections. All the residual blocks use a 3×3 kernel, with the depth increasing by a factor of two. In total, the network comprises eighteen layers, with 1×1 convolutions in the shortcut connections arranged in an alternating manner. The deep residual network takes an input patch of size 80×60 and generates a pore intensity map of the same size with marked pores. Then, a binarized pore map is created to highlight the positions of the pores.
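
As an illustration of this final step, a minimal sketch of binarizing the pore intensity map into pore coordinates; the fixed threshold is our assumption and stands in for the refinement used in [1]:

```python
import numpy as np

def pores_from_intensity_map(intensity_map, threshold=0.5):
    """Binarize a pore intensity map and return pore coordinates.
    intensity_map: (80, 60) array in [0, 1]; the threshold is illustrative."""
    binary_map = intensity_map >= threshold
    ys, xs = np.nonzero(binary_map)
    return binary_map, list(zip(ys.tolist(), xs.tolist()))
```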

Fig. 3: Pore detection results in LR, corresponding real HR and super-resolved samples generated by our model. Here, red circles represent detected pores.

IV Experiments and Result Analysis

In this section, we present our experimental results to analyze the impact of super-resolution on finger pore detection. We conduct our experiments on two publicly available datasets (PolyU HRF DBI [29] and FVC2000 DB1 [24]). The PolyU HRF DBI dataset has images of 1,200 ppi resolution with a spatial size of 320×240. The annotations provided with this dataset contain the pore locations, denoted as the central coordinates of the pores. The dataset has 30 annotated fingerprint images. We use patches extracted from the first 20 for training and the remaining 10 to test the performance of the pore detector. We have also applied data augmentation to increase the number of samples in our dataset: we augment the training set by applying gamma transformation, random scaling, and horizontal and vertical flips to create different contrasts of the original samples. To further increase the size of the training set, we divide the images into overlapping patches of size 40×30 and feed them to the generator. The synthesized super-resolved patches of size 80×60 are used to train the pore detector. The second dataset, FVC2000 DB1, has images of 500 ppi resolution with a spatial size of 300×300. Both PolyU DBI-test and FVC2000 DB1 are used to evaluate the performance of our proposed approach.
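
A sketch of the overlapping 40×30 patch extraction described above; the stride, which controls the amount of overlap, is our assumption:

```python
import numpy as np

def extract_patches(image, patch_h=40, patch_w=30, stride=10):
    """Slide a 40x30 window over a 2-D image; stride < patch size gives overlap."""
    h, w = image.shape
    patches = []
    for y in range(0, h - patch_h + 1, stride):
        for x in range(0, w - patch_w + 1, stride):
            patches.append(image[y:y + patch_h, x:x + patch_w])
    return np.stack(patches)
```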

IV-1 Training

First, we train our super-resolution and pore detection networks separately. The super-resolution network is trained using the Adam optimizer with a momentum of 0.9, a beta of 0.5, and a batch size of 64. Our pore detector is trained using the Adam optimizer with a batch size of 64 for 30 epochs. For joint training, the entire model is initialized with the pre-trained weights of the super-resolution and pore detection networks, trained for 20 epochs, and then the weights are updated for another 20 epochs.

IV-2 Quality distribution analysis

To evaluate the quality of the super-resolved samples, we use the NFIQ 2.0 utility from NBIS [40]. NFIQ 2.0 assigns each image a quality score ranging from 0 to 100. The quality score distribution in Fig. 4 shows a large overlap between the scores of the generated and real HR fingerprints, indicating that the fingerprint samples generated by our model are qualitatively similar to the real HR samples. Approximately 79% of the generated samples were assigned a quality score of 50 or higher, which confirms quality image generation by our modified SRGAN.

Fig. 4: Image quality comparison of LR, real HR and generated samples from our modified SRGAN for an upscale factor 2.

IV-3 Performance analysis of pore detection

The pore detection performance of our proposed method is demonstrated in Fig. 3. We also summarize our pore detection model's performance alongside other state-of-the-art methods in Table I. These experimental results are reported in terms of the True Detection Rate (TDR) and False Detection Rate (FDR) of pores on the PolyU DBI dataset.

From Table I, we can conclude that our pore detection model performs significantly better, surpassing the results of the other baseline methods. The proposed pore detection method achieves a high TDR accompanied by a low FDR, yielding a very high accuracy.

Method TDR FDR
Jain et al. [14] 75.9% 23%
Zhao et al. [48] 84.8% 17.6%
Segundo et al. [28] 90.8% 11.1%
Su et al. [39] 88.6% 0.4%
Dahia et al. [6] 91.95% 8.88%
Ours 94.3% 0.4%

TABLE I: Pore detection performance comparison in terms of TDR and FDR. Results reported apart from ours have been collected from the corresponding papers.

IV-4 Performance analysis of unified model

In this work, we use all three levels of fingerprint features. Ridge patterns and minutiae are extracted by applying wavelet-based Gabor filtering [47] and the crossing number algorithm [26], respectively. Matching at Level 2 is performed by combining two different matchers, namely a correlation-based matcher [15] and a minutiae-based matcher [15]. To compare fingerprints based on pores, the graph comparison algorithm [44] is utilized, which focuses on local features and spatial relationships between pores. To make use of the extended fingerprint feature set, a score-level fusion of the match scores from the Level-1, Level-2, and Level-3 features is performed using the sum rule with min-max normalization [32] to conduct fingerprint matching.
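
A small sketch of the sum-rule fusion with min-max normalization [32]; the per-level score lists and their alignment per comparison pair are illustrative assumptions:

```python
def min_max_normalize(scores):
    """Map raw match scores to [0, 1] via min-max normalization."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo + 1e-12) for s in scores]  # epsilon avoids 0/0

def fuse_scores(level1, level2, level3):
    """Sum-rule fusion: normalize each matcher's scores, then add them per pair."""
    normalized = [min_max_normalize(s) for s in (level1, level2, level3)]
    return [sum(trio) for trio in zip(*normalized)]
```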

To evaluate the fingerprint recognition accuracy of our approach, we generated 3,700 genuine pairs and 21,756 imposter pairs from the PolyU dataset, following the same procedure as [22, 23]. The genuine pairs are obtained by matching each fingerprint image in the first session with all five images of the same finger in the second session; for the imposter pairs, the first fingerprint image of each finger in the second session is compared with the first fingerprint image of all the other fingers in the first session. We compare our method with existing methods: MICPP [14], MINU_SRDP [23], and TDSWR [22]. Based on the performed matching experiments, we find that matching 1000 ppi samples performs nearly the same as matching 1200 ppi samples; hence, we select 1000 ppi as our preferred resolution. It can be observed from Table II that the Equal Error Rate (EER) of our SR fingerprints is comparable to that of the ground-truth fingerprint images, indicating the effectiveness of our proposed SR-based pore detection method. To analyze the impact of the different levels of fingerprint features on recognition performance, we plot the ROC curves for Level-2, Level-3, and the score-level fusion of the match scores. From Fig. 5, we see that the combination of match scores from Level 2 and Level 3 shows a significant performance gain compared to the individual Level-2 and Level-3 matchers. Also, the recognition accuracy of the generated fingerprint samples is very close to the accuracy obtained using the real ground-truth samples.

Fig. 5: ROC curves for fingerprint recognition using the different level features extracted from the PolyU HRF DBI dataset.
Method EER
MICPP 30.45%
MINU_SRDP 5.41%
TDSWR 3.25%
Ours (1000-ppi Ground Truth) 1.57%
Ours (1000-ppi Modified SRGAN) 1.63%
TABLE II: Fingerprint recognition performance comparison of different pore matching methods in terms of EER.
Fig. 6: Fingerprint recognition performance evaluated at multiple resolutions using different level features for an upscale factor 2. Solid lines represent the EER values for the PolyU DBI dataset and dashed lines are for the EER values in the FVC2000 DB1 dataset.

Fig. 6 shows the fingerprint recognition performance of our model across different image resolutions for both the PolyU DBI and the FVC2000 DB1 datasets. It can be observed that the EER of the super-resolved high-resolution samples (e.g., 1000 ppi) is almost half that of their ground-truth (e.g., 500 ppi) counterparts, which indicates reliable reconstruction of HR fingerprint samples using our model. The significant decrease in EER across the two datasets shows a consistent improvement in fingerprint recognition using our SR-guided joint framework, demonstrating that the generated 1000 ppi fingerprints can substantially improve matching performance compared to the ground-truth 500 ppi samples.

Fig. 7: ROC curves for real HR, generated samples from SRGAN and our modified SRGAN for an upscale factor 2.

To provide a comprehensive performance analysis, we plot ROC curves investigating the effect of the different loss functions adopted in our approach. In Fig. 7, we can see that the matching results based on the SRGAN losses combined with the ridge and pore detection losses achieve the highest Area Under the Curve (AUC), around 99.8%, which is very close to the AUC computed from the ground-truth HR fingerprint samples. This clearly demonstrates that introducing the ridge reconstruction and pore detection losses helps the model generate samples close to the real ones, providing an overall improvement in fingerprint recognition performance.

V Conclusion

This paper proposes a jointly optimized fingerprint recognition framework based on super-resolution and pore detection. The model is able to generate HR fingerprint samples, learning pore locations, ridge structure, and other details from LR samples. The increase in resolution helps to achieve a high pore detection accuracy, which in turn forces the generator to produce high-quality synthesized fingerprint samples. Also, integrating features extracted from a deep verifier with a quality discriminator preserves the individuality of our reconstructed samples. The reliable reconstruction of a 1000 ppi fingerprint from its 500 ppi equivalent proves the validity of our approach.

References

  • [1] V. Anand and V. Kanhangad (2019) Pore detection in high-resolution fingerprint images using deep residual network. Journal of Electronic Imaging 28 (2), pp. 020502. Cited by: §III-C3.
  • [2] V. Anand and V. Kanhangad (2020) PoreNet: CNN-based pore descriptor for high-resolution fingerprint recognition. IEEE Sensors Journal. Cited by: §I, §II-A.
  • [3] W. Bian, S. Ding, and Y. Xue (2016) Fingerprint image super resolution using sparse representation with ridge pattern prior by classification coupled dictionaries. IET Biometrics 6 (5), pp. 342–350. Cited by: §II-B.
  • [4] S. Chopra, R. Hadsell, and Y. LeCun (2005) Learning a similarity metric discriminatively, with application to face verification. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), Vol. 1, pp. 539–546. Cited by: §III-C2.
  • [5] A. Dabouei, H. Kazemi, S. M. Iranmanesh, J. Dawson, N. M. Nasrabadi, et al. (2018) ID preserving generative adversarial network for partial latent fingerprint reconstruction. In 2018 IEEE 9th International Conference on Biometrics Theory, Applications and Systems (BTAS), pp. 1–10. Cited by: §III-C2.
  • [6] G. Dahia and M. P. Segundo (2018) Improving Fingerprint Pore Detection with a Small FCN. arXiv preprint arXiv:1811.06846. Cited by: §II-A, TABLE I.
  • [7] C. Dong, C. C. Loy, K. He, and X. Tang (2015) Image super-resolution using deep convolutional networks. IEEE transactions on pattern analysis and machine intelligence 38 (2), pp. 295–307. Cited by: §III-B1.
  • [8] S. N. Ferdous, M. Mostofa, and N. M. Nasrabadi (2019) Super resolution-assisted deep aerial vehicle detection. In Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications, Vol. 11006, pp. 1100617. Cited by: §I.
  • [9] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680. Cited by: §III-A, §III-B.
  • [10] M. Haris, G. Shakhnarovich, and N. Ukita (2018) Task-driven super resolution: object detection in low-resolution images. arXiv preprint arXiv:1803.11316. Cited by: §I.
  • [11] K. He, X. Zhang, S. Ren, and J. Sun (2015) Delving deep into rectifiers: surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE international conference on computer vision, pp. 1026–1034. Cited by: §III-C1.
  • [12] S. Ioffe and C. Szegedy (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167. Cited by: §III-C1.
  • [13] P. Isola, J. Zhu, T. Zhou, and A. A. Efros (2017) Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1125–1134. Cited by: §III-A.
  • [14] A. K. Jain, Y. Chen, and M. Demirkus (2006) Pores and ridges: high-resolution fingerprint matching using level 3 features. IEEE transactions on pattern analysis and machine intelligence 29 (1), pp. 15–27. Cited by: §I, §II-A, §IV-4, TABLE I.
  • [15] A. K. Jain, S. Prabhakar, and S. Chen (1999) Combining multiple matchers for a high security fingerprint verification system. Pattern Recognition Letters 20 (11-13), pp. 1371–1379. Cited by: §IV-4.
  • [16] H. Jang, D. Kim, S. Mun, S. Choi, and H. Lee (2017) DeepPore: fingerprint pore extraction using deep convolutional neural networks. IEEE Signal Processing Letters 24 (12), pp. 1808–1812. Cited by: §II-A.
  • [17] J. Johnson, A. Alahi, and L. Fei-Fei (2016) Perceptual losses for real-time style transfer and super-resolution. In European conference on computer vision, pp. 694–711. Cited by: §III-B3, §III-B4, §III-B.
  • [18] K. Kryszczuk, A. Drygajlo, and P. Morier (2008) Extraction of level 2 and level 3 features for fragmentary fingerprint comparison. EPFL 3, pp. 45–47. Cited by: §II-A.
  • [19] R. D. Labati, A. Genovese, E. Muñoz, V. Piuri, and F. Scotti (2018) A novel pore extraction method for heterogeneous fingerprint images using convolutional neural networks. Pattern Recognition Letters 113, pp. 58–66. Cited by: §II-A.
  • [20] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, et al. (2017) Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4681–4690. Cited by: §I, §III-C1.
  • [21] T. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár (2017) Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pp. 2980–2988. Cited by: §II-A.
  • [22] F. Liu, Q. Zhao, and D. Zhang (2011) A novel hierarchical fingerprint matching approach. Pattern Recognition 44 (8), pp. 1604–1613. Cited by: §IV-4.
  • [23] F. Liu, Q. Zhao, L. Zhang, and D. Zhang (2010) Fingerprint pore matching based on sparse representation. In 2010 20th International Conference on Pattern Recognition, pp. 1630–1633. Cited by: §II-A, §IV-4.
  • [24] D. Maio, D. Maltoni, R. Cappelli, J. L. Wayman, and A. K. Jain (2002) FVC2000: fingerprint verification competition. IEEE transactions on pattern analysis and machine intelligence 24 (3), pp. 402–412. Cited by: §IV.
  • [25] D. Maltoni, D. Maio, A. K. Jain, and S. Prabhakar (2009) Handbook of fingerprint recognition. Springer Science & Business Media. Cited by: §I.
  • [26] B. M. Mehtre (1993) Fingerprint image analysis for automatic identification. Machine Vision and Applications 6 (2-3), pp. 124–139. Cited by: §IV-4.
  • [27] D. Nguyen and A. K. Jain (2019) End-to-end pore extraction and matching in latent fingerprints: going beyond minutiae. arXiv preprint arXiv:1905.11472. Cited by: §II-A.
  • [28] M. Pamplona Segundo and R. de Paula Lemes (2015) Pore-based ridge reconstruction for fingerprint recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 128–133. Cited by: TABLE I.
  • [29] PolyU HRF database. http://www4.comp.polyu.edu.hk/~biometrics/HRF/HRF_old.htm. Accessed: 3.21.2020. Cited by: §IV.
  • [30] A. Radford, L. Metz, and S. Chintala (2015) Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434. Cited by: §III-C1.
  • [31] O. Ronneberger, P. Fischer, and T. Brox (2015) U-net: convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pp. 234–241. Cited by: §II-A.
  • [32] A. Ross and A. Jain (2003) Information fusion in biometrics. Pattern recognition letters 24 (13), pp. 2115–2125. Cited by: §IV-4.
  • [33] Z. Shen, Y. Xu, J. Li, and G. Lu (2019) Stable pore detection for high-resolution fingerprint based on a CNN detector. In 2019 IEEE International Conference on Image Processing (ICIP), pp. 2581–2585. Cited by: §II-A.
  • [34] J. Shermeyer and A. Van Etten (2019) The effects of super-resolution on object detection performance in satellite imagery. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 0–0. Cited by: §I.
  • [35] W. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang (2016) Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1874–1883. Cited by: §III-B1, §III-C1.
  • [36] K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Cited by: §III-B3.
  • [37] K. Singh, A. Gupta, and R. Kapoor (2015) Fingerprint image super-resolution via ridge orientation-based clustered coupled sparse dictionaries. Journal of Electronic Imaging 24 (4), pp. 043015. Cited by: §II-B.
  • [38] J. D. Stosz and L. A. Alyea (1994) Automated system for fingerprint authentication using pores and ridge structure. In Automatic systems for the identification and inspection of humans, Vol. 2277, pp. 210–223. Cited by: §II-A.
  • [39] H. Su, K. Chen, W. J. Wong, and S. Lai (2017) A deep learning approach towards pore extraction for high-resolution fingerprint recognition. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2057–2061. Cited by: §I, §II-A, TABLE I.
  • [40] E. Tabassi, C. Wilson, and C. Watson (2004) NIST fingerprint image quality. NIST Research Report NISTIR 7151. Cited by: §IV-2.
  • [41] R. F. Teixeira and N. J. Leite (2014) Improving pore extraction in high resolution fingerprint images using spatial analysis. In 2014 IEEE International Conference on Image Processing (ICIP), pp. 4962–4966. Cited by: §II-A.
  • [42] H. Wang, X. Yang, L. Ma, and R. Liang (2017) Fingerprint pore extraction using U-Net based fully convolutional network. In Chinese Conference on Biometric Recognition, pp. 279–287. Cited by: §II-A.
  • [43] Y. Xu, G. Lu, F. Liu, and Y. Li (2017) Fingerprint pore extraction based on multi-scale morphology. In Chinese Conference on Biometric Recognition, pp. 288–295. Cited by: §II-A.
  • [44] Y. Xu, G. Lu, Y. Lu, F. Liu, and D. Zhang (2018) Fingerprint pore comparison using local features and spatial relations. IEEE Transactions on Circuits and Systems for Video Technology 29 (10), pp. 2927–2940. Cited by: §IV-4.
  • [45] Z. Yuan, J. Wu, S. Kamata, A. Ahrary, and P. Yan (2009) Fingerprint image enhancement by super resolution with early stopping. In 2009 IEEE International Conference on Intelligent Computing and Intelligent Systems, Vol. 4, pp. 527–531. Cited by: §II-B.
  • [46] D. Zhang, F. Liu, Q. Zhao, G. Lu, and N. Luo (2010) Selecting a reference high resolution for fingerprint recognition using minutiae and pores. IEEE Transactions on Instrumentation and Measurement 60 (3), pp. 863–871. Cited by: §I.
  • [47] W. Zhang, Q. Wang, and Y. Y. Tang (2002) A wavelet-based method for fingerprint image enhancement. In Proceedings. International Conference on Machine Learning and Cybernetics, Vol. 4, pp. 1973–1977. Cited by: §IV-4.
  • [48] Q. Zhao, D. Zhang, L. Zhang, and N. Luo (2010) Adaptive fingerprint pore modeling and extraction. Pattern Recognition 43 (8), pp. 2833–2844. Cited by: §II-A, TABLE I.
  • [49] Q. Zhao, L. Zhang, D. Zhang, and N. Luo (2009) Direct pore matching for fingerprint recognition. In International Conference on Biometrics, pp. 597–606. Cited by: §II-A.