Fingerprint is one of the principal biometric traits and has been widely adopted for person recognition due to its highly distinctive nature and simplicity of acquisition. The success of fingerprints in different biometric applications, such as forensics and access control, has led to the intensive development of fingerprint recognition systems. Feature extraction is a major component of fingerprint recognition that directly impacts matching performance. Fingerprint features can be broadly categorized into three levels. Level-1 features capture the coarse, global structure of the ridge pattern and therefore contain low discriminative information. Level-2 features, called minutiae, are the most widely used type of fingerprint features. Minutiae are specific patterns within the ridges, mainly formed by ridge endings and ridge bifurcations. Level-3 features represent fine attributes, most notably sweat pores, that are rich in quantitative information and can be leveraged for high-accuracy identification.
Several studies have demonstrated that level-3 features can significantly improve the matching performance of fingerprint recognition systems [14, 39, 2]. However, employing level-3 features imposes challenges that can limit the efficiency of the matching system. Level-1 and level-2 features can be reliably extracted from low-resolution images generally captured at 500 ppi or less, whereas level-3 features demand high-resolution fingerprints captured above 700 ppi. This increases the cost of the device, since the sensor must be able to capture detailed fingerprints. Furthermore, finger pores appear small at low resolution and thus convey much less information for a feature extractor to ensure effective recognition performance. In addition, pore information cannot be effectively extracted from legacy fingerprints captured at the conventional 500 ppi resolution.
To overcome these challenges, we evaluate the feasibility of extracting level-3 features from low-resolution fingerprints using a learning-based framework guided by Super-Resolution (SR), a technique in which a High-Resolution (HR) fingerprint image is generated from a Low-Resolution (LR) fingerprint image. Reconstructing high-resolution detail from limited low-resolution information is what makes this scheme challenging. Recently, the relationship between super-resolution and object recognition has been studied in several works [10, 34, 8] delineating the effect of super-resolution on recognition performance. All of these studies report improved recognition performance, as SR techniques allow a more rigorous analysis of the features used by a detector.
Pore detection is a crucial step in designing a fingerprint recognition system and greatly impacts the overall system performance. A remarkable pore detection accuracy is achievable with an increase in resolution, as shown in Fig. 1. Therefore, we propose a modified single-image SR algorithm based on the Super-Resolution Generative Adversarial Network (SRGAN) , tailored for fingerprint pore detection. We use all three feature levels, namely ridge patterns, minutiae, and pore features extracted from fingerprints, for better recognition performance. The main contributions of this paper are as follows:
We develop a deep fingerprint SR model which employs SRGAN to reliably reconstruct high resolution fingerprint samples from their corresponding low-resolution samples.
We adopt a pore detection scheme that helps the SRGAN model to focus on level-3 features while synthesizing HR fingerprint samples. A jointly trained deep SR and pore detection framework is proposed.
To better utilize the ridge information of fingerprint samples in combination with pores, we have incorporated a ridge reconstruction loss making use of level-2 and level-3 features in our overall objective function, which helps to improve the fingerprint recognition accuracy of our model.
In addition, to make sure that the framework retains class identity, we have used an auxiliary deep verifier module combined with a quality discriminator to conduct fusion at the feature level.
II Related Work
II-A Fingerprint Recognition Using Pores
In recent years, extensive research has been conducted on fingerprint matching utilizing pore information. Early studies [38, 18] mainly followed skeleton-tracing approaches for pore detection. Stosz et al.  first used a multi-level fingerprint matching approach employing pore positions and ridge feature information in skeletonized fingerprint samples. Apart from skeletonization, filtering-based methods have also been applied for pore extraction. Jain et al.  developed a fingerprint matching algorithm using a Mexican hat wavelet transform and Gabor filters for automatic extraction of pores and ridge contours from fingerprints. This work followed an isotropic model that does not account for the adaptive nature of real-life fingerprint samples. To address this problem, Zhao et al.  proposed an adaptive pore extraction method following a dynamic anisotropic model that estimates the orientation and scale of pores dynamically.
In , a direct pore matching approach is implemented, making the process of pore matching independent of minutia matching. It uses a pairwise pore comparison based on a pore descriptor built from local pore features. Pore correspondences are then refined using the RANSAC (Random Sample Consensus) algorithm to obtain the final pore matching result. A modified direct pore matching method is presented in the work of Liu et al. , which considers a sparse representation of finger pores and calculates the differences between pores to establish pore correspondences; these are then refined by a weighted RANSAC algorithm that removes false correspondences. Teixeira and Leite  proposed a pore extraction method using spatial pore analysis that can accommodate varying ridge widths and pore sizes. Xu et al.  proposed an approach that uses the size of the connected region of closed pores and the skeleton of valleys to detect open pores.
Due to the excellent feature extraction capability of convolutional neural networks (CNNs), they have been used in recent years as a promising new approach for pore extraction. Su et al.  first proposed a CNN-based approach for pore extraction, showing results comparable to traditional approaches. Another pore extraction framework, DeepPore, designed by Jang et al. , uses pore intensity refinement to identify pores with a higher true detection rate. The work in  used the U-Net architecture  to extract the ridges and pores present in fingerprints. Labati et al.  proposed a CNN-based pore extraction model that can handle heterogeneous fingerprint samples such as touch-based, touchless, and latent fingerprints. Dahia et al.  designed a fully convolutional network for pore detection from high-resolution fingerprint images while minimizing the number of required model parameters.
Recently, Shen et al.  proposed a fully convolutional network incorporating the focal loss  to address the class imbalance problem. This method uses an edge-blur and shortcut structure that helps exploit contextual information for pore detection. Nguyen et al.  proposed a method for end-to-end pore extraction for latent fingerprint matching, where pore matching uses the ranked score information of minutia matching. Anand et al.  proposed a residual CNN-based learning framework employing two models, DeepResPore and PoreNet, for high-resolution fingerprint images, surpassing the results of state-of-the-art methods. Therefore, in this paper, the DeepResPore network is used to detect pores via a pore intensity map, while PoreNet is trained to learn feature descriptors from pore patches by generating a deep feature embedding for the corresponding fingerprint images.
II-B Application of Super-Resolution to Fingerprint Images
The impact of super-resolution (SR) on object recognition has recently received much attention from the research community; however, its impact on fingerprint recognition has not yet been thoroughly explored. Yuan et al.  considered SR as a pre-processing step for fingerprint image enhancement, applying early stopping, a regularization technique, for improved image quality. A fingerprint SR image reconstruction approach in  used sparse representation with a ridge pattern prior; it classified fingerprint patches according to ridge orientation and sample quality and learned coupled dictionaries for fingerprint classification. A ridge orientation-based clustering followed by sparse SR is adopted in . This approach employs dominant orientation-based subdictionaries for sparse modeling of fingerprint data and significantly improved fingerprint recognition performance. The results reported in these studies demonstrate that SR is a promising scheme for fingerprint recognition, which is further investigated in this paper.
III Proposed Framework
In this section, we present the details of our framework, which converts a low-resolution (LR) fingerprint image into its high-resolution (HR) equivalent using a conditional Generative Adversarial Network (cGAN) architecture followed by a pore detector.
III-A Conditional Generative Adversarial Network
A GAN  is a generative model that has outperformed other models in the task of synthetic image generation. It has also been explored in other representation learning tasks such as image super-resolution and image translation. The conventional GAN model uses two sub-networks: a generator G and a discriminator D. The generator tries to produce realistic samples by learning a mapping from a random noise vector z to an output y, G: z → y. Simultaneously, the discriminator learns to distinguish real samples from synthesized ones. This system can be considered a two-player min-max game, where G tries to fool D by producing samples indistinguishable from real ones, while D gradually improves at detecting the fake samples generated by G. The conditional GAN (cGAN)  differs from the conventional GAN  in that a conditioning sample x is fed to the network along with the noise, G: {x, z} → y, allowing targeted sample generation. The objective function for the cGAN can be written as:

\mathcal{L}_{cGAN}(G,D) = \mathbb{E}_{x,y}[\log D(x,y)] + \mathbb{E}_{x,z}[\log(1 - D(x, G(x,z)))] \qquad (1)

Here, the generator constantly attempts to minimize Eq. 1 while the discriminator tries to maximize it.
III-B Training Objective
The goal of this paper is to design an efficient fingerprint SR model that is guided by a finger pore detection model in order to improve the overall fingerprint recognition accuracy. The network is trained in an end-to-end fashion such that the two individual models, the super-resolution model and the pore detector, benefit from each other. To achieve stable convergence, we incorporate three losses from the SRGAN model: the Mean Squared Error (MSE) loss, the adversarial loss, and the perceptual loss. In addition, to preserve the class identification details embedded in the ridge patterns and pores of fingerprints, we add two more losses to the model: a ridge reconstruction loss that considers ridge pattern variations, and a pore detection loss, which uses a pore location map to quantify pore map differences between the ground truth and the super-resolved fingerprint samples.
III-B1 MSE Loss
Many SR methods use the MSE loss, as it helps achieve a high peak signal-to-noise ratio. Similar to those methods, we also use the MSE loss in our model. This loss estimates content-wise dissimilarity through the squared pixel-wise differences between the generated image and the ground truth:

\mathcal{L}_{MSE} = \frac{1}{NWH} \sum_{n=1}^{N} \sum_{i=1}^{W} \sum_{j=1}^{H} \left( I^{HR}_{n}(i,j) - G(I^{LR}_{n})(i,j) \right)^{2}

where W and H represent the width and height of a fingerprint sample and N is the number of training samples. The loss is simply the pixel-wise difference between the ground-truth HR image I^{HR} and the super-resolved image G(I^{LR}) generated from the LR image I^{LR}.
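As an illustration, the pixel-wise MSE described above can be sketched in plain Python for a single image pair; the tiny 2x2 "images" below are hypothetical placeholders, not data from the paper.

```python
# Minimal sketch of the pixel-wise MSE loss, using plain Python
# lists as stand-in "images" (hypothetical data).

def mse_loss(hr, sr):
    """Mean squared error between a ground-truth HR image and a
    super-resolved image, both given as 2-D lists of pixel values."""
    height = len(hr)
    width = len(hr[0])
    total = 0.0
    for i in range(height):
        for j in range(width):
            diff = hr[i][j] - sr[i][j]
            total += diff * diff
    return total / (width * height)

hr = [[1.0, 0.5], [0.25, 0.0]]
sr = [[0.9, 0.5], [0.25, 0.2]]
print(mse_loss(hr, sr))  # average of the squared pixel differences
```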
III-B2 Adversarial Loss
The generator in our model uses the adversarial loss, which aids in the generation of natural-looking images. The idea is that the discriminator tries to maximize the probability assigned to real images while minimizing the probability assigned to fake ones, whereas the generator tries to minimize the probability that its samples are identified as fake by the discriminator, thereby promoting realistic sample generation. We can formulate this loss as follows:

\mathcal{L}_{adv} = \mathbb{E}_{I^{HR} \sim p_{HR}}[\log D(I^{HR})] + \mathbb{E}_{I^{LR} \sim p_{LR}}[\log(1 - D(G(I^{LR})))]

where p_{HR} and p_{LR} denote the probability distributions of real high-resolution fingerprint images and the corresponding low-resolution fingerprint images, respectively.
III-B3 Perceptual Loss
To preserve the inherent details of the ground truth in the generated fingerprint image, we use the perceptual loss proposed by Ledig et al. . A pretrained 19-layer VGG network  is used to extract abstract features that preserve the discriminative information of the images in a lower-dimensional sub-space. The perceptual loss is the L2 distance between the feature maps of the ground truth and the super-resolved image:

\mathcal{L}_{perc} = \frac{1}{W_{l} H_{l} C_{l}} \sum_{x=1}^{W_{l}} \sum_{y=1}^{H_{l}} \sum_{c=1}^{C_{l}} \left( \phi_{l}(I^{HR})_{x,y,c} - \phi_{l}(G(I^{LR}))_{x,y,c} \right)^{2}

where \phi_{l} represents the feature map of the l-th convolutional layer, and W_{l}, H_{l}, and C_{l} denote the layer dimensions.
III-B4 Ridge Reconstruction Loss
Similar to , the ridge reconstruction loss is computed as the squared Frobenius norm of the difference between the Gram matrices of the output and target images. The Gram matrix captures the style of the ground-truth image by correlating every pair of feature activations over the entire feature map. Mathematically, it is the matrix product of the feature activation matrix and its transpose, with elements

G^{HR}_{l}(c, c') = \sum_{x=1}^{W_{l}} \sum_{y=1}^{H_{l}} \phi_{l}(I^{HR})_{x,y,c} \, \phi_{l}(I^{HR})_{x,y,c'}

where \phi_{l} represents the activations of the l-th convolutional layer. For the generated image G(I^{LR}), we calculate the Gram matrix in a similar fashion:

G^{SR}_{l}(c, c') = \sum_{x=1}^{W_{l}} \sum_{y=1}^{H_{l}} \phi_{l}(G(I^{LR}))_{x,y,c} \, \phi_{l}(G(I^{LR}))_{x,y,c'}

Finally, the ridge reconstruction loss of the network is given by:

\mathcal{L}_{ridge} = \left\lVert G^{HR}_{l} - G^{SR}_{l} \right\rVert_{F}^{2}
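The Gram-matrix computation and the Frobenius-norm comparison can be sketched as follows; the tiny two-channel "feature maps" are hypothetical placeholders, with each channel flattened to a list of spatial activations.

```python
# Hedged sketch: Gram matrix of a feature map and the squared
# Frobenius-norm style/ridge loss, on hand-made toy activations.
# `features` has shape (channels, positions), i.e. each channel flattened.

def gram(features):
    """Gram matrix: G[c][k] = sum over positions of f_c * f_k."""
    C = len(features)
    return [[sum(a * b for a, b in zip(features[c], features[k]))
             for k in range(C)] for c in range(C)]

def ridge_loss(feat_hr, feat_sr):
    """Squared Frobenius norm of the difference of the two Gram matrices."""
    g_hr, g_sr = gram(feat_hr), gram(feat_sr)
    return sum((g_hr[i][j] - g_sr[i][j]) ** 2
               for i in range(len(g_hr)) for j in range(len(g_hr)))

feat_hr = [[1.0, 0.0], [0.0, 1.0]]   # 2 channels, 2 spatial positions
feat_sr = [[1.0, 0.0], [1.0, 0.0]]
print(ridge_loss(feat_hr, feat_sr))
```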
III-B5 Pore Detection Loss
Let I^{LR} be an LR fingerprint that is to be translated into the HR space as G(I^{LR}), which should be close to the original high-resolution fingerprint I^{HR}. The pore detection module P takes the generated G(I^{LR}) and produces a pore intensity map, which is compared with the original pore map P(I^{HR}). To minimize the error, the L1 distance between the two intensity maps is back-propagated in each iteration. The loss due to the pore detector can be expressed as:

\mathcal{L}_{pore} = \left\lVert P(I^{HR}) - P(G(I^{LR})) \right\rVert_{1}

where P(\cdot) estimates the pore locations, denoted by marked pores, for the input fingerprint.
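A minimal sketch of this L1 comparison between two pore intensity maps, here taken as the mean absolute difference over hypothetical 2x2 maps:

```python
# Hedged sketch of the L1 pore detection loss: mean absolute difference
# between two pore intensity maps (hypothetical 2-D lists of values).

def pore_l1_loss(map_hr, map_sr):
    """Mean absolute difference between two equally sized intensity maps."""
    total = 0.0
    for row_hr, row_sr in zip(map_hr, map_sr):
        for a, b in zip(row_hr, row_sr):
            total += abs(a - b)
    return total / (len(map_hr) * len(map_hr[0]))

map_hr = [[0.0, 1.0], [1.0, 0.0]]   # ground-truth pore intensity map
map_sr = [[0.0, 0.8], [0.6, 0.0]]   # map predicted from the SR image
print(pore_l1_loss(map_hr, map_sr))
```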
III-B6 Final Loss Function
The overall loss for the model is the weighted combination of all the above-mentioned losses:

\mathcal{L}_{total} = \lambda_{1}\mathcal{L}_{MSE} + \lambda_{2}\mathcal{L}_{adv} + \lambda_{3}\mathcal{L}_{perc} + \lambda_{4}\mathcal{L}_{ridge} + \lambda_{5}\mathcal{L}_{pore}

where \lambda_{1}, \ldots, \lambda_{5} are weighting factors that balance the associated losses. The combination of the MSE, adversarial, and perceptual losses leads to the generation of realistic fingerprint images, and the chosen weighting factors \lambda_{1}, \lambda_{2}, and \lambda_{3} help these losses converge faster. The ridge reconstruction loss \mathcal{L}_{ridge} enables the model to transfer the correct ridge patterns to the generated fingerprint sample, and the pore detection loss \mathcal{L}_{pore} adds pore details to the generated fingerprint samples to ensure a high-performance fingerprint matcher. The values of \lambda_{4} and \lambda_{5} were set empirically to achieve the best-performing model.
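The weighted combination of the five component losses can be sketched as below; both the individual loss values and the weights are hypothetical placeholders, since the paper's exact weighting factors are not reproduced here.

```python
# Hedged sketch of the weighted final objective. The loss values and
# the weights are illustrative placeholders, not the paper's settings.

def total_loss(losses, weights):
    """Weighted sum of the five component losses."""
    return sum(w * l for w, l in zip(weights, losses))

# (mse, adversarial, perceptual, ridge, pore) — dummy values
losses = (0.0125, 0.7, 0.3, 2.0, 0.15)
weights = (1.0, 1e-3, 6e-3, 1e-4, 1.0)   # illustrative only
print(total_loss(losses, weights))
```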
III-C Network Architecture
We have combined three separate models into one joint model that is able to produce a high-resolution fingerprint from a low-resolution one, as shown in Fig. 2.
III-C1 Super-Resolution Model
The first model in our network is the super-resolution model. Its task is to predict a high-resolution fingerprint image from its low-resolution version. We have designed this model with inspiration from . Our generator network has seven residual blocks with an identical layout. Each residual block has two convolutional layers with 3×3 filter kernels, batch normalization [12], and ParametricReLU  as the activation function. Two sub-pixel convolutional layers are attached to the network to produce a high-resolution image from its low-resolution version. Our quality discriminator follows a design similar to the guidelines of Radford et al. . It has seven convolutional layers, each with 3×3 filter kernels. As the network deepens, the image resolution is decreased by strided convolutions while the number of feature maps increases. The LeakyReLU activation function is used in this network. Two dense layers and a sigmoid function are added so that the network can distinguish between real and generated samples.
III-C2 Deep ID Extractor
In order to preserve class identity information in our model, we employ a deep Siamese verifier as a feature extractor . First, we train the verifier with low-resolution fingerprint samples using the contrastive loss . Then, we extract features from the generated samples using this pre-trained module. To make sure the discriminator considers the identity information, the feature maps from the first, second, and third layers of the verifier (sizes 40×30×64, 20×15×128, and 10×8×256) are concatenated depth-wise with the features from the quality discriminator, making the final output feature map sizes 40×30×128, 20×15×256, and 10×8×512, respectively. All of the layers comprise convolutional layers with LeakyReLU activation functions. The kernel size for all convolutional layers is 3×3 and the stride is set to two.
III-C3 Pore Detection Model
The pore detection model  follows a residual structure. The network has eight residual blocks with eight shortcut connections. All the residual blocks use 3×3 kernels, with the depth increasing by a factor of two. In total, the network comprises eighteen layers, with convolutional layers and shortcut connections arranged in an alternating manner. The deep residual network takes an input patch of size 80×60 and generates a pore intensity map of the same size with marked pores. A binarized pore map is then created to highlight the positions of the pores.
IV Experiments and Result Analysis
In this section, we present our experimental results analyzing the impact of super-resolution on finger pore detection. We conduct our experiments on two publicly available datasets (PolyU HRF DBI  and FVC2000 DB1 ). The PolyU HRF DBI dataset has images at 1,200 ppi resolution with a spatial size of 320×240. The annotations provided with this dataset give the pore locations as the central coordinates of the pores. The dataset has 30 annotated fingerprint images; we use patches extracted from the first 20 for training and the remaining 10 to test the performance of the pore detector. We also apply data augmentation to increase the number of samples: the training set is augmented with gamma transformation, random scaling, and horizontal and vertical flips to create variations of the original samples. To further increase the size of the training set, we divide the images into overlapping patches of size 40×30 and feed them to the generator. The synthesized super-resolved patches of size 80×60 are used to train the pore detector. The second dataset, FVC2000 DB1, has images at 500 ppi resolution with a spatial size of 300×300. Both PolyU DBI-test and FVC2000 DB1 are used to evaluate the performance of our proposed approach.
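The overlapping-patch extraction can be sketched as below; the stride value is a hypothetical choice, since the paper does not state the exact overlap, and the all-zero "image" is a dummy placeholder.

```python
# Illustrative sketch of extracting overlapping 40x30 patches from a
# fingerprint image represented as a 2-D list (rows of pixel values).
# The stride is an assumed value; the paper does not specify the overlap.

def extract_patches(image, patch_h=30, patch_w=40, stride=10):
    """Slide a patch_h x patch_w window over the image with the given stride."""
    patches = []
    h, w = len(image), len(image[0])
    for top in range(0, h - patch_h + 1, stride):
        for left in range(0, w - patch_w + 1, stride):
            patches.append([row[left:left + patch_w]
                            for row in image[top:top + patch_h]])
    return patches

# A dummy 320x240 "image" (width 320, height 240), all zeros.
image = [[0] * 320 for _ in range(240)]
patches = extract_patches(image)
print(len(patches))  # number of overlapping 40x30 patches
```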
First, we train our super-resolution and pore detection networks separately. For training the super-resolution network, we use the Adam optimizer with a momentum of 0.9, a beta of 0.5, and a batch size of 64. Our pore detector is trained using the Adam optimizer with a batch size of 64 for 30 epochs. For joint training, the entire model is initialized with the pre-trained weights of the super-resolution and pore detection networks and trained for 20 epochs, after which the weights are updated for another 20 epochs.
IV-2 Quality distribution analysis
To evaluate the quality of the super-resolved samples, we use the NFIQ 2.0 utility from NBIS . NFIQ 2.0 assigns each image a quality score ranging from 0 to 100. The quality score distributions in Fig. 4 show a large overlap between the scores of the generated and real HR fingerprints, indicating that the fingerprint samples generated by our model are qualitatively similar to real HR fingerprint samples. Approximately 79% of the generated samples are assigned a quality score of 50 or higher, which confirms that our modified SRGAN generates high-quality images.
IV-3 Performance analysis of pore detection
The pore detection performance of our proposed method is demonstrated in Fig. 3. We also compare our pore detection model with other state-of-the-art methods in Table I. These results are reported in terms of the True Detection Rate (TDR) and False Detection Rate (FDR) of pores on the PolyU DBI dataset.
From Table I, we can conclude that our pore detection model significantly outperforms the other baseline methods. The proposed method achieves a high TDR accompanied by a low FDR, yielding a very high accuracy.
TABLE I: Pore detection performance on the PolyU DBI dataset.

Method             TDR        FDR
Jain et al.        75.9%      23%
Zhao et al.        84.8%      17.6%
Segundo et al.     90.8%      11.1%
Su et al.          88.6%      0.4%
Dahia et al.       91.95%     8.88%
IV-4 Performance analysis of unified model
In this work, we use all three levels of fingerprint features. Ridge patterns and minutiae are extracted by applying wavelet-based Gabor filtering  and the crossing number algorithm , respectively. Matching at Level-2 is performed by combining two different matchers, a correlation-based matcher  and a minutiae-based matcher . To compare fingerprints based on pores, the Graph Comparison Algorithm  is utilized, which focuses on local features and the spatial relationships between pores. To make use of the extended fingerprint feature set, a score-level fusion of the match scores from Level-1, Level-2, and Level-3 features is performed using the sum rule with min-max normalization  to conduct fingerprint matching.
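The score-level fusion step can be sketched as follows: each matcher's scores are min-max normalized to [0, 1] and then summed per probe/gallery pair (the sum rule). The score values below are hypothetical placeholders.

```python
# Hedged sketch of score-level fusion with min-max normalization and
# the sum rule. All score values are illustrative, not from the paper.

def min_max_normalize(scores):
    """Rescale a list of match scores to the [0, 1] range."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def fuse(level_scores):
    """Sum-rule fusion: normalize each matcher's scores, then add per pair."""
    normalized = [min_max_normalize(s) for s in level_scores]
    return [sum(per_matcher) for per_matcher in zip(*normalized)]

# Match scores for 3 probe/gallery pairs from three matchers (L1, L2, L3).
level1 = [10.0, 55.0, 100.0]
level2 = [0.2, 0.9, 0.4]
level3 = [5.0, 45.0, 25.0]
print(fuse([level1, level2, level3]))
```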
To evaluate the fingerprint recognition accuracy of our approach, we generated 3,700 genuine pairs and 21,756 imposter pairs from the PolyU dataset, following the same procedure as in [22, 23]. The genuine pairs are obtained by matching each fingerprint image in the first session with all five images of the same finger in the second session. For imposter pairs, the first fingerprint image of each finger in the second session is compared with the first fingerprint image of all the other fingers in the first session. We compare our method with other existing methods: MICPP , MINU_SRDP , and TDSWR . Based on the matching experiments, we find that the performance of matching 1000 ppi samples is nearly the same as matching 1200 ppi samples; hence, we select 1000 ppi as our preferred resolution. It can be observed from Table II that the Equal Error Rate (EER) of our SR fingerprints is comparable to that of the ground-truth fingerprint images, indicating the effectiveness of our proposed SR-based pore detection method. To analyze the impact of the different feature levels on recognition performance, we plot the ROC curves for Level-2, Level-3, and the score-level fusion of the match scores. From Fig. 5, we see that the combination of match scores from Level-2 and Level-3 shows a significant performance gain over the individual Level-2 and Level-3 matchers. It can also be observed that the recognition accuracy of the generated fingerprint samples is very close to that obtained using the real ground-truth samples.
TABLE II: Equal Error Rate (EER) on the PolyU dataset.

Method                            EER
Ours (1000-ppi Ground Truth)      1.57%
Ours (1000-ppi Modified SRGAN)    1.63%
Fig. 6 shows the fingerprint recognition performance of our model across different image resolutions for both the PolyU DBI and FVC2000 DB1 datasets. It can be observed that the EER of the super-resolved high-resolution samples (e.g., 1000 ppi) is almost half that of their low-resolution ground-truth (e.g., 500 ppi) counterparts, which indicates reliable reconstruction of HR fingerprint samples by our model. The significant decrease in EER on both datasets shows a consistent improvement in fingerprint recognition using our SR-guided joint framework, demonstrating that the generated 1000 ppi fingerprints can substantially improve matching performance compared to the ground-truth 500 ppi samples.
To provide a comprehensive performance analysis, we plot ROC curves investigating the effect of the different loss functions adopted in our approach. In Fig. 7, we can see that the matching results based on the SRGAN losses combined with the ridge and pore detection losses achieve the highest Area Under the Curve (AUC) of around 99.8%, which is very close to the AUC computed from the ground-truth HR fingerprint samples. This clearly demonstrates that introducing the ridge reconstruction and pore detection losses helps the model generate samples close to real ones, providing an overall improvement in fingerprint recognition performance.
V Conclusion
This paper proposes a jointly optimized fingerprint recognition framework built on super-resolution and pore detection. The model is able to generate HR fingerprint samples and to learn pore locations, ridge structure, and other details from LR samples. The increase in resolution helps achieve a high pore detection accuracy, which in turn forces the generator to produce high-quality synthesized fingerprint samples. Integrating features extracted from a deep verifier with a quality discriminator preserves the individuality of our reconstructed samples. The reliable reconstruction of a 1000 ppi fingerprint from its 500 ppi equivalent demonstrates the validity of our approach.
-  (2019) Pore detection in high-resolution fingerprint images using deep residual network. Journal of Electronic Imaging 28 (2), pp. 020502. Cited by: §III-C3.
-  (2020) PoreNet: cnn-based pore descriptor for high-resolution fingerprint recognition. IEEE Sensors Journal. Cited by: §I, §II-A.
-  (2016) Fingerprint image super resolution using sparse representation with ridge pattern prior by classification coupled dictionaries. IET Biometrics 6 (5), pp. 342–350. Cited by: §II-B.
-  (2005) Learning a similarity metric discriminatively, with application to face verification. In , Vol. 1, pp. 539–546. Cited by: §III-C2.
-  (2018) ID preserving generative adversarial network for partial latent fingerprint reconstruction. In 2018 IEEE 9th International Conference on Biometrics Theory, Applications and Systems (BTAS), pp. 1–10. Cited by: §III-C2.
-  (2018) Improving Fingerprint Pore Detection with a Small FCN. arXiv preprint arXiv:1811.06846. Cited by: §II-A, TABLE I.
-  (2015) Image super-resolution using deep convolutional networks. IEEE transactions on pattern analysis and machine intelligence 38 (2), pp. 295–307. Cited by: §III-B1.
-  (2019) Super resolution-assisted deep aerial vehicle detection. In Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications, Vol. 11006, pp. 1100617. Cited by: §I.
-  (2014) Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680. Cited by: §III-A, §III-B.
-  (2018) Task-driven super resolution: object detection in low-resolution images. arXiv preprint arXiv:1803.11316. Cited by: §I.
-  (2015) Delving deep into rectifiers: surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE international conference on computer vision, pp. 1026–1034. Cited by: §III-C1.
-  (2015) Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167. Cited by: §III-C1.
-  (2017) Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1125–1134. Cited by: §III-A.
-  (2006) Pores and ridges: high-resolution fingerprint matching using level 3 features. IEEE transactions on pattern analysis and machine intelligence 29 (1), pp. 15–27. Cited by: §I, §II-A, §IV-4, TABLE I.
-  (1999) Combining multiple matchers for a high security fingerprint verification system. Pattern Recognition Letters 20 (11-13), pp. 1371–1379. Cited by: §IV-4.
-  (2017) DeepPore: fingerprint pore extraction using deep convolutional neural networks. IEEE Signal Processing Letters 24 (12), pp. 1808–1812. Cited by: §II-A.
-  (2016) Perceptual losses for real-time style transfer and super-resolution. In European conference on computer vision, pp. 694–711. Cited by: §III-B3, §III-B4, §III-B.
-  (2008) Extraction of level 2 and level 3 features for fragmentary fingerprint comparison. EPFL 3, pp. 45–47. Cited by: §II-A.
-  (2018) A novel pore extraction method for heterogeneous fingerprint images using convolutional neural networks. Pattern Recognition Letters 113, pp. 58–66. Cited by: §II-A.
-  (2017) Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4681–4690. Cited by: §I, §III-C1.
-  (2017) Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pp. 2980–2988. Cited by: §II-A.
-  (2011) A novel hierarchical fingerprint matching approach. Pattern Recognition 44 (8), pp. 1604–1613. Cited by: §IV-4.
-  (2010) Fingerprint pore matching based on sparse representation. In 2010 20th International Conference on Pattern Recognition, pp. 1630–1633. Cited by: §II-A, §IV-4.
-  (2002) FVC2000: fingerprint verification competition. IEEE transactions on pattern analysis and machine intelligence 24 (3), pp. 402–412. Cited by: §IV.
-  (2009) Handbook of fingerprint recognition. Springer Science & Business Media. Cited by: §I.
-  (1993) Fingerprint image analysis for automatic identification. Machine Vision and Applications 6 (2-3), pp. 124–139. Cited by: §IV-4.
-  (2019) End-to-end pore extraction and matching in latent fingerprints: going beyond minutiae. arXiv preprint arXiv:1905.11472. Cited by: §II-A.
-  (2015) Pore-based ridge reconstruction for fingerprint recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 128–133. Cited by: TABLE I.
-  PolyU HRF database. Note: http://www4.comp.polyu.edu.hk/~biometrics/HRF/HRF_old.htm Accessed: 3.21.2020. Cited by: §IV.
-  (2015) Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434. Cited by: §III-C1.
-  (2015) U-net: convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pp. 234–241. Cited by: §II-A.
-  (2003) Information fusion in biometrics. Pattern recognition letters 24 (13), pp. 2115–2125. Cited by: §IV-4.
-  (2019) Stable pore detection for high-resolution fingerprint based on a CNN detector. In 2019 IEEE International Conference on Image Processing (ICIP), pp. 2581–2585. Cited by: §II-A.
-  (2019) The effects of super-resolution on object detection performance in satellite imagery. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 0–0. Cited by: §I.
-  (2016) Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1874–1883. Cited by: §III-B1, §III-C1.
-  (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Cited by: §III-B3.
-  (2015) Fingerprint image super-resolution via ridge orientation-based clustered coupled sparse dictionaries. Journal of Electronic Imaging 24 (4), pp. 043015. Cited by: §II-B.
-  (1994) Automated system for fingerprint authentication using pores and ridge structure. In Automatic systems for the identification and inspection of humans, Vol. 2277, pp. 210–223. Cited by: §II-A.
-  (2017) A deep learning approach towards pore extraction for high-resolution fingerprint recognition. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2057–2061. Cited by: §I, §II-A, TABLE I.
-  (2004) Nist fingerprint image quality. NIST Res. Rep. NISTIR7151 5. Cited by: §IV-2.
-  (2014) Improving pore extraction in high resolution fingerprint images using spatial analysis. In 2014 IEEE International Conference on Image Processing (ICIP), pp. 4962–4966. Cited by: §II-A.
-  (2017) Fingerprint pore extraction using U-Net based fully convolutional network. In Chinese Conference on Biometric Recognition, pp. 279–287. Cited by: §II-A.
-  (2017) Fingerprint pore extraction based on multi-scale morphology. In Chinese Conference on Biometric Recognition, pp. 288–295. Cited by: §II-A.
-  (2018) Fingerprint pore comparison using local features and spatial relations. IEEE Transactions on Circuits and Systems for Video Technology 29 (10), pp. 2927–2940. Cited by: §IV-4.
-  (2009) Fingerprint image enhancement by super resolution with early stopping. In 2009 IEEE International Conference on Intelligent Computing and Intelligent Systems, Vol. 4, pp. 527–531. Cited by: §II-B.
-  (2010) Selecting a reference high resolution for fingerprint recognition using minutiae and pores. IEEE Transactions on Instrumentation and Measurement 60 (3), pp. 863–871. Cited by: §I.
-  (2002) A wavelet-based method for fingerprint image enhancement. In Proceedings. International Conference on Machine Learning and Cybernetics, Vol. 4, pp. 1973–1977. Cited by: §IV-4.
-  (2010) Adaptive fingerprint pore modeling and extraction. Pattern Recognition 43 (8), pp. 2833–2844. Cited by: §II-A, TABLE I.
-  (2009) Direct pore matching for fingerprint recognition. In International Conference on Biometrics, pp. 597–606. Cited by: §II-A.