Automatic fingerprint recognition is one of the most widely studied topics in biometrics. One of the main challenges in fingerprint recognition is to increase recognition accuracy, especially for latent fingerprints. Fingerprint comparison is primarily based on minutiae set comparison [28, 11]. A number of hand-crafted approaches [10, 28] augment the minutiae with additional attributes to improve recognition accuracy. However, robust automatic fingerprint minutiae extraction, particularly for noisy fingerprint images, remains a bottleneck in fingerprint recognition systems.
Traditional pipelines rely on a series of hand-crafted processing stages and, finally, heuristics to define minutiae attributes. While such an approach works well for good quality fingerprint images, it provides inaccurate minutiae locations and orientations for poor quality rolled/plain prints and, particularly, for latent fingerprints. To cope with noise in fingerprint images, Yoon et al. used Gabor filtering to compute the reliability of extracted minutiae. Although this approach improves on purely heuristic extraction, it still yields poor results on highly noisy images. Because these prevailing approaches are based on hand-crafted methods or heuristics, they can only extract basic (low-level) features, such as edges and corners. We believe learning based approaches using deep networks are better able to extract high-level (abstract/semantic) features, retrieved from deep layers, from low quality fingerprint images.
In this paper, we present a novel framework that encodes fingerprint domain knowledge in deep neural networks to overcome the limitations of existing minutiae extraction approaches. Figure 1 visualizes results of the proposed framework on two latent fingerprints from the NIST SD27 dataset.
Specifically, our proposed approach comprises two networks, called CoarseNet and FineNet:
- CoarseNet is a residual learning based convolutional neural network that takes a fingerprint image as its initial input, and the corresponding enhanced image, segmentation map, and orientation field (computed by the early stages of CoarseNet itself) as secondary inputs, to generate a minutiae score map. Minutiae orientation is also estimated by comparison with the fingerprint orientation field.
- FineNet is a robust inception-resnet based minutiae classifier. It processes each candidate patch, a square region centered on a candidate minutia point, to refine the minutiae score map, and approximates minutiae orientation by regression. The final minutiae are the classification results.
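As an illustrative sketch (not the paper's implementation), the two-stage cascade can be expressed as follows; the names `coarse_net`, `fine_net`, and the patch size are assumptions introduced here:

```python
import numpy as np

def crop_patch(image, x, y, size):
    """Crop a size x size square patch centered at (x, y), clipped at borders."""
    h = size // 2
    return image[max(0, y - h):y + h + 1, max(0, x - h):x + h + 1]

def extract_minutiae(image, coarse_net, fine_net, patch_size=45):
    """Two-stage cascade: coarse_net proposes (x, y, theta, score) candidates;
    fine_net classifies the patch around each candidate and refines theta.
    Only patches classified as true minutiae are kept."""
    minutiae = []
    for (x, y, theta, score) in coarse_net(image):
        patch = crop_patch(image, x, y, patch_size)
        is_minutia, refined_theta = fine_net(patch)
        if is_minutia:
            minutiae.append((x, y, refined_theta))
    return minutiae
```

Here the coarse stage proposes candidates cheaply over the whole image, while the expensive classifier runs only on the small set of candidate patches.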
Deep learning approaches have been used by other researchers for minutiae extraction (see Table 1), but our approach differs from published methods in how we encode fingerprint domain knowledge in deep learning. Sankaran et al. classified minutiae and non-minutiae patches using sparse autoencoders. Jiang et al. introduced a combination of two networks: JudgeNet for classifying minutiae patches, and LocateNet for locating precise minutiae positions. While Jiang et al. use neural networks, their approach is very time-consuming because a sliding window is used to extract minutiae candidates. Another limitation of this approach is that it does not provide minutiae orientation information.
Tang et al. utilized the idea of object detection to detect candidate minutiae patches, but their approach suffers from two major weaknesses: (i) a hard threshold for deleting candidate patches, and (ii) the same network being used for both candidate generation and classification. Using sliding windows, Darlow et al. fed each pixel of the input fingerprint to a convolutional neural network, called MENet, to classify whether it corresponds to a minutia. It too suffers from the time-consuming sliding-window scheme, and requires separate modules for minutiae location and orientation estimation. Tang et al. proposed FingerNet, which maps the traditional minutiae extraction pipeline, including orientation estimation, segmentation, enhancement, and extraction, to a network with fixed weights. Although this approach is promising because it combines domain knowledge with a deep network, it still uses a plain network architecture (a series of stacked layers) and a hard threshold in non-maximum suppression (a post-processing algorithm that merges all detections belonging to the same object). Finally, the accuracy of FingerNet depends largely on the quality of the enhancement and segmentation stages, while ignoring texture information in the ridge pattern.
In summary, the published approaches suffer from using sliding windows to process each pixel of the input image, setting hard thresholds in the post-processing step, and using plain convolutional neural networks to classify candidate regions. Furthermore, the evaluation protocols in these studies are inconsistent in how they define a “correct” minutia.
The contributions of our approach are as follows:
- A network-based automatic minutiae extractor utilizing domain knowledge that provides reliable minutiae locations and orientations without hard thresholds or fine tuning.
- A robust patch based minutiae classifier that significantly boosts the precision and recall of candidate patches, and can also serve as a standalone minutiae extractor with a compact embedding of minutiae features.
2 Proposed framework
Our minutiae extraction framework has two modules: (i) a residual learning based convolutional neural network, called CoarseNet, that generates candidate patches containing minutiae from the input fingerprint image; and (ii) an inception-resnet based network, called FineNet, a strong minutiae classifier that classifies the candidate patches output by CoarseNet. The two networks also output minutiae location and orientation information. Figure 2 shows the complete network architecture for automatic minutiae location and orientation estimation from an input fingerprint image. Section 2.1 presents the architecture of CoarseNet. In Section 2.2, we introduce FineNet, with details on the training that makes it a strong classifier.
2.1 CoarseNet for minutiae extraction
We adopt the idea of combining domain knowledge with the deep representations of neural networks to boost minutiae detection accuracy. In essence, we utilize the automatically extracted segmentation map, enhanced image, and orientation map as complementary information to the input fingerprint image. The goal of CoarseNet is not to produce the segmentation map, enhanced image, or orientation map; they are byproducts of the network. However, these byproducts, as fingerprint domain knowledge, must be reliable in order to obtain a robust minutiae score map. Since Tang et al. proposed an end-to-end unified network that maps hand-crafted features to a network-based architecture, we use their network as the baseline for our CoarseNet.
2.1.1 Segmentation and orientation feature sharing
Adding more layers to a deep network in the hope of increasing accuracy can lead to exploding or vanishing gradients. Motivated by the success of residual learning, we use residual blocks instead of plain stacked convolutional layers to make our network more powerful. Figure 3 shows the detailed architecture of the network.
Instead of using sliding windows to process each patch with a fixed size and stride, we use a deeper residual learning based network with more pooling layers to scale down the region patch. Specifically, we take the outputs after several of the pooling layers and feed them to an ASPP network with corresponding atrous rates for multiscale segmentation. This ensures that the output has the same size as the input, without loss of information when upsampling the score map.
With four pooling layers, each pixel in the feature map at a given pooling level corresponds to a square region of the original input. Score maps from the coarser levels are used as coarse estimates, while the finest level serves as the fine estimate.
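Assuming each pooling layer downsamples by a factor of 2 (a 2x2, stride-2 pooling, which is our assumption here), the input-image square covered by one score-map cell at a given level can be computed as:

```python
def cell_to_region(i, j, level):
    """Map score-map cell (row i, column j) at pooling `level` back to the
    input-image square it covers, assuming a 2x downsampling per level.
    Returns (x0, y0, x1, y1) with x1 and y1 exclusive."""
    s = 2 ** level  # side length of the covered square, in input pixels
    return (j * s, i * s, (j + 1) * s, (i + 1) * s)
```

For example, at level 4 each cell covers a 16 x 16 pixel region, which is why finer levels are needed for precise localization.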
Image segmentation and fingerprint orientation estimation share the same convolutional layers. Thus, by applying the multi-level approach described above, we obtain a probability map for the corresponding region at each level of the input image. For instance, to obtain a finer-detailed segmentation of a region at one level, we continue to process the probability map of that region at the next finer level.
Orientation map. To obtain complete minutiae information from context, we adopt the fusion idea of Cao et al. We fuse the results of the dictionary-based method with the orientation estimates from CoarseNet. Because the dictionary-based method is hand-crafted, we set the fusion weight ratio of its output to our network-based output at 1:3.
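With a 1:3 weight ratio, the fusion can be sketched as a weighted circular average in the doubled-angle domain (ridge orientations are defined modulo 180°); the doubled-angle formulation is our assumption, since the exact fusion operator is not spelled out here:

```python
import numpy as np

def fuse_orientation(dict_deg, net_deg, w_dict=1.0, w_net=3.0):
    """Weighted circular average of two ridge-orientation fields (degrees,
    modulo 180). Doubling the angles makes theta and theta + 180 equivalent,
    so averaging happens on the unit circle rather than on raw angles."""
    a = np.deg2rad(2.0 * np.asarray(dict_deg, dtype=float))
    b = np.deg2rad(2.0 * np.asarray(net_deg, dtype=float))
    x = w_dict * np.cos(a) + w_net * np.cos(b)
    y = w_dict * np.sin(a) + w_net * np.sin(b)
    return np.rad2deg(np.arctan2(y, x)) / 2.0 % 180.0
```

The larger weight on the network output pulls the fused estimate toward CoarseNet's orientation wherever the two sources disagree.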
2.1.2 Candidate generation
The input fingerprint image may contain a large amount of noise, so without domain knowledge we may not be able to identify prominent fingerprint features. The domain knowledge comprises four components: the raw input image, the enhanced image, the orientation map, and the segmentation map. In the Gabor image enhancement module, we average the filtered image with the orientation map used for ridge flow estimation. To emphasize texture information in the fingerprint, we stack the original input image with the output of the enhancement module to obtain the final enhancement map. To remove spurious noise, we apply the segmentation map to the enhancement map and use the result as input to the coarse minutiae extractor module.
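A minimal sketch of assembling the coarse extractor's input as described; the channel stacking order and the binary-mask semantics are assumptions made for illustration:

```python
import numpy as np

def build_extractor_input(raw, enhanced, seg_mask):
    """Stack the raw image with the enhancement output along the channel
    axis (preserving texture), then zero out background pixels with the
    binary segmentation mask to suppress spurious noise."""
    stacked = np.stack([raw, enhanced], axis=-1)   # H x W x 2
    return stacked * seg_mask[..., None]
```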
To obtain precise minutiae locations, the score maps from each level of the residual network are fused into a final minutiae score map of size H × W, where H and W are the height and width of the input image. Figure 4 shows the details of score map processing. To reduce processing time, we use the coarsest-level score map for coarse localization; the finer-level score maps are then used to obtain precise locations.
2.1.3 Non-maximum suppression
Non-maximum suppression is commonly used in object detection to reduce the number of candidates [7, 17]. Candidate regions are typically pruned to obtain a reliable minutiae score map by setting a hard threshold or using heuristics [21, 22]. However, a hard threshold can also suppress valid minutiae locations. A commonly used heuristic is to sort the candidates by score and compute pairwise distances between them, with hard thresholds on distance and orientation; iteratively comparing each candidate against the rest of the list, only the higher-scoring candidate of each conflicting pair is kept. However, this approach fails when two genuine minutiae lie close together, with inter-minutiae distance below the hard threshold.
Since each score in the minutiae map corresponds to a specific region in the input image, we propose an intersection-over-union strategy instead. Specifically, after sorting the candidate list by score, we keep high-scoring candidates while discarding any lower-scoring candidate whose region overlaps an already selected candidate by at least a fixed intersection-over-union threshold.
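Because every candidate corresponds to an equal-sized square region, the overlap test reduces to a simple greedy NMS; the region size and overlap threshold below are illustrative assumptions:

```python
import numpy as np

def square_iou(a, b, size):
    """Intersection-over-union of two axis-aligned squares of equal side
    length `size`, given by their centre coordinates (x, y)."""
    ix = max(0.0, size - abs(a[0] - b[0]))   # overlap along x
    iy = max(0.0, size - abs(a[1] - b[1]))   # overlap along y
    inter = ix * iy
    return inter / (2 * size * size - inter)

def nms(candidates, scores, size=16, iou_thresh=0.5):
    """Greedy NMS: keep the highest-scoring candidates; drop any candidate
    whose square region overlaps an already kept one by more than
    `iou_thresh`."""
    order = np.argsort(scores)[::-1]         # indices by descending score
    kept = []
    for i in order:
        if all(square_iou(candidates[i], candidates[j], size) <= iou_thresh
               for j in kept):
            kept.append(int(i))
    return kept
```

Unlike a hard score threshold, this only removes a candidate when it conflicts with a better-scoring neighbor, so isolated low-scoring minutiae survive.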
2.1.4 Training data for CoarseNet
Given the lack of datasets with ground truth segmentation and orientation, we use the approach of Tang et al. to generate weak labels for training the segmentation and orientation modules. The coarse minutiae extractor module uses the minutiae location and orientation ground truth provided with the two training datasets. We also use the data augmentation techniques described in Section 3.
2.2 FineNet
Extracting minutiae based on candidate patches alone is not adequate. Although CoarseNet is reliable, it still misses some true minutiae and detects some spurious ones, which can degrade fingerprint matching performance. This motivates FineNet, a minutiae classifier applied to the generated candidate patches. FineNet takes candidates output by CoarseNet and decides whether the region at the center of the corresponding patch contains a valid minutia.
2.2.1 FineNet architecture
For FineNet training, we extract equal numbers of fixed-size minutiae and non-minutiae patches. FineNet determines whether the pixel region at the center of each patch contains a valid minutia. Candidate patches are resized before being fed to FineNet. We chose the input size based on two observations: the original input image (without rescaling) is small compared with typical object classification images, so upscaling it too much blurs fine details, while too small an input is insufficient for a network with a complex architecture.
Training data for FineNet are extracted from the input gray-scale images: minutiae patches are centered on ground truth minutiae locations, while non-minutiae patches are randomly sampled such that their center regions contain no minutiae, either partially or fully. To make the network more robust, we apply techniques such as batch normalization, rotation, flipping, scaled augmentation, and bounding blurring in the pre-processing stage.
2.2.2 Losses and implementation details
Intra-Inter class losses. Because the input fingerprint image is not always captured under ideal conditions, it can be affected by distortion, finger movement, or finger condition (wet/dry). Thus, variations in minutiae shapes and surrounding ridges can affect the accuracy of the classifier. To handle this and make FineNet more robust to intra-class variations, we use Center Loss as a complement to the softmax loss and the minutiae orientation loss. While the softmax loss pushes apart features of different classes, the center loss pulls features of the same class closer together. Let L, L_C, L_S, and L_O be the total loss, center loss, softmax loss, and orientation loss; the total loss for training is calculated as L = L_S + λ_1 L_C + λ_2 L_O,
where λ_1 is set to balance the intra-class (center) and inter-class (softmax) losses, and λ_2 to emphasize the importance of minutiae orientation precision.
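The combined objective can be sketched as below; the weight values (call them λ_1 and λ_2) and the concrete form of the orientation loss are placeholders, not the values used in training:

```python
import numpy as np

def softmax_loss(logits, label):
    """Cross-entropy of a softmax over raw class scores (inter-class term)."""
    z = logits - np.max(logits)                      # numerical stability
    return float(np.log(np.sum(np.exp(z))) - z[label])

def center_loss(feature, class_center):
    """Squared distance of a feature to its class center (intra-class term)."""
    return 0.5 * float(np.sum((feature - class_center) ** 2))

def orientation_loss(pred_deg, gt_deg):
    """Squared angular error with 360-degree wrap-around, scaled to [0, 1]."""
    d = abs(pred_deg - gt_deg) % 360.0
    d = min(d, 360.0 - d)
    return (d / 180.0) ** 2

def total_loss(logits, label, feature, center, pred_deg, gt_deg,
               lam1=0.5, lam2=2.0):  # lam1, lam2: illustrative weights only
    return (softmax_loss(logits, label)
            + lam1 * center_loss(feature, center)
            + lam2 * orientation_loss(pred_deg, gt_deg))
```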
Parameter settings. As mentioned in Section 2.2.1, fixed-size fingerprint patches are the input to FineNet. To ensure the network can handle distortion in the input image, we apply scaled augmentation, random cropping, and brightness adjustment. Horizontal and vertical flips with pixel-mean subtraction are also adopted. All variables are randomly initialized from a Gaussian distribution. The learning rate follows a schedule: starting from its initial value, it is reduced after a fixed number of iterations, and we cap the maximum number of training epochs. Fixed values are used for the batch size, momentum, and weight decay.
3 Experimental results
We evaluate our method on two datasets with different characteristics, under different settings of the distance and orientation thresholds (see Eq. (2)). We also visualize examples of score maps with correct and incorrect minutiae extractions in Figure 7. All experiments were implemented in TensorFlow and run on an Nvidia GTX GeForce GPU.
We use the FVC 2002 dataset, with data augmentation, as the set of plain fingerprint images for training. To compensate for the lack of a large-scale training dataset, we distort the input images in the x and y coordinates, in the spirit of hard training with non-ideal input fingerprints. Furthermore, we apply additive random noise to the input images; this yields an augmented dataset for training CoarseNet. To obtain data for training FineNet, we extract patches from these training images whose centers are ground truth minutia points. For non-minutiae patches, we randomly extract patches whose centers contain no minutiae. In this way, we collect a balanced set of minutia and non-minutia patches for training FineNet.
To demonstrate the robustness of our framework, we compare our results with published approaches on the FVC 2004 (with manually labeled ground truth minutiae) and NIST SD27 datasets under different distance and orientation thresholds. Let (x_p, y_p, θ_p) and (x_g, y_g, θ_g) be the location coordinates and orientations of a predicted and a ground truth minutia, respectively. The predicted minutia is counted as true if it satisfies the following constraints: √((x_p − x_g)² + (y_p − y_g)²) ≤ D and min(|θ_p − θ_g|, 360° − |θ_p − θ_g|) ≤ Θ, (2)
where D and Θ are the thresholds in pixels and degrees, respectively. Specifically, we vary the distance threshold between detected and ground truth minutiae over a range of pixel values, and the orientation threshold over a range of degrees, around the default threshold setting. We choose these settings to demonstrate that the proposed approach remains robust and precise while published methods degrade rather quickly.
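Concretely, a predicted minutia is counted as correct when both tests pass; the default thresholds below (in pixels and degrees) are illustrative stand-ins, not the exact evaluation values:

```python
import math

def is_true_minutia(pred, gt, dist_thresh=12.0, angle_thresh=20.0):
    """pred and gt are (x, y, theta_degrees) tuples. Returns True when the
    Euclidean distance and the wrapped angular difference are both within
    their respective thresholds."""
    dist = math.hypot(pred[0] - gt[0], pred[1] - gt[1])
    d = abs(pred[2] - gt[2]) % 360.0
    d = min(d, 360.0 - d)                 # wrap-around angular difference
    return dist <= dist_thresh and d <= angle_thresh
```

Precision and recall then follow from greedily matching each predicted minutia to at most one unmatched ground truth minutia under this test.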
Table 2 shows the precision and recall of different minutiae extraction approaches. MINDTCT is part of the open source NIST Biometric Image Software. VeriFinger is a commercial SDK for minutiae extraction and matching. Since Gao et al. did not release their code publicly, we report their results on NIST SD27 and FVC 2004 from their paper. Darlow et al. use only a subset of the FVC dataset for training and the rest for testing, so we do not include their method in our evaluation.
Table 2 shows that the proposed method outperforms state-of-the-art techniques under all threshold settings, for both FVC 2004 and NIST SD27. Our results also show that, even though only rolled/plain fingerprint images are used for training, our framework performs well at detecting minutiae in latents.
Table 3 compares using and not using our proposed non-maximum suppression method on the NIST SD27 dataset, with the default threshold setting in Table 2. As a post-processing step, non-maximum suppression improves precision, recall, and F1 values.
To make a complete comparison (at all the operating points) with published methods, we present the precision-recall curves in Figure 6. The proposed approach surpasses all published works on both FVC 2004 and NIST SD27 datasets.
Figure 7 shows minutiae extraction results on both the FVC 2004 and NIST SD27 datasets for images of varying quality. Our framework works well in difficult situations such as noisy backgrounds or dry fingerprints. However, in some cases the proposed framework either misses true minutiae or extracts spurious ones. For the FVC 2004 dataset and rolled fingerprints from NIST SD27, we obtain results that are close to the ground truth minutiae. However, some minutiae are wrongly detected (image a) because of discontinuous ridges, or missed (image c) because they lie near the fingerprint edge. For latent fingerprints from NIST SD27, besides the correctly extracted minutiae, the proposed method remains sensitive to severe background noise (image e) and poor latent fingerprint quality (image g). The run time per image is around 1.5 seconds for NIST SD27 and 1.2 seconds for FVC 2004 on an Nvidia GTX GeForce GPU.
4 Conclusions
We have presented two network architectures for automatic and robust fingerprint minutiae extraction that fuse fingerprint domain knowledge and deep network representations:
- CoarseNet: an automatic robust minutiae extractor that provides candidate minutiae location and orientation without a hard threshold or fine tuning.
- FineNet: a strong patch based classifier that verifies the reliability of candidate minutiae from CoarseNet to obtain the final results.
A non-maximum suppression method is proposed as a post-processing step to boost the performance of the whole framework. We also show the benefit of residual learning for minutiae extraction on the latent fingerprint dataset, despite using only plain fingerprint images for training. Our experimental results show that the proposed framework is robust and achieves superior precision, recall, and F1 values over the published state-of-the-art on both benchmark datasets, FVC 2004 and NIST SD27.
The proposed framework can be further improved by (i) using larger training set for network training that includes latent images, (ii) constructing context descriptor to exploit the region surrounding minutiae, (iii) improving processing time, and (iv) unifying minutiae extractor into an end-to-end fingerprint matching framework.
-  K. Cao and A. K. Jain. Automated latent fingerprint recognition. arXiv preprint arXiv:1704.01925, 2017.
-  L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Trans. PAMI, 2017.
-  L. Darlow and B. Rosman. Fingerprint minutiae extraction using deep learning. In Proc. IEEE IJCB, 2017.
-  J. Feng. Combining minutiae descriptors for fingerprint matching. Pattern Recognition, 41(1):342–352, 2008.
-  X. Gao, X. Chen, J. Cao, Z. Deng, C. Liu, and J. Feng. A novel method of fingerprint minutiae extraction based on Gabor phase. In Proc. 17th IEEE ICIP, pages 3077–3080, 2010.
-  M. D. Garris and R. M. McCabe. NIST special database 27: Fingerprint minutiae from latent and matching tenprint images. NIST Technical Report NISTIR, 6534, 2000.
-  R. Girshick, F. Iandola, T. Darrell, and J. Malik. Deformable part models are convolutional neural networks. In Proc. IEEE CVPR, pages 437–446, 2015.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proc. IEEE CVPR, pages 770–778, 2016.
-  S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proc. ICML, pages 448–456, 2015.
-  A. Jain, L. Hong, and R. Bolle. On-line fingerprint verification. IEEE Trans. PAMI, 19(4):302–314, 1997.
-  A. K. Jain, Y. Chen, and M. Demirkus. Pores and ridges: High-resolution fingerprint matching using level 3 features. IEEE Trans. PAMI, 29(1):15–27, 2007.
-  A. K. Jain, K. Nandakumar, and A. Ross. 50 years of biometric research: Accomplishments, challenges, and opportunities. Pattern Recognition Letters, 79:80–105, 2016.
-  L. Jiang, T. Zhao, C. Bai, A. Yong, and M. Wu. A direct fingerprint minutiae extraction approach based on convolutional neural networks. In Proc. IEEE IJCNN, pages 571–578, 2016.
-  M. Kayaoglu, B. Topcu, and U. Uludag. Standard fingerprint databases: Manual minutiae labeling and matcher performance analyses. arXiv preprint arXiv:1305.1443, 2013.
-  D. Maio, D. Maltoni, R. Cappelli, J. Wayman, and A. Jain. FVC2004: Third fingerprint verification competition. In Biometric Authentication, pages 31–35. Springer, 2004.
-  D. Maio, D. Maltoni, R. Cappelli, J. L. Wayman, and A. K. Jain. FVC2002: Second fingerprint verification competition. In Proc. 16th ICPR, volume 3, pages 811–814, 2002.
-  S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. IEEE Trans. PAMI, 39(6):1137–1149, 2017.
-  A. Sankaran, P. Pandey, M. Vatsa, and R. Singh. On latent fingerprint minutiae extraction using stacked denoising sparse autoencoders. In Proc. IEEE IJCB, pages 1–7, 2014.
-  K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556, 2014.
-  C. Szegedy, S. Ioffe, V. Vanhoucke, and A. Alemi. Inception-v4, inception-resnet and the impact of residual connections on learning. In Proc. AAAI, pages 4278–4284, 2017.
-  Y. Tang, F. Gao, and J. Feng. Latent fingerprint minutia extraction using fully convolutional network. In Proc. IEEE IJCB, 2017.
-  Y. Tang, F. Gao, J. Feng, and Y. Liu. Fingernet: An unified deep network for fingerprint minutiae extraction. In Proc. IEEE IJCB, 2017.
-  VeriFinger SDK. Neurotechnology, 2010.
-  C. I. Watson, M. D. Garris, E. Tabassi, C. L. Wilson, R. M. McCabe, S. Janet, and K. Ko. User’s guide to NIST biometric image software (NBIS). NIST Interagency/Internal Report 7392, 2007.
-  Y. Wen, K. Zhang, Z. Li, and Y. Qiao. A discriminative feature learning approach for deep face recognition. In Proc. ECCV, pages 499–515. Springer, 2016.
-  X. Yang, J. Feng, and J. Zhou. Localized dictionaries based orientation field estimation for latent fingerprints. IEEE Trans. PAMI, 36(5):955–969, 2014.
-  S. Yoon, J. Feng, and A. K. Jain. Latent fingerprint enhancement via robust orientation field estimation. In Proc. IEEE IJCB, pages 1–8, 2011.
-  F. Zhao and X. Tang. Preprocessing and postprocessing for skeleton-based fingerprint minutiae extraction. Pattern Recognition, 40(4):1270–1281, 2007.