Latent fingerprints, also known as fingermarks, are friction ridge impressions formed when someone touches a surface, particularly at a crime scene. Latents have been successfully used to identify suspects in criminal investigations for over 100 years by comparing the similarity between latent and rolled fingerprints in a reference database. Latent cropping (segmentation) is the crucial first step in latent recognition. For a given set of latent enhancement, minutiae extraction, and matching modules, different cropping masks for friction ridges can lead to dramatically different recognition accuracies. Unlike rolled/slap fingerprints, which are captured in a controlled setting, latent fingerprints are typically noisy, distorted, and have low ridge clarity. This creates challenges for an accurate automatic latent cropping algorithm.
We map the latent fingerprint cropping problem to a sequence of computer vision tasks as follows: (a) object detection, as friction ridge localization; (b) semantic segmentation, as separating all possible friction ridge patterns (foreground) from the background; and (c) instance segmentation, as separating individual friction ridge patterns in the input latent via semantic segmentation.
FCN-based segmentation consists of a series of consecutive receptive fields in its network and is built on translation invariance. Instead of computing general nonlinear functions, an FCN builds its nonlinear filters based on relative spatial information in a sequence of layers. Detection-based segmentation, on the other hand, first finds a core region and then branches out in parallel to construct pixel-wise segmentation from the regions of interest returned by the detection stage.
Our proposed method, called SegFinNet, inherits the idea of instance segmentation and combines the advantages of FCN and Mask RCNN to deal with the latent fingerprint cropping problem. SegFinNet uses Faster RCNN as its backbone, while its head comprises atrous transposed convolution layers. We utilize a combination of a non-warp region of interest technique, a fingerprint attention mechanism, fusion voting, and a feedback scheme to take advantage of both the deep information from neural networks and the shallow appearance cues from fingerprint domain knowledge (Figure 1). In our experiments, SegFinNet shows a significant improvement not only in latent cropping, but also in latent search (see Section 4.6 for details).
2 Related work
In the latent fingerprint recognition literature, it is common practice to use a patch-based approach in various modules (e.g., minutiae extraction and enhancement [19, 3]). In such cases, the input latent is divided into multiple overlapping patches at different locations. The latent segmentation module of latent recognition systems has also been approached in this patch-based manner, with both convolutional neural network (convnet) and non-convnet approaches. Table 1 concisely describes these methods as reported in the literature.
Non-convnet patch-based approaches: Choi constructed orientation and frequency maps to use as a reference in evaluating latent patches; essentially, this is a dictionary look-up aimed at classifying each patch into one of two classes. Zhang used an adaptive directional total variation model, which also relies on orientation estimation. Working in the spatial-frequency domain, Ruangsakul proposed a Fourier subband method with post-processing to fill gaps and eliminate islands. Cao classified patches based on a learned dictionary, an approach that depends on dictionary quality and needs post-processing to smooth the masks. Liu utilized texture information to compute linear density on a set of line segments, but this also requires a post-processing step.
The features used in all the above methods are “hand-crafted” and rely on post-processing techniques. With the success of deep neural networks in many domains, latent fingerprint cropping has also been tackled using them.
Convnet patch-based approaches: Zhu used a classification neural network framework to classify patches. This approach is similar to existing non-convnet methods, except that it simply replaces hand-crafted features with convnet features. Ezeobiejesi used a stack of restricted Boltzmann machines in a similar spirit.
There are a number of disadvantages to patch-based approaches. (i) Patch-based methods take significant time to compute, since every patch must be fed through the framework (Figure 3); a latent fingerprint in the NIST SD27 dataset yields a large number of overlapping patches, so patch-based approaches must process many subimages instead of a single image. (ii) Patch-based approaches cannot separate multiple instances of friction ridge patterns, i.e., more than one latent (overlapping or non-overlapping) in the input image.
Our work combines fully convolutional network and detection-based approaches for latent fingerprint segmentation to process the entire input latent image in one shot. Furthermore, it utilizes a top-down approach (detection before segmentation), which can also be applied to segmenting overlapping latent fingerprints. The main contributions of this paper are as follows:
A fully automatic latent segmentation framework, called SegFinNet, which processes the entire input image in one shot and outputs multiple instances of fingermark locations.
NonWarp-RoIAlign, proposed to obtain more precise segmentation while mapping the region of interest (cropped region) in the feature map back to the original image.
A visual attention technique designed to focus only on fingermark regions in the input image, addressing the problem of "where to look".
A feedback scheme with weighted loss, utilized to emphasize the differing importance of the objective functions (foreground-background, bounding box, etc.).
A majority voting fusion mask, proposed to increase the stability of the cropped mask when dealing with latents of varying quality.
Experiments demonstrating that the proposed framework outperforms both human latent cropping and published automatic cropping approaches. Furthermore, the proposed segmentation framework, when integrated with a latent AFIS, boosts the search accuracy on three different latent databases: NIST SD27, WVU, and MSP DB (an operational forensic database).
Based on the idea of detection-based segmentation of Mask RCNN , we build our framework upon the Faster RCNN architecture , where the head is a series of atrous transposed convolutions for pixel-wise prediction.
Unlike previous patch-based approaches, which used either hand-crafted features [18, 3, 13] or a convnet [20, 6], we feed the whole input latent image to Faster RCNN once and process the candidate foreground regions returned by SegFinNet. This reduces training time and avoids post-processing heuristics to combine results from different patches. Figure 2 and Table 1 illustrate the SegFinNet architecture in detail.
The RoIAlign module in Mask RCNN can handle the misalignment problem (caused by mapping each point in the feature map to the nearest value in its neighboring coordinate grid; we refer readers to the original paper for details) while quantizing the region of interest (RoI) coordinates in feature maps, by using bilinear interpolation at fixed point values. However, it warps the RoI feature maps into a square size before feeding them to the upsampling step. This leads to further misalignment and information loss when reshaping the RoI feature map back to its original size in image coordinates.
The idea of NonWarp-RoIAlign is simple but effective. Instead of warping RoI feature maps to a square size and then applying multiple deconvolution (upsampling) layers, we only pad them with zero-valued pixels to reach a specific size. This avoids the loss of pixel-wise information caused by warping regions. We use atrous convolution when upsampling for faster processing and lower memory consumption (see Figure 2). The advantage of this upsampling method is that we can handle the multi-scale problem via atrous spatial pyramid pooling, and the weights of the atrous convolution can be obtained from the transposed corresponding forward layer.
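A minimal sketch of the pad-instead-of-warp idea in NumPy. The integer RoI coordinates, target size, and array layout are illustrative assumptions; the actual module operates on convnet feature maps inside the network:

```python
import numpy as np

def nonwarp_roialign(feature_map, roi, out_h, out_w):
    """Crop an RoI from a feature map and zero-pad it to a fixed size,
    instead of warping (resizing) it to a square as in standard RoIAlign.

    feature_map: (H, W, C) array; roi: (y0, x0, y1, x1) in feature-map
    coordinates; out_h/out_w are assumed to be >= the RoI size.
    """
    y0, x0, y1, x1 = roi
    crop = feature_map[y0:y1, x0:x1, :]          # no interpolation or warping
    h, w, c = crop.shape
    padded = np.zeros((out_h, out_w, c), dtype=feature_map.dtype)
    padded[:h, :w, :] = crop                     # zero-pad to the target size
    return padded
```

Because no interpolation occurs, every retained pixel keeps its exact feature value, which is the property the text argues is lost when RoIs are warped to a square.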
3.2 Where to look? Visual attention mechanism
Latent examiners tend to focus on the fingermark itself, often marking its region of interest (see Figure 4). Thus, by directing attention to a specific fingermark, we can eliminate unnecessary computation on low-interest regions.
We reuse the feature maps returned by Faster RCNN to locate the region of interest. Next, we train SegFinNet to learn two classes: (i) the attention region (the fingermark region identified by a black marker by the examiner; Figure 2) and (ii) the fingermark itself. In the inference phase, each returned fingermark location is compared to the attention region to decide which detections to keep, using the following criterion: if the overlapping area between the fingermark bounding box and the attention region is over 70%, the bounding box is kept.
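The 70% overlap criterion can be sketched as follows; the (x0, y0, x1, y1) box representation and function names are illustrative assumptions:

```python
def box_area(box):
    """Area of an axis-aligned box (x0, y0, x1, y1); zero if degenerate."""
    x0, y0, x1, y1 = box
    return max(0, x1 - x0) * max(0, y1 - y0)

def keep_fingermark(fingermark_box, attention_box, threshold=0.7):
    """Keep a detected fingermark box only if the fraction of its area
    overlapping the attention region exceeds the threshold (70% here)."""
    x0 = max(fingermark_box[0], attention_box[0])
    y0 = max(fingermark_box[1], attention_box[1])
    x1 = min(fingermark_box[2], attention_box[2])
    y1 = min(fingermark_box[3], attention_box[3])
    inter = box_area((x0, y0, x1, y1))
    return inter / box_area(fingermark_box) > threshold
```

Note that the overlap is normalized by the fingermark box area, not by the union as in IoU, since the criterion asks how much of the detection lies inside the attention region.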
Our attention mechanism is intuitive, and it helps during matching (see Section 4.6 for more details) because it eliminates background friction ridges which generate spurious minutiae.
3.3 Feedback scheme
One issue in using a detection-based segmentation approach is that it segments objects based on candidates (RoIs returned by the detector). Thus, a large proportion of pixels in these bounding boxes belong to the foreground (fingermarks) rather than the background, so a loss function that can handle this class imbalance is necessary. Let {(x_i, y_i)} be a set of training samples, where x_i is the input image and y_i is its corresponding ground truth mask. SegFinNet outputs a set of concatenated masks for each input x_i. We create weights for each loss value to address this problem.
Unlike most popular computer vision datasets, which have a significant number of pixel-wise annotated masks, there is no dataset in the fingerprint domain that provides pixel-wise segmentation. Hence, different published studies have used different annotations (see Figure 5). Furthermore, since the border of a fingermark is usually not well defined, these manual masks contain inaccuracies that propagate into training error. To alleviate this concern, we propose a semi-supervised partial loss that updates the loss for pixels in the same class while discarding other classes, except the background.
Combining the two solutions, let L_mask be the segmentation (mask) loss, which takes into account the proportion of all classes in the dataset:

L_mask = Σ_c w_c · L(ŷ^c, y^c) + λ · L_bg,

where w_c is the soft-max weight based on the number of pixels of label c, y^c is the corresponding mask label of the sample in class c, λ is a regularization term, L_bg is the cross-entropy loss w.r.t. the background, and L(·) is the per-pixel sigmoid average binary cross-entropy loss as defined in the Mask RCNN paper.
In the training phase, we consider the loss function as a weighted sum of the class, bounding box, and mask losses. Let L, L_cls, L_box, and L_mask be the total, class, bounding box, and pixel-wise mask losses, respectively. The total loss for training is calculated as follows:

L = α · L_cls + β · L_box + γ · L_mask.
3.4 Voting fusion masks
The effect of grayscale normalization. In the computer vision domain, input images usually have intensity values in a specific range, so objects can readily be recognized, detected, or segmented. However, the fingerprint domain differs from most traditional computer vision problems: because of the noisy background and the low contrast of the fingerprint ridge structure, fingermarks can easily be missed. Motivated by the various procedures forensic experts use to "preprocess" images when examining latent fingerprints, we tried different preprocessing techniques on the original latent, such as centered grayscale, histogram equalization, and image inversion.
Even though the original latent fingerprint is noisy, removing noise via an enhancement algorithm prior to segmentation [3, 19] is not advisable because texture information may be lost. To make the segmentation result reliable and invariant to the contrast of the input image, we propose a simple but effective voting fusion mask technique. Given an input latent, we first preprocess it to generate different grayscale variants, which are then fed into SegFinNet to obtain the corresponding score maps. The final score map is accumulated over the different grayscale inputs, so each pixel in the image has its own accumulated score. We then threshold this score, which means that each pixel in the chosen region must receive a minimum number of votes from the voting masks.
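The voting procedure can be sketched as follows. Here `model` stands in for SegFinNet (any callable returning a per-pixel foreground mask), and the particular variant set and vote threshold are illustrative assumptions:

```python
import numpy as np

def voting_fusion(latent, model, min_votes=2):
    """Run the segmentation model on several grayscale variants of a latent
    (uint8 image) and keep only pixels predicted as foreground in at least
    `min_votes` variants."""
    centered = np.clip(latent.astype(int) - int(latent.mean()) + 128,
                       0, 255).astype(np.uint8)
    variants = [
        latent,          # original grayscale
        255 - latent,    # inverse image
        centered,        # mean-centered grayscale
    ]
    votes = np.zeros(latent.shape, dtype=int)
    for v in variants:
        votes += model(v).astype(int)   # accumulate per-pixel votes
    return votes >= min_votes
```

Histogram equalization could be added as a fourth variant (e.g., via OpenCV) at the cost of one more forward pass per image.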
Although this approach seems to increase the computational requirements, it boosts the reliability of the resulting mask while keeping the running time of the whole system efficient (see Section 4.5 for quantitative running times).
4.1 Implementation Details
We set the anchor sizes in Faster RCNN over a range of scales, and fix the batch size, detection threshold, and pixel-wise mask threshold. The learning rate for SegFinNet is set to 0.001 for the initial iterations and 0.0001 for the remaining iterations. Mini-batch size is 1 and weight decay is 0.0001. The hyper-parameters in Equation 1 are set empirically.
We used three different latent fingerprint databases: MSP DB (an operational forensic database), NIST SD27, and the West Virginia University latent database (WVU). The MSP DB includes 2K latent images and over 100K reference rolled fingerprints. NIST SD27 contains latent images with their true mates, while WVU contains latent images with their mated rolled fingerprints and additional non-mated rolled images.
Training: we used the MSP DB to train SegFinNet, manually generating a ground truth binary mask for each latent. Figure 6 shows some example latents in the MSP DB with their corresponding ground truth masks. We used a subset of latent images from the MSP DB for training and a disjoint set of latents from the MSP DB for testing. With common augmentation techniques (e.g., random rotation, translation, scaling, and cropping), the training set was further enlarged.
Testing: we conducted experiments on NIST SD27, WVU, and 1000 sequestered test images from the MSP database. To make the latent search more realistic, we constructed a gallery of rolled images, including the true mates of NIST SD27, the rolled fingerprints in WVU, images from NIST14, and the rest from rolled prints in the MSP database. The ground truth masks were obtained from prior work.
4.3 Cropping Evaluation Criteria
Published papers based on a patch-based approach with a classification scheme report cropping performance in terms of the MDR (Missed Detection Rate) and FDR (False Detection Rate) metrics; the lower these values, the better the framework. Let A and B be the sets of pixels in the predicted mask and the ground truth mask, respectively. MDR and FDR are then defined as:

MDR = (|B| - |A ∩ B|) / |B|,  FDR = (|A| - |A ∩ B|) / |A|.
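Under these definitions, both metrics can be computed directly from pixel sets, as in this sketch (pixel coordinates as (row, col) tuples are an assumed representation):

```python
def mdr_fdr(pred_pixels, gt_pixels):
    """MDR: fraction of ground-truth pixels missed by the predicted mask.
    FDR: fraction of predicted pixels falling outside the ground truth.
    Both inputs are iterables of (row, col) pixel coordinates."""
    A, B = set(pred_pixels), set(gt_pixels)
    inter = len(A & B)
    mdr = (len(B) - inter) / len(B)
    fdr = (len(A) - inter) / len(A)
    return mdr, fdr
```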
We reproduced these results based on masks and ground truth provided by the authors.
These metrics are reported on patches.
With the proposed non-patch-based, top-down approach (detection followed by segmentation), it is necessary to use the IoU (Intersection over Union) metric, which is more appropriate for multi-class segmentation [14, 8, 4]. In addition, we report our results in terms of the MDR and FDR metrics for comparison. In contrast to MDR and FDR, a superior framework leads to a higher value of IoU. The IoU metric is defined as:

IoU = |A ∩ B| / |A ∪ B|.
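The IoU computation on pixel sets can be sketched as (same assumed (row, col) pixel representation):

```python
def iou(pred_pixels, gt_pixels):
    """Intersection over Union between predicted and ground-truth pixel sets;
    1.0 for a perfect match, 0.0 for disjoint masks."""
    A, B = set(pred_pixels), set(gt_pixels)
    return len(A & B) / len(A | B)
```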
We note that the published methods [18, 20, 5] used their own individual ground truth information, so comparing them based on MDR and FDR is not fair, given that these two metrics critically depend on the ground truth. Figure 5 demonstrates the variations in the ground truths used by existing works (we contacted the authors to obtain their ground truths). It is important to emphasize that a favorable metric value does not mean that the associated cropping will lead to better latent recognition accuracy; it simply reflects the overlap between the predicted mask and its manually annotated ground truth.
4.4 Cropping Accuracy
Table 2 shows a quantitative comparison between SegFinNet and existing works using the MDR and FDR metrics on the NIST SD27 and WVU databases. The IoU metric was computed based on the masks and ground truth provided by the authors. Because Ezeobiejesi evaluated MDR and FDR on patches, it is not fair to include that method in the IoU comparison. Table 2 also shows that SegFinNet provides the lowest error rate in terms of MDR and FDR, owing to our non-patch-based approach. The table further reveals that a low value of MDR or FDR alone does not necessarily lead to a high IoU value.
4.5 Running time
Experiments were run on an Nvidia GTX 1080 Ti GPU with 12GB of memory. Table 3 compares the computation times of different configurations on NIST SD27 and WVU. We note that the voting fusion technique takes longer per image because it runs on multiple inputs; however, its accuracy is better than using a single image with the attention technique alone.
Figure 7 visualizes SegFinNet results compared to existing works on NIST SD27. These masks were obtained by contacting the authors.
4.6 Latent Matching
The final goal of segmentation is to increase latent matching accuracy. We used two different matchers for latent-to-rolled matching: Verifinger SDK 6.3 and a state-of-the-art latent COTS AFIS. To make a fair comparison to existing works [19, 18, 13], we report the matching performance of Verifinger against a 27K background from NIST 14. In addition, we report the performance of the COTS matcher against a 100K background. To explain the matching experiments, we first define some terminology.
(a) Baseline: Original gray scale latent image.
(b) Manual GT: Ground truth masks from Jain .
(c) SegFinNet with AM: Masked latent images using the visual attention mechanism only.
(d) SegFinNet with VF: Masked latent images using the majority voting mask technique only.
(e) SegFinNet full: Masked latents using all modules.
(f) Score fusion: Sum-rule fusion of the scores from SegFinNet full, SegFinNet with AM, SegFinNet with VF, and the original input latent images.
Table 4 reports the matching results using Verifinger. Since there are many versions of the Verifinger SDK, we use the masks provided by the authors of [5, 19, 3, 18] to make a fair comparison. However, the authors did not provide masks for the WVU database. Note that, contrary to popular belief, the manual ground truth does not always give better results than the original images.
Figure 8 shows the matching results using the state-of-the-art COTS matcher. We did not use any enhancement technique, such as that of Cao , in this comparison. The combination of the attention mechanism and the voting technique yields the best performance among our proposed variants. Moreover, the score fusion technique achieving the highest results indicates that our method is complementary to matching with the full image.
We have proposed a framework for latent segmentation, called SegFinNet. It combines a fully convolutional neural network and a detection-based approach to process the full input image instead of dividing it into patches. Experimental results on three different latent fingerprint databases (NIST SD27, WVU, and the MSP database) show that SegFinNet outperforms both human ground truth cropping for latents and published segmentation algorithms. This improved cropping, in turn, boosts the hit rate of a state-of-the-art COTS latent fingerprint matcher. Our framework can be further developed along the following lines: (a) integrating it into an end-to-end matching model by using the shared parameters learned in the Faster RCNN backbone as a feature map for minutiae/non-minutiae extraction; (b) combining orientation information to obtain instance segmentation for overlapping latent fingerprints.
This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA R&D Contract No. 2018-18012900001. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
-  Integrated Pattern Recognition and Biometrics Lab, West Virginia University. http://www.csee.wvu.edu/~ross/i-probe/.
-  NIST Special Database 14. http://www.nist.gov/srd/nistsd14.cfm.
-  K. Cao, E. Liu, and A. K. Jain. Segmentation and enhancement of latent fingerprints: A coarse to fine ridgestructure dictionary. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(9):1847–1859, 2014.
-  L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
-  H. Choi, M. Boaventura, I. A. Boaventura, and A. K. Jain. Automatic segmentation of latent fingerprints. In IEEE International Conference on Biometrics: Theory, Applications and Systems, pages 303–310, 2012.
-  J. Ezeobiejesi and B. Bhanu. Latent fingerprint image segmentation using deep neural network. In Deep Learning for Biometrics, pages 83–107. Springer, 2017.
-  M. D. Garris and R. M. McCabe. NIST special database 27: Fingerprint minutiae from latent and matching tenprint images. NIST Technical Report NISTIR, 6534, 2000.
-  K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask R-CNN. In IEEE International Conference on Computer Vision, pages 2980–2988, 2017.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
-  M. Jaderberg, K. Simonyan, A. Zisserman, et al. Spatial transformer networks. In Advances in Neural Information Processing Systems, pages 2017–2025, 2015.
-  A. K. Jain and J. Feng. Latent fingerprint matching. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(1):88–100, 2011.
-  A. K. Jain, K. Nandakumar, and A. Ross. 50 years of biometric research: Accomplishments, challenges, and opportunities. Pattern Recognition Letters, 79:80–105, 2016.
-  S. Liu, M. Liu, and Z. Yang. Latent fingerprint segmentation based on linear density. In IEEE International Conference on Biometrics, pages 1–6, 2016.
-  J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition, pages 3431–3440, 2015.
-  Neurotechnology Inc. Verifinger. http://www.neurotechnology.com/verifinger.html.
-  D.-L. Nguyen, K. Cao, and A. K. Jain. Robust minutiae extractor: Integrating deep networks and fingerprint domain knowledge. In IEEE International Conference on Biometrics, 2018.
-  S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(6):1137–1149, 2017.
-  P. Ruangsakul, V. Areekul, K. Phromsuthirak, and A. Rungchokanun. Latent fingerprints segmentation based on rearranged fourier subbands. In IEEE International Conference on Biometrics, pages 371–378, 2015.
-  J. Zhang, R. Lai, and C.-C. J. Kuo. Adaptive directional total-variation model for latent fingerprint segmentation. IEEE Transactions on Information Forensics and Security, 8(8):1261–1273, 2013.
-  Y. Zhu, X. Yin, X. Jia, and J. Hu. Latent fingerprint segmentation based on convolutional neural networks. In IEEE Workshop on Information Forensics and Security, pages 1–6, 2017.