Palmprint image registration using convolutional neural networks and Hough transform

04/01/2019
by Mohsen Ahmadi, et al.

Minutia-based palmprint recognition systems have received considerable interest over the last two decades. Due to the large number of minutiae in a palmprint, approximately 1000, the matching process is time consuming, which makes it impractical for real-time applications. One way to address this issue is to align all palmprint images to a reference image, bringing them into the same coordinate system and thereby reducing the computations required during minutia matching. In this paper, using a convolutional neural network (CNN) and the generalized Hough transform (GHT), we propose a new method to register palmprint images accurately. This method finds the rotation and displacement (in both the x and y directions) between the palmprint and a reference image. Exact palmprint registration can enhance both the speed and the accuracy of the matching process. The proposed method is also capable of automatically distinguishing between left and right palmprints, which helps to speed up matching. Furthermore, the designed CNN structure in the registration stage yields a segmentation of the palmprint from the background, which is a pre-processing step for minutia extraction. The proposed registration method, followed by the minutia cylinder-code (MCC) matching algorithm, has been evaluated on the THUPALMLAB database, and the results show the superiority of our algorithm over most state-of-the-art algorithms.


1 Introduction

Biometrics is used to recognize or verify human identity based on physical or behavioral characteristics. Biometric features such as the face, iris, fingerprint, hand geometry, palmprint, and signature have been used for human identification and recognition [1]. Among all these features, palmprint recognition has recently gained considerable attention as a reliable personal identification technique. The palmprint inherits many fingerprint features: both are represented by the information present in a friction ridge impression, including ridge flow, ridge characteristics, and ridge structure. Due to their uniqueness and permanence, palmprint and fingerprint identification have been generally trusted [2], [3].

In palmprint recognition systems, mainly two different types of images are used: 1) low-resolution and 2) high-resolution images [3], [4]. Features in low-resolution palmprints include principal lines, wrinkles, and ridges. These features define the palmprint as a texture containing discriminatory features which are relatively stable and applicable in biometric identification or verification systems [5], [6], [7]. In low-resolution palmprint images, ridges are not visible, and it is not feasible to extract the second-level information, which is more discriminant [8]. Features obtained from low-resolution images are more suitable for civil and commercial applications such as access control [3]. On the other hand, the features obtained from high-resolution images are minutiae, the orientation image, singular points, and the ridge frequency image. High-resolution images are suitable for forensic applications such as criminal identification [9]. Because these features are robust against the passage of time, they are considered admissible in a court of law [10]. In this paper, our focus is on high-resolution palmprint images and minutia-based palmprint matching. Our registration method is based on first-level information (mainly the orientation image of the palmprint), and it is applicable as long as the palmprint images are high resolution.

In the literature, much effort has been devoted to minutia-based palmprint matching in order to reduce the matching time and to address other challenges. In [11], the similarity between two palmprints is calculated by means of a weighted sum of minutiae and orientation field matching scores. Li et al. describe a palmprint matching algorithm where minutiae are compared by means of local and global matching [12]. A multi-feature-based palmprint recognition system is another strategy, suggested in [8], where minutiae, the orientation image, the density map, and major creases are extracted to achieve higher matching accuracy. In [13], a new coarse-to-fine palmprint algorithm is proposed to deal with the large number of minutiae in palmprints: a clustering algorithm separates minutiae into different groups to make the matching process much faster. Cappelli and his colleagues, in [14], proposed a matching algorithm that is robust against skin distortion. They apply a two-stage minutia-based matching: 1) local minutia matching using the Minutia Cylinder-Code (MCC) algorithm [15] and 2) a new global score computation.

The use of a spectral minutiae representation together with segmenting the palmprint into three different regions is another approach aimed at a faster matching algorithm [16]. In [17], the ridge distance and conventional minutia descriptors are used together to speed up the matching: the difference of the ridge distance near the minutiae is calculated, and only pairs with a similar ridge distance are considered. Hence, the number of minutia comparisons is greatly reduced, which results in lower time complexity. In [18], a novel region-quality-based minutia extraction algorithm is proposed to improve the accuracy of palmprint recognition; the authors also propose an efficient minutia-based encoding and matching algorithm to accelerate palmprint matching.

Despite the available algorithms, some problems of palmprint matching still need to be solved for large-scale applications [19]: skin distortion, the diversity of different palm regions, and computational complexity. Registering palmprint images into the same coordinate system has been used to address skin distortion and the large number of minutia comparisons during matching [19], [20]. With registration, it is not necessary to compare all minutiae from the two palmprints; the matching algorithm needs to be applied only to the same parts of the two palmprints. This helps to mitigate the effects of skin distortion and improves the matching speed. However, the registration methods proposed in [19], [20] are not accurate enough, and for the algorithm in [19] there is no criterion to show whether the registration was successful.

To increase the matching speed and accuracy, motivated by [19] and our previous work [20], we developed a new palmprint registration method that combines a CNN and the GHT to register all palmprint images with a reference image. Using the CNN, we first roughly find the rotation difference between the current palmprint and the reference palmprint. Then, we find the exact rotation difference and translation between these two images by GHT. Two new criteria are used to measure the confidence of the registration. Using these two criteria, our method is also capable of distinguishing a left palm from a right palm, which can double the matching speed if we do not know whether the current palm comes from the left or the right hand. Furthermore, the designed CNN structure in the registration stage yields a segmentation of the palmprint from the background, which is a pre-processing step for minutia extraction.

Our minutiae matching stage is based on the MCC method with some modification [15], which includes a local matching stage based on MCC, followed by a relaxation procedure to compute a global matching score.

With the proposed method, the matching time is much better than the state-of-the-art. Moreover, because of the accurate registration, we could mitigate the distortion effect, and as a result the matching accuracy improved. The experimental results on the THUPALMLAB database [21] indicate that the proposed system achieves a low false non-match rate (FNMR) while the false match rate (FMR) is kept under control. As for the speed of our system, the average time of a full palmprint match is a few milliseconds. The rest of this paper is organized as follows: Section 2 outlines the details of the proposed palmprint registration process. In Section 3, the matching strategy is explained. In Section 4, the experimental results are presented and analyzed. Finally, conclusions are drawn in Section 5.

2 Proposed palmprint Registration

Most raw palmprint images are not in the same coordinate system. By applying registration and bringing the images into the same coordinate system, the matching process is facilitated: we only need to compare minutiae in the query and gallery images that have almost the same location and direction, which accelerates the matching process.

Most palmprint registration methods are based on the intervals between fingers [22], [23] or on the hand contour and principal lines [24]. These features are suitable for low-resolution images captured by contact-less devices, where the whole palm region and the finger roots are visible. However, in high-resolution images the fingers are not visible, or the hand contour is incomplete and unreliable. The orientation fields of different palms are quite similar, but the orientation of different palm regions is distinctive. This makes the orientation field a reliable feature for palmprint registration.

In [19], [20], the average orientation field of palmprints has been used as the reference image, and other palmprints are aligned with this image. The parameters of a rigid registration (rotation and displacement in the x and y directions) between the reference orientation field and the estimated orientation field of each palmprint are obtained. This rigid-body transformation is applied to the unregistered images and brings them into the same coordinate system. These methods fail when the rotation difference between the reference image and the floating image is large. The situation is much worse when the palmprint image is not of high quality, so that the estimated orientation field is not reliable. To address this issue, we first roughly find the rotation difference between the two images with a CNN. The CNN does not need an extracted orientation field, and it is robust against distortion and low quality. After that, the exact values of the registration parameters are obtained using the GHT. Figure 1 shows the steps of the proposed registration method.

Fig. 1: Steps of proposed palmprint registration method

2.1 Registration by CNN

To find the in-plane rotation of a palmprint image, a CNN is applied. The raw palmprint image is fed to the CNN, and the output gives the rotation of the image. We formulated this task as a classification problem, i.e., the input palmprint is classified into one of the existing classes, each corresponding to a discrete rotation angle. The architecture of the designed CNN is shown in figure 2.

Fig. 2: Architecture of designed CNN

It consists of a stack of convolutional layers, each including convolution and max-pooling steps, followed by a fully connected layer and a soft-max function to calculate the final probability of each class. The kernel size in the first layer differs from that of the other convolutional layers; the number of kernels used in each Conv layer, together with the strides, is illustrated in figure 2. We also applied dropout, Batch-Normalization [25], and extensive data augmentation to overcome overfitting. The Rectified Linear Unit (ReLU) activation function is widely used in DNNs; because of the dying-ReLU problem, we applied the exponential linear unit (ELU) [26] activation function instead of ReLU.
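The exact layer sizes are omitted above; as a hedged illustration of how the feature-map size evolves through such a conv/max-pool stack, the standard output-size formula can be applied layer by layer (all kernel, stride, and padding values below are hypothetical, not the paper's):

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a convolution or pooling layer."""
    return (size + 2 * pad - kernel) // stride + 1

def stack_sizes(size, layers):
    """Trace the feature-map size through (kernel, stride, pad) stages."""
    sizes = [size]
    for kernel, stride, pad in layers:
        size = conv_out(size, kernel, stride, pad)
        sizes.append(size)
    return sizes
```

For example, a 512-pixel input passed through three pairs of convolution (3x3, stride 1, pad 1) and max-pooling (2x2, stride 2) shrinks 512 → 256 → 128 → 64 (hypothetical sizes).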

We used palm images from two sources in the training phase: palms randomly selected from the THUPALMLAB database [21] and palms acquired by a live scanner. All of these palms were first rotated so that they were approximately aligned with the reference axis. To build the augmented training data, the images were then rotated by random values drawn from a discrete set of angles, and each new image was saved in the data set together with its rotation label. For each image we produced several differently rotated versions, so the training data contains many palmprint images per palm. To obtain a more generalized data set, other augmentation techniques such as shrinkage, adding noise, and flipping were also applied.

The CNN output is a vector whose entries give the probability of each class. The class with the highest probability indicates the rotation of the input palmprint image. To estimate the rotation for a given input image, we smooth the output vector with a short averaging window and take the maximum value of the smoothed vector. If this maximum exceeds a threshold, the output of the network is considered reliable. Images that satisfy this condition are rotated back by the estimated angle (correcting rotation, see figure 1) and passed to the second round of registration. Images with a smaller maximum value are passed directly to the feature extraction step, since registration refining would most likely fail for them. For these palm images we do not apply registration, and we treat them differently in the local matching step (section 3.1).
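A minimal sketch of this smoothing-and-threshold decision, assuming the CNN emits one probability per rotation class (the window length and threshold below are illustrative, not the paper's tuned values):

```python
def estimate_rotation(probs, win=3, threshold=0.3):
    """Smooth the CNN class probabilities and decide whether the
    estimated rotation class is reliable.

    Returns (class_index, confident); class_index maps to a discrete
    rotation angle.  The smoothing is circular because rotation
    classes wrap around 360 degrees.
    """
    n = len(probs)
    half = win // 2
    smoothed = [
        sum(probs[(i + k) % n] for k in range(-half, half + 1)) / win
        for i in range(n)
    ]
    best = max(range(n), key=lambda i: smoothed[i])
    return best, smoothed[best] >= threshold
```

An input whose smoothed peak falls below the threshold would be routed straight to feature extraction, exactly as described above.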

Another advantage of the designed CNN is its ability to segment the palm region from the background. We noticed that, in the trained model, the outputs of layer #3 form a near-binary image which carries high values in regions belonging to the palm and values near zero for pixels belonging to the background. In other words, during the training phase, the filter coefficients of this layer were tuned in a way that yields a segmented palmprint image. Figure 3 illustrates the mask image produced by combining the outputs of layer #3 for a sample palmprint. As can be seen, it can be used to segment the foreground from the background. Palmprint segmentation is a pre-processing step before minutia extraction, and we obtain this image without any extra computation.

Fig. 3: Mask image produced from outputs of layer #3 of designed CNN

2.2 Registration refining by GHT

In the previous section, by assigning each palmprint to one of the rotation classes, we roughly found its rotation difference with respect to the ground truth. To find the exact registration values (displacement in the x and y directions and the rotation), we apply the GHT. Since the coarse rotation difference has already been corrected, the remaining rotation cannot be large, which helps to find its exact value much faster. The CNN output probability is used to judge the level of confidence of the registration: if it is larger than a threshold, the estimated rotation is reliable. For images with small confidence values we cannot trust the estimated rotation, and as a result we skip registration for these images (see figure 1).

Similar to [20], we use the orientation field of each palmprint image to find the registration parameters. The average orientation field of the palmprints is used as the reference image, and the other palmprints are aligned with this image. The orientation field estimation algorithm proposed in [20] is used to compute the orientation in each block. Finally, the GHT algorithm [27] is applied to find the registration parameters.

All possible pairs of blocks between the input (unregistered) palmprint orientation field and the reference orientation field vote for the corresponding rotation and displacement (in the x and y directions). In contrast to [19], which uses the inverse of the circular standard deviation as the weight of each block in the voting system, we calculate the weight of each block based on its quality [20].
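A minimal sketch of this weighted voting, assuming block-wise orientation fields are available as lists of (x, y, orientation, quality-weight) tuples; the orientation tolerance, accumulator quantization, and candidate-angle grid are illustrative, not the paper's values:

```python
import math
from collections import defaultdict

def ght_register(ref_blocks, inp_blocks, angles, quant=16):
    """Toy generalized-Hough voting for the rigid transform between
    two block-wise orientation fields.  Orientations are in radians
    (mod pi); each consistent block pair casts a quality-weighted vote
    for a (rotation, dx, dy) accumulator bin."""
    acc = defaultdict(float)
    for rx, ry, rt, rw in ref_blocks:
        for ix, iy, it, iw in inp_blocks:
            for a in angles:                      # candidate rotations
                # orientations must agree (mod pi) after rotating by a
                d = (it + a - rt) % math.pi
                if min(d, math.pi - d) > 0.2:
                    continue
                # displacement implied by mapping this input block
                # onto this reference block under rotation a
                ca, sa = math.cos(a), math.sin(a)
                dx = rx - (ca * ix - sa * iy)
                dy = ry - (sa * ix + ca * iy)
                key = (a, round(dx / quant), round(dy / quant))
                acc[key] += rw * iw               # quality-weighted vote
    return max(acc, key=acc.get)                  # winning (angle, dx, dy) bin
```

The fraction of foreground blocks voting for the winning bin is what the next subsection uses as a registration-quality criterion.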

Palmprint registration brings palm images into the same coordinate system, which helps to speed up the matching at the cost of a small accuracy reduction. We need a criterion to judge the result of the registration and to check whether it can be trusted. To measure the reliability of the registration results, we define a few new parameters:

The first parameter is the percentage of blocks in an image that belong to the palmprint, i.e., the foreground. A larger value indicates a higher-quality or larger palmprint image, while small values indicate that the image is not a full palmprint (e.g., a latent palmprint) or has low quality. If this value is less than a threshold, we skip registration for that palmprint. The second parameter measures the quality of the registration: it is the ratio of the blocks that vote for the final registration parameters to all foreground blocks. Larger values of this ratio guarantee the registration accuracy, and a value above a threshold is considered acceptable. These parameters can also be used to determine whether the input palmprint comes from the right or the left hand.

The reference orientation field that we use for refining the registration belongs to a left hand. The GHT is applied to perform exact registration for both the original and the flipped (mirrored) version of the input palmprint. This yields two registration-quality scores: one from registering the reference image against the orientation field of the input palmprint, and one against its flipped version. If the score of the original palmprint is larger than the score of the flipped image, the input palmprint is a left hand, and vice versa. Note that if the larger of the two scores is below the acceptance threshold, the registration has most likely failed. Therefore, our algorithm can automatically distinguish left palmprints from right palmprints, which increases the palmprint matching speed and accuracy.
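The left/right decision can be sketched as follows; `register_fn`, `mirror`, and the 0.3 acceptance threshold are illustrative stand-ins for the GHT registration-quality score and the flipping described above:

```python
def mirror(image):
    """Horizontal mirror of a row-major 2D image (list of lists)."""
    return [row[::-1] for row in image]

def classify_hand(register_fn, palm, threshold=0.3):
    """Decide left vs. right hand from two registration-quality
    scores.  register_fn(image) is assumed to return the fraction of
    foreground blocks voting for the winning transform against the
    left-hand reference."""
    score_orig = register_fn(palm)          # input as-is
    score_flip = register_fn(mirror(palm))  # mirrored input
    if max(score_orig, score_flip) < threshold:
        return "unknown"                    # registration likely failed
    return "left" if score_orig > score_flip else "right"
```

A right-hand input only registers well against the left-hand reference after mirroring, so the flipped score wins and the palm is labeled "right".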

Fig. 4: Results of final registration for both the original (left) and flipped (right) image; the registration-quality score of the flipped version is higher than that of the original.
Fig. 5: Sample palmprints registered by the proposed method. The first row shows the original images and the second row the registered images
(a) Original image
(b) Registered by CNN
(c) Registered by GHT
(d) Enhanced image
(e) Skeleton image
(f) Minutiae
Fig. 6: Sample output of each step of whole procedure

Figure 4 shows the registration results for a sample right-hand palmprint. The registration-quality score of the flipped version satisfies the threshold, while the score of the left-hand hypothesis is below the threshold and much smaller. Therefore, features (minutiae) are extracted from the right image in figure 4, and the palmprint class is known, i.e., the palmprint belongs to the right hand.

We applied the proposed registration strategy to the THUPALMLAB database (training set), and only a few of the images did not satisfy the foreground-coverage threshold. The maximum allowed rotation was limited to a fixed range (searched in discrete steps), and the maximum displacement was restricted to a fraction of the image width and height. For palmprints whose criteria do not satisfy the thresholds, the palmprint class is set to "unknown".

Figure 5 shows some raw palmprint samples from the same hand and their registered versions. As seen in the second row, all three images are almost in the same coordinate system.

3 Palmprint Matching

After registration, the segmented palmprint is enhanced by applying Gabor filters [28]. The enhancement, which is guided by the estimated local orientations and frequencies, produces a near-binary image which is then simply binarized. The thinning algorithm in [29] is applied to the enhanced binary image to obtain the palmprint skeleton. Minutiae are then extracted using the algorithm in [2], and the results are refined by the approach in [30]. Once the minutiae have been extracted, the matching process can be carried out. Figure 6 shows the whole procedure for a sample palmprint image.

The matching algorithm is similar to the one in our previous work [20]. It consists of two stages: a) local minutia matching and b) global score calculation. In the local minutia matching stage, the similarity of a minutia pair is computed with the MCC descriptor to make the matching process efficient and fast. In the second stage, the overall similarity of the two palmprints is calculated.

3.1 Local matching

The MCC descriptor is a local data structure that is invariant to translation and rotation. It encodes spatial and directional relationships between a minutia and its (fixed-radius) neighborhood, and it is represented by a bit vector of fixed length. To check the similarity of two local minutiae from two different images, we compare their MCC descriptors. For a given number of cells along the cylinder diameter and a given number of cylinder sections, each descriptor is a long bit vector, so the pairwise comparison is time consuming and may not be suitable for palmprint images. To decrease the number of similarity computations, we use the registered images: we only check the MCC similarity between two minutiae that roughly belong to the same part of the palm. For a given minutia from palmprint A and a minutia from palmprint B, if their positions or directions in the registered frame differ by more than fixed thresholds, they do not belong to the same region and their local similarity is set to zero. For images that fail to be registered by the CNN, the gating is effectively disabled; images with small confidence values in the second registration phase (refining by GHT) use looser thresholds, while successfully registered images use the tightest thresholds (see figure 1).
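A hedged sketch of this region gating, assuming minutiae are already expressed in the common registered frame; the threshold values and the abstract `desc_sim` descriptor comparison are illustrative, not the paper's tuned parameters:

```python
import math

def local_similarity(ma, mb, desc_sim, dxy_max=50.0, dth_max=0.5):
    """Region-gated MCC local similarity (sketch).

    ma, mb: (x, y, theta) minutiae in the registered coordinate frame.
    desc_sim: function computing the full MCC descriptor similarity.
    The expensive descriptor comparison only runs when the cheap
    position and direction gates pass.
    """
    dx, dy = ma[0] - mb[0], ma[1] - mb[1]
    if math.hypot(dx, dy) > dxy_max:
        return 0.0                          # different palm regions
    dth = abs(ma[2] - mb[2]) % (2 * math.pi)
    if min(dth, 2 * math.pi - dth) > dth_max:
        return 0.0                          # incompatible directions
    return desc_sim(ma, mb)                 # full bit-vector comparison
```

Loosening or tightening `dxy_max` and `dth_max` corresponds to the per-image threshold choices described above for unregistered, coarsely registered, and fully registered palmprints.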

Without registration, the original MCC algorithm needs to compare every minutia of image A with every minutia of image B, i.e., a number of comparisons equal to the product of the two minutia counts. This amount of comparison is time consuming, and the use of registration decreases this complexity dramatically.

3.2 Global matching

In order to compare two palmprints, a global score (denoting their overall similarity) needs to be derived from the local similarities. The approach proposed in [20] is used to calculate the global score. For two given MCC minutia descriptor sets, there is a local similarity score matrix whose entries give the MCC similarity between each minutia of image A and each minutia of image B. Based on Section 3.1, this is a low-density matrix; most of its elements are zero.

To compute the global score, the minutia pairs with the maximum similarity values are selected. First, a normalized version of the similarity matrix is computed:

(1)

The normalization modifies each value according to the average of the values in the same row and in the same column. After normalization, the minutia pairs corresponding to the top values in the matrix are selected, and a relaxation procedure is applied to the similarity scores, guided by the corresponding minutia pairs.
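Equation (1) is elided above; one plausible form of the row/column normalization it describes (a sketch of the idea, not the paper's exact formula) penalizes each entry by the average of its row and column, so a score only stays high if it dominates both:

```python
def normalize_scores(S):
    """Illustrative row/column normalization of a local-similarity
    matrix S (list of equal-length rows).  Entries that dominate both
    their row and their column are boosted; ambiguous entries that sit
    in strong rows or columns are suppressed."""
    n, m = len(S), len(S[0])
    row_avg = [sum(r) / m for r in S]
    col_avg = [sum(S[i][j] for i in range(n)) / n for j in range(m)]
    return [
        [
            2 * S[i][j] / (row_avg[i] + col_avg[j])
            if row_avg[i] + col_avg[j] > 0 else 0.0
            for j in range(m)
        ]
        for i in range(n)
    ]
```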

Let the initial value of each selected pair be its normalized similarity; the similarity of a pair at iteration t of the relaxation procedure is then:

(2)

where a weighting factor balances the previous value of the pair against the support it receives from the other selected pairs, and:

(3)

where the first term measures the Euclidean distance between the corresponding minutiae, the second is the difference (modulo the angular period) between the two angles, and the third indicates the directional difference between the two minutiae. The compatibility between two minutia pairs, one from image A and one from image B, is computed as the product of three features, each normalized by a sigmoid function with its own parameters:

(4)

The first feature computes the distance similarity of the two minutia pairs, the second the similarity of their directional differences, and the third the radial-angle similarity, where the radial angle is the angle between the line connecting the two minutiae and the direction of the first minutia. Smaller values of these three similarities result in a smaller compatibility between the two minutia pairs.

The relaxation procedure (2) is repeated for a fixed number of iterations, and an efficiency value for each pair is computed as follows:

(5)

At the end, the global score of the two palmprint images A and B is calculated as the average of the similarity values corresponding to the top pairs with the highest efficiency values.
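Equations (2)-(5) are not reproduced above; the following is a hedged sketch of the relaxation structure the text describes, with a generic pairwise compatibility matrix standing in for the three-feature product, and the weighting factor and iteration count chosen for illustration only:

```python
def relax(lam0, rho, w=0.6, iters=5):
    """Iterative relaxation of pair similarities (sketch of Eq. (2)).

    lam0: initial normalized similarities of the selected pairs.
    rho[i][j]: compatibility of pair i with pair j.
    w: weighting factor between a pair's previous value and the
    average support it receives from the other pairs.
    """
    n = len(lam0)
    lam = list(lam0)
    for _ in range(iters):
        lam = [
            w * lam[i]
            + (1 - w) * sum(rho[i][j] * lam[j] for j in range(n) if j != i) / (n - 1)
            for i in range(n)
        ]
    # efficiency (sketch of Eq. (5)): how well each pair retained its
    # initial similarity through the relaxation
    eff = [lam[i] / lam0[i] if lam0[i] > 0 else 0.0 for i in range(n)]
    return lam, eff
```

Pairs that are mutually compatible keep their similarity through the iterations; incompatible pairs decay, so averaging the top-efficiency pairs yields the global score.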

4 Experimental results

In this section, the experiments carried out to evaluate the proposed method are described. The proposed method is compared with other palmprint matching algorithms, and the results and parameters are reported.

4.1 Database and Parameter Tuning

The only publicly available high-resolution palmprint database, the Tsinghua Palmprint Database [21], is used as the main database. It consists of palmprint images from a number of different people (left and right palms and eight impressions per palm), captured at high resolution with 256 gray levels. By zero padding, the image size is made divisible by the block size used in the orientation field estimation algorithm. The THUPALMLAB database consists of two separate sets: 1) a training set and 2) a test set. The training set (the first subjects) is used to tune the parameters by minimizing the Equal Error Rate (EER), while the test set contains the remaining palmprints.

Thanks to the registration, only a small percentage of all possible minutia pairs has a non-zero MCC similarity, i.e., the similarity matrix is sparse. In the global score computation phase, the minutia pairs with the maximum local similarity scores are selected from this matrix; the remaining non-zero elements are therefore more reliable, and the relaxation phase can be performed even with a small number of selected pairs. Reducing the number of selected pairs relative to the original MCC decreases the relaxation-phase computations dramatically, since the compatibility term must be calculated for every pair of selected pairs in each iteration. In contrast to [14], our proposed system uses a shorter MCC descriptor, so it takes much less time to compute the MCC similarity of two minutiae. Also, since the palmprint images are registered, the maximum global rotation between two images cannot be large.
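The cell counts are elided above; as a hypothetical illustration of the descriptor-length arithmetic (the values 16 and 6 below are the defaults of the original MCC paper, not necessarily the ones tuned here), the bit length is the square of the number of cells along the diameter times the number of sections:

```python
def mcc_descriptor_bits(cells_per_diameter, sections):
    """Bit length of a binary MCC cylinder: one bit per cell, with
    cells_per_diameter**2 cells in each of `sections` cross-sections."""
    return cells_per_diameter ** 2 * sections
```

For example, `mcc_descriptor_bits(16, 6)` gives 1536 bits, i.e., 192 bytes per minutia; shrinking either parameter shortens the bit vectors and thus every pairwise similarity computation.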

4.2 Time Complexity and Computational Requirements

A typical palmprint contains roughly a thousand minutiae on average; the original MCC algorithm therefore needs to perform a local similarity computation for every minutia pair. Our statistical observations showed that, by applying the proposed registration algorithm, we only need to perform a small percentage of all these computations. The results in Table I confirm this and show that the contribution of the registration to the matching-time reduction is significant. The proposed system performs more palmprint matches per second than the algorithm in [14] (the fastest algorithm to the best of our knowledge). Note that we can double the matching speed by applying the proposed left-right palm detector, i.e., the average matching time corresponds to 222 matches per second.

Method Matching time
Jain and Feng [11]
Dai and Zhou [8]
Chen and Guo [17]
Tariq et al. (CPU) [18]
Proposed method 0.021
Cappelli and Ferrara [14] (Multithreaded)
Proposed method (Multithreaded) 0.006
TABLE I: Average matching time (second)
Method Required memory
Jain and Feng [11]
Dai and Zhou [8]
Chen and Guo [17] less than 8 KB
Tariq et al. [18]
Cappelli and Ferrara [14]
Proposed algorithm
TABLE II: Required memory for a typical palmprint image
Method EER FNMR at FMR FNMR at FMR FNMR at FMR
Jain and Feng [11]
Dai and Zhou [8]
Wang and Ram [16]
Chen and Guo [17]
Tariq et al. [18]
Cappelli and Ferrara [14] 0.01% 0.1%
Proposed method 0.14% 0.24%
TABLE III: EER and FNMR at given FMRs

The proposed algorithm was tested on an Intel(R) Core(TM) 2 Quad Q9550 CPU at 2.83 GHz, using single- and multi-threaded C++ implementations. Table I compares the average matching times of the proposed system with those of the approaches in [11], [8], [17], [18] and [14]. The times of the methods in [11], [8] were measured on an Intel Xeon E5620 at 2.4 GHz with a single-threaded C++ implementation; the time of [17] was measured on an Intel(R) Core(TM) i7 CPU at 2.8 GHz in C++; and the time of [14] was measured on an Intel Core 2 Quad Q9400 CPU at 2.66 GHz using a multi-threaded C# implementation. Due to the different hardware and software implementations, it is hard to compare the matching times accurately. For example, we implemented the method in [14] on our system using a multi-threaded C++ implementation, and the matching process took about milliseconds, while the authors of that algorithm report a matching time of milliseconds with a weaker computer and a C# implementation.

The computational cost of the registration procedure is composed of applying the CNN, voting, and searching for the best transformation parameters. The execution time of the registration is about four seconds in our experiments. Since it is performed only once, in the enrollment stage, it adds no computational cost during identification.

The amount of memory required by the proposed algorithm to store palmprint templates is relatively small in comparison to other algorithms. For a single minutia, the proposed system stores the bits of the MCC descriptor and about five bytes for the minutia position and direction, which is small in comparison to the per-minutia storage required by the algorithm in [14]. The method in [12] requires bytes per minutia and one byte per orientation element; the algorithm in [9] needs five bytes per minutia and two bytes per orientation element. For a typical palmprint with an average number of minutiae and orientation elements, the amount of memory required by these algorithms is summarized in Table II. Among all these algorithms, the proposed system requires a relatively small amount of memory to store minutia descriptors and templates. The method in [17] needs less memory than ours, but its matching time and accuracy are much worse.

4.3 Accuracy

To evaluate the matching algorithms, impostor and genuine tests need to be performed. For the genuine tests, each impression is compared with the remaining impressions of the same palm. For the impostor tests, impressions from different subjects are compared with each other; therefore, the number of impostor matches is much larger than the number of genuine matches.
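The database sizes are elided above; as a hedged sketch of the test-count arithmetic (the palm and impression counts below are hypothetical), with each unordered pair of impressions compared once:

```python
def genuine_matches(num_palms, impressions_per_palm):
    """Genuine comparisons: each unordered pair of impressions of the
    same palm is compared once, i.e. C(k, 2) pairs per palm."""
    k = impressions_per_palm
    return num_palms * k * (k - 1) // 2

def total_pairs(num_impressions):
    """All unordered pairs of impressions; subtracting the genuine
    pairs gives the impostor count."""
    return num_impressions * (num_impressions - 1) // 2
```

For example, 100 palms with 8 impressions each give 100 * 28 = 2800 genuine comparisons, while the 800 impressions form 319600 unordered pairs in total, so the impostor count dwarfs the genuine count (hypothetical numbers).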

The accuracy of the proposed system is compared with some of the state-of-the-art methods ([11], [8], [14], [16] and [17]). As seen in Table III, our algorithm shows higher accuracy at any FMR. At zero FMR, the proposed algorithm makes a number of false non-match errors. Most of these errors occur because of images with low quality, regions with a small area and little ridge pattern (similar to latent palmprints), or little overlap with the other palmprint. When two palmprint images have no minutiae in common, i.e., no overlap, none of the existing algorithms can produce a high matching score between them. It is noteworthy that without registration, the accuracy of the proposed system drops dramatically; this is because of the small number of selected pairs and some other parameters that we changed in the MCC algorithm.

As for the accuracy of the registration phase, fewer than one percent of all the palmprints in the test database were not successfully registered. All of the failure cases are due to bad image quality or pattern-less and small areas; in all of these images, one of the two registration criteria was below the corresponding threshold.

5 Conclusion

Because of the presence of creases, a large number of minutiae, and skin distortion, minutia-based palmprint matching is a challenging task. In this paper, we designed a new strategy to speed up palmprint matching. Combining a CNN and the GHT, we designed an accurate palmprint registration method that brings all palmprints into the same coordinate system. In the first round of registration, the CNN finds the rotation of a palmprint image with respect to the axis. The image is rotated by , and exact registration is then performed by the GHT. Registration makes it possible to compare only minutiae belonging to the same part of the palmprint, skipping comparisons between minutiae that are far apart. This reduces the computation in the local matching phase and makes the matching algorithm faster.
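The core idea of the Hough-style fine registration step can be sketched as a voting procedure: after the CNN's coarse rotation, every pairing of a reference point with a query point casts a vote for a candidate displacement, and the displacement bin with the most votes wins. The minutia coordinates and the quantization step below are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of Hough-style displacement voting between two point sets.
# Coordinates and quantization are illustrative, not the paper's settings.
from collections import Counter

def vote_displacement(ref_pts, query_pts, quant=4):
    votes = Counter()
    for rx, ry in ref_pts:
        for qx, qy in query_pts:
            # Quantize each candidate (dx, dy) so near-identical
            # displacements accumulate in the same bin.
            dx, dy = (rx - qx) // quant, (ry - qy) // quant
            votes[(dx, dy)] += 1
    (dx, dy), _ = votes.most_common(1)[0]
    return dx * quant, dy * quant   # displacement bin with the most votes

ref = [(10, 10), (50, 30), (80, 90)]
qry = [(2, 6), (42, 26), (72, 86)]
print(vote_displacement(ref, qry))
```

Consistent point pairs all vote for the same bin, so the true displacement dominates even when many spurious cross-pairings scatter votes elsewhere; the same accumulator idea extends to a third axis for rotation.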

Our method is six times faster than the state-of-the-art while the matching accuracy remains very high. The proposed algorithm was applied to the THUPALMLAB database, and the results showed that our algorithm can perform palmprint matches per second with .

References

  • [1] Z. Guo, D. Zhang, L. Zhang, and W. Zuo, “Palmprint verification using binary orientation co-occurrence vector,” Pattern Recognition Letters, vol. 30, pp. 1219–1227, 2009.
  • [2] D. Maltoni, D. Maio, A. Jain, and S. Prabhakar, Handbook of Fingerprint Recognition, 2nd ed.   London: Springer-Verlag, 2009.
  • [3] A. Kong, D. Zhang, and M. Kamel, “A survey of palmprint recognition,” Pattern Recognition, vol. 42, pp. 1408–1418, 2009.
  • [4] D. Zhang, W. Zuo, and F. Yue, “A comparative study of palmprint recognition algorithms,” ACM Computing Surveys, vol. 44, p. 2, 2012.
  • [5] D. S. Huang, W. Jia, and D. Zhang, “Palmprint verification based on principal lines,” Pattern Recognition, vol. 41, pp. 1316–1328, 2008.
  • [6] W. Jia, D. S. Huang, and D. Zhang, “Palmprint verification based on robust line orientation code,” Pattern Recognition, vol. 41, pp. 1504–1513, 2008.
  • [7] X. Wu and Q. Zhao, “Deformed palmprint matching based on stable regions,” IEEE Transactions on Image Processing, vol. 24, pp. 4978–4989, 2015.
  • [8] J. Dai and J. Zhou, “Multifeature-based high-resolution palmprint recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, pp. 945–957, 2011.
  • [9] K. C. Mangold, Data Format for the Interchange of Fingerprint, Facial & Other Biometric Information, 3rd ed.   Gaithersburg, Maryland, USA: ANSI/NIST-ITL, 2016.
  • [10] D. R. Ashbaugh, Quantitative-Qualitative Friction Ridge Analysis: An Introduction to Basic and Advanced Ridgeology, 1st ed.   Boca Raton: CRC Press, 1999.
  • [11] A. K. Jain and J. Feng, “Latent palmprint matching,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, pp. 1032–1047, 2008.
  • [12] J. Li and G. Shi, “A novel palmprint feature processing method based on skeleton image,” IEEE International Conference on Signal Image Technology and Internet Based Systems, 2008.
  • [13] E. Liu, A. K. Jain, and J. Tian, “A coarse to fine minutiae-based latent palmprint matching,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, pp. 2307–2322, 2013.
  • [14] R. Cappelli, M. Ferrara, and D. Maio, “A fast and accurate palmprint recognition system based on minutiae,” IEEE Trans Syst Man Cybern B Cybern, vol. 42, pp. 956–962, 2012.
  • [15] ——, “Minutia cylinder-code: A new representation and matching technique for fingerprint recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, pp. 2128–2141, 2010.
  • [16] R. Wang, D. Ramos, R. Veldhuis, J. Fierrez, L. Spreeuwers, and H. Xu, “Regional fusion for high-resolution palmprint recognition using spectral minutiae representation,” IET Biometrics, vol. 3, pp. 94–100, 2014.
  • [17] J. Chen and Z. Guo, “Palmprint matching by minutiae and ridge distance,” International Conference on Cloud Computing and Security, vol. 10040, pp. 371–382, 2016.
  • [18] S. A. Tariq, S. Iqbal, M. Ghafoor, I. A. Taj, N. M. Jafri, S. Razzaq, and T. Zia, “Massively parallel palmprint identification system using gpu,” Cluster Computing, pp. 1–16, 2017.
  • [19] J. Dai, J. Feng, and J. Zhou, “Robust and efficient ridge-based palmprint matching,” IEEE Trans Pattern Anal Mach Intell, vol. 34, pp. 1618–1632, 2012.
  • [20] H. Soleimani and M. Ahmadi, “Fast and efficient minutia-based palmprint matching,” IET Biometrics, doi: 10.1049/iet-bmt.2017.0128, 2018.
  • [21] Dai, “THUPALMLAB database,” http://ivg.au.tsinghua.edu.cn/dataset/THUPALMLAB.php, 2012.
  • [22] Y. Wang, J. Hu, and D. Phillips, “A fingerprint orientation model based on 2d fourier expansion (fomfe) and its application to singular-point detection and fingerprint indexing,” IEEE Trans Pattern Anal Mach Intell, vol. 29, pp. 573–585, 2007.
  • [23] J. Funada, N. Ohta, M. Mizoguchi, T. Temma, K. Nakanishi, A. Murai, T. Sugiuchi, T. Wakabayashi, and Y. Yamada, “Feature extraction method for palmprint considering elimination of creases,” Fourteenth International Conference on Pattern Recognition, 1998.
  • [24] C. C. Han, “A hand-based personal authentication using a coarse-to-fine strategy,” Image and Vision Computing, vol. 22, pp. 909–918, 2004.
  • [25] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” 32nd International Conference on Machine Learning, vol. 37, pp. 448–456, 2015.
  • [26] D. A. Clevert, T. Unterthiner, and S. Hochreiter, “Fast and accurate deep network learning by exponential linear units (elus),” Proceedings of the 4th International Conference on Learning Representation, pp. 1–14, 2016.
  • [27] D. H. Ballard, “Generalizing the hough transform to detect arbitrary shapes,” Pattern Recognition, vol. 13, pp. 111–122, 1981.
  • [28] L. Hong, Y. Wan, and A. Jain, “Fingerprint image enhancement: algorithm and performance evaluation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, pp. 777–789, 1998.
  • [29] T. Y. Zhang and C. Y. Suen, “A fast parallel algorithm for thinning digital patterns,” Communications of the ACM, vol. 27, pp. 236–239, 1984.
  • [30] F. Zhao and X. Tang, “Preprocessing and postprocessing for skeleton-based fingerprint minutiae extraction,” Pattern Recognition, vol. 40, pp. 1270–1281, 2007.