Partial Fingerprint Detection Using Core Point Location

02/04/2019
by   Wajih Ullah Baig, et al.

In biometric identification, fingerprint-based identification is the most widely accepted mechanism, and automated fingerprint identification/verification techniques are widely adopted in civilian and forensic applications. In forensic applications, fingerprints are often incomplete, broken, unclear or degraded; such prints are known as partial fingerprints. Fingerprint identification/verification systems largely suffer from the problem of handling partial fingerprints. In this paper a novel and simple approach is presented for detecting partial fingerprints using the core point location. Our technique is particularly useful during the acquisition stage, to determine whether a user needs to re-align the finger to ensure a complete capture of the fingerprint area. The technique is tested on FVC-2002 DB1_a; the highly accurate results are presented in the Results section.



I Background

Automatic fingerprint identification/recognition systems (AFIS/AFRS) are nowadays widely used in personal identification applications such as access control [1, 2]. Minutiae-based fingerprint recognition techniques are widely used in these systems [2], and such techniques are best suited for good-quality, complete images. Improper capturing or scanning results in partial or incomplete fingerprints of poor quality [3]. General minutiae-based matching algorithms do not perform well in the case of partial fingerprints, which degrades the overall performance of automatic fingerprint identification/recognition systems. To minimize the effect of partial fingerprints, our proposed technique detects them at the time of capturing or scanning.

II Introduction

Fingerprint identification is a pattern recognition problem that has been under research for many years. Achieving very high matching accuracy with light-weight algorithms on poor-quality or partial images is still an open issue. There has been a lot of effort in providing different features for matching fingerprints, mostly in minutiae-based matching algorithms, though there are also efforts to match using frequency, orientation, texture, etc. In minutiae-based matching, minutiae play a vital role in fingerprint comparisons, and such algorithms base their performance solely on fingerprint quality. Core point detection can be achieved using the direction field deduced from a fingerprint: at a core point the field becomes discontinuous, i.e. it is the point of maximum direction change. The orientation field is used to determine the reference point; a filter is applied to the orientation field to detect the maximum change of direction in the ridge flow.

Human fingerprints are unique to each individual. This uniqueness provides the ground for identification and verification of individuals, and the automation of this process is termed an Automated Fingerprint Identification System (AFIS). Fingerprints are matched by encoding each fingerprint image into a numerically equivalent identity; these numerical identities are cross-matched to either verify or identify an individual, and are conventionally based on minutiae. Algorithms for fingerprint recognition are mainly based on minutiae information; however, the small number of minutiae in partial fingerprints is still a challenge in fingerprint matching. For this purpose it needs to be ensured that fingerprints of acceptable quality are available. Apart from quality, fingerprints also have classified patterns such as deltas, whorls, loops and cores. A fingerprint matching algorithm needs to perform correctly on both full and partial fingerprints to attain highly accurate discrimination [5]. If fingerprints are known to be partial in advance, a fingerprint algorithm can be tuned to perform accordingly [6].

III Proposed Detection Technique

III-1 Pre-processing

Prior to the extraction of core points, a fingerprint image undergoes pre-processing. In a typical manner, the pre-processing steps are:

  1. Segmentation

  2. Basic Enhancement (Contrast stretch)

  3. Orientation field estimation

  4. Advanced Enhancement (Gabor Filtering)

  5. Orientation field estimation

A fingerprint image is first segmented, then passed through contrast normalization to make the ridges more visible, followed by gradient-based orientation field estimation [8]. The orientation field deduced from this step is used to align the Gabor filters in the next step; that is, each Gabor filter is aligned to the local orientation of the window over which its response is calculated. Gabor filtering produces an enhanced version of the fingerprint, which is then binarized. The orientation field is estimated once more from the binary image; this time a smoother orientation field is available, because Gabor filtering has produced fine ridges. The orientation field is then passed to complex filtering for core point detection. It is interesting to note that complex filtering can also extract minutiae [7].
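The gradient-based orientation-estimation step of this pipeline can be sketched as follows. This is a minimal Python sketch, not the authors' code: the block size of 16 pixels and the averaged-squared-gradient formulation are implementation assumptions.

```python
import numpy as np

def orientation_field(img, block=16):
    """Block-wise gradient-based orientation field estimation (a sketch;
    the block size is an assumption)."""
    gy, gx = np.gradient(img.astype(float))  # derivatives along rows, cols
    h, w = img.shape
    theta = np.zeros((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            sx = gx[i * block:(i + 1) * block, j * block:(j + 1) * block]
            sy = gy[i * block:(i + 1) * block, j * block:(j + 1) * block]
            # Averaging the squared gradients gives a stable local direction
            # that is insensitive to the 180-degree gradient ambiguity.
            gxx, gyy, gxy = np.sum(sx * sx), np.sum(sy * sy), np.sum(sx * sy)
            theta[i, j] = 0.5 * np.arctan2(2.0 * gxy, gxx - gyy)
    return theta
```

Under this doubled-angle convention, an image whose ridge flow is purely vertical (gradients purely horizontal) yields an angle of zero in every block.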

III-2 Complex Filtering

Core point (singular point) localization via complex filtering is a very effective technique [4], [10]. Using multi-scale filters, the output of this technique is robust and reliable. Morphological techniques rely less on statistics and more on shape, whereas this technique relies heavily on the statistics of the image in question. The core point is the central point of a fingerprint image; a fingerprint may have no core point or multiple core points. Core points are also classified by the special patterns they form, such as whorls and loops. In whorl patterns, the core point lies in the middle of the spiral, while in loop patterns it lies in the top region of the innermost loop. These patterns are easily mapped using complex filters: a variance image is calculated from the filter response, and the highest response in the variance image is used to detect a singularity if a threshold value is exceeded.



Radial symmetries can be modeled using complex filters of order m [9]. These symmetries can be expressed in Cartesian and polar forms by equations (1) and (2) respectively:

(1)
(2)
(3)

For the purpose of applying the complex filters to the image, the filters have to be orientation-isotropic (rotation invariant) and separable. Both requirements are fulfilled by a Gaussian: restricted to a window of size W, the Gaussian exhibits orientation isotropy and separability in polar coordinates, as it is a function of the radius only [11].

The complex filter of order m, expressed in terms of a Gaussian filter, can be represented by

(4)
(5)
(6)

Equation (5) is the Gaussian distribution. In order to detect singularities, the complex filter of order m is convolved with the absolute value of the complex tensor field image instead of the original image.

(7)
(8)
(9)

Equation (9) represents the convolution of the filter with the absolute value of the tensor field. A visualization of this step can be seen in Fig. 4(a). The variable fx is the derivative of the image in the x-direction and fy is the derivative in the y-direction.

In order to detect singularities of type core and delta, filters of first order are used.
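The complex-filtering stage can be sketched in Python as follows. This follows the general formulation of [4]: the filter (x + iy)^m · g(x, y) is applied to the squared complex gradient field z = (fx + i·fy)². The Gaussian width, the FFT-based circular convolution, and the default parameters are assumptions drawn from that reference, not the paper's exact equations.

```python
import numpy as np

def core_response(img, sigma=3.0, m=1):
    """Magnitude of the first-order complex filter response (a sketch of
    the core-point detector of [4]; sigma is an assumption)."""
    fy, fx = np.gradient(img.astype(float))
    z = (fx + 1j * fy) ** 2          # complex tensor (orientation) field

    # Build the complex filter (x + iy)^m * g(x, y) on the image grid.
    h, w = img.shape
    y, x = np.mgrid[-(h // 2):(h - h // 2), -(w // 2):(w - w // 2)]
    g = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    kernel = ((x + 1j * y) ** m) * g

    # Circular convolution via FFT; the response magnitude peaks near cores.
    resp = np.fft.ifft2(np.fft.fft2(z) * np.fft.fft2(np.fft.ifftshift(kernel)))
    return np.abs(resp)
```

Setting m = -1 instead would target delta-type singularities in the same framework.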

Fig. 1: Filters of first order. Singular point detectors of type Whorls/Loops and Delta points respectively. [4]
Fig. 2: Tensor field views of core and delta points. [4]
(a) Original image
(b) Gabor filtered
(c) Gradient x
(d) Gradient y
(e) Binary image
(f) Orientation field
Fig. 3: A stepwise output presentation for complex filtering

The outcome of equation (9) is used to create a variance image, Fig. 4(b). Variance is calculated in a block-wise manner over non-overlapping windows of 1/4 the size of the Gaussian window. Here we set . The response from each block forms the variance image, which is then normalized using the following equation.

(10)

Equation (10) limits the variance to a closed domain. The singular point is located at the point of highest variance in the variance image, see Fig. 4(c). In the case of detecting singular points at Whorls/Loops, the order of the complex filters is set .
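The variance-and-threshold step can be sketched as follows. Min–max normalization for equation (10), the block size, and the acceptance threshold of 0.9 are assumptions in this sketch.

```python
import numpy as np

def variance_image(resp, block=8):
    """Block-wise variance of the filter response over non-overlapping
    windows (the block size is an assumption), normalized to [0, 1]."""
    h, w = resp.shape
    var = np.zeros((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            var[i, j] = resp[i * block:(i + 1) * block,
                             j * block:(j + 1) * block].var()
    # Min-max normalization (assumed form of equation (10)).
    span = var.max() - var.min()
    if span > 0:
        var = (var - var.min()) / span
    return var

def singular_point(var, block=8, threshold=0.9):
    """Highest-variance block gives the core-point candidate; it is kept
    only if it exceeds the threshold (the value 0.9 is an assumption)."""
    i, j = np.unravel_index(np.argmax(var), var.shape)
    if var[i, j] < threshold:
        return None
    # Return the pixel coordinates of the block center.
    return (i * block + block // 2, j * block + block // 2)
```

A block whose response fluctuates while its neighbors are flat dominates the normalized variance image, and its center is reported as the singular point.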

(a) Complex image
(b) Variance image
(c) Core point detected
(d) Segmentation mask
Fig. 4: A stepwise output presentation for complex filtering

III-3 Partial Image Detection

Placing the core point at the origin of the segmentation mask image (Fig. 4(d)), white pixels are counted along the horizontal and vertical axes of the Cartesian coordinate system centered at the core point. Counting along each axis continues until a black pixel or the image border is encountered.

(11)
(12)
(13)
(14)
(15)
(16)
(17)
(18)
(19)

For equations (11) to (14), the coordinates of the core point are , while the coordinates of the black pixels are . Equations (16) to (19) are the normalized values.

Consider equation (12): starting from , white pixels are counted until a black pixel is encountered at , or until the image end is reached, at which point the pixel count stops. The same analogy is followed for the rest of the equations.
To differentiate partial images from non-partial ones, observe that in partial fingerprint images the pixel count in one of the four directions is smaller than the other three, see Fig. 5(b) to Fig. 5(e), while in full fingerprint images this is not the case, see Fig. 5(a).
Using the normalized values, we select the minimum pixel count with equation (20):

(20)

Applying a threshold , a differentiation of partial from non-partial fingerprints is achieved.
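The four-direction counting and thresholding described above can be sketched as follows. Since equations (11)–(20) are not reproduced here, normalizing by the largest of the four counts and the 0.5 threshold are assumptions of this sketch, not the paper's exact values.

```python
import numpy as np

def partial_score(mask, core):
    """Count white (foreground) pixels along the four half-axes from the
    core point until a black pixel or the image border, then return the
    minimum count normalized by the maximum (normalization is assumed)."""
    cy, cx = core
    h, w = mask.shape

    def run(dy, dx):
        n, y, x = 0, cy, cx
        while 0 <= y < h and 0 <= x < w and mask[y, x]:
            n += 1
            y += dy
            x += dx
        return n

    counts = [run(-1, 0), run(1, 0), run(0, -1), run(0, 1)]  # up/down/left/right
    return min(counts) / max(counts)

def is_partial(mask, core, threshold=0.5):
    """Flag the print as partial when one direction runs out of foreground
    much sooner than the others (the 0.5 threshold is an assumption)."""
    return partial_score(mask, core) < threshold
```

A mask whose foreground is cut off on one side of the core yields a short run in that direction and hence a low score, which is exactly the partial-capture signature the paper exploits.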

(a) 1_1.tif
(b) 4_5.tif
(c) 8_5.tif
(d) 28_7.tif
(e) 28_8.tif
Fig. 5: Outputs from the proposed algorithm. Except for 1_1.tif, the rest are partial images for a threshold less than .

IV Results

The proposed algorithm was applied to a local dataset as well as the FVC-2002 DB1_a dataset. There are 800 images in total in the dataset, of which 62 are partial images; the rest are non-partial.
Table I shows the confusion matrix that describes the performance of the proposed algorithm. The terms related to the confusion matrix are as follows:

  • True Positives (TP): Number of partial fingerprints correctly classified as partial.

  • True Negatives (TN): Number of non-partial fingerprints correctly classified as non-partial.

  • False Positives (FP): Number of non-partial fingerprints incorrectly classified as partial.

  • False Negatives (FN): Number of partial fingerprints incorrectly classified as non-partial.

800 images Predicted Partial Predicted Non-Partial
True Partial TP = 36 FN = 18
True Non-Partial FP = 28 TN = 718
TABLE I: Confusion matrix for FVC 2002 DB1_a tests.

Taking into account the above confusion matrix terms, the performance of the algorithm is defined by the following equations.

Sensitivity = TP / (TP + FN) (21)
Specificity = TN / (TN + FP) (22)
Accuracy = (TP + TN) / (TP + TN + FP + FN) (23)

Sensitivity (also called the true positive rate, the recall, or probability of detection) measures the proportion of positives that are correctly identified.


Specificity (also called the true negative rate) measures the proportion of negatives that are correctly identified.
Accuracy is the proportion of the total number of predictions that were correct.
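Given the confusion-matrix counts reported for FVC 2002 DB1_a (TP = 36, FN = 18, FP = 28, TN = 718, the assignment consistent with Table II), equations (21) to (23) amount to the following computation (a Python sketch; the function name is mine):

```python
def confusion_metrics(tp, tn, fp, fn):
    """Sensitivity, specificity and accuracy as in equations (21)-(23)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

# Counts for FVC 2002 DB1_a, assigned so that they reproduce Table II.
sens, spec, acc = confusion_metrics(tp=36, tn=718, fp=28, fn=18)
# sens = 36/54 ≈ 66.7%, spec = 718/746 ≈ 96.2%, acc = 754/800 = 94.25%
```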

It is interesting to note that a human observer classified the images as partial or non-partial in order to establish the ground truth for these tests.

Database Sensitivity Specificity Accuracy
FVC 2002 DB_1A 66.6% 96.2% 94.2%
TABLE II: Performance table for the proposed method.

V Conclusion

In this paper we have presented a very simple yet accurate algorithm to detect partial fingerprint images. The strength of the algorithm can be seen from the performance table. The algorithm itself is computationally inexpensive and can be used for real-time acquisition, as a single image can be processed in under 0.1 seconds. Testing was performed in Matlab (2015) on Windows 10, using a single-threaded process on a Core-i7 laptop machine.

Acknowledgment

This research is supported by the Technical Directorate of National Database and Registration Authority’s (NADRA) [12] Fund for Research And Development in advanced computing.

References

  • [1] N. Ratha, R. Bolle, Automatic Fingerprint Recognition Systems, Springer, New York, 2004.
  • [2] D. Maltoni, D. Maio, A.K. Jain, S. Prabhakar, Handbook of Fingerprint Recognition, Springer, New York, 2003
  • [3] S. Malathi and C. Meena, ”Improved Partial Fingerprint Matching Based on Score Level Fusion Using Pore and SIFT Features,” 2011 International Conference on Process Automation, Control and Computing, Coimbatore, 2011, pp. 1-4.
  • [4] Nilsson, Kenneth, and Josef Bigun. ”Localization of corresponding points in fingerprints by complex filtering.” Pattern Recognition Letters 24.13 (2003): 2135-2144.
  • [5] O. Zanganeh, B. Srinivasan and N. Bhattacharjee, Partial Fingerprint Matching through Region-Based Similarity, 2014 International Conference on Digital Image Computing: Techniques and Applications (DICTA), Wollongong, NSW, 2014, pp. 1-8
  • [6] Fanglin Chen and Jie Zhou Biometric Recognition, On the Influence of Fingerprint Area in Partial Fingerprint Recognition pp 104-111: Springerlink, 2012.
  • [7] Fronthaler, Hartwig, Klaus Kollreider, and Josef Bigun. ”Local features for enhancement and minutiae extraction in fingerprints.” IEEE Transactions on Image Processing 17.3 (2008): 354-363.
  • [8] Wang, Yi, Jiankun Hu, and Fengling Han. ”Enhanced gradient-based algorithm for the estimation of fingerprint orientation fields.” Applied Mathematics and Computation 185.2 (2007): 823-833.
  • [9] Bigun, J., 1997. Pattern recognition in images by symmetries and coordinate transformations.Computer Vision and Image Understanding 68 (3), 290–307
  • [10] Bigun, J., Bigun, T., 2001. Symmetry derivatives of gaussians illustrated by cross tracking. Research report IDE-0131, September.
  • [11] Rudin, W., 1987. Real and Complex Analysis. McGraw-Hill, New York
  • [12] https://www.nadra.gov.pk/