Filter Design and Performance Evaluation for Fingerprint Image Segmentation

01/09/2015
by Duy Hoang Thai, et al.
The University of Göttingen

Fingerprint recognition plays an important role in many commercial applications and is used by millions of people every day, e.g. for unlocking mobile phones. Fingerprint image segmentation is typically the first processing step of most fingerprint algorithms and it divides an image into foreground, the region of interest, and background. Two types of error can occur during this step which both have a negative impact on the recognition performance: 'true' foreground can be labeled as background and features like minutiae can be lost, or conversely 'true' background can be misclassified as foreground and spurious features can be introduced. The contribution of this paper is threefold: firstly, we propose a novel factorized directional bandpass (FDB) segmentation method for texture extraction based on the directional Hilbert transform of a Butterworth bandpass (DHBB) filter interwoven with soft-thresholding. Secondly, we provide a manually marked ground truth segmentation for 10560 images as an evaluation benchmark. Thirdly, we conduct a systematic performance comparison between the FDB method and four of the most often cited fingerprint segmentation algorithms showing that the FDB segmentation method clearly outperforms these four widely used methods. The benchmark and the implementation of the FDB method are made publicly available.

Keywords

Fingerprint recognition, fingerprint image segmentation, evaluation benchmark, manually marked ground truth, directional Hilbert transform, Riesz transform, Butterworth bandpass filter, texture extraction

1 Introduction

Nowadays, fingerprint recognition is used by millions of people in their daily life for verifying a claimed identity in commercial applications ranging from check-in at work places or libraries, access control at amusement parks or zoos, to unlocking notebooks, tablets or mobile phones. Most fingerprint recognition systems are based on minutiae as features for comparing fingerprints [1]. Typical processing steps prior to minutiae extraction are fingerprint segmentation, orientation field estimation and image enhancement. The segmentation step divides an image into foreground, the region of interest (ROI), and background. Two types of error can occur in this step and both have a negative impact on the recognition rate: 'true' foreground can be labeled as background and features like minutiae can be lost, or conversely 'true' background can be misclassified as foreground and spurious features may be introduced. It is desirable to have a method that controls both errors.

1.1 The Factorized Directional Bandpass Method, Benchmark and Evaluation

In order to balance both errors, we take the viewpoint that – loosely speaking – fingerprint images are highly determined by patterns that have frequencies only in a specific band of the Fourier spectrum (prior knowledge). Focusing on these frequencies occurring in true fingerprint images (FOTIs), we aim at the following goals:

  1. Equally preserving all FOTIs while attenuating all non-FOTIs.

  2. Removing all image artifacts in the FOTI spectrum, not due to the true fingerprint pattern.

  3. Returning a (smooth) texture image containing only FOTI features from the true fingerprint pattern.

  4. Returning the ROI via morphological methods.

In order to meet these goals we have developed a factorized directional bandpass (FDB) segmentation method.

The FDB method

At the core of the FDB method is a classical Butterworth bandpass filter which guarantees Goal 1. Notably, Goal 1 cannot fully be met by Gaussian-based filtering methods such as the Gabor filter: due to the Gaussian bell-shaped curve, FOTIs would not be filtered alike. Because straightforward Fourier methods cannot cope with curvature (as can, e.g., curved Gabor filters [2]), we perform separate filtering into a few isolated orientations only, via directional Hilbert transformations. The composite directional Hilbert Butterworth bandpass (DHBB) filter incorporates our prior knowledge about the range of possible ridge frequencies, i.e. the reciprocals of the interridge distances, which lie between 3 and 25 pixels [2], assuming a sensor resolution of 500 DPI and that adult fingerprints are processed. In the case of adolescent fingerprints [3] or sensors with a different resolution, the images can be resized to achieve an age- and sensor-independent size – not only for the first segmentation step, but also for all later processing stages. Our parameters can be tuned to reach an optimal tradeoff between treating all realistic frequencies alike and avoiding Gibbs effects. Moreover, we use a data-friendly rectangular spectral shape of the bandpass filter, which preserves the rectangular shape of the spatial image.

A second key ingredient is the factorization of the filter into two factors in the spectral domain, between which a thresholding operation is inserted. After preserving all FOTIs and removing all non-FOTIs by application of the first factor, all FOTI features not due to the true fingerprint pattern (which are usually less pronounced) are removed via a shrinkage operator: soft-thresholding. Note that, while removing less pronounced FOTI features, thresholding also introduces new unwanted high frequencies. These are removed, however, by application of the second factor, which also compensates for a possible phase shift due to the first factor, thus producing a smoothed image with pronounced FOTI features only.

At this stage, non-prominent FOTI features have been removed, not only outside the ROI, but also some due to true fingerprint features inside the ROI. In the final step, these “lost” regions are restored via morphological operations (convex hull after binarization and two-scale opening and closing).

The careful combination of the above ingredients in our proposed FDB method yields segmentation results far superior to existing segmentation methods.

Benchmark

In order to verify this claim, because of the lack of a suitable benchmark in the literature, we provide a manually marked ground truth segmentation for all 12 databases of FVC2000 [4], FVC2002 [5] and FVC2004 [6]. Each database consists of 80 images for training and 800 images for testing. Overall this benchmark consists of 10560 marked segmentation images. This ground truth benchmark is made publicly available, so that other researchers can evaluate segmentation algorithms on it.

Evaluation against existing methods

We conduct a systematic performance comparison of widely used segmentation algorithms on this benchmark. In total, more than 100 methods for fingerprint segmentation can be found in the literature. However, it remains unclear how these methods compare with each other in terms of segmentation performance and which methods can be considered as state-of-the-art. In order to remedy the current situation, we chose four of the most often cited fingerprint segmentation methods and compared their performance: a method based on mean and variance of gray-level intensities and the coherence of gradients as features and a neural network as a classifier [7], a method using Gabor filter bank responses [8], a Harris corner response based method [9] and an approach using local Fourier analysis [10].

1.2 Related Work

Early methods for fingerprint segmentation include Mehtre et al. [11] who segment an image based on histograms of local ridge orientation; in [12], additionally the gray-level variance is considered. A method proposed by Bazen and Gerez [7] uses the local mean and variance of gray-level intensities and the coherence of gradients as features and a neural network as a classifier. Similarly, Chen et al. [13] use block-based features including the mean and variance in combination with a linear classifier. Both methods perform morphological operations for postprocessing. A method by Shen et al. is based on Gabor filter bank responses of blocks [8]. In [2], all pixels are regarded as foreground for which a valid ridge frequency based on curved regions can be estimated. Wu et al. [9] proposed a Harris corner response based method and apply Gabor responses for postprocessing. Wang et al. [14] proposed to use Gaussian-Hermite moments for fingerprint segmentation. The method of Zhu et al. [15] uses a gradient-based orientation estimation as the main feature, and a neural network detects wrongly estimated orientations and classifies the corresponding blocks as background. Chikkerur et al. [10] applied local Fourier analysis for fingerprint image enhancement. Their method implicitly performs fingerprint segmentation, orientation field and ridge frequency estimation. Further approaches for fingerprint enhancement in the Fourier domain include Sherlock et al. [16], Sutthiwichaiporn and Areekul [17] and Bartůněk et al. [18, 19, 20].

1.3 Setup of Paper

The paper is organized as follows: in the next section, we describe the proposed method, beginning with the design of the DHBB filter for texture extraction in Section 2.1. Subsequently, the extracted and denoised texture is utilized for estimating the segmentation as described in Section 2.2, which summarizes the FDB segmentation procedure. In Section 3, the manually marked ground truth benchmark is introduced and applied for evaluating the segmentation performance of four widely used algorithms and for comparing them to the proposed FDB segmentation method. The results are discussed in Section 4.

Figure 1: Overview of the segmentation by the FDB method: In the analysis step, the original image (top row, left) is transformed into the Fourier domain (second column) and filtered by the first DHBB factor, obtaining 16 directional subbands (third and fourth columns). Next, soft-thresholding is applied to remove spurious patterns (second row, third and fourth columns). In the synthesis step, the feature image (second column) is reconstructed from these subbands using the second DHBB factor. Finally, the feature image is binarized and the ROI is obtained by morphological operations. The estimated ROI (third row, left) is compared to the manually marked ground truth segmentation (third row, right) in order to evaluate the segmentation performance.

2 Fingerprint Segmentation by FDB Methods

Our segmentation method uses a filter transforming an input 2D image into a feature image

(1)

Due to our filter design, the product above, as well as all convolutions, integrals and sums, is understood in the principal value sense.

Having clarified this, the symbol "pv" will be dropped in the following. At the core of (1) is the DHBB filter, which depends on the direction index and on tuning parameters providing sharpness. In fact, we suitably factorize this filter in the Fourier domain, using the argument reversion operator, and apply a thresholding procedure "in the middle". Underlying this factorization is a factorization of the bandpass filter involved. The precise filter design will be detailed in the following. Note that the directional Hilbert transform is also conveyed by a non-symmetric kernel. Reversing this transform (as well as the factor of the Butterworth) restores symmetry. It is inspired by the steerable wavelet [21, 22, 23] and to some extent similar in spirit to the curvelet transform [24], [25] and the curved Gabor filters [2]. We deal with curvature by analyzing single directions separately before the final synthesis.

Via factorization, possible phase shifts are compensated and unwanted frequencies introduced by the thresholding operator are eliminated, yielding a sparse smoothed feature image. This allows for easy binarization and segmentation via subsequent morphological methods, leading to the ROI.

Note that (1) can be viewed as an analog to the projection operator in sampling theory with the analysis and synthesis steps (e.g. [26]). In this vein we have the following three steps:

  1. Forward analysis (prediction): A first application of the argument reversed DHBB filter to a fingerprint image corresponds to a number of directional selections in certain frequency bands of the fingerprint image, giving the subband coefficients in (1).

  2. Proximity operator (thresholding): In order to remove intermediate coefficients due to spurious patterns (cf. [27]), we perform soft-thresholding on the filtered grey values, yielding the thresholded coefficients in (1).

  3. Backward synthesis: Subsequently, we apply the filter (non-reversed) again, giving the feature image assembled from all subbands. A numerical comparison to other synthesis methods, summation (corresponding to a naive reconstruction) and maximal response, in Appendix 5 shows the superiority of this smoothing step.

Due to the discrete nature of the image, we work in practice with the discrete version of the operator in Eq. (1).
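To make these three steps concrete, the following minimal sketch (Python/NumPy) organizes the analysis, thresholding and synthesis per directional subband. It is only a schematic reading of Eq. (1), not the reference implementation: dhbb_spectra and soft_threshold stand for the filter spectra and the shrinkage operator defined in Sections 2.1.1 and 2.1.2, argument reversal is approximated by conjugating the spectrum, and the recombination of the subband reconstructions into the final feature image is left open (the proposed operator is compared with plain summation and maximum synthesis in Appendix 5).

```python
import numpy as np

def fdb_subband_reconstructions(f, dhbb_spectra, soft_threshold, lam):
    """Schematic per-subband analysis / thresholding / synthesis (cf. Eq. (1)).

    f             : 2D grayscale fingerprint image as a float array
    dhbb_spectra  : list of DHBB subband spectra (one per direction), see Sec. 2.1.1
    soft_threshold: shrinkage operator from Sec. 2.1.2
    lam           : threshold value (cf. Eq. (8))
    """
    F = np.fft.fft2(f)
    reconstructions = []
    for H in dhbb_spectra:
        # 1. Forward analysis: filter with the first factor (argument reversal
        #    is approximated here by conjugating the spectrum).
        coeffs = np.real(np.fft.ifft2(np.conj(H) * F))
        # 2. Proximity operator: shrink coefficients due to spurious patterns.
        coeffs = soft_threshold(coeffs, lam)
        # 3. Backward synthesis: apply the (non-reversed) second factor.
        reconstructions.append(np.real(np.fft.ifft2(H * np.fft.fft2(coeffs))))
    # How these subband images are assembled into the feature image is defined
    # by the operator (1); see the comparison with summation and maximum
    # synthesis in Appendix 5.
    return reconstructions
```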

2.1 Filter Design for Fingerprint Segmentation

Figure 2: Image (a) displays the angularpass filter. Images (b, c) show the 1D and 2D Butterworth bandpass filters, and (d) the spectrum of the DHBB filter. Images (e, f) visualize the real and imaginary part of the DHBB filter. Images (g, h) display the squared magnitude of the spectrum of the DHBB filter in the frequency and spatial domains, which acts somewhat like a Gabor filter.

The features of interest in a fingerprint image are repeated (curved) patterns which are concentrated in a particular range of frequencies after a Fourier transformation. In principle, frequencies below this range correspond to homogeneous regions and frequencies above it to small-scale objects, i.e. noise. Taking this prior knowledge into account, we design an algorithm that captures these fingerprint patterns in different directional subbands in the frequency domain for extracting the texture.

In this section, we design angularpass and bandpass filters. The angularpass filter builds on iterates of the directional Hilbert transformation, a multidimensional generalization of the Hilbert transform called the Riesz transform. It can be represented via principal value convolution kernels. The bandpass filter builds on the Butterworth transform, which can be represented directly via a convolution kernel. We follow the standard technique of designing a bandpass filter from a lowpass filter, which has an equivalent representation in analog circuit design.

2.1.1 The n-th Order Directional Hilbert Transform of a Butterworth Bandpass

Although a fingerprint image is a discrete signal observed over a discrete grid, we start our considerations with a signal assuming values in a continuum. The frequency coordinates in the spectral domain will be denoted separately.

As usual, the following operators are defined first for functions in the Schwartz Space of rapidly-decaying and infinitely differentiable test functions:

and continuously extended onto

In our context we only need . Further, we denote the Fourier and its inverse transformations by

where i denotes the imaginary unit with i^2 = -1.

Butterworth bandpass

For a given order and frequency bounds, the one-dimensional (1D) Butterworth bandpass transform is defined via

cf. [28]. It is easy to verify that the magnitude response tends to zero as the frequency tends to zero or to infinity, and that it attains its unique maximum, with value 1, at the geometric mean of the two cutoff frequencies. In consequence, for high filter orders, this filter approximates the ideal filter

The ideal filter, however, suffers from the Gibbs effect. Letting , we factorize the bandpass Butterworth as

with (), the negative squares of which represent the different complex roots. Then, with the complex-valued factor below, called the transfer function,

we use the approximation: to obtain

This approximation is often called the bilinear transform, which turns out to reduce the frequency bandwidth of interest, cf. Figure 7.

The 1D filter is then generalized to a 2D domain. The McClellan transform [29, 30, 31, 32] would be one favorable method. Also, recently, bandpass filtering with a radial filter in the Fourier domain has been proposed in [33], [34] and [35] for enhancing fingerprint images. However, for a simpler construction of the 2D filter and as a data-friendly alternative to the polar tiling of the frequency plane, a Cartesian array is used instead (see [24], [25], [36], [37]).

Thus, on a rectangular domain with common cutoff frequencies

and the two characteristic functions

(see Figure 9), define

(2)

as the spectrum of our two-dimensional Butterworth filter . Note that since , there is a well defined

Figures 2(b) and 2(c) show an example of the 1D and 2D Butterworth bandpass filters.
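For readers who want to experiment with the bandpass part, the sketch below implements the textbook magnitude response of a 1D Butterworth bandpass (cf. [28]) and one plausible Cartesian extension to 2D. The precise parametrization of Eq. (2), in particular the indicator functions of Figure 9, is not reproduced in this excerpt, so the 2D construction is an approximation under stated assumptions rather than the filter actually used in the paper.

```python
import numpy as np

def butterworth_bandpass_1d(w, w_l, w_h, K):
    """Textbook magnitude response of a 1D Butterworth bandpass (cf. [28]).

    w   : array of frequencies
    w_l : lower cutoff frequency
    w_h : upper cutoff frequency
    K   : filter order (the transition becomes sharper as K grows)
    """
    w = np.where(w == 0, 1e-12, w)                    # avoid division by zero at DC
    ratio = (w ** 2 - w_l * w_h) / (w * (w_h - w_l))
    return 1.0 / np.sqrt(1.0 + ratio ** (2 * K))

def butterworth_bandpass_2d(u, v, w_l, w_h, K):
    """Illustrative Cartesian 2D extension: apply the 1D response along the
    dominant frequency axis, switching between the horizontal and the vertical
    direction via indicator functions (a rough reading of Eq. (2) and Figure 9)."""
    b_u = butterworth_bandpass_1d(u, w_l, w_h, K)
    b_v = butterworth_bandpass_1d(v, w_l, w_h, K)
    return np.where(np.abs(v) <= np.abs(u), b_u, b_v)
```

A frequency grid for an h-by-w image can be built, for example, with np.meshgrid(np.fft.fftfreq(h), np.fft.fftfreq(w), indexing='ij'), with the cutoffs expressed in the same units.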

n-th order directional Hilbert transformations

For more detail on the Hilbert transform and the Riesz transform , we refer the reader to the literature for an in-depth discussion [21], [23], [38], [39], [40], [41, 42, 43], [44], and [45].

Consider a vector

and set and compute, respectively, for

(3)
(4)

The first line (3), called the Riesz transform, has a representation as a principal value integral

where

With these settings, we have for the third line (4), called the n-th order directional Hilbert transform, that

Since and high powers preserve the values near while forcing all other values in towards , this filter gives roughly the same result as an inverse Fourier transform of a convolution of the signal’s Fourier transform with

for small . The directional Hilbert transform, however, suffers less from a Gibbs effect than this sharp cutoff filter.

In 2D, the direction vector is discretized into a finite number of orientations. We rewrite the impulse response of the n-th order directional Hilbert transform in (4) as

(5)

Putting together Eqs. (2), (4) and (5), for a fixed bandpass and directional subbands we thus obtain the DHBB filter of order n:

(6)
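Putting the directional and bandpass parts together, one DHBB subband spectrum in the spirit of Eq. (6) can be sketched as follows. The directional part is written here in the standard Riesz-transform form, i.e. the multiplier (-i <d, omega> / |omega|)^n for direction d; the paper's exact normalization and the bilinear approximation are not reproduced, and butterworth_bandpass_2d refers to the illustrative helper from the previous sketch.

```python
import numpy as np

def dhbb_spectrum(u, v, theta, n, w_l, w_h, K):
    """Sketch of one DHBB subband spectrum (cf. Eq. (6)): an n-th order
    directional Hilbert multiplier times the 2D Butterworth bandpass.
    The Riesz-transform form used here is an assumption about the exact
    normalization; butterworth_bandpass_2d is the helper sketched earlier.

    u, v  : 2D frequency coordinate grids
    theta : direction of the subband (one of the discretized orientations)
    n     : order of the directional Hilbert transform
    """
    norm = np.sqrt(u ** 2 + v ** 2)
    norm = np.where(norm == 0, 1e-12, norm)           # avoid 0/0 at the DC bin
    proj = (np.cos(theta) * u + np.sin(theta) * v) / norm
    hilbert = (-1j * proj) ** n                       # n-th order directional Hilbert
    return hilbert * butterworth_bandpass_2d(u, v, w_l, w_h, K)
```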

2.1.2 Thresholding

For given , soft-thresholding is defined as follows

(7)

The thresholded coefficients are obtained by applying (7) to the subband coefficients. Note that soft-thresholding is the solution of the l1-shrinkage minimization problem (cf. [46]). Figure 4 visualizes the effect of soft-thresholding and compares it with other operators (hard [46], semi-soft [47] and nonlinear [48] thresholding).
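In code, the soft-thresholding operator of Eq. (7) is the standard componentwise shrinkage; the following short sketch suffices.

```python
import numpy as np

def soft_threshold(c, lam):
    """Soft-thresholding (Eq. (7)): coefficients with magnitude below lam are
    set to zero, all others are shrunk towards zero by lam; this is the usual
    proximity operator of the l1 penalty (cf. [46])."""
    return np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)
```

For example, soft_threshold(np.array([-2.0, 0.3, 1.5]), 1.0) returns [-1.0, 0.0, 0.5].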

2.2 Fingerprint Segmentation

After having designed the FDB filter, let us now consider parameter selection, image binarization and morphological processing.

2.2.1 Parameter Choice for Texture Extraction

A fingerprint image will be rescaled such that its oscillation pattern stays in a specific range in the Fourier domain. For choosing the cutoff frequencies, we incorporate our prior knowledge about adult fingerprint images at a resolution of 500 DPI: valid interridge distances remain in a known range, approximately from 3 to 25 pixels [2]. The smallest interridge distance corresponds exactly to the limit for high frequencies, while the limit for low frequencies of the Butterworth bandpass filter corresponds to an interridge distance of about 12 pixels. The range above the high-frequency limit contains the small-scale objects which are considered as noise; the range below the low-frequency limit contains the low-frequency objects, corresponding to homogeneous regions.

The number of directions and the order of the directional Hilbert transform involve a tradeoff between the following effects. We observe that with increased order the filter's shape becomes thinner in the Fourier domain. Although this sparsity smooths the texture image in the spatial domain, the number of directions needs to grow with the order in order to fully cover all FOTIs. However, a disadvantage of choosing a large order and many directions is that errors occur on the boundary due to the over-smoothing effect, as illustrated in Figure 5 (o).

The next parameter to select is the order of the Butterworth filter. An illustration of the filter for different orders and with given cutoff frequencies is shown in Figure 6, and its bilinear approximation in Figure 7. As the order increases, the filter becomes sharper. For very large values of the order, it approaches the ideal filter which is known to cause the unfavorable Gibbs effect.

The thresholding value separates large coefficients corresponding to the fingerprint pattern (FOTIs), which are slightly attenuated due to soft-thresholding, from small coefficients corresponding to non-FOTIs and to FOTIs which are not features due to the fingerprint pattern (these are eliminated). On the one hand, if the threshold is chosen too large, more prominent parts of the true fingerprint tend to be removed. On the other hand, if it is chosen too small, not all unwanted features (as above) are removed, which may cause segmentation errors.

In order to find good trade-offs as described above, these parameters are trained as described in Section 3.1. In fact, since different fingerprint sensors have different properties, the threshold is adaptively adjusted to the intensity of the coefficients in all subbands as

(8)

Thus, instead of the threshold itself, the constant in Eq. (8) is trained for each sensor.

2.2.2 Texture Binarization

In the first step, the texture is decomposed by the operator (1) to obtain the reconstructed image. Then, this image is binarized using an adaptive threshold adjusted to its intensity, chosen in terms of the value from (8). If a pixel value is larger than this threshold, it is set to 1 (foreground); otherwise, it is set to 0 (background), as illustrated in Figure 1.
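As a sketch of this step, the adaptive threshold below scales a summary statistic of the subband coefficient magnitudes by a trained constant and then binarizes the reconstructed texture. Both the mean absolute value as the statistic and the direct reuse of that value as the binarization threshold are assumptions, since the exact forms of Eq. (8) and of the binarization threshold are not reproduced in this excerpt.

```python
import numpy as np

def adaptive_threshold(subband_coeffs, c):
    """Sensor-adaptive threshold in the spirit of Eq. (8): a trained constant c
    times a summary statistic of the coefficient intensities over all subbands
    (the mean absolute coefficient used here is an assumption)."""
    return c * np.mean([np.mean(np.abs(coeffs)) for coeffs in subband_coeffs])

def binarize_texture(g, threshold):
    """Binarization of the reconstructed texture image g (Section 2.2.2):
    pixels above the threshold become 1 (foreground), all others 0 (background)."""
    return (g > threshold).astype(np.uint8)
```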

2.2.3 Morphological Processing

In this final phase, we apply mathematical morphology (see Chapter 13 in [49]) to decide for each pixel whether it belongs to the foreground or background. Firstly, at each pixel, we build a block centered at that pixel and 8 neighboring blocks (cf. Figure 3). Then, for each block, we count the white pixels and check whether their number exceeds a threshold controlled by another parameter. If enough blocks are above this threshold, the pixel is considered as foreground.
Figure 3: The morphological element.

(9)

Then, the largest connected white pixel component is selected by a region filling method. Its convex hull is then the ROI. For better visualization, we have inverted white and black, i.e. we display the background by white pixels and the ROI by black pixels, cf. Figure 1.
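The block-voting decision described above can be sketched as follows. The parameter names (w for the block size, t for the white-pixel fraction, m for the number of agreeing blocks required) are illustrative stand-ins for the trained parameters of Tables 1 and 2, and the subsequent connected-component selection and convex hull are omitted.

```python
import numpy as np

def block_vote_foreground(binary, w, t, m):
    """Two-scale morphology sketch in the spirit of Eq. (9): for every pixel,
    inspect the w-by-w block centered at the pixel and its 8 neighboring
    blocks; a block counts as sufficiently white when its number of white
    pixels exceeds t * w * w, and the pixel is labeled foreground when at
    least m blocks pass that test."""
    h, width = binary.shape
    pad = w + w // 2                                  # room for the neighboring blocks
    padded = np.pad(binary, pad, mode="constant")
    out = np.zeros_like(binary)
    offsets = [(dy, dx) for dy in (-w, 0, w) for dx in (-w, 0, w)]
    for y in range(h):
        for x in range(width):
            votes = 0
            for dy, dx in offsets:
                cy, cx = y + pad + dy, x + pad + dx   # block center in padded coordinates
                block = padded[cy - w // 2: cy + w // 2 + 1,
                               cx - w // 2: cx + w // 2 + 1]
                if block.sum() > t * w * w:
                    votes += 1
            if votes >= m:
                out[y, x] = 1
    return out
```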

3 Evaluation Benchmark and Results

The databases of FVC2000, 2002 and 2004 [4, 5, 6] are publicly available and established benchmarks for measuring the verification performance of algorithms for image enhancement and fingerprint matching. Each competition comprises four databases: three of which contain real fingerprints acquired by different sensors and a database of synthetically generated images (DB 4 in each competition).

It has recently been shown that real and synthetic fingerprints can be discriminated with very high accuracy using minutiae histograms (MHs) [50]. More specifically, the MH of a minutiae template is computed, and then the earth mover's distance (EMD) [51] between the template MH and the mean MHs of a set of real and of synthetic fingerprints is computed. Classification is simply performed by choosing the class with the smaller EMD.

The nine databases containing real fingerprints have been obtained by nine different sensors and have different properties. The fingerprint image quality ranges from good quality images (especially FVC2002 DB1 and DB2) to low quality images which are more challenging to process (e.g. the databases of FVC2004). Some aspects of image quality concern both the segmentation step and the overall verification process, other aspects pose problems only for later stages of the fingerprint verification procedure, but have no influence on the segmentation accuracy.

Aspects of fingerprint image quality which complicate the segmentation:

  • dryness or wetness of the finger

  • a ghost fingerprint on the sensor surface

  • small scale noise

  • large scale structure noise

  • image artifacts e.g. caused by reconstructing a swipe sensor image

  • scars or creases interrupting the fingerprint pattern

Aspects of fingerprint image quality which make an accurate verification more difficult, but do not have any influence on the fingerprint segmentation step:

  • distortion, nonlinear deformation of the finger

  • small overlap area between two imprints

Each of the 12 databases contains 110 fingers with 8 impressions per finger. The training set consists of 10 fingers (80 images) and the test set contains 100 fingers (800 images). In total there are 10560 fingerprint images giving 10560 marked ground truth segmentations for training and testing.

3.1 Experimental Results

Segmentation Performance Evaluation

Let w and h be the width and height of the image in pixels. Let E_FG be the number of pixels which are marked as foreground by the human expert and estimated as background by an algorithm (missed/misclassified foreground). Let E_BG be the number of pixels which are marked as background by the human expert and estimated as foreground by an algorithm (missed/misclassified background). The average total error per image is defined as

Err = (E_FG + E_BG) / (w · h),    (10)

i.e. the fraction of misclassified pixels. The average of this error over the 80 training images is the basis for the parameter selection. In Table 3, we report the average error over the 800 test images for each database and each algorithm, expressed as a percentage.
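The per-image error of Eq. (10), as reconstructed above, can be computed directly from the two binary masks; multiplying by 100 gives the percentages of misclassified pixels reported in Table 3.

```python
import numpy as np

def segmentation_error(ground_truth, estimated):
    """Per-image segmentation error in the sense of Eq. (10): the fraction of
    pixels whose label differs between the manually marked ground truth and the
    estimated segmentation (both binary masks with 1 = foreground)."""
    missed_fg = np.sum((ground_truth == 1) & (estimated == 0))
    missed_bg = np.sum((ground_truth == 0) & (estimated == 1))
    return (missed_fg + missed_bg) / ground_truth.size
```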

Parameter Selection
The parameters of the FDB method are:
  • a constant for selecting the threshold in Eq. (8), which removes small coefficients corresponding to noise
  • the order of the directional Hilbert transform, which corresponds to the angularpass filter in Eq. (4)
  • the number of orientations in the angularpass filter in Eq. (4)
  • the order of the Butterworth bandpass filter in Eq. (2)
  • the window size of the block in the postprocessing step in Eq. (9)
  • a constant for selecting the morphology threshold in Eq. (9)
  • the number of neighboring blocks in Eq. (9)
Table 1: Overview of all parameters of the factorized directional bandpass (FDB) method for fingerprint segmentation. Values are reported in Table 2.
FVC DB
2000 1 0.06 4 5
2 0.07 2 5
3 0.06 4 4
4 0.03 1 5
2002 1 0.04 1 4
2 0.05 1 7
3 0.09 1 5
4 0.03 1 6
2004 1 0.04 1 7
2 0.08 2 5
3 0.07 1 6
4 0.05 1 5
Table 2: Overview of the parameters learned on the training set (three learned values per database). The other four parameters are fixed to the same values for all databases.

Experiments were carried out on all 12 databases and are reported in Table 3. For each method listed in Table 3, the required parameters were trained on each of the 12 training sets: the threshold values for the Gabor filter bank based approach by Shen et al. [8], and the threshold values for the Harris corner response based method by Wu et al. [9]. The parameters of the method by Bazen and Gerez are chosen as described in [7]: the window size of the morphology operator and the weights of the perceptron, which are trained in a limited number of iterations due to the large number of pixels in the training database. For the method of Chikkerur et al., we used the energy image computed by the implementation of Chikkerur, performed Otsu thresholding and applied mathematical morphology as explained in [49].

For the proposed FDB method, the involved parameters are summarized in Table 1 and the values of the learned parameters are reported in Table 2. Moreover, a mirror boundary condition of 15 pixels is used in order to avoid boundary effects. A number of conceivable parameter combinations was evaluated on the training set within a reasonable amount of time. The choice of these parameters balances the smoothing properties of the proposed filter, attempting to avoid both under-smoothing and over-smoothing.
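The parameter training amounts to a small exhaustive search over candidate combinations, judged by the average error on the 80 training images of each database. The sketch below assumes a hypothetical segment(image, params) wrapper around the full FDB pipeline and reuses segmentation_error from the previous sketch; the actual candidate grids used by the authors are not reproduced here.

```python
import itertools
import numpy as np

def train_parameters(train_images, ground_truths, candidate_grid, segment):
    """Exhaustive search over candidate parameter combinations (a sketch of the
    training procedure in Section 3.1): keep the combination with the lowest
    average error over the training images. `segment(image, params)` is a
    hypothetical wrapper around the full FDB segmentation pipeline."""
    best_params, best_err = None, np.inf
    for values in itertools.product(*candidate_grid.values()):
        params = dict(zip(candidate_grid.keys(), values))
        errs = [segmentation_error(gt, segment(img, params))
                for img, gt in zip(train_images, ground_truths)]
        avg = float(np.mean(errs))
        if avg < best_err:
            best_err, best_params = avg, params
    return best_params, best_err
```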

FVC DB GFB [8] HCR [9] MVC [7] STFT [10] FDB
2000 1 13.26 11.15 10.01 16.70 5.51
2 10.27 6.25 12.31 8.88 3.55
3 10.63 7.80 7.45 6.44 2.86
4 5.17 3.23 9.74 7.19 2.31
2002 1 5.07 3.71 4.59 5.49 2.39
2 7.76 5.72 4.32 6.27 2.91
3 9.60 4.71 5.29 5.13 3.35
4 7.67 6.85 6.12 7.70 4.49
2004 1 5.00 2.26 2.22 2.65 1.40
2 11.18 7.54 8.06 9.89 4.90
3 8.37 4.96 3.42 9.35 3.14
4 5.96 5.15 4.58 5.18 2.79
Avg. 8.33 5.78 6.51 7.57 3.30
Table 3: Error rates (average percentage of misclassified pixels averaged over 800 test images per database) computed using the manually marked ground truth segmentation and the estimated segmentation by these methods: a Gabor filter bank (GFB) response based method by Shen et al. [8], a Harris corner response (HCR) based approach by Wu et al. [9], a method by Bazen and Gerez using local gray-level mean, variance and gradient coherence (MVC) as features [7], a method applying short time Fourier transforms (STFT) by Chikkerur et al. [10] and the proposed method based on the factorized directional bandpass (FDB).

This systematic comparison of fingerprint segmentation methods clearly shows that the factorized directional bandpass (FDB) method outperforms the other four widely used segmentation methods on all 12 databases. An overview of visualized segmentation results by the FDB method is given in Figure 12. A few challenging examples for which the FDB method produces a flawed segmentation are depicted in Figure 13. Moreover, a comparison of all five segmentation methods and their main features for five example images is shown in Figures 14 to 18.

4 Conclusions

In this paper, we designed a filter specifically for fingerprints which is based on the directional Hilbert transform of Butterworth bandpass filters. A systematic comparison with four widely used fingerprint segmentation methods showed that the proposed FDB method outperforms these methods on all 12 FVC databases, using manually marked ground truth segmentation for the performance evaluation. The proposed FDB method for fingerprint segmentation can be combined with all methods for orientation field estimation, e.g. the line sensor method [52] or a global model based on quadratic differentials [53], followed by liveness detection [54] or fingerprint image enhancement [2, 55]. It can also be used in combination with alternative approaches, e.g. as a preprocessing step for locally adaptive fingerprint enhancement in the Fourier domain as proposed by Bartůněk et al. [20] or before applying structure tensor derived symmetry features for enhancement and minutiae extraction as proposed by Fronthaler et al. [56].

Notably, the filter is similar to the Gabor filter, which could have been used instead of the DHBB filter. Similarly, Bessel or Chebyshev transforms as well as B-splines as generalizations ([57]) could replace the Butterworth. We expect, however, for the reasons elaborated above, that relying on the DHBB filter gives superior segmentation results.

The manually marked ground truth benchmark and the implementation of the FDB method are available for download at
www.stochastik.math.uni-goettingen.de/biometrics/.

In doing so, we would like to facilitate the reproducibility of the presented results and promote the comparability of fingerprint segmentation methods.

Acknowledgements

S. Huckemann and C. Gottschlich gratefully acknowledge the support of the Felix-Bernstein-Institute for Mathematical Statistics in the Biosciences and the Niedersachsen Vorab of the Volkswagen Foundation. The authors would like to thank Benjamin Eltzner, Axel Munk, Gerlind Plonka-Hoch and Yvo Pokern for their valuable comments.

References

  • [1] D. Maltoni, D. Maio, A. K. Jain, and S. Prabhakar. Handbook of Fingerprint Recognition. Springer, London, U.K., 2009.
  • [2] C. Gottschlich. Curved-region-based ridge frequency estimation and curved Gabor filters for fingerprint image enhancement. IEEE Transactions on Image Processing, 21(4):2220–2227, April 2012.
  • [3] C. Gottschlich, T. Hotz, R. Lorenz, S. Bernhardt, M. Hantschel, and A. Munk. Modeling the growth of fingerprints improves matching for adolescents. IEEE Transactions on Information Forensics and Security, 6(3):1165–1169, September 2011.
  • [4] D. Maio, D. Maltoni, R. Cappelli, J. L. Wayman, and A. K. Jain. FVC2000: Fingerprint verification competition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(3):402–412, March 2002.
  • [5] D. Maio, D. Maltoni, R. Cappelli, J. L. Wayman, and A. K. Jain. FVC2002: Second fingerprint verification competition. In Proc. ICPR, pages 811–814, 2002.
  • [6] D. Maio, D. Maltoni, R. Cappelli, J. L. Wayman, and A. K. Jain. FVC2004: Third fingerprint verification competition. In Proc. ICBA, pages 1–7, Hong Kong, 2004.
  • [7] A.M. Bazen and S.H. Gerez. Segmentation of fingerprint images. In Proc. ProRISC, pages 276–280, Veldhoven, The Netherlands, November 2001.
  • [8] L.L. Shen, A. Kot, and W.M. Koo. Quality measures of fingerprint images. In Proc. AVBPA, pages 266–271, Halmstad, Sweden, June 2001.
  • [9] C. Wu, S. Tulyakov, and V. Govindaraju. Robust point-based feature fingerprint segmentation algorithm. In Proc. ICB 2007, pages 1095–1103, Seoul, Korea, August 2007.
  • [10] S. Chikkerur, A. Cartwright, and V. Govindaraju. Fingerprint image enhancement using STFT analysis. Pattern Recognition, 40(1):198–211, 2007.
  • [11] B.M. Mehtre, N.N. Murthy, S. Kapoor, and B. Chatterjee. Segmentation of fingerprint images using the directional image. Pattern Recognition, 20(4):429–435, August 1987.
  • [12] B.M. Mehtre and B. Chatterjee. Segmentation of fingerprint images - a composite method. Pattern Recognition, 22(4):381–385, August 1989.
  • [13] X. Chen, J. Tian, J. Cheng, and X. Yang. Segmentation of fingerprint images using linear classifier. EURASIP Journal on Applied Signal Processing, 2004(4):480–494, 2004.
  • [14] L. Wang, H. Suo, and M. Dai. Fingerprint image segmentation based on Gaussian-Hermite moments. In Proc. ADMA, pages 446–454, Wuhan, China, July 2005.
  • [15] E. Zhu, J. Yin, C. Hu, and G. Zhang. A systematic method for fingerprint ridge orientation estimation and image segmentation. Pattern Recognition, 39(8):1452–1472, August 2006.
  • [16] B.G. Sherlock, D.M. Monro, and K. Millard. Fingerprint enhancement by directional Fourier filtering. IEE Proc. Vision, Image and Signal Processing, 141(2):87–94, April 1994.
  • [17] P. Sutthiwichaiporn and V. Areekul. Adaptive boosted spectral filtering for progressive fingerprint enhancement. Pattern Recognition, 46(9):2465–2486, September 2013.
  • [18] J.S. Bartůněk, M. Nilsson, J. Nordberg, and I. Claesson. Adaptive fingerprint binarization by frequency domain analysis. In Proc. ACSSC, pages 598–602, Pacific Grove, CA, USA, October 2006.
  • [19] J.S. Bartůněk, M. Nilsson, J. Nordberg, and I. Claesson. Improved adaptive fingerprint binarization. In Proc. CISP, pages 756–760, Sanya, China, May 2008.
  • [20] J.S. Bartůněk, M. Nilsson, B. Sällberg, and I. Claesson. Adaptive fingerprint image enhancement with emphasis on preprocessing of data. IEEE Transactions on Image Processing, 22(2):644–656, February 2013.
  • [21] M. Unser and D. Van De Ville. Wavelet steerability and the higher-order Riesz transform. IEEE Transactions on Image Processing, 19(3):636–652, March 2010.
  • [22] S. Held, M. Storath, P. Massopust, and B. Forster. Steerable wavelet frames based on the Riesz transform. IEEE Transactions on Image Processing, 19(3):653–667, March 2010.
  • [23] M. Unser, N. Chenouard, and D. Van De Ville. Steerable pyramids and tight wavelet frames in . IEEE Transactions on Image Processing, 20(10):2705–2721, October 2011.
  • [24] J. Ma and G. Plonka. The curvelet transform. IEEE Signal Processing Magazine, 27(2):118–133, March 2010.
  • [25] E. Candès, L. Demanet, D. Donoho, and L. Ying. Fast discrete curvelet transforms. Multiscale Model. Simul., 5(3):861–899, September 2006.
  • [26] M. Unser. Sampling - 50 years after Shannon. Proceedings of the IEEE, 88(4):569–587, April 2000.
  • [27] M. Haltmeier and A. Munk. Extreme value analysis of empirical frame coefficients and implications for denoising by soft-thresholding. Applied and Computational Harmonic Analysis, 36(3):434–460, May 2014.
  • [28] J.G. Proakis and D.K. Manolakis. Digital Signal Processing: Principles, Algorithms, and Applications. Prentice Hall, Upper Saddle River, NJ, USA, 2007.
  • [29] J.H. McClellan. The design of two-dimensional digital filters by transformations. In Proc. APCIS, pages 247–251, Princeton, NJ, USA, 1973.
  • [30] R.M. Mersereau, W.F.G. Mecklenbräuker, and Jr. T.F. Quatieri. McClellan transformations for two-dimensional digital filtering: I - design. IEEE Transactions on Circuits and Systems, 23(7):405–414, July 1976.
  • [31] W.F.G. Mecklenbräuker and R.M. Mersereau. McClellan transformations for two-dimensional digital filtering: II - implementation. IEEE Transactions on Circuits and Systems, 23(7):414–422, July 1976.
  • [32] C.-C. Tseng. Design of two-dimensional FIR digital filters by McClellan transform and quadratic programming. IEE Proceedings - Vision, Image and Signal Processing, 148(5):325–331, October 2001.
  • [33] M. Kočevar, B. Kotnik, A. Chowdhury, and Z. Kačič. Real-time fingerprint image enhancement with a two-stage algorithm and block-local normalization. Journal of Real-Time Image Processing, pages 1–10, July 2014.
  • [34] M. Ghafoor, I.A. Taj, W. Ahmad, and N.M. Jafri. Efficient 2-fold contextual filtering approach for fingerprint enhancement. IET Image Processing, 8(7):417–425, July 2014.
  • [35] J. Yang, N. Xiong, and A.V. Vasilakos. Two-stage enhancement scheme for low-quality fingerprint images by learning from the images. IEEE Transactions on Human-Machine Systems, 43(2):235–248, March 2013.
  • [36] S. Yi, D. Labate, G.R. Easley, and H. Krim. A shearlet approach to edge analysis and detection. IEEE Transactions on Image Processing, 18(5):929–941, May 2009.
  • [37] M.N. Do and M. Vetterli. The contourlet transform: An efficient directional multiresolution image representation. IEEE Transactions on Image Processing, 14(12):2091–2106, December 2005.
  • [38] K.N. Chaudhury and M. Unser. Construction of Hilbert transform pairs of wavelet bases and Gabor-like transforms. IEEE Transactions on Signal Processing, 57(9):3411–3425, September 2009.
  • [39] M. Felsberg and G. Sommer. The monogenic signal. IEEE Transactions on Signal Processing, 49(12):3136–3144, December 2001.
  • [40] S.L. Hahn. Hilbert transforms in signal processing. Artech House, Boston, MA, USA, 1996.
  • [41] K. G. Larkin, D. J. Bone, and M. A. Oldfield. Natural demodulation of two-dimensional fringe patterns. I. General background of the spiral phase quadrature transform. J. Opt. Soc. Am. A, 18(8):1862–1870, August 2001.
  • [42] K. G. Larkin. Natural demodulation of two-dimensional fringe patterns. II. Stationary phase analysis of the spiral phase quadrature transform. J. Opt. Soc. Am. A, 18(8):1871–1881, August 2001.
  • [43] K. G. Larkin and P. A. Fletcher. A coherent framework for fingerprint analysis: Are fingerprints holograms? Optics Express, 15(14):8667–8677, 2007.
  • [44] M. Unser, D. Sage, and D. Van De Ville. Multiresolution monogenic signal analysis using the Riesz-Laplace wavelet transform. IEEE Transactions on Image Processing, 18(11):2402–2418, November 2009.
  • [45] S. Häuser, B. Heise, and G. Steidl. Linearized Riesz transform and quasi-monogenic shearlets. International Journal of Wavelets, Multiresolution and Information Processing, 12(3), May 2014.
  • [46] D.L. Donoho and J.M. Johnstone. Ideal spatial adaptation by wavelet shrinkage. Biometrika, 81(3):425–455, August 1994.
  • [47] T.F. Sanam and C. Shahnaz. A semisoft thresholding method based on Teager energy operation on wavelet packet coefficients for enhancing noisy speech. EURASIP Journal on Audio, Speech, and Music Processing, 2013(1):1–15, November 2013.
  • [48] M. Nasri and H. Nezamabadi-pour. Image denoising in the wavelet domain using a new adaptive thresholding function. Neurocomputing, 72(4-6):1012–1025, January 2009.
  • [49] M. Sonka, V. Hlavac, and R. Boyle. Image Processing, Analysis, and Machine Vision. Thomson, Toronto, Canada, 2008.
  • [50] C. Gottschlich and S. Huckemann. Separating the real from the synthetic: Minutiae histograms as fingerprints of fingerprints. IET Biometrics, 3(4):291–301, December 2014.
  • [51] C. Gottschlich and D. Schuhmacher. The shortlist method for fast computation of the earth mover’s distance and finding optimal solutions to transportation problems. PLOS ONE, 9(10):e110214, October 2014.
  • [52] C. Gottschlich, P. Mihăilescu, and A. Munk. Robust orientation field estimation and extrapolation using semilocal line sensors. IEEE Transactions on Information Forensics and Security, 4(4):802–811, December 2009.
  • [53] S. Huckemann, T. Hotz, and A. Munk. Global models for the orientation field of fingerprints: an approach based on quadratic differentials. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(9):1507–1517, September 2008.
  • [54] C. Gottschlich, E. Marasco, A.Y. Yang, and B. Cukic. Fingerprint liveness detection based on histograms of invariant gradients. In Proc. IJCB, Clearwater, FL, USA, September 2014.
  • [55] C. Gottschlich and C.-B. Schönlieb. Oriented diffusion filtering for enhancing low-quality fingerprint images. IET Biometrics, 1(2):105–113, June 2012.
  • [56] H. Fronthaler, K. Kollreider, and J. Bigun. Local features for enhancement and minutiae extraction in fingerprints. IEEE Transactions on Image Processing, 17(3):354–363, March 2008.
  • [57] D. Van De Ville, T. Blu, and M. Unser. Isotropic polyharmonic B-splines: scaling functions and wavelets. IEEE Transactions on Image Processing, 14(11):1798–1813, November 2005.

Supplementary Appendix

5 Comparison of the Operator in the FDB Method with the Summation and Maximum Operators

We briefly illustrate the differences between the proposed FDB filter (1) and the maximum and summation operators for the coefficients in all directional subbands. Figure 10 compares the results of these operators for a low-quality and a good quality example. The functions are described as follows

  • The maximum operator without and with the shrinkage operator (7) (depicted in the second and third rows in Figure 10).

  • The summation operator without and with the shrinkage operator (7) (displayed in the fourth and fifth rows in Figure 10); a code sketch of both recombinations is given below.
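For completeness, the maximum and summation recombinations used in this comparison can be sketched as follows; the proposed operator (1) is a different, smoother synthesis and is not reproduced here.

```python
import numpy as np

def combine_subbands(subband_images, mode="sum"):
    """Maximum or summation recombination of the directional subband
    reconstructions, as used for the comparison in Figure 10 (optionally after
    applying the shrinkage operator (7) to each subband beforehand)."""
    stack = np.stack(subband_images, axis=0)
    return stack.max(axis=0) if mode == "max" else stack.sum(axis=0)
```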

6 Additional Figures

Figure 4: Four typical thresholding functions (red: hard, black: soft, green: semi-soft, magenta: nonlinear) are compared (top left). The following six pairs show an image and the visualization of the corresponding 1D cross section along the red line. From left to right and top to bottom: the original image, the coefficients and the thresholded coefficients for the soft, hard, semi-soft and nonlinear thresholding operators. Comparing the four cross sections in the bottom row, we observe that soft-thresholding achieves the sparsest solution.
Figure 5: Angular bandpass filter at different orders (n = 3, 20 and 100) and the corresponding responses (last row).
Figure 6: Butterworth bandpass filters of different orders, the angular bandpass filter, and their responses (first row: 1D Butterworth; second row: 2D Butterworth; last row: their responses).
Figure 7: Image (a) displays a 1D Butterworth bandpass filter (blue) and its approximation (red). Image (b) shows the 2D Butterworth bandpass filter at , and the corresponding DHBB filter in the Fourier and spatial domains (c, d) for the approximation by the bilinear transform. Image (e) visualizes the 2D version of the original filter and the corresponding DHBB filter (f, g).
Figure 8: The ground truth segmentation (a) and the binarized texture image (b) for an example fingerprint. Applying a standard morphology operation like closing (dilation followed by erosion) instead of the proposed method connects in this example the white fingerprint texture with structure noise close to the margin of the texture and the result is a defective segmentation (c). The proposed morphology avoids this undesired effect by considering neighborhoods on two scales: cells of size pixels and blocks of cells.
Figure 9: The indicator functions in the horizontal direction (a) and vertical direction (b).
Figure 10: Comparison of five image reconstruction strategies and their effect on the resulting segmentation for a low-quality and a good-quality example image (segmented images with error in percent, and reconstructed images). Rows: the proposed operator; the maximum operator without and with the shrinkage operator (7); the summation operator without and with the shrinkage operator (7). Panel errors (a)–(j): 1.98, 1.44, 7.18, 5.1, 5.15, 4.29, 8.67, 6.42, 8.53 and 2.82 percent.
Figure 11: Visualization of the coefficients in the 16 subbands of the DHBB filter.
Figure 12: Segmented fingerprint images and the corresponding reconstructed texture images by the FDB method for FVC2000 (first and second row), FVC2002 (third and fourth row) and FVC2004 (fifth and sixth row). Columns f.l.t.r correspond to DB1 to DB4.
Figure 13: Examples of incorrectly segmented fingerprint images due to: (a) a ghost fingerprint on the sensor surface (Err = 25.23), (b) dryness of the finger (Err = 18.97), (c) texture artifacts in the reconstructed image (Err = 11.39), (d) wetness of the finger (Err = 5.98). The first row shows the segmentation obtained by the FDB method, the second row displays the reconstructed image and the last row visualizes the manually marked ground truth segmentation.
Figure 14: Segmented fingerprint images and their features for the different methods on FVC2002_DB3_IM_15_1. (a) ground truth; (b, g) FDB (Err = 2.71); (c, h) Gabor (Err = 10.08); (d, i) Harris (Err = 11.26); (e, j, k, l) Mean-Variance-Coherence (Err = 5.17); (f, m) STFT (Err = 3.12).
Figure 15: Segmented fingerprint images and their features for the different methods on FVC2004_DB1_IM_24_7. (a) ground truth; (b, g) FDB (Err = 0.6); (c, h) Gabor (Err = 2.08); (d, i) Harris (Err = 1.34); (e, j, k, l) Mean-Variance-Coherence (Err = 3.87); (f, m) STFT (Err = 2.45).
Figure 16: Segmented fingerprint images and their features for the different methods on FVC2000_DB3_IM_17_3. (a) ground truth; (b, g) FDB (Err = 7.03); (c, h) Gabor (Err = 20.55); (d, i) Harris (Err = 14.98); (e, j, k, l) Mean-Variance-Coherence (Err = 21.22); (f, m) STFT (Err = 13.45).
Figure 17: Segmented fingerprint images and their features for the different methods on FVC2004_DB2_IM_56_8. (a) ground truth; (b, g) FDB (Err = 5.24); (c, h) Gabor (Err = 13.83); (d, i) Harris (Err = 15.57); (e, j, k, l) Mean-Variance-Coherence (Err = 6.74); (f, m) STFT (Err = 5.62).
Figure 18: Segmented fingerprint images and their features for the different methods on FVC2004_DB2_IM_65_7. (a) ground truth; (b, g) FDB (Err = 5.51); (c, h) Gabor (Err = 12.43); (d, i) Harris (Err = 6.46); (e, j, k, l) Mean-Variance-Coherence (Err = 4.47); (f, m) STFT (Err = 6.42).