The Fuzzy Vault for fingerprints is Vulnerable to Brute Force Attack

08/22/2007
by   Preda Mihailescu, et al.
GWDG

The fuzzy vault approach is one of the best studied and well accepted ideas for binding cryptographic security into biometric authentication. The vault has been implemented in connection with fingerprint data by Uludag and Jain. We show that this instance of the vault is vulnerable to brute force attack. An interceptor of the vault data can recover both secret and template data using only generally affordable computational resources. Some possible alternatives are then discussed and it is suggested that cryptographic security may be preferable to the one - way function approach to biometric security.


1. Introduction

Secure communication relies on trustworthy authentication. The most widespread authentication methods still use passwords and pass-phrases as a first step towards proving identity. Secure pass-phrases are hard to remember, and the modern user needs a large number of dynamic passwords for her security. This limitation has been known for a long time, and it can in part be compensated by the use of chip cards as universal access tokens.

Biometric identification, on the other hand, is based on the physical identity of a person rather than on the control of a token. Reliable biometric authentication would thus put an end to password insecurity, various repudiation disputes and many more shortcomings of phrase- or token-based identities. Unlike the deterministic keys which are common in cryptography, biometric data are only reproducible within vaguely controlled error bounds, are prone to various physical distortions and have quite low entropy.

Overcoming the disadvantages of the two worlds by using their reciprocal advantages is an important concern. Unsurprisingly, we look back on almost a decade in which the biometrics community developed an increasing interest in the security and privacy of biometric systems. We refer to the survey [UPPJ] of Uludag et al. on biometric cryptosystems for an up-to-date overview of the work in this area.

Researchers from cryptography and coding theory attempted to develop new concepts allowing biometric data to be modelled and evaluated from an information-theoretical point of view. The resulting algorithms deal with the specific constraints of biometrics: non-uniformly distributed data with incomplete reproducibility and low, hard-to-estimate entropy. Both communities are motivated by the wish to handle biometrics like a classical password, thus protecting it by some variant of one-way function and performing the verification in the image space. Unlike passwords, biometrics are not deterministic; this generates substantial challenges for verification after one-way function transforms. Furthermore, disclosure of biometric templates is considered a more vital loss than the loss of a password.

Juels and Wattenberg [JW] and then Juels and Sudan [JS] developed, with the fuzzy commitment and the fuzzy vault, two related approaches with a strong impact on biometric security. The papers of Dodis et al. [DORS, BDKOS] can be consulted for further theoretical development of the concepts of Juels et al. and their formalization in an information-theoretic framework. It is inherent to the problem that core concepts of the theory, such as the entropy of a biometric template, are hard even to estimate. Thus the security proofs provided by the theory do not translate directly into practical estimates or indications. As a consequence, we shall see that concrete implementation studies of the fuzzy vault have to be considered for security investigations, while the original paper [JS] does not contain sufficient directions for application to allow such an analysis. Since we focus on the application of the vault to the biometry of fingerprints, we shall often use the term fingerprint vault for this case.

Clancy, Kiyavash and Lin gave in 2003 [CKL] a statistically supported analysis of a realistic implementation of the vault for fingerprints. The authors observe from the start that the possible parameter choices in this context are quite narrow if sufficient security is to be achieved; they succeed in defining a set of parameters which they claim provides cryptographically acceptable security against an attack. We show that faster attacks are possible in the given frame, thus making brute force feasible. The analysis in [CKL] is outstanding and has been used directly or indirectly in subsequent papers; the good security was obtained at the price of a quite high error probability.

Uludag and Jain provided in [UJ1, UJ2] an implementation of the fuzzy vault for fingerprints which uses alignment helper data and was applied to the fingerprints from the database [FVC]. This improves the identification rate; however, some of the simplifications they make with respect to [CKL] reduce security quite dramatically. The ideas of these authors could, however, very well be combined with the more defensive security approach of Clancy et al.

Yang and Verbauwhede [YV] describe an implementation of the vault, with no alignment help, which follows the concepts of [CKL] closely and focuses on adapting to varying template quality and the number of minutiae recognized in these templates.

We describe the original fuzzy vault in Section 2 and argue that the security proofs and remarks given in [JS] are insufficient for fingerprint applications. In the same section we describe and prove our approach to a brute force attack. In Section 3 we discuss the various implementations mentioned above and show that brute force can be performed in feasible time in all instances. In Section 4 we discuss possible variants and alternatives and suggest the use of additional sources of information, thus raising the security of the vault to acceptable cryptographic standards.

2. The Fuzzy Vault

The fuzzy vault is an algorithm for hiding a secret string in such a way that a user in possession of some additional information can easily recover it, while an intruder should face computationally infeasible problems in order to achieve this goal. The information can be fuzzy, in the sense that the secret is locked by some related, but not identical, data. Juels and Sudan define the vault in general terms, allowing multiple applications. Biometry is one of them, and we shall restrict our description directly to the setting of fingerprints. Generalizations are obvious, or can be found in [JS, DORS, BDKOS].

The string is prepared for transmission in the vault as follows. Let the secret be a string of a given bit length. The user (Alice, say) who wishes to be identified by this string has her finger scanned, and a locking set comprising the Cartesian coordinates of minutiae in the finger scan is selected from this finger template; the pairs of coordinates are concatenated to single numbers. One selects a finite field attached to the vault, large enough to encode the secret, and maps the locking set into it by some convention. Selecting a polynomial of fixed degree whose coefficients encode the secret in some predetermined way, one builds the genuine set:

which encodes the information of the secret. The genuine verifier Bob has an original template of Alice's finger and should use this information in order to recover the genuine set and then the secret. In order to make an intruder's (Victor, say) attempt at recovery computationally hard, the genuine set is mixed with a large set of chaff points

whose abscissas are distinct from the genuine ones and whose ordinates lie off the graph of the polynomial; the chaff points should be uniformly randomly distributed. Chaff points and the genuine list are shuffled into a common vault with parameters:
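The locking step can be sketched in a few lines. Everything below is our own toy instantiation, not the construction of [JS] or [CKL]: the field size, the representation of minutiae as single abscissas and all parameter values are illustrative assumptions.

```python
import random

P = 2**16 + 1  # small prime field for illustration; a real vault sizes the field to the secret

def poly_eval(coeffs, x, p=P):
    """Evaluate the polynomial given by `coeffs` (lowest degree first) at x, via Horner's rule."""
    y = 0
    for c in reversed(coeffs):
        y = (y * x + c) % p
    return y

def lock(secret_coeffs, minutiae, num_chaff, rng=None):
    """Toy vault locking: genuine points lie on the graph of the secret polynomial,
    chaff points are random pairs kept off that graph."""
    rng = rng or random.Random(0)
    genuine = [(x, poly_eval(secret_coeffs, x)) for x in minutiae]
    used_x = set(minutiae)
    chaff = []
    while len(chaff) < num_chaff:
        x, y = rng.randrange(P), rng.randrange(P)
        if x in used_x or y == poly_eval(secret_coeffs, x):
            continue  # abscissas stay distinct and chaff ordinates avoid the graph
        used_x.add(x)
        chaff.append((x, y))
    vault = genuine + chaff
    rng.shuffle(vault)  # hide which points are genuine
    return vault
```

With, say, `lock([3, 1, 4], [10, 20, 30, 40], 200)` one obtains a shuffled list of 204 points of which exactly four lie on the graph of 3 + x + 4x².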

Upon reception, Bob will generate an unlocking set. This set contains those vault points whose coordinates well approximate the coordinates of minutiae in his template. Note that in order to have a reasonable approximation of the minutiae coordinates of the same finger in different templates, these templates must

  • Have negligible nonlinear distortions.

  • Be aligned modulo affine transforms.

The second condition is addressed in [UJ2]. The unlocking set may be erroneous, either admitting some chaff points which lie closer to the template minutiae than genuine locking points, or making the choice of a sufficiently large unlocking set hard. Both problems can be addressed, within given limits, by error correcting codes. Thus Juels and Sudan suggest using Reed-Solomon codes for decoding.
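The unlocking step can be sketched as follows, again as a toy over a tiny field. The nearest-point matching below stands in for the real minutiae matcher, and no error correcting code is used, so all matched points must be genuine for recovery to succeed; every parameter is an illustrative assumption.

```python
P = 97  # toy field; a real vault uses a field large enough for the secret

def poly_eval(coeffs, x, p=P):
    """Evaluate the polynomial given by `coeffs` (lowest degree first) at x."""
    y = 0
    for c in reversed(coeffs):
        y = (y * x + c) % p
    return y

def lagrange_interpolate(points, p=P):
    """Coefficients (lowest degree first) of the unique polynomial of degree
    len(points)-1 through `points`, over GF(p)."""
    coeffs = [0] * len(points)
    for i, (xi, yi) in enumerate(points):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            new = [0] * (len(basis) + 1)
            for k, c in enumerate(basis):      # multiply basis by (x - xj)
                new[k + 1] = (new[k + 1] + c) % p
                new[k] = (new[k] - c * xj) % p
            basis = new
            denom = denom * (xi - xj) % p
        scale = yi * pow(denom, p - 2, p) % p  # Fermat inverse of the denominator
        for k, c in enumerate(basis):
            coeffs[k] = (coeffs[k] + scale * c) % p
    return coeffs

def unlock(vault, template_xs, degree, tol=2):
    """Toy unlocking: match each template abscissa to the nearest vault point,
    then interpolate the first degree+1 matches."""
    unlocking = []
    for tx in template_xs:
        best = min(vault, key=lambda pt: abs(pt[0] - tx))
        if abs(best[0] - tx) <= tol and best not in unlocking:
            unlocking.append(best)
    if len(unlocking) < degree + 1:
        return None
    return lagrange_interpolate(unlocking[:degree + 1])
```

With the vault `[(10, 25), (20, 71), (30, 44), (50, 7), (60, 9)]` hiding 3 + x + 4x² over GF(97), `unlock(vault, [11, 21, 29], 2)` recovers `[3, 1, 4]` even though the template abscissas are slightly perturbed.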

The security argumentation in [JS] is based upon the expectation that the chaff points will yield a substantial number of subsets whose coordinates are interpolated by polynomials of the correct degree, thus hiding the correct polynomial from Victor among a large set of spurious candidates. The argument is backed up by the following Lemma, a proof of which can be found in [JS, CKL].

Lemma 1.

For every parameter choice as above and every vault, there are at least a correspondingly large number of spurious polynomials whose graphs each contain the required number of vault couples.

2.1. A brute force attack

If Victor intercepts a vault but has no additional information about the location of minutiae or about their statistics, he may still try to recover the secret by brute force trials. For this he needs to find enough points of the genuine list. The chances that randomly chosen points of the vault all belong to the genuine list are:

(1)

This, together with the fact that the probability that a random pair lies on the graph of a given polynomial equals the inverse of the field size, provides the basis for the proof of Lemma 1.

Lagrange interpolation of a polynomial of given degree can be done in softly linear time in the degree [GG]; checking whether an additional point lies on the graph of the candidate polynomial requires only a linear number of operations, so many such verifications can be done at the cost of one interpolation.

We now assume, with Clancy et al., that there is a minimal degree with the property that, among all polynomials of this degree which interpolate vault points, the genuine polynomial is with high probability the only one which interpolates at least the required number of points. This yields a criterion for identifying the secret [CKL]:

Lemma 2.

The complexity of the brute force attack problem, using a suitable value as above, is bounded accordingly.

We suggest the following brute force attack, which does not depend on the value of the minimal degree and is stronger than the previous one:

Lemma 3.

Let a fuzzy fingerprint vault be given, with parameters chosen as above. Then an intruder having intercepted it can recover the secret in a number of operations quantified in the proof below.

Proof.

It follows from (1) that Victor can expect to find a tuple of points from the locking set after a corresponding number of trials. In order to find such a set and then the secret, for each candidate tuple Victor has to

  • Compute the interpolating polynomial. It is proved in [GG] that the implicit constant for Lagrange interpolation is small; computing all the interpolation polynomials thus requires the corresponding number of operations.

  • Search for a further vault point lying on the graph of the candidate polynomial; call this condition (2). This check costs only the equivalent of a fraction of a Lagrange interpolation. If no such point is found, discard the tuple.

  • If the tuple was not discarded, search for a further point which verifies (2). This step succeeds with the corresponding probability. If a point is found, add it to the unlocking set; otherwise discard the tuple.

  • Proceed until a break condition is met (no further points on the graph of the candidate polynomial) or sufficiently many points have been found, in which case the candidate equals the genuine polynomial with high probability.

Adding up the numbers of operations required by steps 1 to 4, with weights given by their probabilities of occurrence, one finds the stated bound,

as claimed. ∎
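The mechanics of this attack can be sketched as follows: interpolate each candidate tuple, then count how many further vault points lie on its graph. This is our own toy run over a tiny field with parameters far below any realistic vault, purely to illustrate the loop; with realistic parameters, spurious candidates reaching the threshold are negligibly rare.

```python
from itertools import combinations

P = 97  # toy field

def poly_eval(coeffs, x, p=P):
    """Evaluate the polynomial given by `coeffs` (lowest degree first) at x."""
    y = 0
    for c in reversed(coeffs):
        y = (y * x + c) % p
    return y

def lagrange_interpolate(points, p=P):
    """Coefficients (lowest degree first) of the polynomial through `points` over GF(p)."""
    coeffs = [0] * len(points)
    for i, (xi, yi) in enumerate(points):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            new = [0] * (len(basis) + 1)
            for k, c in enumerate(basis):      # multiply basis by (x - xj)
                new[k + 1] = (new[k + 1] + c) % p
                new[k] = (new[k] - c * xj) % p
            basis = new
            denom = denom * (xi - xj) % p
        scale = yi * pow(denom, p - 2, p) % p
        for k, c in enumerate(basis):
            coeffs[k] = (coeffs[k] + scale * c) % p
    return coeffs

def brute_force(vault, degree, min_on_graph):
    """Try every (degree+1)-tuple of vault points: interpolate it, then count
    the vault points on the candidate's graph. A candidate reaching the
    threshold `min_on_graph` is accepted as the genuine polynomial."""
    for candidate_tuple in combinations(vault, degree + 1):
        cand = lagrange_interpolate(list(candidate_tuple))
        on_graph = sum(1 for (x, y) in vault if poly_eval(cand, x) == y)
        if on_graph >= min_on_graph:
            return cand
    return None
```

With five genuine points of 3 + x + 4x² and four collinear chaff points, `brute_force(vault, 2, 5)` returns `[3, 1, 4]`: the chaff line interpolates only four vault points and never reaches the threshold.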

Here are some remarks on factors that influence the complexity of the brute force attack:

  • Restricting the region of interest from which Victor chooses points for his unlocking set is irrelevant if minutiae are assumed to be uniformly distributed over the template. In this case, the sizes of the genuine list and of the vault are scaled by the same factor, and the complexity of brute force remains unchanged.

  • The complexity grows when the degree of the polynomial is increased. However, high degrees require large unlocking sets, which may be a problem for average-quality fingerprints and scanners. Thus one can only raise the degree to an extent which depends on the quality of both scanner and fingerprint. The issue of adapting to these factors is addressed in [YV].

  • The complexity grows when the number of chaff points is increased. There is a bound on this number, given by the size of the image on the one side and the variance of the minutiae locations between different data captures and extractions [CKL] on the other. Clancy and his coauthors find empirically a lower bound for the distance between chaff points, and this distance was essentially respected by the subsequent works as well.

  • The complexity grows when the size of the genuine list is reduced. This is, however, also detrimental to unlocking, since it may reduce the size of the unlocking set below the required minimum.
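The influence of these three parameters can be made concrete with a small estimator. One plausible reading of the trial count is C(r, k+1)/C(t, k+1) for a vault of r points, t of them genuine, and polynomial degree k, with each trial costing roughly a constant times a softly linear interpolation; the constant and all numeric values below are our illustrative guesses, not the parameters of [CKL].

```python
from math import comb, log2

def attack_log2_ops(r, t, k, c_interp=10.0):
    """log2 of the expected operation count of the brute force attack:
    expected trials = C(r, k+1) / C(t, k+1), each costing roughly
    c_interp * (k+1) * log2(k+1)**2 field operations (softly linear
    Lagrange interpolation; the constant c_interp is an assumption)."""
    log2_trials = log2(comb(r, k + 1)) - log2(comb(t, k + 1))
    log2_per_trial = log2(c_interp * (k + 1) * log2(k + 1) ** 2)
    return log2_trials + log2_per_trial
```

The three remarks above then read off directly: raising r (more chaff), raising k (higher degree), or lowering t (smaller genuine list) each increases the exponent returned by `attack_log2_ops`.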

What can be inferred about the security of fingerprint vaults from the seminal paper [JS]? First, one observes that Juels and Sudan suggest the use of error correcting codes, thus avoiding transmitting in the vault explicit indications of whether an interpolation polynomial is the correct one. Uludag and Jain, on the other hand, suggest in [UJ2] the use of CRC codes: the secret is padded with a CRC code, adding to the minimal degree needed to encode it. Upon decoding, Bob can check the CRC and ascertain that he has found the correct secret. This simplifies the unlocking procedure, but also allows the attacker to verify whether he has found a correct unlocking set. Does this bring advantages to Victor? The odds of finding a correct CRC are equal to the probability that the required number of points are interpolated by the same polynomial of the given degree. Thus, for sufficiently large degree, Victor has no gain, as follows from Lemma 3.

It is shown in Chapters 4 and 5 of [JS] that the number of chaff points is essential for security. For fingerprints, the suggested minimum naturally decreases the average distance between the points in the list to only a few pixels, depending on the resolution of the original image. This is below realistic limits, as mentioned in (iii) above. At this distance, even in the presence of a perfect alignment, the genuine verifier Bob would need some additional information, like a CRC or similar, providing confirmation of the correct secret. But such a confirmation is contrary to the security assumptions on the basis of which Juels and Sudan make their evaluation.
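The CRC mechanism discussed here is easy to sketch. [UJ2] uses a 16-bit CRC; the sketch below uses the 32-bit CRC from the Python standard library instead, which changes nothing about the argument: the check gives Bob, and equally Victor, a yes/no oracle for a candidate secret.

```python
import binascii

def pad_with_crc(secret: bytes) -> bytes:
    """Append a CRC-32 checksum to the secret before encoding it into the polynomial."""
    crc = binascii.crc32(secret) & 0xFFFFFFFF
    return secret + crc.to_bytes(4, "big")

def crc_ok(candidate: bytes) -> bool:
    """The decoder's (or attacker's) check: does the trailing CRC match the body?"""
    body, tail = candidate[:-4], candidate[-4:]
    return (binascii.crc32(body) & 0xFFFFFFFF).to_bytes(4, "big") == tail
```

A genuine decoding passes the check, while a candidate recovered from a tuple containing chaff almost surely fails it; this is exactly the verification oracle the attacker can exploit.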

There is an apparent conflict between the general security proofs in [JS] and realistic applications of the fuzzy vault to fingerprints. In the implementation chapter, the authors explicitly warn that applications involving privacy-protected matching cannot achieve sufficient security. It is conceivable that fingerprint matching would be considered by the authors as belonging to this category.

3. Implementations of the fingerprint vault

We start with the most in-depth analysis of security parameters for the fingerprint vault, which was done by Clancy and coauthors in [CKL]. The paper focuses on applications to key release on smart cards. They suggest using multiple scans in order to obtain, by correlation, more reliable locking sets. As mentioned above, the variance of minutiae locations which they observed in the process leads to defining a minimum distance between chaff (and genuine) points which is necessary for correct unlocking. This minimal distance implies an upper bound on the size of the vault and thus on the number of chaff points.

The authors use very interesting arguments on packing densities and argue that, in order to preserve the randomness of the chaff points, these cannot have maximal packing density. On the other hand, assuming that the intruder has access to a sequence of vaults associated to the same fingerprint and can align the data of these vaults, the randomness of the chaff points allows a correlation attack for finding the genuine minutiae.

This observation suggests instead using a perfectly regular, high-density chaff point packing: a hexagonal grid with fixed mutual distance between the points. The genuine minutiae can be rounded to grid points, and Victor will have no clue for distinguishing these from the chaff points. We comment on this topic below.

The implementation documented in this paper suggests a set of parameters for optimal security. The brute force attack in Lemma 3 is more efficient than the one of Theorem 1 of [CKL], on which they base their security estimates. Using their parameters and Lemma 3, we find an attack complexity which, compared to the complexity of genuine unlocking, yields a security factor below cryptographic security, unlike the one deduced by Clancy et al. in [CKL]. Since the relevant empirical values in [CKL] range over an interval, it is possible that their estimate was gained by using the extreme value which corresponds to maximal security. However, this leads to a small margin between the size of the genuine list and the number of points required for unlocking, and this may reduce the rate of correct decodings.

Given the balanced arguments used in the parameter choice, the security bounds obtained on the basis of [CKL] are an indication of the vulnerability of the fingerprint vault in general.

Yang and Verbauwhede describe in [YV] an implementation of the vault in which the degree of the polynomial varies with the size of the genuine list, which itself depends directly on template quality. From the point of view of security, the paper can be considered a follow-up of [CKL] which addresses the problem of poor image quality and its consequences for the size of the locking set. The size of the secret and the polynomial degrees are adapted to the size of the locking sets. The proposal is consistent; its vulnerability to attacks is comparable to [CKL] in general, and higher when adapting to poor image quality.

The major contribution of Uludag and Jain in [UJ2] is to provide a useful set of helper data for easing image alignment. This has an important impact on the identification rate. As mentioned above, they make the elegant and simple proposal of adding a CRC to the secret, thus easing the unlocking work. We discussed above the resulting increase in security risk: this is arguably small. On the other hand, their choice of a low polynomial degree and a small vault makes their system more vulnerable, with a low absolute attack complexity. Better security can be achieved in this system by using the parameters of [CKL].

4. Security discussion

We discuss in this section several variants for improving the security of the fingerprint vault. Future research should analyze the practicality of some of them.

4.1. Using more fingers

We have shown that the parameters which control the security factor are naturally bounded by image size, variance of minutiae location and the average number of reliable minutiae. They cannot therefore be modified beyond certain bounds, and it is likely that these bounds have been well established in [CKL]. It is thus natural to propose using, for instance, the imprints of two fingers rather than only one for creating the vault. This leads practically to a squaring of the security factor.

4.2. Non - random chaff points

As mentioned above, it is suggested in [CKL] that chaff points should have a random distribution; this leads to halving the packing density compared to a maximal density packing. However, one can embrace the opposite attitude. This consists in laying a hexagonal grid, of the cell size proposed by the authors, upon the fingerprint template, thus achieving maximal packing. Each grid point is attached to some vault point, chaff or genuine. Thus Victor will have no means of distinguishing between chaff points and genuine ones, despite the regularity of the grid.

Thanks to the error correcting codes, the genuine points can always be displaced to a grid point, moving them by a bounded distance. This strategy improves the security of the vault in two ways: by doubling the size of the vault and by preventing correlation attacks. The consequences still need to be analyzed.
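The grid construction can be sketched as follows. The template dimensions and spacing below are placeholders; the snapping tolerance follows from the geometry, since a minutia moves by at most the circumradius of a grid cell, about 0.577 times the grid spacing.

```python
import math

def hex_grid(width, height, d):
    """Points of a hexagonal (triangular) lattice with nearest-neighbour
    distance d, covering a width x height template."""
    pts = []
    row_height = d * math.sqrt(3) / 2   # vertical distance between rows
    row, y = 0, 0.0
    while y <= height:
        x = d / 2 if row % 2 else 0.0   # odd rows are shifted by half a step
        while x <= width:
            pts.append((x, y))
            x += d
        y += row_height
        row += 1
    return pts

def snap_to_grid(minutiae, grid):
    """Round each genuine minutia to its nearest grid point, so genuine and
    chaff points become indistinguishable by position alone."""
    return [min(grid, key=lambda g: (g[0] - mx) ** 2 + (g[1] - my) ** 2)
            for (mx, my) in minutiae]
```

After snapping, every vault point sits on the lattice, so an attacker correlating several vaults of the same finger sees only the grid.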

4.3. Quizzes using additional minutiae information

There is more information in a minutia than its mere coordinates: for instance its orientation, the lengths and curvatures of the incoming ridge lines, neighboring data, etc. We propose to attach to each minutia a quiz which can be solved in a robust manner by Bob, but which introduces for Victor several bits of uncertainty per minutia, thus increasing the security by a multiplicative factor which grows exponentially with the polynomial degree.

Here is a simple example of how a quiz functions, for the case of minutia orientation. Let the concatenated coordinates of a fixed minutia be given, and let its orientation be quantized with some small granularity. Then, along with the ordinate, the vault also contains an additional helper value, so the minutia is represented by a triple. Upon reception, Bob computes the quantized orientation; its value encodes a certain transformation of the received helper value, and the interpolating ordinate is derived from this transformation. Note that the vault creator has control over the generation of the helper value, and it may be chosen such that it can be safely recovered by the genuine user. For chaff points, the helper value is random. Several robust pieces of additional information may likewise increase the security of the fingerprint vault to a cryptographically acceptable level.
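Since the formulas of this example did not survive extraction, the sketch below is our own reconstruction of the idea: the quantized orientation masks the ordinate via the per-minutia helper value, so a reader who measures the orientation within the quantization tolerance can unmask it, while an attacker must guess among the quantization levels. The quantization step, field size and masking rule are all illustrative choices.

```python
P = 2**16 + 1        # illustrative field size
STEP_DEG = 45        # orientation granularity: 8 levels, i.e. 3 bits of uncertainty

def quantize(theta_deg, step=STEP_DEG):
    """Quantized orientation level; robust to measurement noise well below step/2."""
    return int(round(theta_deg / step)) % (360 // step)

def mask_ordinate(y, theta_deg, c, p=P):
    """Store y shifted by q*c, where q is the orientation level and c the
    per-minutia helper value kept in the vault (random for chaff points)."""
    return (y + quantize(theta_deg) * c) % p

def unmask_ordinate(masked, theta_deg, c, p=P):
    """Recover y from a fresh orientation measurement and the stored helper value."""
    return (masked - quantize(theta_deg) * c) % p
```

A verifier who measures 91 degrees where enrolment saw 93 degrees still lands in the same level and recovers y exactly; an attacker guessing the wrong level obtains a wrong interpolation ordinate, adding roughly three bits of work per genuine point under these choices.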

4.4. The alternative of cryptographic security

These observations lead to the question: is the use of one-way functions and template hiding an intrinsic security constraint, or just one among many conceivable approaches to securing biometric authentication? The latter is the case, and it is perfectly feasible to construct a secure biometric authentication system based on the mechanisms used by state-of-the-art certification authorities. These mechanisms are standard and have been implemented by some providers. An important advantage of public key cryptography is that it allows attaching time stamps to transmitted finger templates, thus reducing the consequences in the event that a template is compromised.

5. Conclusions

Security in biometric applications has been pursued either by using one-way functions adapted to the specificities of biometric data, or by direct application of strong cryptographic techniques. We showed that one of the leading methods of the first category, the fuzzy vault, allows a simple attack on its instantiation for fingerprint data [CKL, UJ1, UJ2, YV]. We have made some suggestions which may help raise the security level of the fingerprint vault to cryptographically acceptable values.

One may argue that similar attacks could be possible against other related methods, and thus that cryptographic security is preferable whenever it can be achieved or afforded. Subsequent work should consider variants of the one-way function ideas which could meet the standards of cryptographic security. More in-depth statistical studies concerning the amount of information available from various fingerprint-related data are called for, in order to provide a solid foundation for security claims.

Also, cryptographic security can be realized in a wide range of variants; analyzing the pros and cons of such variants is an open topic.

Acknowledgment. I thank K. Mieloch and U. Uludag for enlightening discussions and remarks.

References

  • [CKL] C. Clancy, N. Kiyavash and D. Lin: Secure Smartcard-Based Fingerprint Authentication, Proc. ACM Workshop on Biometric Methods and Applications (WBMA), November 2003.
  • [FVC] Database of the Second International Competition for Fingerprint Verification Algorithms, http://bias.csr.unibo.it/fvc2002/
  • [BDKOS] X. Boyen, Y. Dodis, J. Katz, R. Ostrovsky and A. Smith: Secure Remote Authentication Using Biometric Data, revised online version of a paper appearing in the proceedings of EUROCRYPT 2005. http://www.cs.stanford.edu/ xb/eurocrypt05b/
  • [DORS] Y. Dodis, R. Ostrovsky, L. Reyzin and A. Smith: Fuzzy Extractors: How to Generate Strong Keys from Biometrics and Other Noisy Data, Proceedings of EUROCRYPT (2004), Lecture Notes in Computer Science 3027, pp. 523-540.
  • [GG] J. von zur Gathen and J. Gerhard: Modern Computer Algebra, 2nd ed., Cambridge University Press (2000).
  • [JS] A. Juels and M. Sudan: A fuzzy vault scheme, in Proc. of the IEEE International Symposium on Information Theory (2002), p. 408, Eds. A. Lapidoth and E. Teletar.
  • [JW] A. Juels and M. Wattenberg: A fuzzy commitment scheme, in Proceedings of the Sixth ACM Conference on Computer and Communication Security (1999), pp. 28-36.
  • [UJ1] U. Uludag and A. Jain: Fuzzy Vault for Fingerprints, Proceedings of the workshop "Biometrics: Challenges Arising from Theory and Practice", pp. 13-16, Cambridge, UK, August 2004.
  • [UJ2] U. Uludag and A. Jain: Securing Fingerprint Template: Fuzzy Vault with Helper Data, Proc. IEEE Workshop on Privacy Research in Vision (PRIV), New York City, NY, June 2006.
  • [UPPJ] U. Uludag, S. Pankanti, S. Prabhakar and A. Jain: Biometric Cryptosystems: Issues and Challenges, Proceedings of the IEEE, Vol. 92, No. 6 (2004), pp. 948-960.
  • [YV] S. Yang and I. Verbauwhede: Automatic Secure Fingerprint Verification System Based on Fuzzy Vault Scheme, in Proc. IEEE Int. Conference on Acoustics, Speech and Signal Processing (2005), pp. 609-612.