Unsupervised Pre-trained, Texture Aware And Lightweight Model for Deep Learning-Based Iris Recognition Under Limited Annotated Data

02/20/2020 · by Manashi Chakraborty, et al., IIT Kharagpur

In this paper, we present a texture aware, lightweight deep learning framework for iris recognition. Our contributions are primarily threefold. Firstly, to address the dearth of labelled iris data, we propose a reconstruction loss guided unsupervised pre-training stage followed by supervised refinement. This drives the network weights to focus on discriminative iris texture patterns. Next, we propose several texture aware modifications inside a Convolutional Neural Network to better leverage iris textures. Finally, we show that our systematic training and architectural choices enable us to design an efficient framework with up to 100× fewer parameters than contemporary deep learning baselines, yet achieve better recognition performance in within and cross dataset evaluations.




1 Introduction

Iris biometrics has, over the last few years, shown immense potential as a highly reliable biometric recognition system [3, 4, 5, 20, 21]. Iris textures are highly subject discriminative [4] and, being part of an internal organ of the eye, the iris is resilient to environmental perturbations and immutable over time.

The initial works on iris recognition focused on designing traditional hand engineered features [3, 15, 17, 14]. Recent success across a variety of vision applications on natural images [8, 11] showcases the unprecedented advantage of deep Convolutional Neural Networks (CNNs) over hand-crafted features. Inspired by this success, the iris biometrics community also started exploring the prowess of deep learning, and an appreciable gain in performance [16, 7, 18] is observed compared to traditional methods. However, some intrinsic issues, such as the absence of large annotated datasets, explicit processing of texture information and lightweight architecture design, have hardly been addressed. In this paper, we address these concerns with several systematic modifications over conventional CNN training pipelines and architectural choices.
Handling Absence of Large Dataset:

CNNs are data greedy and usually require millions of annotated samples for fruitful training. This is not an issue for natural images, where datasets such as ImageNet [6] and MS-COCO [13] contain large volumes of annotated data. However, for iris biometrics, dataset sizes are usually limited to a few thousand samples. Thus, this shortcoming remains an open challenge for deep learning based iris biometrics researchers. In this paper, we address this problem with a two-stage training strategy. In the first stage, we pre-train a parameterized feature encoder to capture iris texture signatures in an unsupervised training framework. In the second stage, this encoder acts as a feature extractor and is further refined along with a classification head. We show that the combined training framework provides a significant boost in performance compared to single-stage training. Further, visualization with Layer Wise Relevance Propagation [1] shows that, as opposed to single-stage training, our proposed stage-wise training drives the network weights to focus more on the iris textures. This further motivated us in designing the systematic, texture attentive architectural choices mentioned below.
Energy Aware Pooling:

Non-parametric spatial sub-sampling (usually realised as Max-pooling) in conventional deep networks is a crucial component, widely used to retain the maximum response within a specified window. In this paper, we show that on texture-rich iris [4] datasets, sub-sampling using Energy Aware Pooling (EAP) is a better alternative to the Max-pool operation.
Texture Energy Layer: Usually in deep networks, it is common practice to have several fully-connected layers at the end to amalgamate global structure information. However, iris images are mainly rich in local textures. To this end, we propose a Texture Energy Layer (TEL) to specifically capture the energy of the last convolutional filter bank responses. Such energy based features have traditionally been used for texture classification [9, 19, 10].
Light-weight Model for Inference: These systematic design strategies enable us to operate with a much shallower architecture yet achieve better performance than deeper baselines. Additionally, the TEL layer obviates the computationally heavy penultimate fully-connected layer of our base architecture. As a consequence, our model has a significantly lower parameter count. This is particularly important since iris biometrics is gradually becoming an integral component of many handheld mobile devices.

Our proposed architectural choices consistently outperform traditional as well as recent deep nets by a noteworthy margin. Even in scenarios where the target dataset differs from the training data, our proposed model generalises better, without even fine-tuning on the target data.

Figure 1: Stage-wise training framework of the proposed model.

2 Related Work

Initial attempts at iris recognition were primarily inclined towards traditional techniques of extracting features from various filter bank responses. Daugman [3] extracted representative iris features from the responses of 2-D Gabor filters. Masek et al. extracted responses from 1D Log-Gabor filters [15]. Ma et al. [14] proposed a bank of circularly symmetric sinusoidal modulated Gaussian filters to capture the discriminative iris textures. Wildes et al. [21] extracted discriminative iris textures from a multi-scale Laplacian of Gaussian (LoG). Monro et al. used features from the Discrete Cosine Transform (DCT) [17]. To summarize, the earlier works mainly focused on handcrafted feature representations. Initial attempts [16, 18] at leveraging deep learning for iris recognition involved feature extraction using well known pre-trained (for ImageNet classification) neural networks followed by a supervised classification stage. Recently, Gangwar et al. [7] proposed DeepIrisNet, an end-to-end trainable (from scratch) deep neural network, and achieved an appreciable boost over the traditional methods.

3 Methodology

3.1 Network Architecture

3.1.1 Stagewise Training

Stage-1: In the first phase, we follow an unsupervised framework for pre-training a feature encoder (with its own set of trainable parameters) to capture texture signatures. For this, we train a convolutional auto-encoder with a reconstruction loss. Specifically, given a normalised iris image (an example of a normalised iris image is shown in Figure 1), we project it to a smaller resolution (by strided convolution and spatial sub-sampling) using the encoder and then decode it back to the original resolution with a decoder. Configurations of the various layers of the encoder and decoder are shown in Table 1. The reconstruction loss is applied between the original image I and the reconstructed image Î. In this paper, we use the Structural Similarity (SSIM) metric as a proxy for gauging the similarity between the original and reconstructed image. So, we minimise the following:

L_recon = 1 − SSIM(I, Î)
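The SSIM-based reconstruction objective can be sketched in a few lines of numpy. This is a simplified, single-window SSIM computed over the whole image rather than the usual sliding Gaussian window; the constants k1 and k2 are the standard defaults, and the function names are illustrative, not from the paper:

```python
import numpy as np

def ssim_global(x, y, L=1.0, k1=0.01, k2=0.03):
    """Simplified SSIM over the whole image (one window, no sliding kernel)."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2   # stabilising constants
    mx, my = x.mean(), y.mean()             # luminance terms
    vx, vy = x.var(), y.var()               # contrast terms
    cov = ((x - mx) * (y - my)).mean()      # structure term
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def recon_loss(original, reconstructed):
    # SSIM is 1 for identical images, so 1 - SSIM is a dissimilarity to minimise.
    return 1.0 - ssim_global(original, reconstructed)
```

A perfect reconstruction gives a loss of zero, and any degradation pushes the loss toward 1, which is what makes it a usable training signal.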


Stage-2 CombNet: In the second stage, the activations of the encoder are passed to a classification branch. Following the usual trend, the baseline classification head consists of two fully connected layers followed by a softmax activation layer that outputs class probabilities. The combined encoder and classification head is optimised using the cross entropy loss. We term this combined architecture CombNet. We distinguish two variants: the combined model whose encoder is pre-trained with the reconstruction loss from Stage-1, and the model whose encoder is randomly initialised (without any pre-training).

Figure 2: Relevance maps (red is most important, blue is least) of three different irises corresponding to three classes of the CASIA.v4-Distance dataset. Row 1: Normalised iris image. Row 2: Relevance map for the model with a randomly initialised encoder. Row 3: Relevance map for the model initialised with the pre-trained encoder.

3.1.2 Energy Aware Pooling (EAP):

This layer is proposed to retain local texture energy during spatial sub-sampling in a CNN. The de facto choice for sub-sampling in CNNs is Max-pooling, which is more appropriate for determining the presence or absence of a particular feature over the sampled window. For iris images, which have local textural patterns, it is more prudent to retain the energy of the sub-sampled window. With this in mind, for a pooling kernel of a given receptive field, EAP calculates the average of the pixels instead of taking the maximum as in the Max-pool operation. Downsampling is achieved by operating this kernel with a stride of 2 pixels. This way of retaining energy while downsampling finds a close analogy with the energy of filter bank responses that has traditionally been used as a discriminative feature for texture classification [9, 10]. We refer to the model with the proposed EAP layer as the EAP variant.
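A minimal numpy sketch of EAP on a single-channel map, assuming a 2×2 averaging window with stride 2 as described above (in a real network this would be applied channel-wise as a pooling layer):

```python
import numpy as np

def eap(x, k=2, stride=2):
    """Energy Aware Pooling: average each k x k window (here stride-2 sub-sampling)."""
    h, w = x.shape
    out_h, out_w = (h - k) // stride + 1, (w - k) // stride + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Average (rather than max) preserves the window's mean response.
            out[i, j] = x[i*stride:i*stride+k, j*stride:j*stride+k].mean()
    return out
```

On a 4×4 map this halves each spatial dimension while keeping each window's mean response, whereas Max-pool would keep only the single strongest activation per window.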

3.1.3 Texture Energy Layer (TEL):

This layer is designed to alleviate the need for the penultimate fully connected layer of the baseline. That computationally heavy fully connected layer has the entire image as its receptive field and thus loses the local textures that are more important for iris recognition. Therefore, in this stage our model is made more texture attentive by adding TEL after the last convolutional layer. In this layer we use spatial averaging kernels with spatial support equal to the spatial dimensions of the feature maps from the previous layer. So, if the input to the TEL layer consists of C activation maps, its output is a C-dimensional vector; these stacked average values closely correspond to the energy of each activation map of the previous layer. The output of TEL is then passed to a single fully connected layer followed by a softmax activation to obtain the final class probabilities. This combined texture attentive model, having both the EAP and TEL layers, is our final proposed model, shown in Figure 1. As TEL alleviates the need for the penultimate fully connected layer, it dramatically reduces the parameter count (46.72× cheaper than our baseline with two fully connected layers, as reported in Table 2).
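TEL itself is parameter-free: it is a spatial average per channel. A sketch assuming a (C, H, W) feature layout (the layout is our assumption, not stated in the paper):

```python
import numpy as np

def texture_energy_layer(feature_maps):
    """TEL: spatial average of each activation map -> one energy value per channel.

    feature_maps: array of shape (C, H, W); returns a vector of shape (C,).
    Unlike a fully connected layer on the flattened C*H*W activations,
    this adds zero parameters to the model.
    """
    return feature_maps.mean(axis=(1, 2))
```

The C-dimensional output then feeds the single remaining fully connected layer, which is where the large parameter saving over two fully connected layers comes from.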

Type                Kernel   Stride   Padding   Output
Conv                –        1        2         32
Batch Norm          –        –        –         32
Pooling             –        2        0         32
Conv                –        1        1         64
Batch Norm          –        –        –         64
Pooling             –        2        0         64
Conv                –        1        1         128
Batch Norm          –        –        –         128
Pooling             –        2        0         128
Conv                –        1        1         256
Batch Norm          –        –        –         256
Pooling             –        2        0         256
Pixel Shuffle [12]  –        –        –         64
Pixel Shuffle [12]  –        –        –         16
Pixel Shuffle [12]  –        –        –         4
Pixel Shuffle [12]  –        –        –         1
Table 1: Configurations of the various layers of the encoder and decoder.

3.2 Matching Framework

Representative iris signatures (1024-D) are extracted from the TEL layer of the proposed model. Two iris images are matched based on the dissimilarity score obtained from the normalised Euclidean distance between their respective iris signatures.
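The paper does not spell out the normalisation; the sketch below assumes "normalised Euclidean distance" means each signature is L2-normalised before taking the Euclidean distance, which bounds the score to [0, 2]:

```python
import numpy as np

def dissimilarity(sig_a, sig_b):
    """Dissimilarity between two iris signatures.

    Assumption: each signature is L2-normalised first, then the ordinary
    Euclidean distance is taken. Identical directions score 0.
    """
    a = sig_a / np.linalg.norm(sig_a)
    b = sig_b / np.linalg.norm(sig_b)
    return float(np.linalg.norm(a - b))
```

A lower score means a more likely genuine match; verification then reduces to thresholding this score.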

4 Experiments

4.1 Comparing Methods

We compare our proposed framework with three traditional baselines: Daugman [3], Masek [15] and Ma et al. [14]. From the deep learning paradigm, we compare against a pre-trained (on ImageNet) VGG-16 fine-tuned on the iris dataset; this was one of the initial attempts at applying transfer learning with deep neural nets to iris data [16, 18]. We also compare against DeepIrisNet [7], which is a much deeper model having 8 convolutional and 3 fully connected layers.

4.2 Dataset Description

We present our results on CASIA.v4-Distance [2] and CASIA.v4-Thousand [2]. Irises of the left and right eyes have disparate patterns [4] and are thus assigned to different classes, i.e., the number of classes is twice the number of subjects present in the dataset.

The framework of [22] is used for iris segmentation and normalization. Normalised irises of three different subjects of the CASIA.v4-Distance dataset are shown in Figure 2. The spatial resolution of the normalised iris images is 512×64 for all experiments unless stated otherwise. For a fair comparison, the same segmentation and normalization protocol is followed for all experiments. We use the following two dataset configurations for performance evaluation.
Within Dataset: Here, the 'training+validation' and test splits are selected from the CASIA.v4-Distance dataset [2], which has 142 subjects. Experiments were conducted on 4773 samples from 284 classes (left and right irises are considered different classes). Out of these 284 classes, the 'training+validation' split comprises 80% of the classes and the remaining disjoint 20% forms the test split used for reporting verification results (using the matching framework of Section 3.2).
Cross Dataset: In this setting, all the pre-trained models (trained on CASIA.v4-Distance) were directly used on the CASIA.v4-Thousand dataset without any fine-tuning. This challenging configuration therefore evaluates the generalization capability of the competing deep learning frameworks. CASIA.v4-Thousand has 2000 classes (left and right irises belong to different classes). We perform 5-fold testing, with each fold consisting of one-fifth of the total classes. The average matching performance over the 5 folds is reported.
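Such a disjoint 5-fold split over classes could be produced as below; the helper name and the fixed seed are illustrative, not from the paper:

```python
import numpy as np

def five_fold_classes(n_classes=2000, seed=0):
    """Split class indices into 5 disjoint folds, each one-fifth of the classes."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_classes)   # shuffle class ids once
    return np.array_split(idx, 5)      # 5 equal, disjoint folds
```

Averaging the matching metric over the five folds then gives the reported cross-dataset numbers.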

Following the matching framework of [7], the test set for both of the above configurations is divided into a gallery (enrolled images) and a probe (query) set. 50% of the identities in the probe set are impostors (identities not enrolled in the system), while the rest are genuine identities.

Model                            Classification Accuracy (%)   #Params (M)
CombNet (pre-trained encoder)    60.53                         135.5
CombNet + EAP                    74.09                         135.5
CombNet + EAP + TEL (proposed)   92.61                         2.9
Table 2: Self-ablation of various architectural choices.

4.3 Results

Exp 1 - Ablation study of various architectural choices: In this section, we perform a self-ablation over our architectural choices. We use classification accuracy on the validation subset of the 'training+validation' split as the metric for model selection. Metrics are reported in Table 2.
a) Benefit of Stage-wise Training: Classification accuracy with the pre-trained encoder is 60.53%, while that with a randomly initialised encoder is 53.11%. This clearly shows the benefit of pre-training the encoder over random initialisation. Further, to reason about this superiority, we study the relevance maps of a given iris image correctly classified by both models. A relevance map indicates which input pixels were important for classification. Figure 2 shows relevance (heat) maps of both the aforementioned models on three different classes of the CASIA.v4-Distance dataset. It is evident from the figure that pre-training the encoder encourages the network to focus more on the texture patterns, whereas the randomly initialised variant primarily concentrates on overall shape cues obtained from the boundary (separating the iris region from background) pixels. Motivated by this observation, we incorporate additional improvements that further exploit the textural cues for better performance.
b) Benefit of EAP and TEL layers: From Table 2 we observe that as the Max-pool layer is replaced by EAP, classification accuracy increases from 60.53% to 74.09%. This bolsters our assumption that the EAP layer is more beneficial for sub-sampling than Max-pool on texture-rich images. Replacing the penultimate fully connected layer with the TEL layer yields a further improvement in performance.
Exp 2 - Within and cross dataset comparison of our preferred architecture with existing methods: From Exp 1, it is evident that the model with both EAP and TEL layers outperforms our other architectural choices. Therefore, we now compare this best architectural choice with existing traditional as well as deep learning models. Performance is evaluated based on EER (Equal Error Rate) and AUC (Area Under the Curve) of the Detection Error Tradeoff (DET) curve. We also report the parameter counts of the competing deep nets as a measure of computational complexity.
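EER can be computed from genuine and impostor dissimilarity scores by sweeping a threshold until the false accept and false reject rates meet. A simple sketch of that procedure (an assumed implementation, not the authors'; genuine pairs are expected to score lower):

```python
import numpy as np

def eer(genuine_scores, impostor_scores):
    """Equal Error Rate from dissimilarity scores (lower score = more similar)."""
    g = np.asarray(genuine_scores)
    im = np.asarray(impostor_scores)
    best, best_gap = None, np.inf
    for t in np.sort(np.concatenate([g, im])):
        far = (im <= t).mean()   # impostors wrongly accepted at threshold t
        frr = (g > t).mean()     # genuine pairs wrongly rejected at threshold t
        if abs(far - frr) < best_gap:
            best_gap, best = abs(far - frr), (far + frr) / 2
    return best
```

Sweeping the threshold over the full score range also yields the DET curve, of which the EER is the point where the two error rates are equal.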

Method              EER (%)   AUC     #Params (M)
Traditional
Masek [15]          5.70      0.030   –
Li Ma et al. [14]   5.45      0.026   –
Daugman [3]         5.20      0.015   –
Deep Nets
VGG-16              4.88      0.012   135.2
DeepIrisNet [7]     4.80      0.011   291.2
Proposed            3.25      0.004   2.9
Table 3: Comparison on CASIA.v4-Distance (within dataset configuration).

Only the test sets (of the within and cross dataset configurations) of both datasets are used for reporting iris verification performance.
(a.) Within Dataset: First, we compare the efficacy of our proposed model with the three traditional baselines of Daugman [3], Masek [15] and Ma et al. [14]. Across both the metrics reported in Table 3, our proposed framework outperforms all three baselines by notable margins. Next, we compare with recent deep learning frameworks. We initially compare against a pre-trained (on ImageNet) VGG-16 fine-tuned on the CASIA.v4-Distance dataset, similar to the work of [16, 18]. Normalised irises, resized to the input resolution expected by VGG-16, are fed to that framework. Though fine-tuning a pre-trained (on ImageNet) VGG-16 performs better than the traditional methods, our model proves superior to it. This can be primarily attributed to the fact that the kernels of VGG-16 were trained to learn the structure and shape cues present in natural images, not the texture-rich content prevalent in iris images; thus, naively applying transfer learning across such disparate domains is sub-optimal. From Table 3, we also observe that our proposed shallow model performs better than DeepIrisNet [7]. This boost is primarily because of our systematic design choices: as argued before, our stage-wise training compels the network to focus on discriminative iris textures, which is further improved by the incorporation of the EAP and TEL layers. Also, for iris datasets with a paucity of annotated labels, it is more prudent to use less complex models (in parameter count) over deeper counterparts. Both DeepIrisNet and fine-tuned VGG-16 have much deeper and more complex architectures for the limited annotated iris datasets, and thus our model consistently outperforms them. Figure 3 depicts the DET curves of all the competing models of this phase.

Method            EER (%)   AUC
DeepIrisNet [7]   6.6       0.033
VGG-16            6.6       0.028
Proposed          5.3       0.018
Table 4: Comparison on CASIA.v4-Thousand (cross dataset configuration).

(b.) Cross Dataset: From Table 4, it is evident that even in such a challenging scenario, our proposed framework performs better than the competing deep networks. This demonstrates the better generalization capability of our proposed framework over the other deep learning frameworks. Figure 3 depicts the DET curve of one randomly selected fold for the competing deep nets. For fairness, the same fold is chosen for all the compared models.

Figure 3: DET curve of: Left: comparing traditional and deep learning methods on CASIA.v4-Distance (Within Dataset), Right: comparing deep learning methods on CASIA.v4-Thousand (Cross Dataset)

Reduction of Parameters: There is an increasing demand to run biometric systems on mobile devices, so lightweight models are favoured for inference. In Table 2, we compare the number of parameters of our different architectural choices. Replacing the fully-connected layers with the TEL layer results in a 46.72× reduction in parameters. From Table 3, it can be observed that compared to VGG-16 and DeepIrisNet [7], our model is respectively 46.62× and 100.41× cheaper in terms of parameters, yet performs better than both. Note that the input to VGG-16 is a normalised iris resized to the resolution that network expects, while all other models take input iris images of dimension 512×64.
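The reported savings follow directly from the parameter counts in Tables 2 and 3 (in millions):

```python
# Parameter counts (in millions) as reported in Tables 2 and 3.
params = {
    "VGG-16": 135.2,
    "DeepIrisNet": 291.2,
    "baseline_two_fc": 135.5,   # CombNet baseline with two fully connected layers
    "proposed": 2.9,            # model with EAP + TEL
}

vs_vgg = params["VGG-16"] / params["proposed"]
vs_deepiris = params["DeepIrisNet"] / params["proposed"]
vs_baseline = params["baseline_two_fc"] / params["proposed"]

print(round(vs_vgg, 2), round(vs_deepiris, 2), round(vs_baseline, 2))
# prints: 46.62 100.41 46.72
```

These ratios are exactly the 46.62×, 100.41× and 46.72× reductions quoted in the text.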

5 Conclusion

This paper proposes a stage-wise, texture aware training strategy for building a reliable iris verification system under limited annotated data. It showcases the benefits of unsupervised auto-encoder based pre-training as a good weight initializer for training networks with less data. Further, the proposed EAP and TEL layers are shown to leverage the local texture patterns of iris images. Our final framework is significantly lightweight and consistently outperforms competing baselines in within and cross dataset evaluations. Motivated by the success of auto-encoder based pre-training, we wish to study the benefits of other recent generative models in future work.


  • [1] S. Bach, A. Binder, G. Montavon, F. Klauschen, K. Müller, and W. Samek (2015) On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one 10 (7). Cited by: §1.
  • [2] Chinese Academy of Sciences, Institute of Automation. CASIA.v4 Iris Database. Note: http://www.cbsr.ia.ac.cn/english/IrisDatabase.asp Cited by: §4.2, §4.2.
  • [3] J. Daugman (2004-01) How iris recognition works. IEEE Transactions on Circuits and Systems for Video Technology 14 (1), pp. 21–30. External Links: Document, ISSN 1558-2205 Cited by: §1, §1, §2, §4.1, §4.3, Table 3.
  • [4] J. G. Daugman (1993) High confidence visual recognition of persons by a test of statistical independence. IEEE transactions on pattern analysis and machine intelligence 15 (11), pp. 1148–1161. Cited by: §1, §1, §4.2.
  • [5] D. de Martin-Roche, C. Sanchez-Avila, and R. Sanchez-Reillo (2001) Iris recognition for biometric identification using dyadic wavelet transform zero-crossing. In Proceedings IEEE 35th Annual 2001 International Carnahan Conference on Security Technology (Cat. No. 01CH37186), pp. 272–277. Cited by: §1.
  • [6] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei (2009) Imagenet: a large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248–255. Cited by: §1.
  • [7] A. Gangwar and A. Joshi (2016) DeepIrisNet: deep iris representation with applications in iris recognition and cross-sensor iris recognition. In 2016 IEEE International Conference on Image Processing (ICIP), pp. 2301–2305. Cited by: §1, §2, §4.1, §4.2, §4.3, §4.3, Table 3.
  • [8] R. Girshick (2015) Fast r-cnn. In Proceedings of the IEEE international conference on computer vision, pp. 1440–1448. Cited by: §1.
  • [9] J. Han and K. Ma (2007) Rotation-invariant and scale-invariant gabor features for texture image retrieval. Image and vision computing 25 (9), pp. 1474–1481. Cited by: §1, §3.1.2.
  • [10] M. Idrissa and M. Acheroy (2002) Texture classification using gabor filters. Pattern Recognition Letters 23 (9), pp. 1095–1102. Cited by: §1, §3.1.2.
  • [11] A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105. Cited by: §1.
  • [12] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, et al. (2017) Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4681–4690. Cited by: Table 1.
  • [13] T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick (2014) Microsoft coco: common objects in context. In European conference on computer vision, pp. 740–755. Cited by: §1.
  • [14] L. Ma, T. Tan, Y. Wang, and D. Zhang (2003) Personal identification based on iris texture analysis. IEEE transactions on pattern analysis and machine intelligence 25 (12), pp. 1519–1533. Cited by: §1, §2, §4.1, §4.3, Table 3.
  • [15] L. Masek (2003) Recognition of human iris patterns for biometric identification. Master's thesis, University of Western Australia. Cited by: §1, §2, §4.1, §4.3, Table 3.
  • [16] S. Minaee, A. Abdolrashidiy, and Y. Wang (2016) An experimental study of deep convolutional features for iris recognition. In 2016 IEEE signal processing in medicine and biology symposium (SPMB), pp. 1–6. Cited by: §1, §2, §4.1, §4.3.
  • [17] D. M. Monro, S. Rakshit, and D. Zhang (2007) DCT-based iris recognition. IEEE transactions on pattern analysis and machine intelligence 29 (4), pp. 586–595. Cited by: §1, §2.
  • [18] K. Nguyen, C. Fookes, A. Ross, and S. Sridharan (2017) Iris recognition with off-the-shelf cnn features: a deep learning perspective. IEEE Access 6, pp. 18848–18855. Cited by: §1, §2, §4.1, §4.3.
  • [19] M. Unser (1995) Texture classification and segmentation using wavelet frames. IEEE Transactions on image processing 4 (11), pp. 1549–1560. Cited by: §1.
  • [20] R. P. Wildes, J. C. Asmuth, G. L. Green, S. C. Hsu, R. J. Kolczynski, J. R. Matey, and S. E. McBride (1996) A machine-vision system for iris recognition. Machine vision and Applications 9 (1), pp. 1–8. Cited by: §1.
  • [21] R. P. Wildes (1997) Iris recognition: an emerging biometric technology. Proceedings of the IEEE 85 (9), pp. 1348–1363. Cited by: §1, §2.
  • [22] Z. Zhao and K. Ajay (2015) An accurate iris segmentation framework under relaxed imaging constraints using total variation model. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3828–3836. Cited by: §4.2.