I-A Post-mortem iris recognition
Post-mortem iris recognition has recently gained considerable attention in the biometric community. While this method of personal identification works nearly perfectly when applied to living individuals, it has been shown that performance deteriorates when existing iris recognition algorithms are confronted with images obtained in the post-mortem scenario, i.e., from deceased subjects [1, 2]. This deterioration continues as time since death elapses, due to significant distortions of the iris and the cornea caused by post-mortem decay processes. However, the first evaluations of the dynamics of post-mortem iris recognition degradation, published by Trokielewicz et al., suggest that conventional iris recognition algorithms are able to deliver correct matches for samples acquired even 17 days after death when bodies are kept in mortuary conditions. Bolme et al. studied the decomposition of the iris, among other biometric modalities, when cadavers are placed in an outdoor environment, simulating a typical forensic scenario. More recently, Sauerwein et al. showed in their experiments that irises stay viable for up to 34 days post-mortem when cadavers are kept outdoors during the winter. No iris recognition method was used to support this claim; it was based on the judgment of the human experts acquiring the samples. It nonetheless suggests that winter conditions increase the chances of observing a usable iris even in a cadaver left outside for a longer time. All these papers suggest that automatic post-mortem iris recognition could find important applications in forensics as an additional tool for forensic examiners. It could help identify victims of crimes and accidents in cases when other methods of identification are unavailable or would prove more difficult to use.
Erratic image segmentation is often put forward as a potential cause of the degraded performance of iris recognition algorithms when they are made to work with difficult samples, and post-mortem samples are no exception. Post-mortem decay at the cellular level slowly leads to macroscopic changes in the eye, such as deviations from the pupil's circularity, wrinkles on the cornea that cause additional specular reflections to appear, and changes in the iris texture. At the same time, it is known that correct execution of the segmentation stage is crucial for ensuring good accuracy of iris recognition, which is conditional on encoding the actual iris texture, and not the surrounding portions of the eye. Hence, to make iris recognition more reliable for images acquired after death, segmentation methods should be designed to be sensitive to these new, post-mortem deformations.
To our knowledge, there is no prior published research introducing iris image processing methodologies specific to post-mortem samples. Hence, this paper is unique in that it makes the first step toward making post-mortem iris recognition more reliable and more attractive for forensic analyses by proposing an iris segmentation method specifically designed for post-mortem iris samples. This paper offers the following three contributions:
an algorithm for the segmentation of post-mortem iris images based on a deep convolutional neural network and experimental results showing that it offers a considerable improvement over the segmentation results produced on the same data by a conventional segmentation method,
source codes of the end-to-end post-mortem segmentation method, discussed in this paper, along with the weights of the trained DCNN,
manually segmented masks for the Warsaw-BioBase-Post-Mortem-Iris v1.0 database, to facilitate the development of other post-mortem-aware segmentation methods.
Source codes, network weights and manual segmentation results can be obtained at
The structure of this study is as follows: Section II provides an overview of existing applications of deep convolutional networks to segmenting difficult iris images. The dataset, the network model, its training and evaluation, and the comparison against a conventional iris recognition method are described in Section III. Finally, Section IV discusses the main accomplishments and further work.
II Related work
II-A Deep convolutional networks for image segmentation
Deep convolutional neural networks (DCNN) have recently shown great potential for solving selected computer vision tasks, such as natural image classification, with the most popular architectures being described in [7, 8, 9], and image segmentation by dense labeling, which has been reviewed extensively in the literature. These approaches are often called data-driven, as they learn the correct solution from the data itself, with minimal use of prior knowledge and with a large number of parameters (weights) and hyperparameters to be estimated directly from the samples. This is in contrast to hand-crafted approaches, which use prior knowledge of the subject, and whose training encompasses fine-tuning of relatively few hyperparameters compared to data-driven algorithms. Both approaches have upsides and downsides; data-driven models are often used when our prior knowledge of the subject is limited or difficult to transform into formulas applicable in hand-crafted algorithms. Segmentation of post-mortem iris images is an example of such a problem. One of the most successful DCNN architectures built for semantic segmentation tasks is SegNet, comprising a fully convolutional encoder-decoder architecture. The encoder stage of SegNet is composed of the VGG-16 model graph. The decoder stage comprises several sets of convolution and upsampling layers, whose purpose is to retrieve spatial information from the encoder output and produce a dense pixel-wise classification output of the softmax layer that is of the same size as the input image. Because of its state-of-the-art performance, including good accuracy in iris segmentation tasks, and its recent inclusion in MATLAB, SegNet was chosen as the candidate network for the task described in this paper.
II-B Applications of convolutional networks to iris segmentation
Several attempts have been made at iris segmentation using neural networks, mostly aiming to improve segmentation of difficult, noisy iris images, such as those collected in the visible spectrum, acquired with low-quality equipment, or captured on-the-move and at-a-distance.
Broussard and Ives employed neural networks to determine which measurements (e.g., pixel value, mean, standard deviation) and which iris regions contain the most discriminatory information. This is done by training a multi-layer perceptron to label the pixels of an unwrapped polar iris image as either belonging to the iris or not. No assumption of circularity is made, and the network serves as a multidimensional statistical classifier that combines data from multiple measurements into a binary decision for each pixel independently. Measurements for the MLP input were selected with respect to feature saliency, i.e., the authors tested which ones provide the most robust features (most discriminatory power). The proposed solution is said to approach the manually annotated ground truth masks.
Liu et al. explored hierarchical convolutional neural networks (HCNNs) and multi-scale fully convolutional neural networks (MFCNs) for improving the segmentation of noisy iris images, e.g., visible light images with light reflections, blurry images captured on-the-move and/or at-a-distance, 'gaze-away' eyes, etc., with iris pixels being located without any a priori knowledge or hand-crafted rules. The HCNNs constructed by the authors employ hierarchical patches as input, ranging from small to large scales, for capturing both local and global iris information. However, this approach is said to lack efficiency due to the sliding of the patch window, which increases the computational overhead, and due to the field of neurons being limited by the patch size. MFCNs, on the other hand, are reported to overcome these limitations with no sliding window (all pixel labels are predicted simultaneously) and no limitation on the neuron field size. MFCNs are said to use several layers ranging from shallow to very deep, for capturing both fine and coarse details of the iris image. Experiments were performed on the UBIRIS.v2 and CASIA.v4-distance databases, comprising noisy color images acquired in unconstrained conditions and NIR at-a-distance images, respectively. MFCNs are said to use the VGG-21 model, trained for natural image classification and later fine-tuned using iris images with annotated masks. The following segmentation errors, defined as the proportion of pixels disagreeing with the ground truth segmentation, are obtained by the authors: 0.9% on the UBIRIS.v2 dataset and 0.59% on the CASIA.v4-distance dataset. Limitations include difficulties in segmenting irises in images of dark-skinned subjects.
He et al. approached the challenge of segmenting noisy iris images obtained in the visible spectrum with a modified DeepLab CNN model, which is similar to VGG-16 but with the fully connected layers replaced by fully convolutional layers of kernel size 1, and with an additional upscaling layer to match the output size to that of the input. The authors trained their solution on a visible-spectrum iris dataset consisting of low-quality samples, and reported an accuracy of 92% IoU (Intersection over Union), outperforming the traditional Hough transform method applied to the same data.
A similar problem is studied by Arsalan et al., where a two-stage method for segmenting noisy, visible-spectrum iris images is proposed, comprising an initial approximation of the iris boundary using classic image processing methods, followed by finer localization with a CNN composed of a modified and fine-tuned VGG-face model. The solution is shown to achieve good accuracy when segmenting images with irregular specular reflections.
Jalilian and Uhl employed fully convolutional encoder-decoder networks (FCEDNs) and benchmarked their performance on several iris datasets comprising both good- and poor-quality samples. These FCEDNs, based on the SegNet architecture, are reported to offer segmentation accuracy comparable with traditional approaches for good-quality samples, and better accuracy for those of low quality.
II-C Challenges in post-mortem iris image processing
An important conclusion that we can draw from this brief literature review is that DCNNs built for semantic segmentation tasks are a promising solution for dealing with poor-quality iris images. Post-mortem iris images represent another category of difficult iris samples, since they are often heavily impacted by biological decay processes and show wrinkles in the iris texture, occurring due to excessive drying of the cornea, partial collapse of the iris due to loss of intraocular pressure, as well as additional light reflections associated with these changes. In addition, metal retractors used to open the eyelids are often visible in the image, see Fig. 1. These factors make conventional iris segmentation methods, e.g., those based on Daugman's idea of using circular approximations of the inner and outer iris boundaries, inaccurate and thus ineffective in algorithms targeting forensic analysis of iris samples.
III-A Experimental dataset
For the purpose of this study, we used the Warsaw-BioBase-PostMortem-Iris-v1 dataset, the only publicly available dataset known to us, which gathers 1330 post-mortem iris images collected from 17 individuals at various times after death (from 5 hours up to 17 days). These samples represent the ocular regions of recently deceased subjects. Typical near-infrared (NIR) images, as well as high-quality visible light images, are available in this dataset, and we chose to train our network using both types of samples. Careful examination of the samples shows that the nature of this data differs from any other iris dataset, with post-mortem changes becoming more pronounced as more time elapses since a subject's demise. Apart from additional specular reflections caused by the tissue's decay, we can observe wrinkles on the cornea, haze, an altered shape of the pupil, and even visible degradation of the iris tissue and partial collapses of the eyeball.
III-B Preparing ground truth data
For every sample in the dataset, we have carefully annotated the corresponding ground truth binary mask, which denotes regions of the iris that are unaffected by both the post-mortem changes described above and specular reflections, regardless of their origin. Example images from the dataset and our binary ground truth masks are shown in Fig. 1. To expedite the training process and reduce memory overhead, the images were downsampled to the size of 120×160 pixels, and the mask predictions produced by the network are later upscaled to the original size of 480×640 pixels.
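Restoring a low-resolution mask prediction to sensor resolution amounts to nearest-neighbor replication with an integer scale factor. The sketch below is our own NumPy illustration of this step (the function name `upscale_mask` and the factor of 4 matching 120×160 → 480×640 are our assumptions, not the authors' code):

```python
import numpy as np

def upscale_mask(mask, factor=4):
    """Nearest-neighbor upscaling of a binary mask,
    e.g. a 120x160 prediction back to 480x640."""
    # Each pixel is replicated into a factor-by-factor block.
    return np.kron(mask, np.ones((factor, factor), dtype=mask.dtype))

# Example: a coarse 120x160 mask with a rectangular "iris" region.
small = np.zeros((120, 160), dtype=np.uint8)
small[40:80, 50:110] = 1
big = upscale_mask(small)  # shape (480, 640)
```

Nearest-neighbor replication keeps the mask strictly binary; smoother interpolation would introduce intermediate values that would then need re-thresholding.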
III-C Model architecture
For our solution, we use the SegNet model for semantic segmentation, which is a modified VGG-16 network with the fully connected layers removed and a decoder stage added, so that the resulting architecture follows the concept of a coupled encoder-decoder network with five sets of convolutional and pooling/unpooling layers in each half of the network, Fig. 2. SegNet performs non-linear upsampling of the encoded data by employing indices stored from the max-pooling layers in the corresponding decoder. The softmax layer is then followed by a pixel-level classification layer, which yields a binary decision for each pixel (in our case: iris or non-iris). We carried out our experiments in the MATLAB 2017b environment, using the implementation of SegNet provided by the Neural Network Toolbox.
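The index-based unpooling described above can be illustrated with a minimal NumPy sketch (our illustration of the mechanism, not SegNet code): max-pooling records where each maximum came from, and the decoder places each value back at exactly that position, leaving the remaining positions at zero:

```python
import numpy as np

def max_pool_with_indices(x, k=2):
    """k-by-k max pooling that also records each maximum's flat position
    in the input, mimicking SegNet's encoder pooling."""
    h, w = x.shape
    pooled = np.zeros((h // k, w // k), dtype=x.dtype)
    idx = np.zeros((h // k, w // k), dtype=np.int64)
    for i in range(h // k):
        for j in range(w // k):
            win = x[i * k:(i + 1) * k, j * k:(j + 1) * k]
            a = int(np.argmax(win))                              # argmax within the window
            pooled[i, j] = win.flat[a]
            idx[i, j] = (i * k + a // k) * w + (j * k + a % k)   # flat index in x
    return pooled, idx

def max_unpool(pooled, idx, out_shape):
    """SegNet-style sparse upsampling: each pooled value is placed back
    at its recorded position; all other positions stay zero."""
    out = np.zeros(int(np.prod(out_shape)), dtype=pooled.dtype)
    out[idx.ravel()] = pooled.ravel()
    return out.reshape(out_shape)

x = np.arange(1, 17).reshape(4, 4)
pooled, idx = max_pool_with_indices(x)    # pooled == [[6, 8], [14, 16]]
restored = max_unpool(pooled, idx, x.shape)
```

In the actual network, the sparse unpooled maps are then densified by the subsequent convolutional layers of the decoder.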
III-D Training and evaluation procedure
For the training and testing procedure, 10 subject-disjoint train/test data splits were created by randomly assigning the data from 14 (out of 17) subjects to the train subset, and the data from the remaining 3 subjects to the test subset. All ten splits were drawn with replacement, making them statistically independent. The network is then trained on the train subset of each split independently, and evaluated on the corresponding test subset. This procedure gives 10 statistically independent evaluations and allows us to assess the variance of the obtained results. The training, encompassing 60 epochs in each experiment, was accomplished with stochastic gradient descent as the minimization method. We applied a momentum of 0.9, a learning rate of 0.001, and L2 regularization of 0.0005. During testing, a prediction in the form of a binary mask is obtained from the network for each of the images. For each predicted mask, the Intersection over Union (IoU) is calculated between the prediction and the ground truth mask, which is available also for the test partitions of the data. These values are then averaged to get the mean IoU for each test split.
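The evaluation protocol above can be sketched as follows (a simplified illustration under our own naming, not the authors' MATLAB code): subjects are partitioned so that no individual contributes images to both subsets of a split, and each predicted mask is scored against its ground truth with IoU:

```python
import random
import numpy as np

def subject_disjoint_splits(subjects, n_splits=10, n_train=14, seed=0):
    """Draw independent train/test splits with no subject shared
    between the two subsets of any split."""
    rng = random.Random(seed)
    splits = []
    for _ in range(n_splits):
        train = sorted(rng.sample(subjects, n_train))
        test = [s for s in subjects if s not in set(train)]
        splits.append((train, test))
    return splits

def iou(pred, gt):
    """Intersection over Union of two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    # Define IoU of two empty masks as 1 (perfect agreement).
    return 1.0 if union == 0 else float(np.logical_and(pred, gt).sum()) / union

splits = subject_disjoint_splits(list(range(17)))
```

Subject-disjoint splitting is what makes the reported variance meaningful: images of the same eye are highly correlated, so letting a subject appear on both sides of a split would inflate the test scores.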
III-E Results and comparison with conventional iris segmentation
To compare the DCNN-based method developed in this work with a conventional segmentation method, we performed exactly the same evaluations on the train/test splits using the OSIRIS v4.1 open-source software, which implements Daugman's idea of using circular approximations of the iris boundaries. Additionally, OSIRIS uses the Viterbi algorithm to exclude non-iris portions within the annulus defined by the two circles, so it should be able to effectively cut out obstructions such as specular reflections, eyelashes, and other irregular intrusions. As in the evaluation of the DCNN-based solution, IoU values are calculated and averaged within each test split, and compared with those obtained from the DCNN-based solution. Fig. 3 summarizes the average IoU achieved in all 10 splits by both solutions, and Table I details the results obtained in each split.
TABLE I: Mean IoU (OSIRIS), Mean IoU (CNN), and Improvement, reported for each of the ten splits.
The DCNN-based solution proposed in this paper clearly outperforms the conventional segmentation method, not only on average over the entire experiment, but also individually in each split. It provides segmentation accuracy as high as IoU = 88.53%, while OSIRIS offers 73.58% IoU on average in an identical evaluation. This means an average improvement of 12.8% offered by the proposed method over the conventional algorithm. Looking at the results obtained in each data split, the DCNN-based solution always outperforms OSIRIS, by as much as 40.9% (split 7).
III-F Close-up analysis of the results
It is interesting to examine example segmentation results for both the DCNN-based and the conventional algorithm, to discuss potential reasons for failures and room for improvement. Figures 4 through 7 present example segmentation results, along with the ground truth annotation for comparison, in four categories:
both algorithms performed well (achieved simultaneously the highest IoU), Fig. 4,
both algorithms failed (achieved simultaneously the lowest IoU), Fig. 5,
DCNN-based solution failed (achieved the lowest IoU) when the conventional method did a good job (achieved the highest IoU), Fig. 6,
DCNN-based solution did a good job (achieved the highest IoU) when the conventional method failed (achieved the lowest IoU), Fig. 7.
As expected, both methods perform well for post-mortem iris images whose quality does not diverge from that of live iris images and which can still be classified as meeting the ISO/IEC 19794-6 and ISO/IEC 29794-6 requirements. Fig. 4 shows an example post-mortem image captured only 5 hours after death, hence at a moment when post-mortem deformations are not yet excessively present. Additionally, the metal retractors used in the acquisition process left the iris texture perfectly non-occluded.
In turn, both methods failed to accurately recognize the small portion of non-deformed iris texture in an eye that had undergone heavy post-mortem changes, Fig. 5. The DCNN-based method was not able to localize any iris portion in this difficult sample acquired 574 hours (almost 24 days) post-mortem, hence producing an empty prediction. However, this behavior is still more favorable than what the conventional segmentation did, namely finding the iris in an incorrect region.
There are also samples that are easier to process for the conventional segmentation method. Fig. 6 presents a post-mortem sample that displays a regularly shaped iris with good contrast between the iris and the background. Hence, this sample was relatively easy to process for the OSIRIS software, which achieved a high IoU in this case. However, the intensity and texture of the iris region departed from what the DCNN saw in the training samples, and thus our solution was very selective in annotating the iris areas, ending up with a low IoU.
However, the opposite result is observed more frequently: the DCNN-based segmentation was able to detect non-standard specular reflections and wrinkles, offering a far better result than the conventional algorithm, Fig. 7. Similar results were often observed when neither the pupil nor the iris was perfectly circular, and the iris texture had started to become muddy due to cornea opacification, resulting in low contrast between the iris and the surrounding areas. In such cases the superiority of the proposed method is evident.
IV Conclusions
This study presents the first method known to us for post-mortem iris image segmentation, aiming to make post-mortem iris recognition more reliable. The proposed solution incorporates a deep convolutional neural network (DCNN) of a kind that has already proven useful in semantic segmentation tasks. We showed that the DCNN-based approach is able to effectively learn the deformations of the iris specific to post-mortem biological processes, and to use this knowledge to skip the deformed regions in the segmentation. The DCNN-based method outperforms a conventional iris segmentation algorithm by a wide margin: the Intersection over Union (IoU), averaged over 10 statistically independent experiments, equals 83%, whereas the conventional algorithm achieves IoU = 73.6%. This work thus makes the first important step in adapting iris recognition methodology to post-mortem images, opening up many new opportunities for forensic examiners and biometrics experts.
This paper follows reproducibility guidelines by offering a) the source codes of the end-to-end post-mortem-aware iris segmentation method, b) the trained DCNN model, and c) manual segmentation results for the publicly available post-mortem iris samples, made available to those interested in further research in post-mortem iris recognition. These, in particular, allow full reproduction of the results presented in this paper.
Adam Czajka acknowledges the partial support of NASK under grant agreement no. 2/2017.
The authors would like to thank Ms Ewelina Bartuzi and Ms Katarzyna Roszczewska for their help with preparing manual annotations for iris image masks.
We are also indebted to NVIDIA for supporting us with a GPU unit that enabled this study to come to fruition.
-  A. Sansola, “Postmortem iris recognition and its application in human identification,” Master’s Thesis, Boston University, 2015.
-  M. Trokielewicz, A. Czajka, and P. Maciejewicz, “Post-mortem Human Iris Recognition,” 9th IAPR International Conference on Biometrics (ICB 2016), June 13-16, 2016, Halmstad, Sweden, 2016.
-  M. Trokielewicz, A. Czajka, and P. Maciejewicz, “Human Iris Recognition in Post-mortem Subjects: Study and Database,” 8th IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS 2016), September 6-9, 2016, Buffalo, NY, USA, 2016.
-  D. S. Bolme, R. A. Tokola, and C. B. Boehnen, “Impact of environmental factors on biometric matching during human decomposition,” 8th IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS 2016), September 6-9, 2016, Buffalo, NY, USA, 2016.
-  K. Sauerwein, T. B. Saul, D. W. Steadman, and C. B. Boehnen, “The effect of decomposition on the efficacy of biometrics for positive identification,” Journal of Forensic Sciences, vol. 62, no. 6, pp. 1599–1602, 2017. [Online]. Available: http://dx.doi.org/10.1111/1556-4029.13484
-  Warsaw University of Technology, “Warsaw-BioBase-PostMortem-Iris-v1.0: http://zbum.ia.pw.edu.pl/en/node/46,” 2016.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” pp. 1097–1105, 2012. [Online]. Available: http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf
-  K. Simonyan and A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition,” https://arxiv.org/abs/1409.1556, 2014.
-  K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” https://arxiv.org/abs/1512.03385v1, 2015.
-  A. Garcia-Garcia, S. Orts-Escolano, S. Oprea, V. Villena-Martinez, and J. Garcia-Rodriguez, “A Review on Deep Learning Techniques Applied to Semantic Segmentation,” https://arxiv.org/abs/1704.06857v1, 2017.
-  V. Badrinarayanan, A. Kendall, and R. Cipolla, “SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, 2017.
-  E. Jalilian and A. Uhl, “Iris Segmentation Using Fully Convolutional Encoder–Decoder Networks,” Deep Learning for Biometrics, in: Advances in Computer Vision and Pattern Recognition, B. Bhanu and A. Kumar (eds.), 2017.
-  R. P. Broussard and R. W. Ives, “Using Artificial Neural Networks and Feature Saliency to Identify Iris Measurements that Contain the Most Discriminatory Information for Iris Segmentation,” IEEE Workshop on Computational Intelligence in Biometrics: Theory, Algorithms, and Applications (CIB 2009), 2009.
-  N. Liu, H. Li, M. Zhang, J. Liu, Z. Sun, and T. Tan, “Accurate Iris Segmentation in Non-cooperative Environments Using Fully Convolutional Networks,” 9th IAPR International Conference on Biometrics (ICB 2016), https://arxiv.org/abs/1511.07122v3, 2016.
-  M. Arsalan, H. G. Hong, R. A. Naqvi, M. B. Lee, M. C. Kim, D. S. Kim, C. S. Kim, and K. R. Park, “Deep Learning-Based Iris Segmentation for Iris Recognition in Visible Light Environment,” Symmetry, vol. 9, 2017.
-  G. Sutra, B. Dorizzi, S. Garcia-Salitcetti, and N. Othman, “A biometric reference system for iris. OSIRIS version 4.1: http://svnext.it-sudparis.eu/svnview2-eph/ref_syst/iris_osiris_v4.1/,” accessed: October 1, 2014.