Post-Mortem Iris Recognition Resistant to Biological Eye Decay Processes

12/05/2019 ∙ by Mateusz Trokielewicz, et al. ∙ University of Notre Dame ∙ Medical University of Warsaw ∙ NASK

This paper proposes an end-to-end iris recognition method designed specifically for post-mortem samples, making it a natural fit for forensic applications of iris biometrics. To our knowledge, it is the first method designed specifically for the verification of iris samples acquired after demise. We have fine-tuned a convolutional neural network-based segmentation model with a large set of diversified iris data (including post-mortem and diseased eyes), and combined Gabor kernels with newly designed, iris-specific kernels learnt by Siamese networks. The resulting method significantly outperforms the existing off-the-shelf iris recognition methods (both academic and commercial) on the newly collected database of post-mortem iris images, for all available time horizons since death. We make all models and the method itself available along with this paper.


1 Introduction

Identification of deceased subjects from their iris patterns has recently moved from the science-fiction realm into scientific reality, thanks to research ongoing in this field for several years now. Despite some popular claims, researchers have shown that post-mortem iris recognition can be viable under favorable conditions, such as cold storage in a hospital morgue [1, 22, 21, 23], but challenging in outdoor environments, especially in the summer [4, 18]. A method for detecting presentation attacks employing cadaver eyes has also been proposed [24], as well as experiments comparing machine and human attention to iris features in post-mortem iris recognition proceedings [25].

However, an end-to-end iris recognition method designed specifically for forensic applications has not yet been proposed, and all published experiments assessing the feasibility of post-mortem iris matching are based on methods designed for samples acquired from live subjects. This may significantly underestimate the achievable performance, since off-the-shelf iris matchers do not account for the multiple factors related to post-mortem eye decomposition. In this paper we attempt to improve the efficiency of the iris representation by introducing data-driven kernels that are learnt from post-mortem iris images. A shallow Siamese network (incorporating only one convolutional layer, as in the dominant Daugman-style iris recognition algorithm) is employed to learn a novel descriptor in the form of two-dimensional filter kernels that can be used in any conventional iris recognition pipeline. The open-source OSIRIS implementation has been chosen to demonstrate the superiority of the newly designed kernel set, not only over the original approximations of Gabor kernels, but also over an example commercial iris recognition method. In addition, we fine-tuned the convolutional neural network (CNN) designed for post-mortem iris segmentation [26] with more diverse training data, which allowed for a further increase in the overall recognition accuracy.

Thus, this paper offers the first complete method known to us designed specifically for post-mortem iris recognition, with the following contributions:

  • a new, post-mortem iris-specific feature representation method comprising filters learned from post-mortem data, offering a significant reduction of recognition errors when compared to methods designed for live irises,

  • analysis of false non-match rates at different false match thresholds suitable for a forensic setting,

  • an updated CNN-based iris segmentation model, fine-tuned with more diverse iris samples, offering better robustness against the unusual deviations from ISO/IEC 19794-6 observed in post-mortem iris data,

  • source codes for the iris-specific filter training experiments, the trained filter kernels, and the new CNN iris segmentation model – everything needed to fully replicate the experiments.

2 Related Work

2.1 One-shot recognition and Siamese networks

Recent advancements in deep learning have allowed CNN-based image classifiers to achieve performance superior to many hand-crafted methods. However, one of the downsides of deep learning is the need for large quantities of labeled data in the case of supervised learning. This becomes a problem in applications where predictions must be obtained for data belonging to an under-sampled class, or to a class unknown during training.

Siamese networks, on the other hand, perform well in so-called one-shot recognition tasks, being able to give reliable similarity predictions for samples from classes that were not included in the model training. Koch [11] introduced a deep convolutional model architecture consisting of two convolutional branches sharing weights, joined by a merge layer with a cost function describing the distance between the two inputs:

d(x_1, x_2) = || f(x_1) - f(x_2) ||_1     (1)

where f denotes the encoding function. This is combined with a sigmoid activation of the single neuron in the last layer, which maps the output to the range [0, 1]. The applications of Siamese networks include many areas, most importantly one-shot image recognition, with good benchmark performances achieved on well-known datasets such as Omniglot (written character recognition for multiple alphabets, 95% accuracy) and ImageNet (natural image recognition with 20000 classes, 87.8% accuracy) [27], but also object co-segmentation [14], object tracking in video scenes [3], signature verification [5], and even matching resumes to job offers [12].
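To make Eq. (1) concrete, the following minimal sketch (assuming PyTorch and an arbitrary shared-weight encoder; an illustration rather than Koch's original code) implements the L1 merge layer followed by a single sigmoid neuron:

```python
import torch
import torch.nn as nn

class SiameseHead(nn.Module):
    """Koch-style similarity head from Eq. (1): component-wise L1
    distance between the two branch encodings, mapped to [0, 1]
    by a single sigmoid neuron."""

    def __init__(self, encoder: nn.Module, feat_dim: int):
        super().__init__()
        self.encoder = encoder             # shared-weight branch f(.)
        self.fc = nn.Linear(feat_dim, 1)   # single output neuron

    def forward(self, x1, x2):
        d = torch.abs(self.encoder(x1) - self.encoder(x2))  # |f(x1) - f(x2)|
        return torch.sigmoid(self.fc(d))   # similarity prediction in [0, 1]
```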

2.2 Data-driven image descriptors

Several approaches to learning feature descriptors for image matching have been explored, mostly in the field of visual geometry and mapping for image stitching, orientation detection, and similar general-purpose approaches.

Simo-Serra et al. [19] presented a novel point descriptor, with discriminative feature descriptors learnt from large real-world sets of corresponding and non-corresponding image patches from the MVS dataset, which contains patches sampled from 3D reconstructions of the Statue of Liberty, the Notre Dame cathedral, and the Half Dome in Yosemite. The approach is reported to outperform SIFT while being able to serve as a drop-in replacement for it. The method employs a Siamese architecture of two coupled CNNs with three convolutional layers each, whose outputs are patch descriptors; an L2 norm of the output difference is minimized for corresponding patches and maximized otherwise. A similar approach was demonstrated by Zagoruyko and Komodakis [29], who train a similarity function for comparing image patches directly from the data employing several methods, one of them being a Siamese model with two CNN branches sharing weights, connected at the top by two fully connected layers.

Yi et al. introduced a method intended to serve as a full SIFT replacement, not only as a drop-in descriptor replacement [28]. This deep-learnt approach consists of a full pipeline with keypoint detection, orientation estimation, and feature description, trained in the form of a Siamese quadruple network with two positive (corresponding) input patches, one negative (non-corresponding) patch, and one patch without any keypoints in it. Hard mining of difficult keypoint pairs is employed, similarly to [19]. DeTone et al. [7] proposed the so-called SuperPoint network and a framework for self-supervised training of interest point detectors and descriptors that operate on the full image as input, instead of on image patches. The method computes both interest points and their descriptors in a single network pass.

Moving to the field of iris recognition, Czajka et al. [6] recently employed human-inspired, iris-specific binarized statistical image features (BSIF) learnt from iris image patches derived from an eye-tracking experiment, during which human iris examiners were asked to classify iris image pairs. Data-driven BSIF filters were also studied by Bartuzi et al. [2] for the purpose of person recognition based on thermal hand representations.

The current literature does not seem to offer any data-driven iris filters designed specifically to address post-mortem iris recognition.

3 Proposed methodology

3.1 Iris recognition benchmarks

Previous works assessing post-mortem iris recognition accuracy [22, 21, 23] have employed several iris recognition matchers, including the open-source OSIRIS [16], and three commercially available products: VeriEye [15], MIRLIN (discontinued by the manufacturer) [20], and IriCore [9].

In this paper we employ a popular open-source implementation of Daugman's method (OSIRIS), as well as the IriCore commercial product, which was shown in previous papers [23] to offer the best performance when confronted with heavily degraded post-mortem iris samples. This choice of matchers allows us to compare the proposed approach against the best-performing method to date in the post-mortem iris recognition setting.

Figure 1: Aligning the polar iris images based on eye corner locations to compensate for rotation of the camera during image acquisition.
Figure 2: Scheme for iris patch curation.

3.2 Image segmentation

It has been shown that a significant fraction of post-mortem iris recognition errors can be attributed to a badly executed segmentation stage [21, 23]. Trokielewicz et al. [26] proposed an open-source CNN-based semantic segmentation model for predicting binary masks of post-mortem iris images (trained with iris images obtained from cadavers, elderly people, and ophthalmology patients), which in this paper has been further fine-tuned with more diverse datasets, including ISO-compliant images as well as low-quality, visible-light iris samples. In addition to the databases used for training the previous model, we have also employed several iris datasets with their corresponding ground truth masks, including the BioSec baseline corpus [8] (1200 images), the BATH database [13] (148 images, http://www.bath.ac.uk/elec-eng/research/sipg/irisweb/), the ND0405 database (1283 images, https://cvrl.nd.edu/projects/data/), the UBIRIS database [17] (1923 images), and the CASIA-V4-Iris-Interval database (2639 images, http://www.cbsr.ia.ac.cn/english/IrisDatabase.asp). This allowed us to obtain more coherent, smoother predictions, as shown in Fig. 3. The model was trained with the SGDM optimizer with momentum = 0.9, learning rate = 0.001, L2 regularization = 0.0005, for 120 epochs with a batch size of 4. The trained model along with sample codes is made available with this paper (link to the repository temporarily removed to anonymize this submission).
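For reproducibility, the fine-tuning configuration reported above translates into the following sketch (assuming PyTorch, a segmentation network `model` producing per-pixel logits, and an (image, mask) dataset `train_set`; these names and the loss choice are illustrative assumptions):

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader

def finetune_segmentation(model, train_set, device="cuda"):
    # Hyperparameters from the text: SGD with momentum 0.9, learning
    # rate 0.001, L2 regularization (weight decay) 0.0005, 120 epochs,
    # batch size 4.
    loader = DataLoader(train_set, batch_size=4, shuffle=True)
    opt = optim.SGD(model.parameters(), lr=1e-3,
                    momentum=0.9, weight_decay=5e-4)
    criterion = nn.BCEWithLogitsLoss()  # per-pixel binary mask loss (assumed)
    model.to(device).train()
    for epoch in range(120):
        for images, masks in loader:
            images, masks = images.to(device), masks.to(device)
            opt.zero_grad()
            loss = criterion(model(images), masks)
            loss.backward()
            opt.step()
    return model
```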

(a) Example segmentation result for the original model [26].
(b) The same sample processed with the new segmentation model.
Figure 3: Segmentation results obtained for an example iris image using the model from earlier works and from the new, refined model.

3.3 Training data

To train our new, post-mortem-aware iris feature descriptor, we use NIR iris images from the publicly available Warsaw-BioBase-Postmortem-Iris-v1.1 and Warsaw-BioBase-Postmortem-Iris-v2 databases, which were processed with the new segmentation model (cf. Sec. 3.2) and normalized into polar iris images of fixed size. The normalization stage included Hough transform-based circle fitting to approximate the inner and outer iris boundaries, and all pixels annotated by the segmentation model as non-iris texture inside the iris annulus were excluded from feature extraction.
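The normalization step follows the classic Daugman rubber-sheet model; a simplified NumPy sketch (with illustrative output dimensions and hypothetical circle parameters, not the exact values used in our pipeline) is given below:

```python
import numpy as np

def normalize_iris(img, pupil, iris, out_w=512, out_h=64):
    """Rubber-sheet unwrapping sketch: remap the iris annulus between
    the fitted inner circle pupil = (xp, yp, rp) and the outer circle
    iris = (xi, yi, ri) onto a fixed-size polar image."""
    xp, yp, rp = pupil
    xi, yi, ri = iris
    theta = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    polar = np.zeros((out_h, out_w), dtype=img.dtype)
    for j, t in enumerate(theta):
        # endpoints of the radial line at angle t
        x0, y0 = xp + rp * np.cos(t), yp + rp * np.sin(t)
        x1, y1 = xi + ri * np.cos(t), yi + ri * np.sin(t)
        for i, r in enumerate(np.linspace(0.0, 1.0, out_h)):
            x = int(round((1 - r) * x0 + r * x1))
            y = int(round((1 - r) * y0 + r * y1))
            if 0 <= y < img.shape[0] and 0 <= x < img.shape[1]:
                polar[i, j] = img[y, x]
    return polar
```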

3.4 Evaluation data

For the evaluation of the proposed method, an in-house, newly collected database of post-mortem iris images is used. It comprises post-mortem images taken from 40 subjects, with a total of 1094 near-infrared images and 785 visible-light images, collected up to 369 hours after demise. This dataset is subject-disjoint from the data used to train the segmentation CNN model and the feature representation, and it was collected following the same acquisition protocol as the Warsaw data.

Figure 4: Shallow Siamese network used for learning the iris-specific feature representation.

3.5 Pre-processing of the training data

The first step in preparing the training data is to ensure spatial correspondence between iris features in multiple images of the same eye. Note that post-mortem iris images can have arbitrary orientation, as the operator taking the scans may approach the cadaver from different directions, for instance depending on the crime scene layout. Leaving these pictures unaligned would force the network to learn kernels that compensate for rotation rather than encode iris texture. We therefore perform within-class alignment of the normalized iris images by canceling the mutual horizontal shifts between images that reflect eyeball rotation in the Cartesian coordinate system, Fig. 1. The alignment procedure involved manual annotation of the eye corners, which allowed us to calculate the relative rotation of the eyeball represented in two images, and in turn the number of pixels by which the polar image must be shifted (see the sketch below). Iris recognition methods typically shift iris codes in the matching stage to compensate for eyeball rotation, instead of shifting images; therefore there is no justification for making the neural network learn how to discriminate between irises that are not ideally spatially aligned.
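The shift itself is a circular translation of the polar image, since its horizontal axis spans the full 360 degrees of the iris annulus; a minimal sketch (assuming the rotation angle has already been estimated from the annotated eye corners):

```python
import numpy as np

def align_polar(polar_img, angle_deg):
    """Within-class alignment sketch: a relative eyeball rotation of
    angle_deg degrees corresponds to a horizontal circular shift of
    the polar image (the sign convention depends on the unwrapping
    direction)."""
    width = polar_img.shape[1]
    shift = int(round(angle_deg / 360.0 * width))  # rotation -> pixels
    return np.roll(polar_img, -shift, axis=1)
```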

The resulting aligned polar iris images were then examined with respect to the amount of occlusion caused by eyelids or eyelashes. This is done to ensure that the network uses iris-specific, and not occlusion-specific, regions when developing the kernels. Thus, to ensure good quality of the training data, we divide each polar iris image into two patches. Since in our data eyelid occlusions are only present either on the left or on the right portion of the polar iris image (this is specific to the post-mortem data acquisition protocol in our collection), this enables discarding the affected patch while saving the other, unaffected one, Fig. 2. A total of 1801 patches were extracted for training.

3.6 Model architecture and filter learning

(a) Filter kernels
(b) Example iris codes
(c) Mean iris code value distributions
Figure 5: Learnt kernels from the Siamese network (a), an example set of iris codes they produce (b), and distributions of mean iris code values across the data set (c). The kernel no. 6 is discarded as it produces iris codes with almost all elements equal to 0.

For learning the iris-specific filters we use a shallow Siamese architecture composed of two branches, each responsible for encoding one image of the pair being compared and comprising a single convolutional layer with 6 kernels matching the size of the smallest OSIRIS filter, as shown in Fig. 4. In our experiments, kernels of this size resulted in better post-mortem iris recognition performance than either of the two larger filter sizes found in OSIRIS. The weights are shared between the two branches. Following the convolutional layers is a merge layer calculating the distance (1) between the two sets of features from the convolutional layer. A single neuron with a sigmoid activation function is then applied to yield a prediction in the range [0, 1], with 0 being a perfect match between the two images, and 1 being a perfect non-match.

The training data is passed to the network in batches containing 32 pairs of iris patches, of which 16 are genuine and 16 are impostor pairs, randomly sampled without replacement from the dataset during each training iteration, with a total of 20000 iterations. The ADAM optimizer [10] with a fixed learning rate is used to minimize the loss function.
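A compact sketch of this training setup follows (assuming PyTorch; the kernel size, pooling, learning rate, binary cross-entropy loss, and the sample_batch helper are illustrative assumptions, with genuine pairs labeled 0 and impostor pairs labeled 1 to match the score convention above):

```python
import torch
import torch.nn as nn

class ShallowSiamese(nn.Module):
    """One shared convolutional layer with 6 kernels, an L1 merge as
    in Eq. (1), and a single sigmoid output neuron."""

    def __init__(self, kernel_size=(9, 15)):      # size is an assumption
        super().__init__()
        self.conv = nn.Conv2d(1, 6, kernel_size)  # the learnt kernels
        self.out = nn.Linear(6, 1)                # single neuron

    def encode(self, x):
        return self.conv(x).mean(dim=(2, 3))      # pool each feature map

    def forward(self, x1, x2):
        d = torch.abs(self.encode(x1) - self.encode(x2))
        return torch.sigmoid(self.out(d))  # 0 = match, 1 = non-match

model = ShallowSiamese()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)  # rate is an assumption
loss_fn = nn.BCELoss()
for _ in range(20000):            # 20000 iterations, 32 pairs per batch
    x1, x2, y = sample_batch()    # hypothetical: 16 genuine + 16 impostor
    opt.zero_grad()
    loss = loss_fn(model(x1, x2).squeeze(1), y)
    loss.backward()
    opt.step()
```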

3.7 Filter set optimization

The learnt filter kernels, together with example iris codes that they produce, as well as distributions of mean iris code values produced by each of them are illustrated in Fig. 5. By analyzing the distributions of mean iris code values obtained by each of the new filters, we see that codes produced by the sixth filter do not represent the iris well, as most of the texture information is lost during encoding, resulting in a mostly zeroed iris code. This filter is discarded from all further experiments.
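This sanity check can be expressed as a simple filter-pruning rule; the sketch below (with an illustrative 0.05 margin, not a value from the paper) keeps only filters whose binary codes are not nearly constant across the dataset:

```python
import numpy as np

def prune_filters(iris_codes):
    """iris_codes: binary array of shape (n_images, n_filters, code_len).
    Filters producing nearly all-zero (or all-one) codes carry little
    texture information and are discarded."""
    mean_bit = iris_codes.mean(axis=(0, 2))        # mean code value per filter
    keep = (mean_bit > 0.05) & (mean_bit < 0.95)   # reject near-constant codes
    return np.where(keep)[0]
```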

Notably, employing only the iris-specific filter kernels instead of those found in OSIRIS did not yield better results – perhaps because regular iris texture is already well represented by the conventional Gabor wavelets, and the newly learnt filters are needed mainly to boost the performance for difficult, decay-affected samples. To utilize the new filters while retaining this advantage of the baseline method, we modified the OSIRIS Gabor filter bank into a hybrid bank comprising a combination of Gabor wavelets and the post-mortem-specific kernels.

Filter selection was performed using Sequential Feature Selection, combining Sequential Forward Selection (SFS) and Sequential Backward Selection (SBS), which add the most discriminant features to the classifier while removing the least discriminatory ones. During the selection procedure, post-mortem-specific filters were added to the original OSIRIS filter bank, whereas those OSIRIS filters that did not contribute to decreasing the error rates were removed. The error metric minimized during the selection is the EER obtained for samples acquired up to 60 hours post-mortem.

The feature selection procedure can be summarized in the following steps, starting with the original, unmodified OSIRIS filter bank comprising six Gabor wavelets (a code sketch follows the list):

  • Step 1. Calculate the performance obtained using the current filter bank and each of the Siamese filters added independently.

  • Step 2. Perform SFS by adding the most contributing filter to the filter bank.

  • Step 3. Calculate the performance obtained using the filter bank obtained in the previous step with each of the OSIRIS filters removed independently.

  • Step 4. Perform SBS by removing the least contributing filter from the filter bank.

  • Step 5. Go back to Step 1 and repeat until the error metric stops improving.
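A sketch of the whole loop (assuming an eer callable that scores a candidate filter bank on the up-to-60-hour validation comparisons; filter objects are treated as opaque values):

```python
def select_filter_bank(osiris_bank, siamese_filters, eer):
    """Greedy SFS/SBS hybrid filter-bank selection as described above."""
    bank = list(osiris_bank)        # start from the six Gabor wavelets
    pool = list(siamese_filters)    # learnt post-mortem-specific kernels
    best, improved = eer(bank), True
    while improved:
        improved = False
        if pool:
            # SFS: add the Siamese filter that reduces the EER the most
            add = min(pool, key=lambda f: eer(bank + [f]))
            if eer(bank + [add]) < best:
                bank.append(add)
                pool.remove(add)
                best, improved = eer(bank), True
        originals = [f for f in bank if f in osiris_bank]
        if originals:
            # SBS: drop the OSIRIS filter contributing the least
            drop = min(originals,
                       key=lambda f: eer([g for g in bank if g is not f]))
            if eer([g for g in bank if g is not drop]) < best:
                bank.remove(drop)
                best, improved = eer(bank), True
    return bank
```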

Figure 6: Filter selection for the new filter bank using Sequential Forward Selection and Sequential Backward Selection. Iris codes for an example iris produced by the proposed hybrid filter bank at different stages of filter selection are also shown.
Figure 7: SFS/SBS filter selection: Equal Error Rates in subsequent iterations of the procedure are shown. The filter selection is stopped at SBS iteration no. 2.

After the SFS/SBS feature selection procedure involving four iterations of SFS and SBS, Fig. 6, the EER was decreased by almost a third, from 6.40% obtained for the 60 hours post-mortem time horizon for the original OSIRIS filter bank, to 4.39% obtained for the new, hybrid filter bank, Fig. 7. This filter bank is then used in the final testing of our iris recognition pipeline with the new, subject-disjoint data. The final set of filter kernels is shown in Fig. 6.

4 Results and discussion

4.1 Testing data and methodology

During testing, we generate all possible genuine and impostor scores between images that were captured up to a certain time horizon after demise. Nine different time horizons are considered, namely: up to 12, 24, 48, 60, 72, 110, 160, 210, and finally up to 369 hours post-mortem, which encompasses all available testing data. Every one of these 9 experiments is repeated for comparison scores obtained from (a) the original OSIRIS software, (b) the commercial IriCore matcher, (c) the modified OSIRIS including the new image segmentation, and finally (d) the modified OSIRIS, which includes both the new segmentation and the new feature representation stage.
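The comparison protocol can be summarized as follows (a sketch assuming a matcher callable returning a comparison score and samples stored as (subject_id, hours_post_mortem, image) tuples; both are illustrative):

```python
from itertools import combinations

HORIZONS_H = [12, 24, 48, 60, 72, 110, 160, 210, 369]

def scores_for_horizon(samples, matcher, horizon_h):
    """Generate all genuine and impostor scores between images captured
    up to the given post-mortem time horizon."""
    eligible = [s for s in samples if s[1] <= horizon_h]
    genuine, impostor = [], []
    for a, b in combinations(eligible, 2):
        score = matcher(a[2], b[2])
        (genuine if a[0] == b[0] else impostor).append(score)
    return genuine, impostor
```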

4.2 Recognition accuracy

Figures 8-10 present ROC curves obtained using the newly introduced filter bank, compared against ROCs corresponding to the best results obtained when only the segmentation stage is replaced with the proposed modifications.

When analyzing the graphs presented in Fig. 8, which correspond to samples collected up to 12, 24, and 48 hours post-mortem, we do not see considerable improvements in recognition performance measured by the EER which, compared against the EERs obtained with only the modified segmentation stage, changes as EER = 0.76% → 0.58%, 0.69% → 0.68%, and 2.57% → 2.45%, respectively. However, the shapes of the red graphs corresponding to the scores obtained with the new filter bank show an improvement over the black graphs in the low FMR registers, meaning that the proposed system offers higher recognition rates in situations when very few false matches are allowed.

Moving to more distant post-mortem sample capture time horizons, Fig. 9, the advantage of the proposed method becomes clearly visible both in the decreased EER values and in the shapes of the ROC curves. Applying domain-specific filters allowed us to reduce the EER from 6.40% to 4.39%, from 8.12% to 5.86%, and from 9.99% to 7.78%, for samples acquired less than 60, 72, and 110 hours post-mortem, respectively.

Finally, Fig. 10 presents ROC curves obtained for samples collected during the three longest subject observation time horizons, namely up to 160, 210, and 369 hours after death. Here as well, a visible improvement offered by the new feature representation scheme is reflected in the decreased EER values – 14.59% → 11.88% for samples collected up to 160 hours, 17.09% → 14.98% for those captured up to 210 hours, and 21.36% → 19.27% for the longest and most difficult set, encompassing images acquired up to 369 hours (more than 15 days).

Figure 8: ROC curves obtained when comparing post-mortem samples with different observation time horizons: 12, 24, and 48 hours post-mortem, plotted for two baseline iris recognition methods OSIRIS (blue) and IriCore (green), OSIRIS with CNN-based segmentation (black), as well as OSIRIS with both the improved segmentation and new filter set (red).
Figure 9: Same as in Fig. 8, but for samples collected up to 60, 72, and 110 hours post-mortem.
Figure 10: Same as in Fig. 8, but for samples collected up to 160, 210, and 369 hours post-mortem.

4.3 False Non-Match Rate dynamics

In addition to the ROCs, we have also calculated the False Non-Match Rate (FNMR) values at acceptance thresholds which keep the False Match Rate (FMR) below certain values, namely 0.1%, 1%, and 5%, Figs. 11, 12, and 13, respectively. This is done to reveal the dynamics of the FNMR as a function of the post-mortem time interval, and therefore the chances of a false non-match as time since death progresses. We plot these dynamics for the two baseline methods (original OSIRIS and IriCore), for the CNN-based segmentation modification, and for the proposed iris representation coupled with the new segmentation.
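For clarity, the metric can be computed as follows (a sketch assuming dissimilarity scores, where lower means more similar, as in Daugman-style fractional Hamming distances):

```python
import numpy as np

def fnmr_at_fmr(genuine, impostor, fmr_target=0.01):
    """Pick the most permissive acceptance threshold keeping
    FMR <= fmr_target, then report the fraction of genuine
    comparisons rejected at that threshold (the FNMR)."""
    impostor = np.sort(np.asarray(impostor))
    genuine = np.asarray(genuine)
    # at most fmr_target of impostor scores fall strictly below threshold
    k = int(np.floor(fmr_target * len(impostor)))
    threshold = impostor[min(k, len(impostor) - 1)]
    return float(np.mean(genuine >= threshold))  # rejected genuine fraction
```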

While acceptance thresholds allowing an FMR of 5% or even 1% can be considered very relaxed for large-scale iris recognition systems, such criteria make sense in a forensic scenario. There, the goal of an automatic system is typically to aid a human expert by proposing a candidate list while minimizing the chances of missing the correct hit; allowing a higher False Match Rate makes it more likely for the correct hit to appear within the candidate list.

Notably, for each moment during the increasing post-mortem sample capture time horizon, our proposed approach consistently offers an advantage over the other two algorithms, reaching nearly perfect recognition accuracy for samples collected up to a day after a subject's death, as shown in Figs. 11, 12, and 13. The difference in favor of our solution is even larger when the acceptance threshold is relaxed to allow a 5% False Match Rate – in such a scenario, we can expect no false non-matches in the first 24 hours, and only an approx. 1.5% chance of a false non-match in the first 48 hours. The new feature representation, on the other hand, shows its greatest advantage for acquisition horizons longer than 48 hours, reducing the errors by as much as a third at the 60-hour and 72-hour capture moments.

To be fair, however, we need to stress that – to our knowledge – neither OSIRIS nor IriCore was designed to deal with post-mortem samples, so the lower performance of these methods is understandable. This only demonstrates that new iris methods capable of addressing post-mortem decomposition processes need to be designed if the iris is to be included in the basket of forensic identification tools.

Figure 11: False Non-Match Rates (FNMR) as a function of post-mortem sample capture horizon for a set False Match Rate (FMR) of 0.1%, plotted for OSIRIS (blue), IriCore (red), OSIRIS with new segmentation (yellow), and OSIRIS with both new segmentation and new filter set (violet).
Figure 12: Same as in Fig. 11, but for a set FMR = 1%.
Figure 13: Same as in Fig. 11, but for a set FMR = 5%.

5 Conclusions

In this paper we have introduced a novel iris feature representation method employing iris-specific image filters that are learnt directly from the data and are thus optimized to be resilient against the post-mortem changes affecting the eye as the time since death increases. By finding an optimal combination of the typical Gabor wavelet-based iris encoding with the new post-mortem-driven encoding, we are able to reduce the recognition errors by as much as one third, significantly outperforming even the state-of-the-art commercial matcher. The source codes for the experiments, the trained iris-specific filters, and the new segmentation model are made available along with the paper.

To our knowledge, this is the first post-mortem-specific, end-to-end iris recognition pipeline, open-sourced and ready to be deployed in forensic applications. Moreover, the segmentation model and the filter learning methodology can be directly applied to other challenging cases in iris recognition, such as eyes affected by diseases, or irises acquired from infants or elderly people.

References

  • [1] A. Sansola. Postmortem iris recognition and its application in human identification, Master’s Thesis, Boston University, 2015.
  • [2] E. Bartuzi, K. Roszczewska, A. Czajka, and A. Pacut. Unconstrained Biometric Recognition Based on Thermal Hand Images. International Workshop on Biometrics and Forensics (IWBF 2018), 2018.
  • [3] L. Bertinetto, J. Valmadre, J. F. Henriques, A. Vedaldi, and P. H. Torr. Fully-convolutional siamese networks for object tracking. European Conference on Computer Vision (ECCV 2016), pages 850–865, 2016.
  • [4] D. S. Bolme, R. A. Tokola, C. B. Boehnen, T. B. Saul, K. A. Sauerwein, and D. W. Steadman. Impact of environmental factors on biometric matching during human decomposition. IEEE 8th International Conference on Biometrics Theory, Applications and Systems (BTAS 2016), September 6-9, 2016, Buffalo, USA, 2016.
  • [5] J. Bromley, I. Guyon, Y. LeCun, E. Säckinger, and R. Shah. Signature Verification Using a "Siamese" Time Delay Neural Network. Proceedings of the 6th International Conference on Neural Information Processing Systems (NIPS 1993), pages 737–744, 1993.
  • [6] A. Czajka, D. Moreira, K. W. Bowyer, and P. J. Flynn. Domain-specific human-inspired binarized statistical image features for iris recognition. IEEE Winter Conference on Applications of Computer Vision, (WACV 2019), Hawaii, USA, 2019.
  • [7] D. DeTone, T. Malisiewicz, and A. Rabinovich. SuperPoint: Self-Supervised Interest Point Detection and Description. IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPR’W 2018), 2018.
  • [8] J. Fierrez, J. Ortega-Garcia, D. T. Toledano, and J. Gonzalez-Rodriguez. Biosec baseline corpus: A multimodal biometric database. Pattern Recognition, 40(4):1389 – 1392, 2007.
  • [9] IriTech Inc. IriCore Software Developer’s Manual, version 3.6, 2013.
  • [10] D. P. Kingma and J. Ba. Adam: A Method for Stochastic Optimization. 2014.
  • [11] G. Koch, R. Zemel, and R. Salakhutdinov. Siamese neural networks for one-shot image recognition. ICML Deep Learning Workshop, 2, 2015.
  • [12] S. Maheshwary and H. Misra. Matching resumes to jobs via deep siamese network. Companion Proceedings of the The Web Conference, 2018.
  • [13] D. Monro, S. Rakshit, and D. Zhang. University of Bath, UK Iris Image Database, 2009.
  • [14] P. Mukherjee, B. Lall, and S. Lattupally. Object cosegmentation using deep Siamese network. 2018.
  • [15] Neurotechnology. VeriEye SDK, version 4.3, accessed: August 11, 2015.
  • [16] N. Othman, B. Dorizzi, and S. Garcia-Salicetti. Osiris: An open source iris recognition software. Pattern Recognition Letters, 82:124 – 131, 2016.
  • [17] H. Proenca, S. Filipe, R. Santos, J. Oliveira, and L. Alexandre. The UBIRIS.v2: A database of visible wavelength images captured on-the-move and at-a-distance. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(8):1529–1535, 2010.
  • [18] K. Sauerwein, T. B. Saul, D. W. Steadman, and C. B. Boehnen. The effect of decomposition on the efficacy of biometrics for positive identification. Journal of Forensic Sciences, 62(6):1599–1602, 2017.
  • [19] E. Simo-Serra, E. Trulls, L. Ferraz, I. Kokkinos, P. Fua, and F. Moreno-Noguer. Discriminative Learning of Deep Convolutional Feature Point Descriptors. IEEE International Conference on Computer Vision (ICCV 2015), 2015.
  • [20] Smart Sensors Ltd. MIRLIN SDK, version 2.23, 2013.
  • [21] M. Trokielewicz, A. Czajka, and P. Maciejewicz. Human Iris Recognition in Post-mortem Subjects: Study and Database. 8th IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS 2016), September 6-9, 2016, Buffalo, USA, 2016.
  • [22] M. Trokielewicz, A. Czajka, and P. Maciejewicz. Post-mortem Human Iris Recognition. 9th IAPR International Conference on Biometrics (ICB 2016), June 13-16, 2016, Halmstad, Sweden, 2016.
  • [23] M. Trokielewicz, A. Czajka, and P. Maciejewicz. Iris recognition after death. IEEE Transactions on Information Forensics and Security, 14(6):1501–1514, 2018.
  • [24] M. Trokielewicz, A. Czajka, and P. Maciejewicz. Presentation Attack Detection for Cadaver Iris. 9th IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS 2018), October 22-25, 2018, Los Angeles, USA, 2018.
  • [25] M. Trokielewicz, A. Czajka, and P. Maciejewicz. Perception of Image Features in Post-Mortem Iris Recognition: Humans vs Machines. 10th IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS 2018), September 23-26, 2019, Tampa, USA, 2019.
  • [26] M. Trokielewicz, A. Czajka, and P. Maciejewicz. Post-mortem Iris Recognition with Deep-Learning-based Image Segmentation. Arxiv preprint, 2019.
  • [27] O. Vinyals, C. Blundell, T. Lillicrap, K. Kavukcuoglu, and D. Wierstra. Matching networks for one shot learning. Advances in Neural Information Processing Systems 29, pages 3630–3638, 2016.
  • [28] K. M. Yi, E. Trulls, V. Lepetit, and P. Fua. LIFT: Learned Invariant Feature Transform. European Conference on Computer Vision (ECCV 2016), 2016.
  • [29] S. Zagoruyko and N. Komodakis. Learning to compare image patches via convolutional neural networks. Conference on Computer Vision and Pattern Recognition (CVPR 2015), 2015.