The place where criminals commit their unlawful acts, namely the Scene of Crime (SoC), is of extreme importance for the police. According to Locard's exchange principle, the perpetrator of a crime will inevitably leave traces at the SoC. Hence, based on this principle, finding and recovering physical evidence is a crucial and fundamental task in order to identify criminals and exculpate the unduly accused.
Fingerprints, blood and hair are examples of clues that can be found at the SoC [4, 5, 6, 7, 8, 9]. Unfortunately, criminals often adopt techniques such as wearing gloves in order to neutralize these clues. On the other hand, although shoe-prints are not unique, it has been noted that they have a greater chance of being present at the SoC than, for instance, latent fingerprints [10, 11].
A shoe mark occurs when a shoe comes into contact with a surface (see Figure 1). Despite their lack of uniqueness compared to other biometric traits [13, 14, 15, 16], footwear impressions hold great and very promising potential for assisting forensic investigations. For instance, in the case of multiple attacks within a short time, it is unlikely that an attacker would discard or change his/her footwear between crime scenes. Alexandre also reported that a considerable proportion of shoe-prints can be recovered at the SoC. A shoe-print lifted from a SoC can potentially be used in two different tasks:
Match it against a database (such as that of Foster and Freeman Ltd) in order to determine its model.
Match it against shoe-prints taken from other SoCs to verify whether the same shoe model has been used.
Unfortunately, carrying out the matching based on human knowledge (manually through a paper catalogue or semi-automatically through a computer database) is not a trivial task. Indeed, the limitations of such systems become obvious for large databases, since the retrieved sample must be matched against all database samples one by one. Furthermore, it is hard for several users to agree on the classification, especially in the case of degraded shoe mark images. This clearly shows the need for a fully automated shoe-print identification system.
Despite the efforts devoted to introducing efficient automated computer systems able to search and match shoe-prints, there is no existing survey bringing together all existing works. The main aim of this paper is to propose a comprehensive overview of existing automatic shoe-print identification techniques. This is intended to provide researchers with the state-of-the-art approaches in order to help advance the research topic, as well as to guide newcomers. Section 2 presents the main architecture of an automated shoe-print identification system. Section 3 introduces the holistic techniques. Section 4 describes the local techniques. Section 5 reports the evaluation and obtained performances. Section 6 gives the discussion. Finally, Section 7 offers our conclusion.
2 Automated shoe-print identification
The main architecture of an automated shoe-print identification system can be divided into three main tasks: removing the different distortions and enhancing the quality of the images through pre-processing; generating discriminative features of a shoe-print using feature extraction techniques; and finally classifying/matching the query sample against the whole database of shoe-print models, assigning its class label (i.e. shoe type) using the extracted features and a trained classifier or matching function (see Figure 2).
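The three stages above can be sketched as follows. This is a deliberately minimal illustration, not a method from the surveyed literature: the pre-processing is a plain intensity normalization, the feature extractor a naive downsample-and-flatten descriptor, and the matcher a nearest-neighbor search with Euclidean distance.

```python
import numpy as np

def preprocess(img):
    """Pre-processing placeholder: normalize intensities to [0, 1]
    to reduce illumination differences between prints."""
    img = img.astype(float)
    return (img - img.min()) / (img.max() - img.min() + 1e-8)

def extract_features(img, size=(8, 8)):
    """Toy holistic descriptor: subsample the print on a coarse grid
    and flatten it into a fixed-dimensional feature vector."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, size[0]).astype(int)
    xs = np.linspace(0, w - 1, size[1]).astype(int)
    return img[np.ix_(ys, xs)].ravel()

def identify(query, gallery):
    """Matching: return the index of the closest gallery print."""
    q = extract_features(preprocess(query))
    dists = [np.linalg.norm(q - extract_features(preprocess(g)))
             for g in gallery]
    return int(np.argmin(dists))
```

Real systems replace each placeholder with the techniques reviewed in Sections 3 and 4; the pipeline shape, however, is the same.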
Relevant and discriminative features are of critical importance to achieve high performance in any automatic identification system. Feature extraction seeks to transform the initial raw shoe-print image into a fixed-dimensional set of features carrying the meaningful information needed to assign observations, whether training samples or new unseen data, to their correct class. Existing state-of-the-art techniques mainly differ in the type of the extracted features. They can essentially be organized into two main categories: holistic and local methods.
3 Holistic techniques
The holistic or global methods process the shoe-print image as a whole. In this context, Bouridane et al. employed fractal decomposition to produce an ensemble of spatial transformations which can reproduce the same image when recursively applied to a nearly similar image; the matching is carried out using the Mean Square Noise Error (MSNE) method. De Chazal et al. took as features the squared magnitude of the 2D Discrete Fourier Transform (DFT), namely the Power Spectral Density (PSD). A 2D correlation function is used as a similarity measure, and the query image is identified as the database entry with the highest correlation value. Based on the assumption of Oppenheim and Lim [24, 25] that, in the Fourier domain, the phase information is much more important than the magnitude in describing pattern structure, Gueham et al. introduced a Modified Phase-Only Correlation (MPOC) technique through a band-pass spectral weighting function; the query sample is then classified as the model with the highest matching score. Gueham et al. evaluated two different advanced correlation filters: the Optimal Trade-off Synthetic Discriminant Function (OTSDF) and the Unconstrained OTSDF. The matching was carried out using three different metrics: peak height, peak-to-correlation energy and peak-to-sidelobe ratio. Gueham et al. exploited Fourier-Mellin transform features obtained by a log-polar mapping followed by a DFT; the matching is performed using a two-dimensional correlation function. AlGarni and Hamiane extracted Hu's moment invariants as features and then used four different metrics for the similarity measurement: Euclidean, city-block, Canberra and correlation distances. Jing et al. enhanced the quality of the shoe marks with a pre-processing step including grayscale transformation, noise removal and principal component transformation. Then, four different types of directionality-related features were extracted, namely co-occurrence matrices, the global Fourier transform, the local Fourier transform and the directional matrix. Finally, the sum of absolute differences between these features is used as a similarity metric. Patil and Kulkarni exploited multiresolution features using the Gabor transform. To achieve rotation invariance, the Radon transform is used to estimate the rotation of the shoe-print and compensate for the direction of the extracted features; the classification of a new shoe mark image is carried out using a nearest-neighbor rule based on the Euclidean distance. Pei et al. combined odd and even Gabor features to describe texture and geometry characteristics. Tang and Dai extracted several texture features, including dot texture and edge shape. Li et al. combined the integral histogram of the Gabor features with the Euclidean distance and histogram intersection for the similarity measurement. Wei and Gwo used Zernike moments as features and carried out the classification through nearest-neighbor with the Euclidean distance. Kong et al. extracted Gabor and Zernike features combined with normalized correlation for matching. Recently, with the progress in machine learning, several learning-based techniques have been proposed: Kortylewski and Vetter suggested a probabilistic compositional active basis model; in the same context, Wang et al. proposed a manifold-ranking-based method using various extracted features; and Zhang et al. used a pre-trained VGG16 network further tuned using a data augmentation technique.
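As an illustration of the holistic family, the PSD-based scheme of De Chazal et al. can be sketched as follows. This is a simplified version: we rank the gallery by plain normalized correlation of power spectra, without the pre-processing of the original work. Because the PSD discards phase, it is invariant to (circular) translations of the print.

```python
import numpy as np

def psd(img):
    """Power Spectral Density: squared magnitude of the 2D DFT."""
    return np.abs(np.fft.fft2(img)) ** 2

def corr(a, b):
    """Normalized correlation between two (flattened) PSDs."""
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def rank_gallery(query, gallery):
    """Rank gallery prints by decreasing PSD correlation with the query."""
    q = psd(query)
    scores = [corr(q, psd(g)) for g in gallery]
    return list(np.argsort(scores)[::-1])
```

A probe that is merely a shifted copy of a gallery print still yields an identical PSD and is therefore ranked first, which is the translation-invariance property exploited by this class of methods.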
| Techniques | Features | Classification / Matching |
|---|---|---|
| (Bouridane et al., 2000) | Fractal Decomposition | Mean Square Noise Error |
| (De Chazal et al., 2005) | Power Spectral Density | 2D Correlation |
| (Gueham et al., 2007) | Phase | Modified Phase-Only Correlation |
| (Gueham et al., 2008) | OTSDF+UOTSDF | Peak Height, Peak to Correlation Energy, Peak to Sidelobe Ratio |
| (Gueham et al., 2008) | Fourier-Mellin Transform | 2D Correlation |
| (AlGarni and Hamiane, 2008) | Hu's Moments | Euclidean, City-Block, Canberra, Correlation |
| (Jing et al., 2009) | Co-occurrence, Global/Local Fourier | Sum of Absolute Difference |
| (Patil and Kulkarni, 2009) | Gabor | Euclidean |
| (Pei et al., 2009) | Odd and Even Gabor | Tree Similarity |
| (Tang and Dai, 2010) | Texture | Defined Similarity Function |
| (Li et al., 2014) | Gabor | Euclidean |
| (Wei and Gwo, 2014) | Zernike Moments | Euclidean |
| (Kong et al., 2014) | Gabor+Zernike | Normalized Correlation |
| (Kortylewski and Vetter, 2016) | Raw Pixels | Probabilistic Model |
| (Kong et al., 2017) | Deep Features | Normalized Cross-Correlation |
| (Wang et al., 2017) | Hybrid Features (Region & Appearance) | Manifold Ranking |
| (Zhang et al., 2017) | Deep Features | Deep Neural Network |
| (Zhang and Allinson, 2005) | DFT of Edge Direction Histogram | Euclidean |
| (Pavlou and Allinson, 2006) | MSER+GLOH+SIFT | Gaussian Weighted Function |
| (Ghouti et al., 2006) | Directional FilterBanks | Euclidean |
| (Su et al., 2007) | MHL+SIFT | Defined Similarity Function |
| (Ramakrishnan and Srihari, 2008) | Cosine Similarity+Entropy+Standard Deviation | Conditional Random Fields |
| (Pavlou and Allinson, 2009) | MSER+SIFT | Constraint Kernel |
| (Nibouche et al., 2009) | Multi-Scale Harris+SIFT | RANSAC |
| (Dardi et al., 2009) [47, 48, 49] | PSD of Mahalanobis Distance Matrix | Correlation |
| (Tang et al., 2010) | ISHT+MRHT | Footwear Print Distance |
| (Li et al., 2011) | SIFT | Cross-Correlation |
| (Rathinavel and Arumugam, 2011) | Discrete Cosine Transform | Euclidean |
| (Hasegawa and Tabbone, 2012) | HRT | Mean Local Similarity |
| (Tang et al., 2010, 2012) [54, 55] | ARG | Footwear Print Distance |
| (Wei et al., 2013) | SIFT | Cross-Correlation |
| (Wang et al., 2014) | Wavelet-Fourier | 2D Correlation |
| (Kortylewski et al., 2014) | Periodicity | Defined Similarity Measure |
| (Almaadeed et al., 2015) | Harris+Hessian+SIFT | RANSAC |
| (Alizadeh and Kose, 2017) | Raw Pixels | Sparse Representation for Classification |
4 Local techniques
The local methods extract discriminative features from local shoe-print regions, including keypoints or various overlapping/non-overlapping parts (we refer the reader to the dedicated literature for technical details of the different keypoint detection techniques). Zhang and Allinson used the DFT of the normalized histogram of edge directions as features and the Euclidean distance as similarity measure. Pavlou and Allinson exploited Maximally Stable Extremal Regions (MSER) to detect points of interest, followed by the Gradient Location and Orientation Histogram (GLOH) and Scale Invariant Feature Transform (SIFT) as feature descriptors; a Gaussian weighted function was used as similarity metric. Ghouti et al. extracted the energy-dominant blocks of Directional FilterBanks (DFBs); the matching was performed using the Euclidean distance. Su et al. combined the Modified Harris-Laplace (MHL) detector with an enhanced SIFT descriptor; the classification was carried out through nearest-neighbor. Ramakrishnan and Srihari proposed a technique combining three different features, cosine similarity, entropy and standard deviation, with Conditional Random Fields (CRF). Pavlou and Allinson located points of interest using the MSER detector; the corresponding features are extracted with the SIFT descriptor, further transformed into a histogram representation, and the similarity is measured by a constraint kernel. Nibouche et al. detected local points of interest through a multi-scale Harris detector and then applied the SIFT descriptor to extract the features; the matching is carried out iteratively using RANdom SAmple Consensus (RANSAC). Dardi et al. [47, 48, 49] divided the shoe-print image into blocks and calculated the Mahalanobis distance between all possible block pairs; the PSD of the obtained distance matrix is used as descriptor and the correlation as similarity measure. Tang et al. exploited the Iterative Straight-line Hough Transform (ISHT) and the Modified Randomized Hough Transform (MRHT). Li et al. combined the SIFT detector with cross-correlation for matching. Hasegawa and Tabbone [54, 53] decomposed the shoe-print image into connected components and used the Histogram Radon Transform (HRT) as descriptor to extract the features; the similarity is measured by the mean of the local similarities. Rathinavel and Arumugam extracted Discrete Cosine Transform (DCT) coefficients of overlapping blocks, further combined with Principal Component Analysis (PCA) and Fisher Linear Discriminant (FLD); the classification was carried out using nearest-neighbor with the Euclidean distance. Tang et al. encoded the structural features of a shoe-print as an Attributed Relational Graph (ARG) and achieved the matching using a proposed Footwear Print Distance (FPD). Wei et al. combined SIFT features with cross-correlation matching. Wang et al. exploited Wavelet-Fourier transform features. Kortylewski et al. extracted pattern periodicity features. Almaadeed et al. combined Harris and Hessian point-of-interest detectors with SIFT descriptors; the matching is carried out using RANSAC. Recently, Alizadeh and Kose proposed an interesting method based on blocked sparse representation. Table I summarizes all the previously mentioned holistic and local shoe-print identification techniques.
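The RANSAC verification step shared by several of the keypoint-based methods above can be sketched as follows. For brevity we estimate a pure 2D translation between hypothetical keypoint correspondences; real systems fit richer transforms (e.g. homographies) on SIFT matches, but the hypothesize-and-verify loop is the same.

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=2.0, seed=0):
    """Estimate the 2D translation mapping src -> dst despite outliers.
    src, dst: (N, 2) arrays of putative keypoint correspondences."""
    rng = np.random.default_rng(seed)
    best_t, best_inliers = np.zeros(2), 0
    for _ in range(iters):
        i = rng.integers(len(src))           # minimal sample: one pair
        t = dst[i] - src[i]                  # candidate translation
        inliers = np.linalg.norm(dst - (src + t), axis=1) < tol
        if inliers.sum() > best_inliers:
            best_inliers = int(inliers.sum())
            # refit on all inliers of the best hypothesis
            best_t = (dst[inliers] - src[inliers]).mean(axis=0)
    return best_t, best_inliers
```

The inlier count doubles as the matching score: a probe and a gallery print that share a large consistent set of keypoints are declared a match.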
5 Evaluation

The availability of large public datasets is essential for a comparative study of performance and a consistent evaluation. The main problem noted in the research topic of shoe-print identification is the absence of public benchmarks with pre-defined and standardized evaluation protocols. Most published techniques in the literature were evaluated on unrealistic, synthetically generated images obtained by adding artificial distortions such as noise and blur [23, 27, 46]. Furthermore, the shoe model databases (i.e. training or gallery) were not made available. Thus a direct and fair comparison with the reported state-of-the-art techniques is unfortunately not possible. It should also be noted that [48, 50] performed their evaluation on real data, which was likewise not made available.
Recently, a new shoe-print database, the Footwear Impression Database (FID-300, https://fid.dmi.unibas.ch/), has been made publicly available for algorithm evaluation. It was collected in a collaboration between the German State Criminal Police Offices of Niedersachsen and Bayern and the company Forensity AG. The database contains 1175 gallery and 300 probe shoe-print images. The probe images were digitized with a scanner after being lifted from the ground with a gel foil.
Although different datasets, partitions and protocols have been used to evaluate the aforementioned state-of-the-art techniques, we give a general overview of the obtained performances (summarized in Table II). The results are reported in the format X%@Y, where X refers to the cumulative matching score within the first Y matches. It can be clearly seen that widely varying performances have been obtained, ranging from 27.10% to 100%. This again shows the need for public datasets with standardized protocols for algorithm evaluation.
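The X%@Y figures in Table II are cumulative match characteristic (CMC) values. A minimal helper to compute them from ranked retrieval results (our own illustration, not taken from any surveyed paper) could look like this:

```python
def cumulative_score(rankings, true_ids, y):
    """X% @ y: percentage of queries whose correct gallery entry
    appears within the first y positions of its ranking.
    rankings: per-query lists of gallery indices, best match first.
    true_ids: the correct gallery index for each query."""
    hits = sum(t in r[:y] for r, t in zip(rankings, true_ids))
    return 100.0 * hits / len(true_ids)
```

When Y is given as a percentage in Table II (e.g. "@1%"), y is the corresponding fraction of the gallery size rather than an absolute rank.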
| Techniques | Accuracy | Database Size | Studied Distortions |
|---|---|---|---|
| (Bouridane et al., 2000) | 88.00% @1 | 145 | rotation & translation |
| (De Chazal et al., 2005) | 87.00% @5% | 475 | rotation & translation |
| (Zhang and Allinson, 2005) | 97.70% @4% | 512 | rotation, noise, scale & translation |
| (Pavlou and Allinson, 2006) | 85.00% @1 | 368 | rotation & translation |
| (Gueham et al., 2007) | 100.00% @1 | 100 | partial & noise |
| (Gueham et al., 2008a) | 95.68% @1 | 100 | rotation, noise & occlusion |
| (AlGarni and Hamiane, 2008) | 99.40% @1 | 500 | rotation & noise |
| (Gueham et al., 2008b) | 99.00% @10 | 500 | rotation, scale, noise & occlusion |
| (Pavlou and Allinson, 2009) | 87.00% @1 | 374 | - |
| (Dardi et al., 2009a) | 49.00% @1 | 87 | noise |
| (Nibouche et al., 2009) | 90.00% @1 | 300 | rotation, noise & occlusion |
| (Patil and Kulkarni, 2009) | 91.00% @1 | 1400 | rotation, noise & occlusion |
| (Pei et al., 2009) | 61.70% @5 | 6000 | noise & occlusion |
| (Dardi et al., 2009c) | 73.00% @10 | 87 | rotation, scale & translation |
| (Tang et al., 2010b) | 71.00% @1% | 2660 | rotation, scale, translation & occlusion |
| (Tang et al., 2012) | 70.00% @1% | 2660 | rotation, scale, translation & noise |
| (Wang et al., 2014) | 90.87% @2% | 210 000 | rotation, translation & scale |
| (Kortylewski et al., 2014) | 27.10% @1% | 1175 | translation & noise |
| (Almaadeed et al., 2015) | 99.33% @1 | 300 | rotation, scale, noise & occlusion |
| (Kortylewski and Vetter, 2016) | 71.00% @20% | 1175 | - |
| (Alizadeh and Kose, 2017) | 99.47% @1 | 190 | noise, rotation & occlusion |
Automatic shoe-print identification is a very challenging computer vision task. Indeed, it suffers from different variations in shape and appearance due to the tread material and the properties of the surface (Figure 3). Furthermore, shoe-prints are cluttered: gallery images have no background, while probe images have a complicated, structured background which is hardly distinguishable from the patterns of interest (Figure 4). In addition, occlusion, noise, translation and limited training data pose further problems.
6 Discussion and Current Challenges
A considerable number of techniques have been introduced to tackle the problem of shoe-print identification, using a large variety of features. These extracted features determine which information and properties are available during the identification process. They should capture properties that are invariant within the same shoe class and discriminative between different classes. The conventional methods to identify lifted shoe marks are mainly based on low-level handcrafted features designed using human knowledge. Unfortunately, despite their good performance in some controlled and specific tasks, handcrafted representations are usually ad hoc, tend to overfit and lack generalization ability in realistic scenarios. Indeed, shoe-print identification is not a trivial task, due to the large intra-class variations caused by rotation, noise, occlusion, translation and scale distortions. This clearly shows the need for robust techniques capable of operating in complicated and degraded scenarios.
In contrast to handcrafted feature engineering, feature learning approaches are capable of learning robust, discriminative and data-driven representations from raw data without any prior knowledge of the task [65, 66]. Among these techniques is deep learning, with the goal of an end-to-end identification system. It stacks more than the usual two neural layers, where each layer encodes specific properties that are further combined to learn representative and discriminative representations. Among the existing deep learning models which can potentially be applied to shoe-print identification are Convolutional Neural Networks (CNNs), which learn discriminative representations with invariant properties.
To date, handcrafted features remain the most widely used for shoe mark identification, since deep models require a huge amount of data to be reliable. Unfortunately, the existing shoe-print identification datasets have a very limited size, often with only one example per shoe class. To be effective and tackle the problem of limited training data, a possible solution is transfer learning. It consists in exploiting models that have already been pre-trained on a huge amount of data for another task, followed by a fine-tuning step to fit the model to the target application.
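The fine-tuning idea can be illustrated with a deliberately tiny numpy sketch: a frozen "backbone" (here just a fixed random projection standing in for pre-trained convolutional layers, e.g. from an ImageNet model) and a new linear softmax head, which is the only part trained on the few available shoe-print examples. All names and sizes below are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "backbone": a fixed random projection standing in for
# pre-trained layers; in real transfer learning these weights come
# from a network trained on a large source dataset.
W_backbone = rng.normal(size=(64, 16))
def backbone(x):
    return np.tanh(x @ W_backbone)       # never updated

def train_head(X, y, classes, epochs=300, lr=0.5):
    """Train only the new softmax head on top of frozen features."""
    F = backbone(X)                       # (N, 16) frozen features
    W = np.zeros((16, classes))           # the only trainable weights
    Y = np.eye(classes)[y]                # one-hot labels
    for _ in range(epochs):
        logits = F @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * F.T @ (p - Y) / len(X)  # cross-entropy gradient step
    return W

def predict(X, W):
    return backbone(X) @ W
```

Freezing the backbone keeps the number of trainable parameters small, which is exactly what makes the approach viable with the one-example-per-class datasets common in this field.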
7 Conclusion

Shoe-prints represent an important clue at the crime scene for the proper progress of investigations aiming to identify criminals. A large variety of handcrafted features have been used for automatic shoe-print identification. These features have shown good performance in limited and controlled scenarios. Unfortunately, they fail when dealing with large intra-class variations caused by noise, occlusion, rotation and various scale distortions. A good alternative to these conventional features are learned ones, e.g. deep learning, which generalize better in more complicated scenarios. To be effective, however, these models need to be trained on a large amount of data.
Large public datasets are of extreme importance for any comparative study of performance, including a consistent evaluation. The main problem noted in the research topic of shoe-print identification is the absence of public benchmarks with pre-defined and standardized evaluation protocols. Most published techniques in the literature were evaluated on unrealistic, synthetically generated images. This clearly shows the need to build new large datasets in order to boost the shoe-print research topic.
The work carried out by Hugo Proença was supported by PEst-OE/EEI/LA0008/2013 research program.
-  C. Huynh, P. de Chazal, D. McErlean, R. B. Reilly, T. J. Hannigan, and L. M. Fleury, “Automatic classification of shoeprints for use in forensic science based on the fourier transform,” in IEEE International Conference on Image Processing (ICIP), 2003, vol. 3, 2003, pp. 569–572, DOI: 10.1109/ICIP.2003.1247308.
-  E. Locard, “The analysis of dust traces,” Am. J. Police Sci., vol. 1, p. 276, 1930, DOI: 10.1007/s001140050.
-  M. Vagač, M. Povinskỳ, and M. Melicherčík, “Detection of shoe sole features using dnn,” in 14th IEEE International Scientific Conference on Informatics, 2017, 2017, pp. 416–419, DOI: 10.1109/INFORMATICS.2017.8327285.
-  Y. Liu, D. Hu, J. Fan, F. Wang, and D. Zhang, “Multi-feature fusion for crime scene investigation image retrieval,” in IEEE International Conference on Digital Image Computing: Techniques and Applications (DICTA), 2017, 2017, pp. 1–7, DOI: 10.1109/DICTA.2017.8227466.
-  M. Benecke, “Dna typing in forensic medicine and in criminal investigations: a current survey,” Naturwissenschaften, vol. 84, no. 5, pp. 181–188, 1997, DOI: 10.1007/s001140050375.
-  J. R. Robertson, Forensic examination of hair. CRC Press, 2002, DOI: 10.1080/00450610109410825.
-  J. S. Buckleton, J.-A. Bright, and D. Taylor, Forensic DNA evidence interpretation. CRC press, 2016, ISBN: 9781482258899.
-  B. Robertson, G. A. Vignaux, and C. E. Berger, Interpreting evidence: evaluating forensic science in the courtroom. John Wiley & Sons, 2016, ISBN: 978-1-118-49243-7.
-  I. Rida, A. Bouridane, G. L. Marcialis, and P. Tuveri, “Improved human gait recognition,” in International Conference on Image Analysis and Processing. Springer, 2015, pp. 119–129, DOI: 10.1007/978-3-319-23234-8_12.
-  T. Thompson and S. Black, Forensic human identification: An introduction. CRC press, 2006, ISBN: 9780849339547.
-  W. J. Bodziak, Footwear impression evidence: detection, recovery and examination. CRC Press, 2017, ISBN: 9780849310454.
-  B. Kong, D. Ramanan, and C. Fowlkes, “Cross-domain forensic shoeprint matching,” in British Machine Vision Conference (BMVC), 2017, pp. 1–5, DOI: 10.5244/C.21.38.
-  I. Rida, X. Jiang, and G. L. Marcialis, “Human body part selection by group lasso of motion for model-free gait recognition,” IEEE Signal Processing Letters, vol. 23, no. 1, pp. 154–158, 2016, DOI: 10.1109/LSP.2015.2507200.
-  I. Rida, R. Herault, G. L. Marcialis, and G. Gasso, “Palmprint recognition with an efficient data driven ensemble classifier,” Pattern Recognition Letters, 2018, DOI: 10.1016/j.patrec.2018.04.033.
-  S. Bakshi and T. Tuglular, “Security through human-factors and biometrics,” in 6th International Conference on Security of Information and Networks. ACM, 2013, pp. 463–463, DOI: 10.1145/2523514.2523597.
-  I. Rida, N. Al-maadeed, and S. Al-maadeed, “Robust gait recognition: a comprehensive survey,” IET Biometrics, 2018, DOI: 10.1049/iet-bmt.2018.5063.
-  G. Alexandre, “Computerized classification of the shoeprints of burglars’ soles,” Forensic Science International, vol. 82, no. 1, pp. 59–65, 1996, DOI: 10.1016/0379-0738(96)01967-6.
-  J. H. Kerstholt, R. Paashuis, and M. Sjerps, “Shoe print examinations: effects of expectation, complexity and experience,” Forensic science international, vol. 165, no. 1, pp. 30–34, 2007, DOI: 10.1016/j.forsciint.2006.02.039.
-  I. Rida, N. Al-Maadeed, S. Al-Maadeed, and S. Bakshi, “A comprehensive overview of feature representation for biometric recognition,” Multimedia Tools and Applications, pp. 1–24, 2018, DOI: 10.1007/s11042-018-6808-5.
-  I. Rida, L. Boubchir, N. Al-Maadeed, S. Al-Maadeed, and A. Bouridane, “Robust model-free gait recognition by statistical dependency feature selection and globality-locality preserving projections,” in 39th IEEE International Conference on Telecommunications and Signal Processing (TSP), 2016, 2016, pp. 652–655, DOI: 10.1109/TSP.2016.7760963.
-  I. Rida, S. Al Maadeed, X. Jiang, F. Lunke, and A. Bensrhair, “An ensemble learning method based on random subspace sampling for palmprint identification,” in 2018 IEEE International conference on acoustics, speech and signal processing (ICASSP). IEEE, 2018, pp. 2047–2051.
-  A. Bouridane, A. Alexander, M. Nibouche, and D. Crookes, “Application of fractals to the detection and classification of shoeprints,” in IEEE International Conference on Image Processing (ICIP), 2000, vol. 1, 2000, pp. 474–477, DOI: 10.1109/ICIP.2000.900998.
-  P. De Chazal, J. Flynn, and R. B. Reilly, “Automated processing of shoeprint images based on the fourier transform for use in forensic science,” IEEE transactions on pattern analysis and machine intelligence, vol. 27, no. 3, pp. 341–350, 2005, DOI: 10.1109/TPAMI.2005.48.
-  A. V. Oppenheim and J. S. Lim, “The importance of phase in signals,” Proceedings of the IEEE, vol. 69, no. 5, pp. 529–541, 1981, DOI: 10.1109/PROC.1981.12022.
-  I. Rida, S. Almaadeed, and A. Bouridane, “Gait recognition based on modified phase-only correlation,” Signal, Image and Video Processing, vol. 10, no. 3, pp. 463–470, 2016, DOI: 10.1007/s11760-015-0766-4.
-  M. Gueham, A. Bouridane, and D. Crookes, “Automatic recognition of partial shoeprints based on phase-only correlation,” in IEEE International Conference on Image Processing (ICIP), 2007, vol. 4, 2007, pp. 441–444, DOI: 10.1109/ICIP.2007.4380049.
-  ——, “Automatic classification of partial shoeprints using advanced correlation filters for use in forensic science,” in 19th IEEE International Conference on Pattern Recognition (ICPR), 2008, 2008, pp. 1–4, DOI: 10.1109/ICPR.2008.4761058.
-  M. Gueham, A. Bouridane, D. Crookes, and O. Nibouche, “Automatic recognition of shoeprints using fourier-mellin transform,” in IEEE Conference on Adaptive Hardware and Systems (AHS), 2008, 2008, pp. 487–491, DOI: 10.1109/AHS.2008.48.
-  G. AlGarni and M. Hamiane, “A novel technique for automatic shoeprint image retrieval,” Forensic science international, vol. 181, no. 1-3, pp. 10–14, 2008, DOI: 10.1016/j.forsciint.2008.07.004.
-  M.-Q. Jing, W.-J. Ho, and L.-H. Chen, “A novel method for shoeprints recognition and classification,” in IEEE International Conference on Machine Learning and Cybernetics, 2009, vol. 5, 2009, pp. 2846–2851, DOI: 10.1109/ICMLC.2009.5212580.
-  P. M. Patil and J. V. Kulkarni, “Rotation and intensity invariant shoeprint matching using gabor transform with application to forensic science,” Pattern Recognition, vol. 42, no. 7, pp. 1308–1317, 2009, DOI: 10.1016/j.patcog.2008.11.008.
-  W. Pei, Y.-y. Zhu, Y.-n. Na, and X.-g. He, “Multiscale gabor wavelet for shoeprint image retrieval,” in 2nd IEEE International Congress on Image and Signal Processing (CISP), 2009, 2009, pp. 1–5, DOI: 10.1109/CISP.2009.5304124.
-  C. Tang and X. Dai, “Automatic shoe sole pattern retrieval system based on image content of shoeprint,” in IEEE International Conference on Computer Design and Applications (ICCDA), 2010, vol. 4, 2010, pp. 602–605, DOI: 10.1109/DICTA.2017.8227466.
-  X. Li, M. Wu, and Z. Shi, “The retrieval of shoeprint images based on the integral histogram of the gabor transform domain,” in International Conference on Intelligent Information Processing. Springer, 2014, pp. 249–258, DOI: 10.1504/IJGCRSIS.2012.049981.
-  C.-H. Wei and C.-Y. Gwo, “Alignment of core point for shoeprint analysis and retrieval,” in IEEE International Conference on Information Science, Electronics and Electrical Engineering (ISEEE), 2014, vol. 2. IEEE, 2014, pp. 1069–1072, DOI: 10.1109/InfoSEEE.2014.6947833.
-  X. Kong, C. Yang, and F. Zheng, “A novel method for shoeprint recognition in crime scenes,” in Chinese Conference on Biometric Recognition. Springer, 2014, pp. 498–505, DOI: 10.1007/978-3-319-12484-1_57.
-  A. Kortylewski and T. Vetter, “Probabilistic compositional active basis models for robust pattern recognition.” in British Machine Vision Conference (BMVC), 2016, pp. 1–12.
-  X. Wang, C. Zhang, Y. Wu, and Y. Shu, “A manifold ranking based method using hybrid features for crime scene shoeprint retrieval,” Multimedia Tools and Applications, vol. 76, no. 20, pp. 21 629–21 649, 2017, DOI: 10.1007/s11042-016-4029-3.
-  Y. Zhang, H. Fu, E. Dellandréa, and L. Chen, “Adapting convolutional neural networks on the shoeprint retrieval for forensic use,” in Chinese Conference on Biometric Recognition. Springer, 2017, pp. 520–527, DOI: 10.1109/InfoSEEE.2014.6947833.
-  L. Zhang and N. Allinson, “Automatic shoeprint retrieval system for use in forensic investigations,” in UK Workshop On Computational Intelligence, vol. 99, 2005, pp. 137–142.
-  M. Pavlou and N. M. Allinson, “Automatic extraction and classification of footwear patterns,” in International Conference on Intelligent Data Engineering and Automated Learning. Springer, 2006, pp. 721–728, DOI: 10.1007/11875581_87.
-  L. Ghouti, A. Bouridane, and D. Crookes, “Classification of shoeprint images using directional filter banks,” in International Conference on Visual Information Engineering (VIE), 2006. IET, 2006, pp. 167–173, DOI: 10.1049/cp:20060522.
-  H. Su, D. Crookes, A. Bouridane, and M. Gueham, “Local image features for shoeprint image retrieval,” in British machine vision conference (BMVC), vol. 2007, 2007, pp. 1–10, DOI: 10.5244/C.21.38.
-  V. Ramakrishnan and S. Srihari, “Extraction of shoe-print patterns from impression evidence using conditional random fields,” in 19th IEEE International Conference on Pattern Recognition (ICPR), 2008, 2008, pp. 1–4, DOI: 10.1109/ICPR.2008.4761881.
-  M. Pavlou and N. M. Allinson, “Automated encoding of footwear patterns for fast indexing,” Image and Vision Computing, vol. 27, no. 4, pp. 402–409, 2009, DOI: 10.1016/j.imavis.2008.06.003.
-  O. Nibouche, A. Bouridane, M. Gueham, and M. Laadjel, “Rotation invariant matching of partial shoeprints,” in 13th IEEE International Machine Vision and Image Processing (IMVIP), 2009, 2009, pp. 94–98, DOI: 10.1109/IMVIP.2009.24.
-  F. Dardi, F. Cervelli, and S. Carrato, “An automatic footwear retrieval system for shoe marks from real crime scenes,” in 6th IEEE International Symposium on Image and Signal Processing and Analysis (ISPA), 2009, 2009, pp. 668–672, DOI: 10.1109/ISPA.2009.5297667.
-  ——, “A texture based shoe retrieval system for shoe marks of real crime scenes,” in International Conference on Image Analysis and Processing (ICIAP). Springer, 2009, pp. 384–393, DOI: 10.1007/978-3-642-04146-4_42.
-  ——, “A combined approach for footwear retrieval of crime scene shoe marks,” in 3rd International Conference on Crime Detection and Prevention (ICDP), 2009. IET, 2009, pp. 1–6, DOI: 10.1049/ic.2009.0237.
-  Y. Tang, S. N. Srihari, H. Kasiviswanathan, and J. J. Corso, “Footwear print retrieval system for real crime scene marks,” in International Workshop on Computational Forensics. Springer, 2010, pp. 88–100, DOI: 10.1007/978-3-642-19376-7_8.
-  Z. Li, C. Wei, Y. Li, and T. Sun, “Research of shoeprint image stream retrieval algorithm with scale-invariance feature transform,” in IEEE International Conference on Multimedia Technology (ICMT), 2011, pp. 5488–5491, DOI: 10.1109/ICMT.2011.6002147.
-  S. Rathinavel and S. Arumugam, “Full shoe print recognition based on pass band dct and partial shoe print identification using overlapped block method for degraded images,” International Journal of Computer Applications, vol. 26, no. 8, pp. 16–21, 2011, DOI: 10.5120/3126-4301.
-  M. Hasegawa and S. Tabbone, “A local adaptation of the histogram radon transform descriptor: an application to a shoe print dataset,” in Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR). Springer, 2012, pp. 675–683, DOI: 10.1007/978-3-642-34166-3_74.
-  Y. Tang, S. N. Srihari, and H. Kasiviswanathan, “Similarity and clustering of footwear prints,” in IEEE International Conference on Granular Computing (GrC), 2010, pp. 459–464, DOI: 10.1109/GrC.2010.175.
-  Y. Tang, H. Kasiviswanathan, and S. N. Srihari, “An efficient clustering-based retrieval framework for real crime scene footwear marks,” International Journal of Granular Computing, Rough Sets and Intelligent Systems, vol. 2, no. 4, pp. 327–360, 2012, DOI: 10.1504/IJGCRSIS.2012.049981.
-  C.-H. Wei, Y. Li, and C.-Y. Gwo, “The use of scale-invariance feature transform approach to recognize and retrieve incomplete shoeprints,” Journal of Forensic Sciences, vol. 58, no. 3, pp. 625–630, 2013, DOI: 10.1111/1556-4029.12089.
-  X. Wang, H. Sun, Q. Yu, and C. Zhang, “Automatic shoeprint retrieval algorithm for real crime scenes,” in Asian Conference on Computer Vision (ACCV). Springer, 2014, pp. 399–413.
-  A. Kortylewski, T. Albrecht, and T. Vetter, “Unsupervised footwear impression analysis and retrieval from crime scene data,” in Asian Conference on Computer Vision (ACCV). Springer, 2014, pp. 644–658, DOI: 10.1007/978-3-319-16628-5_46.
-  S. Almaadeed, A. Bouridane, D. Crookes, and O. Nibouche, “Partial shoeprint retrieval using multiple point-of-interest detectors and sift descriptors,” Integrated Computer-Aided Engineering, vol. 22, no. 1, pp. 41–58, 2015, DOI: 10.3233/ICA-140480.
-  S. Alizadeh and C. Kose, “Automatic retrieval of shoeprint images using blocked sparse representation,” Forensic Science International, vol. 277, pp. 103–114, 2017, DOI: 10.1016/j.forsciint.2017.05.025.
-  S. Krig, “Interest point detector and feature descriptor survey,” in Computer Vision Metrics. Springer, 2016, pp. 187–246, DOI: 10.1007/978-3-319-33762-3_6.
-  A. Kortylewski, “Model-based image analysis for forensic shoe print recognition,” Ph.D. dissertation, University of Basel, 2017.
-  I. Rida, N. Al Maadeed, G. L. Marcialis, A. Bouridane, R. Herault, and G. Gasso, “Improved model-free gait recognition based on human body part,” in Biometric Security and Privacy. Springer, 2017, pp. 141–161, DOI: 10.1007/978-3-319-47301-7_6.
-  I. Rida, S. Al Maadeed, and A. Bouridane, “Unsupervised feature selection method for improved human gait recognition,” in 23rd European Signal Processing Conference (EUSIPCO). IEEE, 2015, pp. 1128–1132, DOI: 10.1109/EUSIPCO.2015.7362559.
-  S. Al Maadeed, X. Jiang, I. Rida, and A. Bouridane, “Palmprint identification using sparse and dense hybrid representation,” Multimedia Tools and Applications, pp. 1–15, 2018, DOI: 10.1007/s11042-018-5655-8.
-  I. Rida, S. Al-Maadeed, A. Mahmood, A. Bouridane, and S. Bakshi, “Palmprint identification using an ensemble of sparse representations,” IEEE Access, vol. 6, pp. 3241–3248, 2018, DOI: 10.1109/ACCESS.2017.2787666.
-  Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, p. 436, 2015, DOI: 10.1038/nature14539.
-  Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998, DOI: 10.1109/5.726791.