In this paper, we propose a method for image-set classification based on convex cone models, which can handle various types of features with non-negative constraints. We discuss its effectiveness in combination with convolutional neural network (CNN) features extracted from a high-level hidden layer of a learned CNN.
For the last decade, image set-based classification methods [1, 2, 3, 4, 5, 6, 7], particularly subspace-based methods such as the mutual subspace method (MSM) and the constrained MSM (CMSM) [2, 6], have been gaining substantial attention for various applications to multi-view images and videos, e.g., 3D object recognition and motion analysis, as they can handle a set of images effectively. In these methods, a set of images is compactly represented by a subspace in a high-dimensional vector space, where the subspace is generated by applying PCA to the image set without data centering. An input subspace is classified using the canonical angles [8, 9] between the input subspace and each reference subspace as the similarity index.
Conventional subspace-based methods assume a raw intensity vector or a hand-crafted feature as the input. Regarding more discriminative features, many recent studies have revealed that CNN features are effective inputs for various types of classifiers [10, 11, 12, 13]. Inspired by these results, subspace-based methods with CNN features have been proposed and have achieved high classification performance [14].
CNN feature vectors have only non-negative values when the rectified linear unit (ReLU) [15] is used as the activation function. This characteristic does not allow combinations of CNN features with negative coefficients; accordingly, a set of CNN features forms a convex cone instead of a subspace in a high-dimensional vector space, as described in Sec. II-C. For example, it is well known that a set of frontal face images under various illumination conditions forms a convex cone, referred to as the illumination cone [16, 17]. Several previous studies have demonstrated the advantages of the convex cone representation over the subspace representation [18, 19]. These advantages naturally motivated us to replace a subspace with a convex cone when modeling a set of CNN features.
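As a minimal numerical illustration of this point (our own sketch, not part of the original formulation), the snippet below checks that ReLU outputs are component-wise non-negative and that any combination of them with non-negative coefficients stays in the non-negative orthant. This closure under non-negative, but not negative, coefficients is exactly why such feature sets are modeled by convex cones rather than subspaces:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated pre-activation outputs of a hidden layer (either sign).
pre_activations = rng.normal(size=(5, 8))

# ReLU clamps every component to be non-negative.
features = np.maximum(pre_activations, 0.0)

# A non-negative combination of non-negative vectors is non-negative,
# so the set of all such combinations is a convex cone: unlike a
# subspace, it is not closed under negative coefficients.
coeffs = rng.uniform(0.0, 2.0, size=5)
combo = coeffs @ features

print(bool(np.all(features >= 0.0)), bool(np.all(combo >= 0.0)))  # True True
```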
In this framework, it is first necessary to consider how to calculate the geometric similarity between two convex cones. To this end, we define multiple angles between two convex cones by analogy with the canonical angles [8, 9] between two subspaces. Although the canonical angles between two subspaces can be obtained analytically from the orthonormal basis vectors of the two subspaces, the definition of angles between two convex cones is not trivial, as we need to consider the non-negative constraint. In this paper, we define multiple angles between convex cones sequentially, from the smallest to the largest, by repeatedly applying the alternating least squares (ALS) method. Then, the geometric similarity between two convex cones is defined based on the obtained angles. We call the classification method using this similarity index the mutual convex cone method (MCM), corresponding to the mutual subspace method (MSM).
Moreover, to enhance the performance of the MCM, we introduce a discriminant space, which maximizes the between-class variance (gap) among convex cones projected onto it and minimizes the within-class variance of the projected convex cones, similar to Fisher discriminant analysis. The class separability can be increased by projecting the class convex cones onto the discriminant space, as shown in Fig. 1. As a result, the classification ability of the MCM is enhanced, similar to the effect of projecting class subspaces onto a generalized difference subspace (GDS) in CMSM. Finally, we perform the classification using the angles between the projected convex cones. We call this enhanced method the "constrained mutual convex cone method (CMCM)," corresponding to the constrained MSM (CMSM).
The main contributions of this paper are summarized as follows.
We introduce a convex cone representation to accurately and compactly represent a set of CNN features.
We introduce two novel mechanisms in our image set-based classification: a) multiple angles between two convex cones to measure similarity and b) a discriminant space to increase the class separability among convex cones.
We propose two novel image set-based classification methods, called the MCM and CMCM, based on convex cone representation and the discriminant space.
The paper is organized as follows. In Section II, we describe the algorithms of conventional methods, such as MSM and CMSM. In Section III, we describe the details of the proposed methods. In Section IV, we demonstrate the validity of the proposed methods through classification experiments using a private database of multi-view hand shapes and two public datasets, i.e., the ETH-80 and CMU Multi-PIE face datasets. Section V concludes the paper.
II Related Work
In this section, we first describe the algorithms for the MSM and CMSM, which are standard methods for image set classification. Then, we provide an overview of the concept of convex cones.
II-A Mutual subspace method based on canonical angles
MSM is a classifier based on the canonical angles between two subspaces, where each subspace represents an image set. Let $\mathcal{V}_1$ and $\mathcal{V}_2$ be $N_1$- and $N_2$-dimensional subspaces, with $N_1 \le N_2$. The canonical angles $\{\theta_i\}_{i=1}^{N_1}$ are defined recursively as

$$\cos\theta_i = \max_{\mathbf{u} \in \mathcal{V}_1}\ \max_{\mathbf{v} \in \mathcal{V}_2} \mathbf{u}^{\top}\mathbf{v}, \quad \|\mathbf{u}\| = \|\mathbf{v}\| = 1, \quad \mathbf{u}^{\top}\mathbf{u}_j = \mathbf{v}^{\top}\mathbf{v}_j = 0, \ j = 1, \dots, i-1,$$

where $\mathbf{u}_i$ and $\mathbf{v}_i$ are the canonical vectors forming the $i$-th smallest canonical angle $\theta_i$ between $\mathcal{V}_1$ and $\mathcal{V}_2$. The $i$-th canonical angle is the smallest angle in the direction orthogonal to the canonical angles $\theta_1, \dots, \theta_{i-1}$, as shown in Fig. 3.
The canonical angles can be calculated from the orthogonal projection matrices onto the subspaces $\mathcal{V}_1$ and $\mathcal{V}_2$. Let $\{\boldsymbol{\phi}_i\}_{i=1}^{N_1}$ be orthonormal basis vectors of $\mathcal{V}_1$ and $\{\boldsymbol{\psi}_i\}_{i=1}^{N_2}$ be orthonormal basis vectors of $\mathcal{V}_2$. The projection matrices $\mathbf{P}_1$ and $\mathbf{P}_2$ are calculated as $\mathbf{P}_1 = \sum_{i=1}^{N_1}\boldsymbol{\phi}_i\boldsymbol{\phi}_i^{\top}$ and $\mathbf{P}_2 = \sum_{i=1}^{N_2}\boldsymbol{\psi}_i\boldsymbol{\psi}_i^{\top}$, respectively. Then, $\cos^2\theta_i$ is the $i$-th largest eigenvalue of $\mathbf{P}_1\mathbf{P}_2$ or $\mathbf{P}_2\mathbf{P}_1$. Alternatively, the canonical angles can be easily obtained by applying the singular value decomposition (SVD) to the orthonormal basis vectors of the subspaces: the singular values of $\boldsymbol{\Phi}_1^{\top}\boldsymbol{\Phi}_2$, where $\boldsymbol{\Phi}_1 = [\boldsymbol{\phi}_1, \dots, \boldsymbol{\phi}_{N_1}]$ and $\boldsymbol{\Phi}_2 = [\boldsymbol{\psi}_1, \dots, \boldsymbol{\psi}_{N_2}]$, are $\{\cos\theta_i\}$.
The geometric similarity between the two subspaces $\mathcal{V}_1$ and $\mathcal{V}_2$ is defined using the canonical angles as follows:

$$\mathrm{sim}(\mathcal{V}_1, \mathcal{V}_2) = \frac{1}{N_1}\sum_{i=1}^{N_1}\cos^2\theta_i.$$

In MSM, an input subspace is classified by comparing it with each class subspace using this similarity.
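The procedure above can be sketched concretely as follows (an illustrative implementation with our own function and variable names, assuming columns of each data matrix are flattened image features): subspaces are obtained by uncentered PCA, i.e., the leading left singular vectors of each data matrix, and the MSM similarity is the mean of $\cos^2\theta_i$ read off from the singular values of $\boldsymbol{\Phi}_1^{\top}\boldsymbol{\Phi}_2$:

```python
import numpy as np

def msm_similarity(X1, X2, n_dim):
    """MSM similarity between two image sets (columns = feature vectors).

    Subspaces come from PCA without centering, i.e. the leading left
    singular vectors of each data matrix; the cosines of the canonical
    angles are the singular values of Phi1.T @ Phi2.
    """
    Phi1 = np.linalg.svd(X1, full_matrices=False)[0][:, :n_dim]
    Phi2 = np.linalg.svd(X2, full_matrices=False)[0][:, :n_dim]
    cosines = np.clip(np.linalg.svd(Phi1.T @ Phi2, compute_uv=False), 0.0, 1.0)
    return float(np.mean(cosines ** 2))  # mean of cos^2(theta_i)

rng = np.random.default_rng(1)
L = rng.normal(size=(50, 3))                 # common 3-dim subspace
A = L @ rng.normal(size=(3, 10))             # set drawn from span(L)
B = L @ rng.normal(size=(3, 10))             # another set from span(L)
C = rng.normal(size=(50, 10))                # unrelated set

sim_same = msm_similarity(A, B, 3)
sim_diff = msm_similarity(A, C, 3)
print(sim_same > sim_diff)  # True: sets from the same subspace score ~1
```

Two sets drawn from the same subspace give a similarity close to 1, while an unrelated set scores much lower.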
II-B Constrained MSM
The essence of the constrained MSM (CMSM) is the application of the MSM to a generalized difference subspace (GDS), as shown in Fig. 4. GDS is designed to contain only the difference components among subspaces. Thus, the projection of class subspaces onto GDS can increase the class separability among the class subspaces, substantially improving the classification ability of MSM.
II-C Convex cone model
In this subsection, we explain the definition of a convex cone and the projection of a vector onto a convex cone. A convex cone $\mathcal{C}$ is defined by finite basis vectors $\{\mathbf{b}_i\}_{i=1}^{r}$ as follows:

$$\mathcal{C} = \left\{ \mathbf{x} \;\middle|\; \mathbf{x} = \sum_{i=1}^{r} w_i \mathbf{b}_i, \ w_i \ge 0 \right\}.$$

As indicated by this definition, the difference between the concepts of a subspace and a convex cone is whether the combination coefficients $w_i$ are constrained to be non-negative.
Given a set of feature vectors $\{\mathbf{x}_i\}_{i=1}^{N}$, the basis vectors $\{\mathbf{b}_i\}_{i=1}^{r}$ of a convex cone representing the distribution of $\{\mathbf{x}_i\}$ can be obtained by non-negative matrix factorization (NMF) [24, 25]. Let $\mathbf{X} = [\mathbf{x}_1, \dots, \mathbf{x}_N]$ and $\mathbf{B} = [\mathbf{b}_1, \dots, \mathbf{b}_r]$. NMF generates the basis vectors by solving the following optimization problem:

$$\min_{\mathbf{B} \ge 0,\, \mathbf{W} \ge 0} \|\mathbf{X} - \mathbf{B}\mathbf{W}\|_F^2,$$

where $\|\cdot\|_F$ denotes the Frobenius norm. We use the alternating non-negativity-constrained least squares-based method to solve this problem.
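The alternating non-negativity-constrained least squares idea can be sketched as follows. This is a deliberately naive illustration with our own names, not the optimized toolbox implementation used in the experiments: with $\mathbf{B}$ fixed, each column of $\mathbf{W}$ is a small NNLS problem, and with $\mathbf{W}$ fixed, each row of $\mathbf{B}$ is one as well:

```python
import numpy as np
from scipy.optimize import nnls

def nmf_anls(X, r, n_iter=50, seed=0):
    """Naive NMF by alternating non-negativity-constrained least squares:
    with B fixed, each column of W is an NNLS problem; with W fixed,
    each row of B is an NNLS problem."""
    rng = np.random.default_rng(seed)
    d, N = X.shape
    B = rng.uniform(0.1, 1.0, size=(d, r))
    W = np.zeros((r, N))
    for _ in range(n_iter):
        for j in range(N):                       # update coefficients
            W[:, j] = nnls(B, X[:, j])[0]
        for i in range(d):                       # update basis rows
            B[i, :] = nnls(W.T, X[i, :])[0]
    return B, W

# Data with an exact non-negative rank-3 factorization: the
# reconstruction should become close after a few alternations.
rng = np.random.default_rng(2)
X = rng.uniform(size=(20, 3)) @ rng.uniform(size=(3, 30))
B, W = nmf_anls(X, r=3)
rel_err = np.linalg.norm(X - B @ W) / np.linalg.norm(X)
print(rel_err < 0.1)  # True: the factorization fits the data closely
```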
Although the basis vectors can be easily obtained by NMF, the projection of a vector onto the convex cone is slightly complicated by the non-negative constraint on the coefficients. A vector $\mathbf{x}$ is projected onto the convex cone $\mathcal{C}$ by applying the non-negative least squares method as follows:

$$\mathbf{w}^{*} = \arg\min_{\mathbf{w} \ge 0} \|\mathbf{x} - \mathbf{B}\mathbf{w}\|_2^2.$$

The projected vector is obtained as $\hat{\mathbf{x}} = \mathbf{B}\mathbf{w}^{*}$. In the end, the angle $\theta$ between the convex cone $\mathcal{C}$ and the vector $\mathbf{x}$ can be calculated as follows:

$$\theta = \cos^{-1}\frac{\mathbf{x}^{\top}\hat{\mathbf{x}}}{\|\mathbf{x}\|\,\|\hat{\mathbf{x}}\|}.$$
III Proposed Method
In this section, we explain the algorithms in the MCM and CMCM, after establishing the definition of geometric similarity between two convex cones.
III-A Geometric similarity between two convex cones
We define the geometric similarity between two convex cones. To this end, we consider how to define multiple angles between two convex cones. Let two convex cones $\mathcal{C}_1$ and $\mathcal{C}_2$ be formed by basis vectors $\{\mathbf{b}^1_i\}_{i=1}^{r_1}$ and $\{\mathbf{b}^2_i\}_{i=1}^{r_2}$, respectively, and assume that $r_1 \le r_2$ for convenience. The angles between two convex cones cannot be obtained analytically like the canonical angles between two subspaces, as it is necessary to consider the non-negative constraint. Alternatively, we find two vectors, one in each cone, that are closest to each other, and define the angle between the two convex cones as the angle formed by these two vectors. In this way, we sequentially define multiple angles, from the smallest to the largest, in order.
First, we search for a pair of unit vectors $\mathbf{u}_1 \in \mathcal{C}_1$ and $\mathbf{v}_1 \in \mathcal{C}_2$ that have the maximum correlation, using the alternating least squares (ALS) method. The first angle $\theta_1$ is defined as the angle formed by $\mathbf{u}_1$ and $\mathbf{v}_1$. The pair can be found by using the following algorithm:

Algorithm to search for the pair $\mathbf{u}_1$ and $\mathbf{v}_1$. Let $\rho_1(\cdot)$ and $\rho_2(\cdot)$ be the projections of a vector onto $\mathcal{C}_1$ and $\mathcal{C}_2$, respectively. For the details of the projection, see Section II-C.
1) Randomly initialize $\mathbf{v}$.
2) Project $\mathbf{v}$ onto $\mathcal{C}_1$ and normalize the projection, $\mathbf{u} = \rho_1(\mathbf{v})/\|\rho_1(\mathbf{v})\|$; then project $\mathbf{u}$ onto $\mathcal{C}_2$ and normalize the projection, $\mathbf{v}' = \rho_2(\mathbf{u})/\|\rho_2(\mathbf{u})\|$.
3) If $\|\mathbf{v}' - \mathbf{v}\|$ is sufficiently small, the procedure is completed. Otherwise, return to 2), setting $\mathbf{v} = \mathbf{v}'$.
For the second angle $\theta_2$, we search for a pair of vectors $\mathbf{u}_2$ and $\mathbf{v}_2$ with the maximum correlation, but with the minimum correlation with $\mathbf{u}_1$ and $\mathbf{v}_1$. Such a pair can be found by applying the ALS to the projected convex cones $\mathcal{C}'_1$ and $\mathcal{C}'_2$ obtained by projecting $\mathcal{C}_1$ and $\mathcal{C}_2$ onto the orthogonal complement space of the subspace spanned by the vectors $\mathbf{u}_1$ and $\mathbf{v}_1$, as shown in Fig. 5. Each projected cone is formed by the projections of the original basis vectors. In this way, we can obtain all of the pairs of vectors forming the $i$-th angle $\theta_i$, $i = 1, \dots, r_1$.

With the resulting angles $\{\theta_i\}_{i=1}^{r_1}$, we define the geometric similarity between the two convex cones $\mathcal{C}_1$ and $\mathcal{C}_2$ as follows:

$$\mathrm{sim}(\mathcal{C}_1, \mathcal{C}_2) = \frac{1}{r_1}\sum_{i=1}^{r_1}\cos^2\theta_i. \quad (7)$$
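The ALS search for the first angle can be sketched as follows (a simplified illustration with our own names; the deflation step for the second and later angles, i.e., re-running the search on the cones projected onto the orthogonal complement of the found pair, is omitted, as is handling of degenerate zero projections):

```python
import numpy as np
from scipy.optimize import nnls

def project(B, x):
    """Non-negative least-squares projection of x onto the cone of B."""
    return B @ nnls(B, x)[0]

def first_angle(B1, B2, n_iter=100, seed=0):
    """Alternating least squares search for the closest unit vectors
    u in cone(B1) and v in cone(B2); returns their angle theta_1."""
    rng = np.random.default_rng(seed)
    v = rng.uniform(0.1, 1.0, size=B1.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        u = project(B1, v)
        u /= np.linalg.norm(u)
        v_new = project(B2, u)
        v_new /= np.linalg.norm(v_new)
        converged = np.linalg.norm(v_new - v) < 1e-10
        v = v_new
        if converged:
            break
    return np.arccos(np.clip(u @ v, -1.0, 1.0))

# Two cones sharing the direction e1: the first angle should be ~0.
B1 = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # xy quadrant
B2 = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])  # xz quadrant
theta1 = first_angle(B1, B2)
print(theta1)  # ~0.0
```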
III-B Mutual convex cone method
The mutual convex cone method (MCM) classifies an input convex cone based on the similarities, defined by Eq. (7), between the input convex cone and the class convex cones. MCM consists of two phases, a training phase and a recognition phase, as summarized in Fig. 6.
Training phase: given $C$ classes of image sets, each containing $N$ images.
1) CNN features are extracted from the images of class $c$, and the extracted CNN features are normalized to unit length.
2) The basis vectors of the class-$c$ convex cone are generated by applying NMF to the set of normalized CNN features.
3) These basis vectors are registered as the reference convex cone of class $c$.
4) The above process is conducted for all $C$ classes.
Recognition phase:
1) A set of images is input.
2) CNN features are extracted from the input images, and the extracted CNN features are normalized to unit length.
3) The basis vectors of the input convex cone are generated by applying NMF to the input set of normalized CNN features.
4) The input image set is classified based on the similarity (Eq. (7)) between the input convex cone and each class reference convex cone.
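Putting the two phases together, a toy version of the pipeline might look like the sketch below. Note that the similarity function here is a crude stand-in for Eq. (7): it projects each input basis vector onto the reference cone rather than running the full sequential-angle ALS procedure, and all names and data are illustrative:

```python
import numpy as np
from scipy.optimize import nnls

def cone_basis(X, r, n_iter=30, seed=0):
    """Cone basis via a simple alternating-NNLS NMF (X: d x N, X >= 0)."""
    rng = np.random.default_rng(seed)
    d, N = X.shape
    B = rng.uniform(0.1, 1.0, size=(d, r))
    W = np.zeros((r, N))
    for _ in range(n_iter):
        for j in range(N):
            W[:, j] = nnls(B, X[:, j])[0]
        for i in range(d):
            B[i, :] = nnls(W.T, X[i, :])[0]
    return B

def cone_similarity(B_in, B_ref):
    """Crude stand-in for Eq. (7): mean cos^2 between each input basis
    vector and its NNLS projection onto the reference cone (the full
    method instead uses the sequential angles found by ALS)."""
    sims = []
    for b in B_in.T:
        p = B_ref @ nnls(B_ref, b)[0]
        denom = np.linalg.norm(p) * np.linalg.norm(b)
        sims.append(0.0 if denom == 0.0 else float((b @ p / denom) ** 2))
    return float(np.mean(sims))

rng = np.random.default_rng(3)
class0 = rng.uniform(size=(10, 2)) @ rng.uniform(size=(2, 40))  # class-0 features
class1 = np.vstack([np.zeros((5, 40)),                          # class-1 features
                    rng.uniform(size=(5, 40))])                 # on a different support
refs = [cone_basis(class0, 2), cone_basis(class1, 2, seed=1)]   # training phase

query = class0[:, :10] * rng.uniform(0.5, 1.5, size=(10, 10))   # noisy class-0 set
scores = [cone_similarity(cone_basis(query, 2, seed=2), R) for R in refs]
print(int(np.argmax(scores)))  # recognition phase: picks class 0
```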
III-C Generation of discriminant space
To enhance the class separability among multiple classes of convex cones, we introduce a discriminant space $\mathcal{D}$, which maximizes the between-class variance and minimizes the within-class variance for the convex cones projected onto $\mathcal{D}$, similar to Fisher discriminant analysis (FDA). In our method, the between-class variance is replaced with the gaps among convex cones, defined as follows. Let $\mathcal{C}_c$ be the $c$-th class convex cone with basis vectors $\{\mathbf{b}^c_i\}_{i=1}^{r_c}$, $\rho_c(\cdot)$ be the projection operation of a vector onto $\mathcal{C}_c$, and $C$ be the number of classes. We consider vectors $\mathbf{u}^1_1 \in \mathcal{C}_1, \dots, \mathbf{u}^C_1 \in \mathcal{C}_C$ such that the sum of the correlations between them is maximum. Such a set of vectors can be obtained by using the following algorithm, which is almost the same as the generalized canonical correlation analysis (CCA) [27, 28], except that the non-negative least squares (LS) method is used instead of the standard LS method.
Procedure to search for the set of first vectors $\{\mathbf{u}^c_1\}_{c=1}^{C}$
1) Randomly initialize a common vector $\mathbf{z}$.
2) Project $\mathbf{z}$ onto each convex cone $\mathcal{C}_c$, and then normalize the projection as $\mathbf{u}^c = \rho_c(\mathbf{z})/\|\rho_c(\mathbf{z})\|$. Update the common vector as the normalized mean of the projections, $\mathbf{z}' = \sum_c \mathbf{u}^c / \|\sum_c \mathbf{u}^c\|$.
3) If $\|\mathbf{z}' - \mathbf{z}\|$ is sufficiently small, the procedure is completed. Otherwise, return to 2), setting $\mathbf{z} = \mathbf{z}'$.
Next, we search for a set of second vectors with the maximum sum of correlations under the constraint that they have the minimum correlation with the previously found first vectors. To this end, we project the convex cones onto the orthogonal complement space of the first vectors. The second vectors can then be obtained by applying the above procedure to the projected convex cones. In the following, a set of $k$-th vectors can be sequentially obtained by applying the same procedure to the convex cones projected onto the orthogonal complement space of all of the previously found vectors. In this way, we finally obtain the sets of vectors $\{\mathbf{u}^c_k\}$, $k = 1, \dots, K$. With these sets, we define a difference vector between classes $a$ and $b$ as follows:

$$\mathbf{d}^{ab}_k = \mathbf{u}^a_k - \mathbf{u}^b_k.$$

Considering that each difference vector represents the gap between two convex cones, we define the between-class variance matrix $\mathbf{S}_b$ using these vectors as follows:

$$\mathbf{S}_b = \sum_{k=1}^{K} \sum_{a < b} \mathbf{d}^{ab}_k \left(\mathbf{d}^{ab}_k\right)^{\top}.$$
Next, we define the within-class variance matrix $\mathbf{S}_w$ using the basis vectors of all $C$ classes of convex cones as follows:

$$\mathbf{S}_w = \sum_{c=1}^{C} \sum_{i=1}^{r_c} \left(\mathbf{b}^c_i - \bar{\mathbf{b}}^c\right)\left(\mathbf{b}^c_i - \bar{\mathbf{b}}^c\right)^{\top},$$

where $\bar{\mathbf{b}}^c = \frac{1}{r_c}\sum_{i=1}^{r_c} \mathbf{b}^c_i$. Finally, the $N_d$-dimensional discriminant space $\mathcal{D}$ is spanned by the eigenvectors corresponding to the $N_d$ largest eigenvalues of the following eigenvalue problem:

$$\mathbf{S}_b \mathbf{w} = \lambda \mathbf{S}_w \mathbf{w}. \quad (11)$$
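Assuming the discriminant space is obtained FDA-style from these two scatter matrices, a sketch using SciPy's generalized symmetric eigensolver is given below. The function names, the toy data, and the regularization term (added to guard against a singular $\mathbf{S}_w$) are all our additions:

```python
import numpy as np
from scipy.linalg import eigh

def discriminant_space(gap_vectors, class_bases, n_dim, reg=1e-6):
    """Discriminant projection in the style of Sec. III-C.

    gap_vectors : difference vectors d between corresponding vectors
                  found on different class cones
    class_bases : one (d x r_c) cone-basis matrix per class
    Solves S_b w = lambda (S_w + reg I) w and keeps the eigenvectors
    of the n_dim largest eigenvalues.
    """
    d = gap_vectors[0].shape[0]
    Sb = sum(np.outer(g, g) for g in gap_vectors)        # between-class gaps
    Sw = np.zeros((d, d))
    for B in class_bases:                                # within-class scatter
        centered = B - B.mean(axis=1, keepdims=True)
        Sw += centered @ centered.T
    evals, evecs = eigh(Sb, Sw + reg * np.eye(d))        # ascending order
    return evecs[:, ::-1][:, :n_dim]                     # largest first

rng = np.random.default_rng(4)
bases = [rng.uniform(size=(6, 3)) for _ in range(3)]     # 3 toy class cones
gaps = [bases[a][:, 0] - bases[b][:, 0]
        for a in range(3) for b in range(a + 1, 3)]
D = discriminant_space(gaps, bases, n_dim=2)
print(D.shape)  # (6, 2)
```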
III-D Constrained mutual convex cone method
We construct the constrained MCM (CMCM) by incorporating the projection onto the discriminant space $\mathcal{D}$ into the MCM. CMCM consists of a training phase and a testing phase, as shown in Fig. 7. In the following, we explain each phase for the case of $C$ classes of image sets.
1) CNN features are extracted from the images of each class, and the extracted CNN features are normalized to unit length.
2) The basis vectors of each class convex cone are generated by applying NMF to each class set of normalized CNN features.
3) Sets of difference vectors are generated according to the procedure described in Section III-C.
4) The discriminant space $\mathcal{D}$ is generated by solving Eq. (11) using $\mathbf{S}_b$ and $\mathbf{S}_w$.
5) The basis vectors are projected onto the discriminant space $\mathcal{D}$, and then the lengths of the projected basis vectors are normalized to 1.0. A set of these basis vectors forms the projected convex cone.
6) The projected convex cones are registered as the reference convex cones of the corresponding classes.
1) A set of images is input.
2) CNN features are extracted from the input images, and the extracted CNN features are normalized to unit length.
3) The basis vectors of the input convex cone are generated by applying NMF to the set of normalized CNN features.
4) The basis vectors are projected onto the discriminant space $\mathcal{D}$, and then the lengths of the projected basis vectors are normalized to 1.0.
5) The input set is classified based on the similarity (Eq. (7)) between the projected input convex cone and each class reference convex cone.
IV Evaluation Experiments
In this section, we demonstrate the effectiveness of the proposed methods through comparative performance analyses, including the conventional subspace-based methods MSM and CMSM with CNN features. We used three databases: 1) a multi-view hand shape dataset, 2) the ETH-80 dataset, and 3) the CMU Multi-PIE face dataset. In the first three experiments, we conducted classification using the three datasets with a sufficiently large number of training samples. In the final experiment, we show the robustness of the proposed methods against small sample sizes (SSS), considering situations in which few training samples are available for learning. For the implementation of the methods, we used the NMF toolbox [30].
IV-A Hand shape classification
IV-A1 Details of the dataset
The multi-view hand shape dataset consists of 30 classes of hand shapes. Each class was collected from 100 subjects at a speed of 1 fps for 4 s using a multi-camera system equipped with seven synchronized cameras placed at intervals of 10 degrees. During data collection, the subjects were asked to rotate their hands at a constant speed to increase the number of viewpoints. Figure 8 shows several sample images from the dataset. The total number of images collected was 84,000 (= 30 classes × 4 frames × 7 cameras × 100 subjects).
IV-A2 Experimental protocol
We used the same protocol as that described in [29]. We randomly divided the subjects into two sets: one set was used for training, and the other was used for testing. That is, a reference convex cone for each hand shape was generated from a set of 1,400 (= 7 cameras × 4 frames × 50 subjects) images. As an input image set, we used 28 (= 7 cameras × 4 frames) images. The total number of convex cones used for testing was 1,500 (= 30 shapes × 50 subjects). We evaluated the classification performance of each method in terms of the average error rate (ER) over ten trials using randomly divided datasets.
We selected the parameters for the methods by cross-validation using the training data. For MSM and CMSM with CNN features, the dimensions of the class subspaces, the input subspaces, and the GDS were set to 80, 5, and 200, respectively. For the conventional methods with raw images and FFT features, we used the same parameters as in [29]. For MCM and CMCM, the numbers of basis vectors of the class and input convex cones were set to 30 and 7, respectively. The dimension of the discriminant space was set to 750.
To obtain CNN features under our experimental setting, we slightly modified the original ResNet-50 [32], trained on the ImageNet database [33], for our experimental conditions. First, we replaced the final 1000-way fully connected (FC) layer of the original ResNet-50 with a 1024-way FC layer followed by the ReLU function. Further, behind this 1024-way FC layer, we added an FC layer with softmax whose number of outputs equals the number of classes.
Moreover, to extract more effective CNN features from our modified ResNet, we fine-tuned it using the learning set. A CNN feature vector was extracted from the 1024-way FC layer every time an image was input into our ResNet. As a result, the dimensionality of a CNN feature vector was 1024.
In our fine-tuned CNN baseline, an input image set was classified based on the average of the class confidence values output for each class by the last FC layer with softmax.
IV-A3 Hand shape classification results
Table I shows the error rates for the different methods. The subspace- and convex cone-based methods with CNN features are significantly superior to the methods with conventional features, confirming the validity of CNN features. The results also indicate that a set of CNN features is more informative than the average value of the outputs from the last softmax layer. Comparing the convex cone-based methods with the subspace-based methods, CMCM achieves the best performance. This advantage suggests that a convex cone model is more suitable than a subspace model for compactly representing a set of CNN features and for effectively comparing two sets.
IV-B Object classification experiment
We conducted an analysis of object classification using the ETH-80 dataset.
IV-B1 Details of the ETH-80 and experimental protocol
The ETH-80 dataset consists of object images in eight different categories, each captured from 41 viewpoints. Each category contains 10 different objects.
Five objects randomly sampled from each category set were used for training, and the remaining five objects were used for testing. As an input image set, we used 41 images for each object. We evaluated the classification performance of each method in terms of the average error rate (ER) of five trials using randomly divided datasets.
For MSM and CMSM, the dimensions of class subspaces, the input subspaces, and GDS were set to 55, 10, and 30, respectively. For MCM and CMCM, the numbers of the basis vectors of class and input convex cones were set to 30 and 7, respectively. The dimension of the discriminant space was set to 85. We determined these dimensionalities by cross-validation using the training data. CNN features were extracted from the fine-tuned ResNet under this experimental setting, according to the same procedure used in the previous experiments.
IV-B2 Object classification results
Table II shows the error rates for the different methods. The CMCM exhibited the highest accuracy. This result also supports the conclusion that a convex cone model is more appropriate to represent a set of CNN features than a subspace model. In addition, we can confirm that the projection of the convex cones onto the discriminant space works well as a valid feature extraction.
IV-C Face classification experiment
We conducted a face classification analysis using the CMU Multi-PIE dataset.
IV-C1 Details of the CMU dataset and experimental protocol
The CMU Multi-PIE dataset consists of facial images of 337 subjects captured from 15 viewpoints under 20 lighting conditions in four recording sessions. In this experiment, we used images of 129 subjects captured from three viewpoints: front, left, and right. Thus, the total number of images used for this experiment was 30,960 (= 129 subjects × 3 views × 20 illuminations × 4 sessions).
Two sessions were used for training, and the remaining two sessions were used for testing. As an input image set, we used 10 randomly sampled images from the image set of each subject. For MSM and CMSM, the dimensions of the class subspaces, the input subspaces, and the GDS were set to 20, 5, and 520, respectively. For MCM and CMCM, the numbers of basis vectors of the class and input convex cones were set to 20 and 5, respectively. The dimension of the discriminant space was set to 530. We determined these parameters by cross-validation. We used CNN features extracted from ResNet fine-tuned on the training data, following the same procedure as in the previous experiments.
IV-C2 Face classification results
Table III shows the error rates for the different methods. The CMCM exhibited the highest performance, while the performance of the MCM was the lowest. This result supports the validity of the projection onto the discriminant space as a feature extraction. This implies that the gaps between convex cones capture useful geometrical information to enhance the class separability among all classes of convex cones.
IV-D Robustness against limited training data
A major issue with deep neural networks is the requirement for a large quantity of training samples to train the networks accurately. Therefore, robustness against a small sample size (SSS) is a necessary characteristic for effective methods using CNN features in practical applications. In this experiment, we evaluated the robustness of the different methods against SSS.
IV-D1 Experimental protocol
In this experiment, we used the hand shape dataset described in Section IV-A1. The dataset was divided into two sets in the same manner as in the previous experiment: one set was used for training, and the other was used for testing. We evaluated the performance of the methods by setting the number of subjects used for training to 1, 2, 3, 4, 5, 10, and 15. In each case, the total number of training images was 30 classes × 7 cameras × 4 frames × the number of training subjects. As an input image set, we used 28 (= 7 cameras × 4 frames) images, as in the previous experiment. Thus, the total number of convex cones for testing was 1,500 (= 30 classes × 50 subjects).
The parameters for the methods were determined by cross validation using training images. For MSM and CMSM, the dimensions of class, input subspaces, and GDS were set to 25, 7, and 725, respectively. For MCM and CMCM, the numbers of the basis vectors of class and input convex cones were set to 30 and 7, respectively. The dimension of the discriminant space was set to 800.
To extract CNN features from the images, we used ResNet fine-tuned with the training images under each experimental condition.
IV-D2 Summary of results
Figure 9 shows the error rates with respect to the number of training subjects. As shown in the figure, the overall performance of CMCM was better than that of the other methods. In particular, CMCM works well when the number of training subjects is small. For example, when only one subject is used for training, CMSM and CMCM achieve an error rate of about half that of the softmax baseline. Moreover, CMCM improves on the performance of the subspace-based methods, MSM and CMSM. This further indicates that the convex cone model can represent the distribution of a set of CNN features more accurately than the subspace-based methods.
V Conclusion
In this paper, we proposed a method based on the convex cone model for image-set classification, referred to as the constrained mutual convex cone method (CMCM). We discussed the combination of the proposed method with CNN features, though our method can be applied to various types of features with non-negative constraints.
The main contributions of this paper are 1) the introduction of a convex cone model to represent a set of CNN features compactly and accurately; 2) the definition of the geometric similarity between two convex cones based on the angles between them, which are obtained by the alternating least squares method; 3) the proposal of a method, the MCM, for classifying convex cones using these angles as the similarity index; 4) the introduction of a discriminant space that maximizes the between-class variance (gaps) among convex cones and minimizes the within-class variance; and 5) the proposal of the constrained MCM (CMCM), which incorporates the above projection into the MCM.
We demonstrated the validity of our methods through experiments using the multi-view hand shape dataset, the ETH-80 dataset, and the CMU Multi-PIE dataset. In the future, we will evaluate the introduction of non-linear mapping by a kernel function into the proposed methods.
This work was partially supported by JSPS KAKENHI Grant Number JP16H02842.
References
[1] O. Yamaguchi, K. Fukui, and K. Maeda, "Face recognition using temporal image sequence," in Proceedings Third IEEE International Conference on Automatic Face and Gesture Recognition, 1998, pp. 318–323.
[2] K. Fukui and O. Yamaguchi, "Face Recognition Using Multi-viewpoint Patterns for Robot Vision," in The Eleventh International Symposium of Robotics Research, 2005, pp. 192–201.
[3] H. Sakano and N. Mukawa, "Kernel mutual subspace method for robust facial image recognition," in International Conference on Knowledge-Based Intelligent Engineering Systems and Allied Technologies, vol. 1, 2000, pp. 245–248.
[4] K. Fukui and O. Yamaguchi, "The Kernel Orthogonal Mutual Subspace Method and Its Application to 3D Object Recognition," in Asian Conference on Computer Vision, 2007, pp. 467–476.
[5] K. Fukui, B. Stenger, and O. Yamaguchi, "A Framework for 3D Object Recognition Using the Kernel Constrained Mutual Subspace Method," in Asian Conference on Computer Vision, 2006, pp. 315–324.
[6] K. Fukui and A. Maki, "Difference Subspace and Its Generalization for Subspace-Based Methods," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, no. 11, pp. 2164–2177, 2015.
[7] J. Lu, G. Wang, and J. Zhou, "Simultaneous Feature and Dictionary Learning for Image Set Based Face Recognition," IEEE Transactions on Image Processing, vol. 26, no. 8, pp. 4042–4054, 2017.
[8] H. Hotelling, "Relations between two sets of variates," Biometrika, vol. 28, no. 3-4, pp. 321–377, 1936.
[9] S. N. Afriat, "Orthogonal and oblique projectors and the characteristics of pairs of vector spaces," Mathematical Proceedings of the Cambridge Philosophical Society, vol. 53, no. 4, pp. 800–816, 1957.
[10] A. Sharif Razavian, H. Azizpour, J. Sullivan, and S. Carlsson, "CNN features off-the-shelf: an astounding baseline for recognition," in IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2014, pp. 806–813.
[11] J.-C. Chen, V. M. Patel, and R. Chellappa, "Unconstrained face verification using deep CNN features," in IEEE Winter Conference on Applications of Computer Vision, 2016, pp. 1–9.
[12] G. Li and Y. Yu, "Visual saliency based on multiscale deep features," in IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 5455–5463.
[13] H. Azizpour, A. S. Razavian, J. Sullivan, A. Maki, and S. Carlsson, "Factors of Transferability for a Generic ConvNet Representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 9, pp. 1790–1802, 2016.
[14] T. Nakayama and K. Fukui, "The Effectiveness of CNN feature for mutual subspace method," IEICE Technical Report, vol. 117, no. 238, pp. 49–54, 2017 (in Japanese).
[15] V. Nair and G. E. Hinton, "Rectified linear units improve restricted Boltzmann machines," in Proceedings of the 27th International Conference on Machine Learning, 2010, pp. 807–814.
[16] A. Georghiades, P. Belhumeur, and D. Kriegman, "From few to many: illumination cone models for face recognition under variable lighting and pose," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 643–660, 2001.
[17] P. N. Belhumeur and D. J. Kriegman, "What Is the Set of Images of an Object Under All Possible Illumination Conditions?" International Journal of Computer Vision, vol. 28, no. 3, pp. 245–260, 1998.
[18] T. Kobayashi and N. Otsu, "Cone-restricted subspace methods," in International Conference on Pattern Recognition, 2008, pp. 1–4.
[19] T. Kobayashi, F. Yoshikawa, and N. Otsu, "Cone-restricted kernel subspace methods," in IEEE International Conference on Image Processing, 2010, pp. 3853–3856.
[20] M. Tenenhaus, "Canonical analysis of two convex polyhedral cones and applications," Psychometrika, vol. 53, no. 4, pp. 503–524, 1988.
[21] R. A. Fisher, "The use of multiple measurements in taxonomic problems," Annals of Human Genetics, vol. 7, no. 2, pp. 179–188, 1936.
[22] B. Leibe and B. Schiele, "Analyzing appearance and contour based methods for object categorization," in IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, 2003, pp. 409–415.
[23] R. Gross, I. Matthews, J. Cohn, T. Kanade, and S. Baker, "Multi-PIE," Image and Vision Computing, vol. 28, no. 5, pp. 807–813, 2010.
[24] H. S. Seung and D. D. Lee, "Learning the parts of objects by non-negative matrix factorization," Nature, vol. 401, no. 6755, pp. 788–791, 1999.
[25] H. Kim and H. Park, "Nonnegative Matrix Factorization Based on Alternating Nonnegativity Constrained Least Squares and Active Set Method," SIAM Journal on Matrix Analysis and Applications, vol. 30, no. 2, pp. 713–730, 2008.
[26] R. Bro and S. De Jong, "A fast non-negativity-constrained least squares algorithm," Journal of Chemometrics, vol. 11, no. 5, pp. 393–401, 1997.
[27] J. Vía, I. Santamaría, and J. Pérez, "Canonical correlation analysis (CCA) algorithms for multiple data sets: Application to blind SIMO equalization," in 13th European Signal Processing Conference, 2005, pp. 1–4.
[28] J. Vía, I. Santamaría, and J. Pérez, "A learning algorithm for adaptive canonical correlation analysis of several data sets," Neural Networks, vol. 20, no. 1, pp. 139–152, 2007.
[29] Y. Ohkawa and K. Fukui, "Hand-shape recognition using the distributions of multi-viewpoint image sets," IEICE Transactions on Information and Systems, vol. 95, no. 6, pp. 1619–1627, 2012.
[30] Y. Li and A. Ngom, "The non-negative matrix factorization toolbox for biological data mining," Source Code for Biology and Medicine, vol. 8, no. 1, p. 10, 2013.
[31] F. Chollet et al., "Keras," 2017.
[32] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in IEEE Conference on Computer Vision and Pattern Recognition, 2016.
[33] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein et al., "ImageNet large scale visual recognition challenge," International Journal of Computer Vision, vol. 115, no. 3, pp. 211–252, 2015.
[34] S. Mika, B. Schölkopf, A. J. Smola, K.-R. Müller, M. Scholz, and G. Rätsch, "Kernel PCA and de-noising in feature spaces," in Advances in Neural Information Processing Systems, 1999, pp. 536–542.