Two-Dimensional Semi-Nonnegative Matrix Factorization for Clustering

05/19/2020 · Chong Peng, et al. · University of Kentucky · NetEase, Inc.

In this paper, we propose a new Semi-Nonnegative Matrix Factorization method for 2-dimensional (2D) data, named TS-NMF. It overcomes the drawback of existing methods that seriously damage the spatial information of the data by converting 2D data to vectors in a preprocessing step. In particular, projection matrices are sought under the guidance of building new data representations, such that the spatial information is retained and the projections are enhanced by the clustering goal, which helps construct optimal projection directions. Moreover, to exploit nonlinear structures of the data, a manifold is constructed in the projected subspace, which is adaptively updated according to the projections and is less afflicted with noise and outliers of the data, and thus is more representative in the projected space. Hence, seeking projections, building new data representations, and learning the manifold are seamlessly integrated in a single model, where they mutually enhance each other and lead to a powerful data representation. Comprehensive experimental results verify the effectiveness of TS-NMF in comparison with several state-of-the-art algorithms, which suggests the high potential of the proposed method for real world applications.


I Introduction

Matrix factorization is a powerful way for data representation and has been widely used for many problems in machine learning, data mining, computer vision, and statistical data analysis. Among various factorization algorithms, some have seen widespread successes, such as singular value decomposition (SVD) [duda2012pattern] and principal component analysis (PCA) [jolliffe2002principal].

Recently, a number of relatively new factorization algorithms have been developed to provide improved solutions to some special problems in machine learning [lee1999learning, peng2016fast]. In particular, nonnegative matrix factorization (NMF) [lee1999learning, lee2001algorithms] has drawn considerable attention. NMF represents nonnegative data with nonnegative basis and coefficients, which naturally leads to parts-based representations [lee1999learning]. It has been used in many real world applications, such as pattern recognition [li2001learning], multimedia analysis [cooper2002summarizing], and text mining [xu2003document]. Recent studies have revealed interesting relationships between NMF and several other methods. For example, spectral clustering (SC) [ng2002spectral] is shown to be equivalent to a weighted version of kernel K-means [dhillon2007weighted], and both of them are particular cases of clustering with NMF under a doubly stochastic constraint [zass2005unifying]; the Kullback-Leibler divergence-based NMF turns out to be equivalent to probabilistic latent semantic analysis [ding2006nonnegative, hofmann1999probabilistic], which has been further developed into the fully probabilistic latent Dirichlet allocation model [blei2003latent].

Semi-NMF extends the repertoire of NMF by removing the non-negativity constraints on the data and basis, which expands the range of applications of NMF. It also strengthens the connection between NMF and K-means [ding2010convex]. It is noted that K-means can be written as a matrix factorization, where the two factor matrices represent the centroids and the cluster indicators. In particular, the centroids can be general whereas the cluster indicators are all nonnegative, which reveals the connection between K-means and Semi-NMF. To exploit nonlinear structures of the data, graph-regularized NMF (GNMF) [cai2011graph] and robust manifold NMF (RMNMF) [huang2014robust] incorporate the graph Laplacian to measure nonlinear relationships of the data on a manifold. In particular, GNMF includes Frobenius-norm and divergence-based formulations, both of which require the basis and coefficient matrices to be nonnegative; RMNMF removes the constraints on the basis matrix and can be regarded as a variant of Semi-NMF that incorporates a structured sparsity-inducing norm to enhance its robustness.

These methods have been used on 2-dimensional (2D) data such as images, where the 2D data are vectorized in a preprocessing step before further processing. While the vectorization-based Semi-NMF methodology has proven increasingly useful, it fails to fully exploit the inherent 2D structures and correlations of the data once they have been vectorized. Furthermore, there is empirical evidence that building a model with vectorized high-dimensional features is not effective at filtering the noisy or redundant information in the original feature spaces [fu2016tensor]. Besides vectorizing 2D data, tensor-based approaches have been proposed. While they may potentially better exploit spatial structures of the 2D data [zhang2015low], such approaches still have some limitations: they use all features of the data, hence noisy or redundant features may degrade the learning performance. Also, tensor computations and methods usually involve flattening and folding operations, which, more or less, have issues similar to those of the vectorization operation and thus might not fully exploit the true structures of the data. Moreover, tensor methods usually suffer from the following major issues: 1) for CANDECOMP/PARAFAC (CP) decomposition based methods, it is generally NP-hard to compute the CP rank [lu2016tensor, kolda2009tensor]; 2) the Tucker decomposition is not unique [kolda2009tensor]; 3) the application of a core tensor and a high-order tensor product would incur a loss of spatial details [letexier2008noise].

To address these limitations, in this paper we propose a new Semi-NMF-like method for 2D data, where we directly use the original 2D data to help preserve their 2D spatial structures instead of vectorizing them. It is noted that there are recent tensor approaches to retain spatial information for 2D data [cao2013robust, huang2008simultaneous]; however, tensors are usually reduced to matrices for processing. For example, [zhang2015low] organizes different views of the data in a tensor structure, but within each view each sample is still vectorized, so the image spatial information is still damaged. In this paper, we directly use 2D inputs whose inherent structure information is emphasized by two projection matrices, which makes our method starkly different from tensor approaches. Specifically, we jointly seek optimal projection matrices and build new representations of the data, aiming at enhancing clustering. These projection matrices are optimal in the sense that they project the 2D data to the most expressive subspace. Moreover, the manifold is taken into consideration to capture nonlinear structures of the data. In our formulation, the manifold is adaptively updated with projection matrices capturing representative information from the 2D data, and thus it is less afflicted with noise and outliers. Therefore, this paper seeks optimal projection directions, factorizes the data for new representations, and learns intrinsic manifold structures in a single, seamlessly integrated framework, such that these tasks mutually enhance each other and lead to improved clustering as well as powerful representations of 2D data. It is noted that, as a special case, our method is applicable to 1-dimensional data. The main contributions of this paper are summarized as follows:

  • The optimal 2D data projections and an image subspace are sought for learning new representations of the 2D data and clustering 2D matrices.

  • The proposed method is able to retain intrinsic spatial information of 2D data, and alleviate the adverse effect of irrelevant or less important information.

  • Manifold learning is integrated to enhance the capability of exploiting nonlinear structures of the data. The manifold is adaptively updated according to the 2D projections that capture the most expressive information from the data, and the graph is less afflicted with irrelevant or grossly corrupted features.

  • The proposed model performs 2D feature extraction, adaptive manifold learning, and matrix factorization jointly, thus offering powerful data representation ability.

  • An efficient optimization algorithm is developed with provable mathematical analysis; extensive experimental results verify the effectiveness of the proposed model and algorithm.

The rest of this paper is organized as follows. We review related work in Section II. Then we present the proposed model in Section III and its optimization in Section IV. We conduct extensive experiments and show the results in Section V. Finally, we conclude this paper in Section VI.

II Related Work

II-A Semi-NMF

Given data X \in \mathbb{R}^{d \times n}, with d being the dimension of the data and n being the number of samples, the objective of Semi-NMF is

\min_{U,\, V \ge 0} \; \| X - U V^\top \|_F^2, \qquad (1)

where U \in \mathbb{R}^{d \times k} contains the basis vectors in its columns and V \in \mathbb{R}^{n \times k} contains the new representations of the data in its rows.
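As a concrete illustration (and not the exact algorithm developed later in this paper), the following minimal Python sketch implements the standard alternating Semi-NMF updates of [ding2010convex] for Eq. (1); the random initialization and iteration count are our own assumed choices.

import numpy as np

def seminmf(X, k, n_iter=200, eps=1e-9):
    """Minimal Semi-NMF sketch: X (d x n) is approximated by U V^T with V >= 0 (cf. Eq. (1))."""
    d, n = X.shape
    rng = np.random.default_rng(0)
    V = np.abs(rng.standard_normal((n, k)))          # nonnegative representations
    pos = lambda A: (np.abs(A) + A) / 2               # positive part of a matrix
    neg = lambda A: (np.abs(A) - A) / 2               # negative part of a matrix
    for _ in range(n_iter):
        # basis update: unconstrained least squares
        U = X @ V @ np.linalg.pinv(V.T @ V)
        XtU, UtU = X.T @ U, U.T @ U
        # multiplicative update that keeps V elementwise nonnegative [ding2010convex]
        V *= np.sqrt((pos(XtU) + V @ neg(UtU)) / (neg(XtU) + V @ pos(UtU) + eps))
    return U, V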

II-B Graph Laplacian

The graph Laplacian [chung1997spectral] is widely used to incorporate the intrinsic geometrical structure of the data on a manifold. In particular, the manifold enforces the smoothness of the data in linear and nonlinear spaces by minimizing

\frac{1}{2} \sum_{i,j} \| v_i - v_j \|_2^2 \, W_{ij} = \operatorname{tr}(V^\top L V), \qquad (2)

where tr(·) is the trace operator, W is the weight matrix that measures the pair-wise similarities of the original data points, D is a diagonal matrix with D_{ii} = \sum_j W_{ij}, and L = D - W. It is seen that, by minimizing Eq. (2), we obtain a natural effect: if two data points are close in the intrinsic geometry of the data distribution, then their new representations with respect to the new basis, v_i and v_j, are also close [cai2011graph].
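For concreteness, here is a small Python sketch of the quantities in Eq. (2); the heat-kernel weights on k nearest neighbours are an assumed, commonly used choice rather than a construction prescribed by this paper.

import numpy as np

def graph_laplacian(X, k=5, sigma=1.0):
    """Sketch of the graph quantities in Eq. (2): W, D, and L = D - W.
    X is d x n (columns are samples)."""
    n = X.shape[1]
    sq = np.sum(X**2, axis=0)
    dist2 = sq[:, None] + sq[None, :] - 2 * X.T @ X      # pairwise squared distances
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(dist2[i])[1:k + 1]               # k nearest neighbours, skipping the point itself
        W[i, idx] = np.exp(-dist2[i, idx] / (2 * sigma**2))
    W = np.maximum(W, W.T)                                # symmetrize
    D = np.diag(W.sum(axis=1))
    L = D - W
    return W, D, L

def smoothness(V, L):
    """The manifold term tr(V^T L V) of Eq. (2)."""
    return np.trace(V.T @ L @ V)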

II-C 2DPCA

Let {X_j}_{j=1}^{n} be a collection of images of size p × q, i.e., X_j \in \mathbb{R}^{p \times q}; then the 2D covariance matrix of the data is estimated by \Sigma = \frac{1}{n} \sum_{j=1}^{n} (X_j - \bar{X})^\top (X_j - \bar{X}), where \bar{X} is the mean image. 2DPCA seeks projection directions by solving the following [yang2004two]:

\max_{P^\top P = I_k} \; \operatorname{tr}(P^\top \Sigma P), \qquad (3)

where P \in \mathbb{R}^{q \times k} contains k orthonormal projection directions in its columns and I_k is an identity matrix of size k × k.
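The following short Python sketch illustrates 2DPCA as in [yang2004two] under the definitions above; the function and variable names are ours.

import numpy as np

def two_d_pca(images, k):
    """2DPCA sketch: images is a list of p x q arrays.
    Returns a q x k matrix P of orthonormal directions maximizing tr(P^T Sigma P),
    i.e., the top-k eigenvectors of the 2D covariance matrix Sigma."""
    X_bar = np.mean(images, axis=0)
    Sigma = sum((A - X_bar).T @ (A - X_bar) for A in images) / len(images)
    evals, evecs = np.linalg.eigh(Sigma)       # eigenvalues in ascending order
    return evecs[:, -k:]                       # eigenvectors of the k largest eigenvalues

# The projected (reduced) features of one image A are then A @ P, of size p x k.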

III Proposed Method

For 2D data X, Eq. (1) naturally leads to a formulation as follows:

(4)

where the basis consists of a set of 2D centroids. It is seen that all elements or features of the 2D matrices are used to construct the new representations of the data, and the expressiveness of the 2D spatial information is not explicitly considered in Eq. (4). To alleviate this drawback, we propose to better exploit the 2D spatial information by building the new representation with respect to the 2D centroids in a projected subspace that carries the most expressive spatial information:

(5)

It is noted that in Eq. (5) the projection matrix projects each 2D centroid to a low-rank subspace with the most expressive information, so that the sum of squared reconstruction errors of the 2D matrices from the new basis and new representations can be minimized. As a result, the new representation and the new basis are sought jointly in the projected, most expressive, low-rank subspace to take advantage of the 2D spatial information. Expanding Eq. (5), we have

(6)

where tr(·) is the trace operator; the second and third equalities can be easily verified. It is seen that, in the new formulation, the new representation is sought with the projected data in the first term, while in the second term the projection ensures that the most expressive information of the data is retained in the projected subspace. With this notation, it is straightforward that Eq. (5) can be written as

(7)

It is noted that the first term in Eq. (7) is essentially equivalent to the first term in the last equation of Eq. (5), but Eq. (7) keeps its physical meaning. With simple algebra, the second term in Eq. (7) can be rewritten accordingly. We omit the constant term and introduce a balancing parameter to balance the two terms of Eq. (7), making it more versatile, which gives rise to

(8)

where a compact notation for the projected data is used. For an appropriate value of the balancing parameter, Eq. (8) falls back to Eq. (7). It is seen that, by minimizing Eq. (8), the projection is sought so that the data points are projected to the most expressive subspace, aiming at building new, expressive data representations for clustering. Because clustering is performed with the projected data, the adverse effects of noise, occlusions, or corruptions can be alleviated. Consequently, Eq. (8) is inherently robust, even though we do not explicitly enforce robustness or use sparsity-inducing norms to measure reconstruction errors.

Eq. (8) only considers the linear structures of the projected data while overlooking nonlinear ones, which usually exist and are important in real world applications. To address this issue, we enforce smoothness between the linear and nonlinear structures on the manifold with the following formulation:

(9)

where a balancing parameter weights the manifold term. For ease of notation, we define an operator that converts a set of 2D inputs, M, to a matrix containing each vectorized 2D input as a column. Different from Eq. (2), we construct the similarity matrix using the projected data instead of the original data, such that the graph Laplacian is adaptively learned with the most expressive features. Correspondingly, the degree and Laplacian matrices are constructed from the projected data in a way similar to their construction in Eq. (2). Note that the above defined operator starkly differs from straight vectorization because the spatial information has already been retained; it only provides a simple notation without damaging information. It is seen that the tasks of seeking projections, recovering new data representations, and manifold learning mutually enhance each other and lead to a powerful data representation.

To further enhance the capability of capturing 2D spatial information, we develop the following Two-dimensional Semi-NMF (TS-NMF):

(10)

where a second projection matrix contains projection directions that project X on the left. The corresponding weight and Laplacian matrices are constructed in the same way as before, with the doubly projected data used in place of the singly projected data. It is noted that Eq. (10) is not convex. For any solution, rescaling the factor matrices by a positive diagonal matrix yields another solution attaining the same value of the reconstruction term of Eq. (10); furthermore, the objective value of Eq. (10) can be reduced if the scale of the basis increases. To eliminate this uncertainty, in practice one usually requires the Euclidean length of the basis vectors to be 1 [xu2003document, cai2011graph] in a post-processing step. In this paper, we also adopt this strategy.

IV Optimization

In this section, we develop an efficient optimization algorithm to solve Eq. (10). In the following, we present the alternating optimization steps for each variable in detail.

IV-A Updating the First Projection Matrix

The subproblem for the first projection-minimization is (inspired by [wang2014feature], the graph Laplacian is not included in this subproblem due to the difficulty of writing it explicitly as a function of the projection; instead, it is kept fixed when solving for the projection and is updated accordingly afterwards, and a similar strategy is used for the second projection-minimization):

(11)

With straightforward algebra, Eq. (11) can be rewritten as

(12)

where the matrix collecting the quadratic terms is defined accordingly. It is easy to see that this matrix is positive definite; hence, according to [yang2004two], the optimal projection can be obtained by

(13)

where the operator returns the eigenvectors of the input matrix corresponding to its smallest eigenvalues.
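Since the closed-form solutions in Eqs. (13) and (15) only require the eigenvectors associated with the smallest eigenvalues of a symmetric matrix, a minimal (hypothetical) Python helper looks like this:

import numpy as np

def smallest_eigvecs(A, k):
    """Eigenvectors of a symmetric positive definite matrix A corresponding
    to its k smallest eigenvalues (cf. Eq. (13))."""
    evals, evecs = np.linalg.eigh(A)   # eigenvalues in ascending order
    return evecs[:, :k]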

IV-B Updating the Second Projection Matrix

The subproblem for the second projection-minimization is:

(14)

Similarly to Eqs. (11)–(13), it is easy to see that this subproblem can be solved by

(15)

where the corresponding matrix is also positive definite.

IV-C Optimizing V

For convenience of theoretical analysis, we define

(16)

and separate a matrix into two parts by

A^{+}_{ij} = \frac{|A_{ij}| + A_{ij}}{2}, \qquad A^{-}_{ij} = \frac{|A_{ij}| - A_{ij}}{2}, \qquad (17)

so that A = A^{+} - A^{-} with both parts elementwise nonnegative.

Then the V-minimization problem can be written as

(18)

where

(19)

Then, V is updated by:

(20)

Regarding Eqs. (18)–(20), similar to the conclusion in [ding2010convex], we have the following theorem:

Theorem IV.1.

Fixing all other variables, the value of F(V) in Eq. (19) is monotonically non-increasing under the updating rule of Eq. (20). Furthermore, the limiting solution of Eq. (20) satisfies the KKT conditions.

The proof of Theorem IV.1 is provided in the Appendix. It is noted that Eq. (20) provides an iterative way to solve Eq. (18), which would require an inner loop for optimization. However, in a way similar to NMF [lee1999learning], GNMF [cai2011graph], and Semi-NMF [ding2010convex], we do not require an exact solution to the subproblem in Eq. (18). Instead, Eq. (20) is performed once to solve Eq. (18). A similar idea is also found in [lin2010augmented], where exact solutions are not required for intermediate updates.
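To make the role of the split in Eq. (17) concrete, the small Python check below verifies that the two parts are nonnegative and recover the original matrix, which is what allows a multiplicative rule such as Eq. (20) to preserve the nonnegativity of V; the example matrix is arbitrary.

import numpy as np

def pos_neg_split(A):
    """Eq. (17): A = A_plus - A_minus with both parts elementwise nonnegative."""
    return (np.abs(A) + A) / 2, (np.abs(A) - A) / 2

A = np.array([[1.0, -2.0], [-3.0, 4.0]])
A_plus, A_minus = pos_neg_split(A)
assert np.allclose(A, A_plus - A_minus)
assert (A_plus >= 0).all() and (A_minus >= 0).all()
# A multiplicative update of the form V <- V * sqrt(numerator / denominator),
# where numerator and denominator are built from such nonnegative parts
# (cf. Eq. (20) and [ding2010convex]), therefore keeps V elementwise nonnegative.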

IV-D Optimizing U

The subproblem associated with U-minimization is

(21)

We investigate the two terms separately. The first term is minimized when it satisfies

(22)

which is equivalent to the following condition

(23)

It is seen that there are infinitely many choices of U that meet the above condition, e.g., any U whose deviation from a particular solution lies in the null space of the involved matrix. Here, we use the simplest way to meet this requirement by requiring

(24)

Similarly, we see that the second term in Eq. (21) can be simultaneously minimized by Eq. (24). Therefore, we adopt Eq. (24) to update U. Here, it is noted that the matrix to be inverted is usually invertible and computationally tractable due to its small size; otherwise, the pseudo-inverse is used as in [ding2010convex].

Finally, we adjust U and V as follows, such that their product does not change:

(25)

Then standard K-means is applied to V to obtain cluster indicators. We summarize the overall procedure in Algorithm 1.
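A minimal sketch of this post-processing step, assuming the normalization is applied to the columns of U (as in [xu2003document, cai2011graph]) with a compensating rescaling of V so that the reconstruction is unchanged, followed by K-means on V; the scikit-learn call and function name are our own choices.

import numpy as np
from sklearn.cluster import KMeans

def normalize_and_cluster(U, V, n_clusters):
    """Rescale each basis vector (column of U) to unit Euclidean length,
    compensate V so that U @ V.T is unchanged (cf. Eq. (25)),
    then apply standard K-means to the rows of V."""
    norms = np.linalg.norm(U, axis=0) + 1e-12
    U = U / norms
    V = V * norms                   # V is n x k; scaling its columns cancels the rescaling of U
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(V)
    return U, V, labels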

1:  Input: X, the balancing parameters, the subspace dimensions, and the maximum number of iterations.
2:  Initialize: the projection matrices, U, V, and the iteration counter t = 0.
3:  repeat
4:     Update the two projection matrices by Eqs. (13) and (15);
5:     Update the weight matrix and graph Laplacian by Eq. (2) using the projected data;
6:     Update V and U by Eqs. (18) and (24), respectively;
7:     t ← t + 1.
8:  until t reaches the maximum number of iterations or convergence
9:  Adjust U and V according to Eq. (25), and apply standard K-means to V.
10:  Output: Predicted class indicators.
Algorithm 1 TS-NMF for Clustering

IV-E Complexity Analysis

Because multiplications dominate the complexity, we only count multiplications. Let T be the total number of iterations of Algorithm 1; the per-iteration cost is dominated by the updates of the graph Laplacian matrices, and the overall complexity is similar to those of GNMF and RMNMF. Fortunately, the graph update can be easily parallelized within each iteration, and thus it is not a bottleneck for real world applications.
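As an illustration of why the graph update parallelizes easily, the sketch below splits the pairwise-distance computation over the projected features across workers; joblib and the block partitioning are our own assumed choices, not part of the paper.

import numpy as np
from joblib import Parallel, delayed

def pairwise_block(F, rows):
    """Squared distances from the given rows of the feature matrix F (n x m) to all rows."""
    sq = np.sum(F**2, axis=1)
    return sq[rows, None] + sq[None, :] - 2 * F[rows] @ F.T

def parallel_distances(F, n_jobs=4):
    """Each block of pairwise distances is independent, so the per-iteration
    graph construction is embarrassingly parallel."""
    blocks = np.array_split(np.arange(F.shape[0]), n_jobs)
    parts = Parallel(n_jobs=n_jobs)(delayed(pairwise_block)(F, b) for b in blocks)
    return np.vstack(parts)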

V Experiments

To demonstrate the effectiveness of TS-NMF, in this section we present comprehensive experimental results in comparison with several state-of-the-art algorithms. The performance is measured with three evaluation metrics: clustering accuracy (ACC), normalized mutual information (NMI), and purity, whose details can be found in [huang2014robust, peng2017nonnegative]. In the following, we briefly introduce the benchmark data sets and the baseline methods in comparison, and then present the experimental results in detail. For the purpose of reproducibility, we provide the data and codes at xxxx.
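For reference, ACC (with the Hungarian matching) and purity can be computed as in the hedged Python sketch below; NMI is available directly in scikit-learn. The exact implementations in [huang2014robust, peng2017nonnegative] may differ, and integer-coded labels are assumed.

import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score

def clustering_accuracy(y_true, y_pred):
    """ACC: best one-to-one match between clusters and classes via the Hungarian algorithm."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes, clusters = np.unique(y_true), np.unique(y_pred)
    cost = np.zeros((clusters.size, classes.size))
    for i, c in enumerate(clusters):
        for j, k in enumerate(classes):
            cost[i, j] = -np.sum((y_pred == c) & (y_true == k))
    row, col = linear_sum_assignment(cost)          # minimizes, hence maximizes matches
    return -cost[row, col].sum() / y_true.size

def purity(y_true, y_pred):
    """Purity: each cluster votes for its majority true class (nonnegative integer labels assumed)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return sum(np.bincount(y_true[y_pred == c]).max() for c in np.unique(y_pred)) / len(y_true)

# NMI: normalized_mutual_info_score(y_true, y_pred)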

V-A Benchmark Data Sets

We use seven data sets in the experiments, which are briefly described as follows: 1) Yale [belhumeur1997eigenfaces]. It contains 165 gray scale images of 15 persons, with 11 images of size 32×32 per person. 2) Extended Yale B (EYaleB) [georghiades2001few]. This data set has 38 persons and around 64 face images under different illuminations per person. The images were cropped to 192×168 and resized to 32×32 in our experiments. 3) ORL [samaria1994parameterisation]. This data set has 40 individuals, with 10 images per individual taken at different times with varying facial expressions, facial details, and lighting conditions. Each image has 32×32 pixels. 4) JAFFE [lyons1998japanese]. 10 Japanese female models posed 7 facial expressions and 213 images were collected. Each image has been rated on 6 emotion adjectives by 60 Japanese subjects. 5) PIX [hond1997distinctive]. 100 gray scale images from 10 objects were collected. 6) Semeion. 1,593 handwritten digits written by around 80 persons were collected. These images were scanned and stretched to size 16×16.

TABLE I: Clustering Performance on EYaleB
N Accuracy (%)
K-Means PCA RPCA 2DPCA NMF SC GNMF RMNMF Semi-NMF TS-NMF
5 23.01±00.80 23.38±00.68 23.01±00.99 23.32±01.06 23.79±01.61 — 41.97±09.86 28.82±03.98 24.20±02.27 77.27±11.23
10 13.84±01.19 13.46±00.67 13.93±01.03 13.89±00.81 14.66±00.39 — 30.09±05.00 22.67±02.02 16.88±02.66 66.02±07.15
15 11.46±01.01 10.60±00.47 10.97±00.77 11.13±00.94 11.38±00.40 — 23.80±06.49 20.98±01.98 13.66±01.70 61.41±05.86
20 10.69±01.08 09.65±00.51 09.85±00.80 10.48±00.79 09.77±00.43 — 21.17±01.96 19.64±01.19 13.27±01.03 58.92±05.04
25 09.35±01.05 07.70±00.47 08.37±00.38 08.78±00.72 08.54±00.22 — 15.96±01.99 17.75±01.30 10.07±00.61 55.53±04.68
30 08.48±00.71 07.33±00.24 08.14±00.66 08.61±01.02 07.88±00.17 — 16.47±00.98 17.21±01.10 10.26±00.77 55.29±03.18
35 08.85±00.75 06.63±00.28 08.67±00.80 08.31±00.48 07.12±00.16 — 14.64±00.76 16.48±01.23 09.85±00.76 53.73±02.64
38 08.53 06.59 08.99 08.33 07.08 — 16.16 16.86 08.70 56.84
Average 11.78 10.67 11.49 11.61 11.28 — 22.53 20.05 13.36 60.63
N Normalized Mutual Information (%)
K-Means PCA RPCA 2DPCA NMF SC GNMF RMNMF Semi-NMF TS-NMF
5 00.94±00.73 00.78±00.44 00.74±00.35 00.95±00.87 01.25±01.61 — 30.22±15.58 06.49±04.14 02.73±03.44 71.33±08.35
10 02.39±02.21 01.56±00.44 02.17±01.15 02.24±00.96 03.07±00.61 — 29.67±06.65 15.34±01.93 06.94±04.88 64.72±05.26
15 03.96±01.48 02.77±00.68 03.31±01.35 03.70±01.63 04.05±00.63 — 28.01±11.77 19.89±03.74 08.49±03.26 64.98±05.78
20 06.91±01.59 05.02±01.17 05.50±01.99 06.61±01.49 05.54±00.56 — 27.62±03.08 23.11±01.25 11.99±01.67 63.68±02.96
25 07.00±01.42 04.18±01.05 05.41±00.88 05.93±01.49 06.16±00.39 — 21.60±03.34 24.04±01.15 10.23±01.69 62.48±03.21
30 07.79±01.19 05.51±00.41 07.49±01.02 07.71±01.01 07.34±00.28 — 23.79±00.89 26.28±01.09 12.74±01.58 62.26±02.40
35 10.04±01.44 05.81±00.55 09.41±01.48 09.56±00.74 08.08±00.24 — 25.36±01.27 27.67±01.58 14.08±01.14 61.12±02.31
38 10.51 06.13 10.42 10.26 08.79 — 25.86 28.46 13.13 63.63
Average 06.19 03.97 05.56 05.87 05.54 — 26.52 21.41 10.04 64.28
TABLE II: Clustering Performance on ORL
N Accuracy (%)
K-Means PCA RPCA 2DPCA NMF SC GNMF RMNMF Semi-NMF TS-NMF
5 79.00±13.54 81.00±15.36 81.00±14.06 80.20±13.35 40.40±04.97 59.60±08.37 81.00±14.43 76.80±14.79 74.80±11.78 81.60±08.42
10 62.10±08.67 64.00±07.16 68.20±09.72 65.80±09.70 11.00±00.00 37.00±08.75 71.20±08.18 66.70±05.81 65.80±06.32 76.30±06.83
15 62.53±05.99 61.13±04.49 63.53±06.80 64.53±05.78 29.20±01.80 28.67±03.50 67.80±05.99 66.60±03.01 68.27±06.90 73.27±06.27
20 57.80±06.21 58.15±05.68 60.90±05.78 61.80±06.54 26.55±02.77 25.80±01.70 65.50±07.86 61.10±03.41 63.05±03.86 70.85±06.75
25 57.24±03.21 57.04±02.24 59.16±03.08 60.12±03.61 24.08±01.16 23.60±01.74 63.96±05.28 62.72±03.66 61.16±04.06 68.08±03.11
30 55.63±03.17 52.53±02.59 57.97±02.71 57.40±03.25 22.50±01.00 22.17±01.22 62.23±03.03 58.57±03.81 60.10±04.66 67.40±03.64
35 52.83±02.54 50.49±02.74 55.11±03.29 56.31±04.72 21.31±00.65 20.71±00.79 59.60±04.22 56.60±02.79 57.91±03.26 64.20±02.63
40 53.50 44.00 63.00 55.00 20.25 20.25 55.75 56.25 57.75 68.00
Average 60.09 58.54 63.61 62.56 24.41 29.73 65.88 63.17 63.61 71.21
N Normalized Mutual Information (%)
K-Means PCA RPCA 2DPCA NMF SC GNMF RMNMF Semi-NMF TS-NMF
5 73.72±15.91 77.97±15.06 77.51±13.57 74.43±16.23 23.68±06.98 53.17±09.32 77.63±14.69 71.64±15.32 72.33±10.60 77.48±06.51
10 67.81±08.47 71.63±05.75 74.16±07.61 72.39±07.42 11.73±00.00 38.64±09.37 77.19±06.29 70.56±03.50 73.46±05.63 80.51±05.15
15 72.91±05.39 72.27±03.98 73.69±04.91 74.73±04.78 37.84±01.32 35.32±04.15 77.28±04.46 73.67±02.22 77.50±05.86 80.95±04.26
20 71.60±04.32 71.91±04.57 73.40±03.73 74.23±04.82 40.17±02.41 38.02±01.01 76.93±05.26 72.62±02.62 74.95±02.39 80.42±04.02
25 72.45±01.91 71.63±01.49 73.38±01.64 73.89±02.32 41.04±01.31 39.22±01.46 77.73±02.67 75.15±02.24 74.99±03.18 79.07±02.08
30 72.53±01.90 71.20±01.99 73.90±02.14 73.79±01.89 42.42±01.14 40.53±00.89 77.02±02.35 73.34±02.33 75.48±03.30 80.34±02.03
35 70.71±01.40 70.60±01.50 72.12±01.75 73.32±02.95 43.01±00.85 41.48±00.66 75.51±02.27 72.43±01.70 74.54±02.42 79.01±01.61
40 71.82 69.07 72.35 74.07 43.01 42.64 74.72 73.03 75.32 81.27
Average 71.69 72.03 73.81 73.86 35.36 41.13 76.75 72.81 74.82 79.88
TABLE III: Clustering Performance on Semeion
N Accuracy (%)
K-Means PCA RPCA 2DPCA NMF SC GNMF RMNMF Semi-NMF TS-NMF
2 89.05±10.42 90.42±08.58 92.15±06.73 90.66±08.57 69.39±09.88 — 95.46±05.63 87.58±10.64 87.41±10.76 95.18±05.84
3 82.97±08.58 83.13±08.31 83.57±08.32 83.22±08.46 50.15±09.24 — 85.41±17.18 78.23±09.17 79.83±09.72 86.36±08.10
4 75.41±11.13 77.55±07.30 75.13±11.56 75.79±09.24 43.43±06.98 — 77.90±13.92 65.22±07.80 71.45±09.13 83.23±11.88
5 75.16±07.44 77.55±06.03 74.23±07.17 75.49±09.24 39.09±04.57 — 82.76±08.45 62.33±07.31 71.69±08.37 84.73±09.07
6 63.45±10.28 65.81±10.08 65.28±08.80 64.84±09.07 33.62±03.05 — 71.47±11.65 54.67±06.88 68.07±05.96 73.59±09.66
7 63.16±06.17 69.06±05.73 63.83±05.63 63.52±07.17 27.64±02.11 — 63.88±05.62 52.94±06.03 64.98±05.99 74.83±05.79
8 67.90±07.40 69.11±05.11 67.09±06.39 64.10±05.46 26.45±00.86 — 69.37±07.02 48.23±04.31 64.13±04.94 75.62±07.84
9 61.38±05.31 61.36±05.41 59.94±05.76 62.15±03.31 24.69±00.99 — 61.34±02.60 44.90±02.77 57.98±02.51 73.62±08.18
10 54.55 64.28 54.36 60.33 22.91 — 63.03 43.57 60.14 71.00
Average 70.34 73.14 70.62 71.12 37.49 — 74.51 59.74 69.52 79.79
N Normalized Mutual Information (%)
K-Means PCA RPCA 2DPCA NMF SC GNMF RMNMF Semi-NMF TS-NMF
2 61.08±29.93 63.41±26.07 67.77±21.94 64.30±25.81 14.15±09.50 — 77.95±19.53 55.48±28.88 56.09±28.43 78.37±17.47
3 58.78±11.94 58.78±11.50 60.42±10.95 59.06±12.09 13.60±11.14 — 70.30±15.81 50.39±12.33 53.55±12.81 68.73±14.77
4 58.02±09.08 58.83±07.07 58.92±09.32 56.13±11.11 17.44±09.41 — 66.30±12.69 44.83±06.88 51.55±05.43 72.96±12.13
5 61.16±06.75 61.29±07.05 60.33±06.31 61.41±07.31 21.79±05.23 — 73.16±06.35 43.45±07.15 53.71±08.96 74.27±10.81
6 54.71±08.23 54.04±07.73 55.06±07.93 55.34±07.59 18.42±03.73 — 62.98±11.05 39.81±06.31 51.44±05.82 64.29±10.86
7 54.38±04.31 55.00±05.03 55.10±04.31 54.71±05.78 16.17±04.21 — 58.31±03.67 41.71±04.53 52.64±04.71 65.87±05.78
8 58.94±04.37 56.68±03.34 58.43±03.70 56.69±03.66 18.29±01.67 — 64.05±05.99 39.51±03.19 53.46±04.09 68.93±06.20
9 55.05±03.43 53.38±03.08 54.40±03.55 55.44±03.30 17.40±01.01 — 59.79±02.67 36.52±02.66 49.92±02.11 69.63±05.60
10 51.67 53.19 51.18 55.27 16.88 — 58.88 35.44 52.34 63.53
Average 57.09 57.18 57.96 57.60 17.13 — 65.75 43.02 52.74 69.62
TABLE IV: Clustering Performance on JAFFE
N Accuracy (%)
K-Means PCA RPCA 2DPCA NMF SC GNMF RMNMF Semi-NMF TS-NMF
2 100.00±0.00 100.00±0.00 100.00±0.00 100.00±0.00 64.95±09.34 100.00±0.00 100.00±0.00 100.00±0.00 99.75±00.79 100.00±0.00
3 98.40±01.86 98.55±01.93 100.00±0.00 98.40±01.86 53.09±07.96 84.97±19.34 99.84±00.51 97.62±01.86 95.44±06.40 99.84±00.51
4 99.30±01.83 99.30±01.83 99.19±02.57 97.79±05.27 51.02±05.49 72.55±11.74 99.42±01.26 98.83±01.73 95.45±05.99 99.53±01.12
5 98.68±02.00 98.67±02.00 98.87±02.42 98.58±01.95 45.09±04.83 74.08±10.70 99.15±01.37 97.46±03.09 95.34±05.04 99.62±00.67
6 95.97±04.13 97.10±02.03 99.38±01.95 93.31±04.63 40.64±04.38 63.27±10.08 97.25±06.55 95.14±04.07 90.40±06.39 99.38±01.14
7 95.65±06.03 96.79±02.24 97.33±02.63 93.51±05.66 38.53±05.85 59.00±09.51 96.48±06.69 90.24±06.90 92.66±05.58 99.13±01.05
8 91.97±06.28 95.94±01.31 97.05±02.19 91.37±04.63 36.14±03.36 61.64±05.38 93.68±08.43 91.63±05.58 94.27±05.43 99.12±00.89
9 91.87±04.43 94.16±01.61 94.63±01.25 89.94±04.77 35.96±03.10 61.37±11.03 95.30±07.35 90.73±07.06 88.83±09.06 99.01±00.76
10 84.04 86.85 95.77 92.02 33.80 57.75 97.65 95.77 95.77 100.0
Average 95.10 96.37 98.02 94.99 44.36 70.51 97.64 95.27 94.21 99.51
N Normalized Mutual Information (%)
K-Means PCA RPCA 2DPCA NMF SC GNMF RMNMF Semi-NMF TS-NMF
2 100.00±0.00 100.00±0.00 100.00±0.00 100.00±0.00 13.90±14.09 100.00±0.00 100.00±0.00 100.00±0.00 98.55±04.59 100.00±0.00
3 94.88±05.88 95.46±06.09 100.00±0.00 94.88±05.88 20.02±11.58 70.23±19.02 99.41±01.87 92.02±05.91 88.52±13.91 99.41±01.87
4 98.37±04.11 98.37±04.11 98.49±04.79 95.65±10.08 30.35±08.46 65.14±10.88 98.56±03.05 96.98±03.88 91.22±10.90 98.89±02.55
5 97.32±03.90 97.32±03.90 98.05±04.13 97.08±03.79 29.28±06.18 67.91±08.45 98.30±02.75 95.01±05.29 92.12±07.16 99.03±01.69
6 94.15±04.51 94.80±03.15 99.13±02.74 90.33±05.49 27.82±06.28 64.58±09.68 97.53±03.78 91.76±05.45 87.63±06.76 98.80±02.15
7 94.87±04.19 94.64±03.40 96.29±03.44 92.08±04.41 29.38±06.35 59.13±03.78 96.60±04.28 87.12±05.60 90.93±05.83 98.40±01.85
8 90.91±05.29 93.58±01.91 95.96±02.93 89.95±03.31 29.02±03.45 65.20±04.20 91.20±03.97 89.09±05.20 93.00±03.96 98.52±01.44
9 90.86±03.11 91.68±02.16 93.53±01.75 88.37±03.53 31.15±03.24 64.05±09.52 94.06±03.36 89.34±05.09 89.22±06.95 98.34±01.03
10 82.68 86.07 94.16 90.20 29.75 66.82 96.50 93.54 93.38 100.0
Average 93.78 94.66 97.29 93.17 26.74 69.23 96.91 92.76 91.62 99.04
N Accuracy (%)
K-Means PCA RPCA 2DPCA NMF SC GNMF RMNMF Semi-NMF TS-NMF
2 94.50±10.39 94.50±10.39 99.50±01.58 99.50±01.58 73.00±11.11 94.50±10.39 95.50±08.32 96.50±07.84 94.00±10.22 100.00±0.00
3 96.00±05.84 96.00±05.84 96.00±05.84 97.67±06.30 60.67±09.27 95.00±06.89 96.00±05.84 97.33±03.06 95.33±06.13 99.00±01.61
4 96.25