Learning From Hidden Traits: Joint Factor Analysis and Latent Clustering

05/21/2016 · by Bo Yang, et al.

Dimensionality reduction techniques play an essential role in data analytics, signal processing and machine learning. Dimensionality reduction is usually performed in a preprocessing stage that is separate from subsequent data analysis, such as clustering or classification. Finding reduced-dimension representations that are well-suited for the intended task is more appealing. This paper proposes a joint factor analysis and latent clustering framework, which aims at learning cluster-aware low-dimensional representations of matrix and tensor data. The proposed approach leverages matrix and tensor factorization models that produce essentially unique latent representations of the data to unravel latent cluster structure -- which is otherwise obscured because of the freedom to apply an oblique transformation in latent space. At the same time, latent cluster structure is used as prior information to enhance the performance of factorization. Specific contributions include several custom-built problem formulations, corresponding algorithms, and discussion of associated convergence properties. Besides extensive simulations, real-world datasets such as Reuters document data and MNIST image data are also employed to showcase the effectiveness of the proposed approaches.


I Introduction

Many signal processing and machine learning applications nowadays involve high-dimensional raw data that call for appropriate compaction before any further processing. Dimensionality reduction (DR) is often applied before clustering and classification, for example. Matrix and tensor factorization (or factor analysis) plays an important role in DR of matrix and tensor data, respectively. Traditional factorization models, such as the singular value decomposition (SVD) and principal component analysis (PCA), have proven successful in analyzing high-dimensional data – e.g., PCA has been used for noise suppression, feature extraction, and subspace estimation in numerous applications. In recent years, alternative models such as nonnegative matrix factorization (NMF) [2, 3] have drawn considerable interest (also as DR tools), because they tend to produce unique and interpretable reduced-dimension representations. In parallel, tensor factorization for multi-way data continues to gain popularity in the machine learning community, e.g., for social network mining and latent variable modeling [4, 5, 6, 7, 8].

When performing DR or factor analysis, several critical questions frequently arise. First, what type of factor analysis should be considered for producing useful data representations for further processing, e.g., classification and clustering? Intuitively, if the data indeed coalesce in clusters in some low-dimensional representation, then DR should ideally map the input vectors to this particular representation – identifying the right subspace is not enough, since a linear transformation can distort cluster structure (cf. Fig. 1). Therefore, if the data follow a factor analysis model that is unique (e.g., NMF is unique under certain conditions [9, 10]) and the data form clusters in the latent domain, then fitting that factor analysis model will reveal those clusters.

The second question is what kind of prior information can help get better latent representations of data? Using prior information is instrumental for fending against noise and modeling errors in practice, and thus is well-motivated. To this end, various constraints and regularization priors have been considered for matrix and tensor factorization, e.g., sparsity, smoothness, unimodality, total variation, and nonnegativity [11, 12, 13], to name a few.

Fig. 1: The effect of distance distortion introduced by the factor W. Left: latent representations on a 2-dimensional plane. Right: the corresponding data-domain points on a 2-dimensional plane.
Fig. 2: Contrasting data-domain and latent domain distance: The average cosine distance between 2500 pairs of data samples from different clusters. Data are taken from MNIST handwritten digits dataset and Yale Face B dataset, and the latent representations are found via NMF [2].

In this work, we consider using a new type of prior information to assist factor analysis, namely, the latent cluster structure. This is motivated by the fact that dimension-reduced data usually yield better performance in clustering tasks, which suggests that the cluster structure of the data is more pronounced in some latent domain relative to the data domain. Some evidence can be seen in Fig. 2, where we compare the average data-domain and latent-domain cosine distances (the cosine distance between two vectors x and y is defined as 1 - x^T y / (||x||_2 ||y||_2)) of data points from two clusters of image data from the Yale B face image database (available at http://web.mit.edu/emeyers/www/face_databases.html) and the MNIST handwritten digit image database (available at http://yann.lecun.com/exdb/mnist/), where the latent representations are produced via NMF. We see that the average latent-domain distance between the data from two different clusters is significantly larger than the corresponding data-domain distance in both cases. This observation motivates us to make use of such a property to enhance the performance of both factor analysis and clustering.

Using clustering to aid NMF-based DR was considered in [14, 15], where a distance graph of the data points was constructed and used as regularization for NMF – which essentially forces reduced-dimension representations to be far (resp. close) in the latent domain if the high-dimensional vectors are far (resp. close) in the data domain. However, the data-domain distance and the latent-domain distance are not necessarily proportional to each other, as is evident from Fig. 2.

To see this clearly, consider a matrix factorization model X = W H, where each column of X represents a data point and the corresponding column of H is its latent representation. Consider the squared distance between the first two columns of X, i.e., ||X(:, 1) - X(:, 2)||_2^2 = ||W (H(:, 1) - H(:, 2))||_2^2, where ':' stands for all values of the respective argument. On the other hand, the distance between the latent representations of the first two columns of X is given by ||H(:, 1) - H(:, 2)||_2^2. Notice how the matrix W weighs the latent domain distance to produce the data domain distance; also see Fig. 1 for an illustration.
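To make the distortion concrete, here is a small NumPy sketch (illustrative only; the cluster locations, the mixing matrix W, and all variable names are made up for this example). It places two tight clusters in a 2-dimensional latent space and passes them through an oblique W: the latent-domain gap between clusters is large, while the data-domain gap nearly vanishes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two tight clusters in a 2-D latent space (columns are points).
H = np.hstack([rng.normal([5, 0], 0.1, size=(50, 2)).T,
               rng.normal([0, 5], 0.1, size=(50, 2)).T])    # 2 x 100

# An oblique (non-orthogonal) mixing matrix W.
W = np.array([[1.0, 0.95],
              [0.0, 0.05]])
X = W @ H                                                    # data domain

def mean_inter_cluster_gap(A):
    """Mean distance between columns of cluster 1 and columns of cluster 2."""
    d = A[:, :50, None] - A[:, None, 50:]                    # broadcast differences
    return np.mean(np.linalg.norm(d, axis=0))

print("latent-domain inter-cluster distance:", mean_inter_cluster_gap(H))
print("data-domain   inter-cluster distance:", mean_inter_cluster_gap(X))
```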

An earlier line of work [16, 17] that is not as well-known in the signal processing community considered latent domain clustering on the factor H, while taking the other factor (denoted U in the sequel) to be a semi-orthogonal matrix. However, this cannot mitigate the weighting effect brought by W, since orthogonal projection cannot cancel the distorting effect brought in by the left factor W.

Aiming to make full use of the latent cluster structure, we propose a novel joint factor analysis and latent clustering framework in this paper. Our goal is to identify the unique latent representation of the data in some domain which is discriminative for clustering purposes, and at the same time to use the latent cluster structure to help produce more accurate factorization results. We propose several pertinent problem formulations to exemplify the generality and flexibility of the proposed framework, and devise corresponding computational algorithms that build upon alternating optimization, i.e., alternating between factorization and clustering, until convergence. Identifiability of the latent factors plays an essential role in our approach, as it counteracts the distance distortion effect illustrated in Fig. 1. This is a key difference from relevant prior work such as [16, 17, 18].

We begin with K-means latent clustering using several identifiable factorization models of matrix and tensor data, namely, nonnegative matrix factorization, the convex geometry (CG)-based matrix factorization model [19], and low-rank tensor factorization or parallel factor analysis (PARAFAC) [20]. Carefully designed optimization criteria are proposed, detailed algorithms are derived, and convergence properties are discussed. We next consider an extension to joint factorization and subspace clustering, which is motivated by the popularity of subspace clustering [21]. The proposed algorithms are carefully examined in judiciously designed simulations. Real experiments using document, handwritten digit, and three-way E-mail datasets are also used to showcase the effectiveness of the proposed approach.

Notation: Capital letters with an underscore denote tensors; bold capital letters denote matrices; ⊙ denotes the Khatri-Rao (column-wise Kronecker) product; k_A denotes the Kruskal rank of matrix A; A^T denotes the transpose of A and A^† denotes the pseudo-inverse of A; A(i, :) and A(:, j) denote the i-th row and the j-th column of A, respectively; X(:, :, k) denotes the k-th matrix slab of a three-way tensor X taken perpendicular to the third mode, and likewise for slabs taken perpendicular to the second and first modes, X(:, j, :) and X(i, :, :), respectively; ||x||_0 counts the non-zero elements of the vector x; calligraphic letters denote sets; ||·||_F, ||·||_1, and ||·||_2 denote the Frobenius norm, l1-norm, and l2-norm, respectively; R(A) denotes the range space of matrix A.

II Background

In this section, we briefly review the pertinent prior art in latent clustering and factor analysis.

II-A Clustering and Latent Clustering

Let us begin with the classical K-means problem [22]: Given a data matrix X ∈ R^{M×N}, we wish to group the columns of X into K clusters; i.e., we wish to assign the column indices of X to clusters J_1, ..., J_K such that J_1 ∪ ... ∪ J_K = {1, ..., N}, J_k ∩ J_j = ∅ for k ≠ j, and the sum of intra-cluster dispersions is minimized. The K-means problem can be written in optimization form as follows:

min_{M, S}  ||X - M S||_F^2
s.t.  S(k, l) ∈ {0, 1},  1^T S(:, l) = 1, ∀ l,        (1)

where the matrix S ∈ {0, 1}^{K×N} is an assignment matrix, S(k, l) = 1 means that X(:, l) belongs to the k-th cluster, and M(:, k) denotes the centroid of the k-th cluster. When the data dimension M is very large and/or there are redundant features (e.g., highly correlated rows of X), then it makes sense to perform DR either before or together with clustering. Reduced K-means (RKM) [16] is a notable joint DR and latent clustering method that is based on the following formulation:

min_{U, M, S}  ||X - U M S||_F^2
s.t.  U^T U = I,  S(k, l) ∈ {0, 1},  1^T S(:, l) = 1, ∀ l,        (2)

where U ∈ R^{M×F} is a tall semi-orthogonal matrix and M ∈ R^{F×K} now collects the latent centroids. Essentially, RKM aims at factoring X into U and H = M S, while enforcing a cluster structure on the columns of H – which is conceptually joint factorization and latent clustering. However, such a formulation loses generality since F (the rank of the model) cannot be smaller than K (the number of clusters); otherwise, the whole problem is ill-posed. Note that in practice, the number of clusters and the rank of X are not necessarily related; forcing a relationship between them (e.g., F ≥ K) can be problematic from an application modeling perspective. In addition, the cluster structure is imposed as a straitjacket in latent space (no residual deviation, modeled by E in H = M S + E, is allowed in (2)), and this is too rigid in some applications.

Another notable formulation that is related to but distinct from RKM is factorial K-means (FKM) [17]:

min_{U, M, S}  ||U^T X - M S||_F^2
s.t.  U^T U = I,  S(k, l) ∈ {0, 1},  1^T S(:, l) = 1, ∀ l.        (3)

FKM seeks a 'good projection' U of the data such that the projected data can be better clustered, and essentially performs clustering on U^T X if X admits a low-dimensional factorization as X ≈ U (U^T X). FKM does not force a coupling between F and K, and takes the latent modeling error into consideration. On the other hand, FKM ignores the part of X that is outside the chosen subspace, so it seeks some discriminative subspace where the projections cluster well, but ignores variation in the orthogonal complement of R(U), since

||X - U M S||_F^2 = ||U^T X - M S||_F^2 + ||Ū^T X||_F^2,

where Ū is a basis for the orthogonal complement of R(U). So the cost of RKM equals the cost function of FKM plus a penalty for the part of X in the orthogonal complement space. FKM was later adapted in [18], where a similar formulation was proposed to combine orthogonal factorization and sparse subspace clustering.
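The decomposition above is easy to verify numerically. The following sketch (random data and hypothetical variable names, purely for illustration) draws a random orthonormal U together with a basis U_perp for its orthogonal complement and checks that the RKM cost equals the FKM cost plus the energy of X in the orthogonal complement.

```python
import numpy as np

rng = np.random.default_rng(1)
M_dim, N, F, K = 8, 30, 3, 4

X = rng.random((M_dim, N))
Q, _ = np.linalg.qr(rng.standard_normal((M_dim, M_dim)))
U, U_perp = Q[:, :F], Q[:, F:]             # orthonormal basis and its complement
Mc = rng.random((F, K))                    # latent centroids
S = np.zeros((K, N)); S[rng.integers(K, size=N), np.arange(N)] = 1.0

rkm_cost = np.linalg.norm(X - U @ Mc @ S, 'fro') ** 2
fkm_cost = np.linalg.norm(U.T @ X - Mc @ S, 'fro') ** 2
residual = np.linalg.norm(U_perp.T @ X, 'fro') ** 2
print(rkm_cost, fkm_cost + residual)       # the two numbers match up to roundoff
```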

Seeking an orthogonal projection may not be helpful in terms of revealing the cluster structure of H, since U^T X = (U^T W) H and U^T W is still a general (oblique) linear transformation that acts on the columns of H, potentially distorting cluster structure, even if U is a basis for R(W).

II-B Identifiable Factorization Models

Unlike RKM and FKM, which seek an orthogonal factor or projection matrix U, in this work we propose to perform latent clustering using identifiable low-rank factorization models for matrices and tensors. The main difference in our approach is that we exploit identifiability of the latent factors to help unravel the hidden cluster structure, and in return improve factorization accuracy at the same time. This is sharply different from orthogonal projection models such as RKM and FKM. Some important factorization models are succinctly reviewed next.

II-B1 Nonnegative Matrix Factorization (NMF)

Low-rank matrix factorization models are in general non-unique, but nonnegativity can help in this regard [9], [10]. Loosely speaking, if X = W H, where W ∈ R^{M×F} and H ∈ R^{F×N} are (element-wise) nonnegative and have sufficiently sparse columns and sufficiently spread rows (over the nonnegative orthant), then any nonnegative (W̃, H̃) satisfying X = W̃ H̃ can be expressed as W̃ = W Π D and H̃ = D^{-1} Π^T H, where D is a full-rank diagonal nonnegative matrix and Π is a permutation matrix – i.e., W and H are essentially unique, or, identifiable up to a common column permutation and scaling/counter-scaling. In practice, NMF is posed as a bilinear optimization problem,

min_{W ≥ 0, H ≥ 0}  ||X - W H||_F^2.        (4)

NMF is an NP-hard optimization problem. Nevertheless, many algorithms give satisfactory results in practice; see [23]. Notice that the plain NMF formulation (4) may yield arbitrary nonnegative scaling of the columns of W and the rows of H due to the inherent scaling/counter-scaling ambiguity, which can distort distances. This can be avoided by adding a norm-balancing penalty:

min_{W ≥ 0, H ≥ 0}  ||X - W H||_F^2 + μ (||W||_F^2 + ||H||_F^2).        (5)

It can be easily shown [11] that an optimal solution of (5) must satisfy ||W(:, f)||_2 = ||H(f, :)||_2 for all f.
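As a rough illustration of the norm-balancing idea, the sketch below fits the penalized objective in (5) with Tikhonov-regularized multiplicative updates. This solver choice is ours (not necessarily the one used in the paper), and the function name and parameters are hypothetical; it only illustrates that the penalty drives the column norms of W and the row norms of H toward each other.

```python
import numpy as np

def nmf_balanced(X, F, mu=1e-2, n_iter=500, eps=1e-12, seed=0):
    """NMF with the norm-balancing penalty of (5), via Tikhonov-regularized
    multiplicative updates (one of many possible solvers)."""
    rng = np.random.default_rng(seed)
    M, N = X.shape
    W, H = rng.random((M, F)), rng.random((F, N))
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + mu * H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + mu * W + eps)
    return W, H

X = np.random.default_rng(1).random((40, 60))
W, H = nmf_balanced(X, F=5)
print(np.linalg.norm(W, axis=0))   # column norms of W ...
print(np.linalg.norm(H, axis=1))   # ... roughly track the row norms of H
```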

II-B2 Volume Minimization (VolMin)-Based Factorization

NMF has been widely used because it works well in many applications. If W or H is dense, however, then the NMF criterion in (4) cannot identify the true factors, which is a serious drawback for several applications. Recent research has shown that this challenge can be overcome using Volume Minimization (VolMin)-based structured matrix factorization methods [19, 24, 25, 26]. In the VolMin model, the columns of H are assumed to live in the unit simplex, i.e., 1^T H(:, l) = 1 and H(:, l) ≥ 0 for all l, which is meaningful in applications like document clustering [24], hyperspectral imaging [27], and probability mixture models [28]. Under this structural constraint, W and H are sought by finding a minimum-volume simplex which encloses all the data columns X(:, l). In optimization form, the VolMin problem can be expressed as follows:

min_{W, H}  vol(W)
s.t.  X = W H,  1^T H(:, l) = 1,  H(:, l) ≥ 0, ∀ l,        (6)

where vol(W) measures the volume of the simplex spanned by the columns of W and is usually a function associated with the determinant of W^T W or W [19, 24, 25, 26]. Notably, it was proven in [19] that the optimal solution of (6) is W^* = W Π and H^* = Π^T H, where Π is again a permutation matrix, if the H(:, l)'s are sufficiently spread in the probability simplex and W is full column rank. Notice that W can be dense and even contain negative or complex elements, and still uniqueness can be guaranteed via VolMin.

II-B3 Parallel Factor Analysis (PARAFAC)

For tensor data (i.e., data indexed by more than two indices), low-rank factorization is unique under mild conditions [29, 30]. Low-rank tensor decomposition is known as parallel factor analysis (PARAFAC) or canonical (polyadic) decomposition (CANDECOMP or CPD). For example, a three-way tensor X of rank F with loading factors A, B, C can be written as

X(i, j, k) = Σ_{f=1}^{F} A(i, f) B(j, f) C(k, f).        (7)

If k_A + k_B + k_C ≥ 2F + 2, where k_A denotes the Kruskal rank of A, then any (Ã, B̃, C̃) satisfying (7) is related to (A, B, C) via Ã = A Π Λ_1, B̃ = B Π Λ_2, C̃ = C Π Λ_3, where Π is a permutation matrix and Λ_1, Λ_2, Λ_3 are diagonal scaling matrices such that Λ_1 Λ_2 Λ_3 = I. Note that A, B, and C do not need to be full column rank for ensuring identifiability.

Making use of the Khatri-Rao product, the tensor factorization problem can be written as

min_{A, B, C}  ||X_(1) - (C ⊙ B) A^T||_F^2,        (8)

where X_(1) is a matrix unfolding of the tensor X. There are three matrix unfoldings of this three-way tensor that admit similar model expressions (because one can permute the modes and A, B, C accordingly). As in the NMF case, PARAFAC is NP-hard in general, but there exist many algorithms offering good performance and flexibility to incorporate constraints, e.g., [31], [32].
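For reference, a minimal unconstrained alternating least squares (ALS) sketch for fitting the model in (8) is given below. The unfolding and Khatri-Rao conventions are chosen to be internally consistent (the paper may use a different but equivalent ordering), and all names are illustrative.

```python
import numpy as np

def khatri_rao(C, B):
    """Khatri-Rao (column-wise Kronecker) product; row index runs over (k, j)."""
    F = B.shape[1]
    return (C[:, None, :] * B[None, :, :]).reshape(-1, F)

def parafac_als(X, F, n_iter=100, seed=0):
    """Minimal unconstrained ALS sketch for the CPD fitting problem (8);
    X is a numpy array of shape (I, J, K)."""
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((d, F)) for d in (I, J, K))
    X1 = X.transpose(2, 1, 0).reshape(K * J, I)   # model: (C kr B) A^T
    X2 = X.transpose(2, 0, 1).reshape(K * I, J)   # model: (C kr A) B^T
    X3 = X.transpose(1, 0, 2).reshape(J * I, K)   # model: (B kr A) C^T
    for _ in range(n_iter):
        A = np.linalg.lstsq(khatri_rao(C, B), X1, rcond=None)[0].T
        B = np.linalg.lstsq(khatri_rao(C, A), X2, rcond=None)[0].T
        C = np.linalg.lstsq(khatri_rao(B, A), X3, rcond=None)[0].T
    return A, B, C
```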

Our work brings these factor analysis models together with a variety of clustering tools, ranging from K-means to K-subspace [33, 21] clustering, to devise novel joint factorization and latent clustering formulations and companion algorithms that outperform the prior art – including two-step and joint approaches such as RKM and FKM.

III Proposed Approach

III-A Problem Formulation

Suppose that X ≈ W H, for some element-wise nonnegative W ∈ R^{M×F} and H ∈ R^{F×N}, where the columns of H are clustered around K centroids. A natural problem formulation is then

min_{W, H, M, S}  ||X - W H||_F^2 + λ ||H - M S||_F^2
s.t.  W ≥ 0,  H ≥ 0,  S(k, l) ∈ {0, 1},  1^T S(:, l) = 1, ∀ l,        (9)

where the second term is a K-means penalty that enforces the clustering prior on the columns of H, and the tuning parameter λ balances data fidelity and the clustering prior. This formulation admits a Maximum A Posteriori (MAP) interpretation, if X = W H + V and H = M S + E, where the data-domain noise V and the latent-domain noise E are both drawn from i.i.d. (independent and identically distributed) Gaussian distributions and are independent of each other, with variances σ_V^2 and σ_E^2, respectively, and λ = σ_V^2 / σ_E^2.

Assuming that W and H satisfy the NMF identifiability conditions, and that the noise is negligible (i.e., the NMF is exact), H will be exactly recovered and thus clustering will be successful as well. In practice, of course, the factorization model (DR) will be imperfect, and so the clustering penalty will help obtain a more accurate factorization, and in turn better clustering. Note that this approach decouples F and K, because it uses a clustering penalty instead of the hard constraint that RKM uses, which results in rank deficiency when F < K.

Formulation (9) may seem intuitive and well-motivated from a MAP point of view, but there are some caveats. These are discussed next.

III-B Design Considerations

The first problem is scaling. In (9), the regularization on H implicitly favors a small-norm H, since if H is small, then simply taking M = 0 works. On the other hand, the first term is invariant with respect to scaling of H, so long as this is compensated in W. To prevent this, we introduce norm regularization for W, resulting in

min_{W, H, M, S}  ||X - W H||_F^2 + λ ||H - M S||_F^2 + η ||W||_F^2
s.t.  W ≥ 0,  H ≥ 0,  S(k, l) ∈ {0, 1},  1^T S(:, l) = 1, ∀ l.        (10)

Note that ||W||_1 can be used instead of ||W||_F^2 to encourage sparsity, if desired.

Fig. 3: Normalization in the latent domain helps bring data points together, creating tight cluster structures. Figure generated by taking a plain NMF of two clusters of documents from the Reuters-21578 dataset. Left: 2-dimensional representations of the documents; Right: the normalized representations.

Another consideration is more subtle. In many applications, such as document clustering, it has been observed [34, 35] that the correlation coefficient or cosine similarity is more appropriate for clustering than Euclidean distance. We have observed that this is also true for latent clustering, which speaks for the need for normalization in the latent domain. Corroborating evidence is provided in Fig. 3, which shows the latent representations of two document clusters from the Reuters-21578 dataset. These representations were extracted by plain NMF with F = 2. In Fig. 3, the latent representations on the left are difficult to cluster, especially those close to the origin, but after projection onto the unit 2-norm ball (equivalent to using cosine similarity to cluster the points on the left) the cluster structure becomes more evident on the right.

If K-means is applied in the data domain, the cosine similarity metric can be incorporated easily: by normalizing the data columns using their 2-norms in advance, K-means effectively uses cosine dissimilarity as the distance measure. In our context, however, naive adoption of the cosine similarity for the clustering part can complicate things, since H changes in every iteration. To accommodate this, we reformulate the problem as follows:

min_{W, H, D, M, S}  ||X - W H D||_F^2 + λ ||H - M S||_F^2 + η ||W||_F^2
s.t.  ||H(:, l)||_2 = 1, ∀ l,  W ≥ 0,  H ≥ 0,  D diagonal,
      S(k, l) ∈ {0, 1},  1^T S(:, l) = 1, ∀ l.        (11)

Introducing the diagonal matrix D is crucial: It allows us to fix the columns of H onto the unit 2-norm ball without loss of generality of the factorization model.
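The 'no loss of generality' claim is easy to sanity-check: any factorization W H can be rewritten with unit 2-norm latent columns once the column scales are collected in a diagonal matrix. A tiny sketch, assuming (as in the reconstruction above) that D is an N x N diagonal acting on the right:

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.random((20, 4))
H = rng.random((4, 50))
X = W @ H

# Pull the column scales of H out into a diagonal D, leaving unit 2-norm
# latent representations -- the factorization itself is unchanged.
scales = np.linalg.norm(H, axis=0)
H_unit = H / scales                       # columns now on the unit 2-norm ball
D = np.diag(scales)
print(np.allclose(X, W @ H_unit @ D))     # True: no loss of generality
```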

The formulation in (11) can be generalized to tensor factorization models. Consider a three-way tensor X with loading factors A ∈ R^{I×F}, B, and C. Assuming that the rows of A can be clustered into K groups, the joint tensor factorization and A-mode latent clustering problem can be formulated as

min_{A, B, C, D, M, S}  ||X_(1) - (C ⊙ B) A^T D||_F^2 + λ ||A^T - M S||_F^2 + η (||B||_F^2 + ||C||_F^2)
s.t.  ||A(i, :)||_2 = 1, ∀ i,  A ≥ 0,  B ≥ 0,  C ≥ 0,  D diagonal,
      S(k, i) ∈ {0, 1},  1^T S(:, i) = 1, ∀ i,        (12)

where the regularization terms η ||B||_F^2 and η ||C||_F^2 are there to control scaling. If one wishes to perform latent clustering in more modes, then the norm regularization can be replaced by K-means regularization for the B and/or C modes as well. It is still important to have norm regularization for those modes that do not have K-means regularization.

An interesting point worth mentioning is that if one adopts VolMin as the factorization criterion, then introducing D is not necessary, since VolMin already confines the columns of H to the unit 1-norm ball. We also do not need to regularize with the norm of W, since in this case the scaling of W cannot be arbitrary. This yields

min_{W, H, M, S}  ||X - W H||_F^2 + λ ||H - M S||_F^2 + η vol(W)
s.t.  H ≥ 0,  1^T H(:, l) = 1, ∀ l,  S(k, l) ∈ {0, 1},  1^T S(:, l) = 1, ∀ l.        (13)

III-C Extension: Joint Factor Analysis and Subspace Clustering

In addition to considering Problems (11)-(13), we also consider their subspace clustering counterparts, i.e., with the K-means penalties replaced by subspace clustering ones. Subspace clustering deals with data that come from a union of subspaces [36]. Specifically, consider H = [H_1, ..., H_K] (after column permutation), where H_k = H(:, J_k), J_k denotes an index set of a subset of H's columns, and dim(R(H_k)) = r_k. Also assume that rank(H) = Σ_{k=1}^{K} r_k, which is the independent subspace model [21]. Then, it is evident that

X(:, J_k) = W H_k,  k = 1, ..., K,

where J_k also denotes the set of indices of the columns of X in cluster k, i.e., X(:, J_k) = W H(:, J_k).

Under this data structure, consider a simple example: if K = 2, H_1 = [H̃_1^T, 0]^T, and H_2 = [0, H̃_2^T]^T, then H = [H_1, H_2] is a block-diagonal matrix whose nonzero diagonal blocks are H̃_1 and H̃_2. From this illustrative example, it is easy to see that the columns of X also come from different subspaces, namely those spanned by W H_1 and W H_2. The difference is that the subspaces spanned by H_1 and H_2 are not only independent, but also orthogonal to each other – which makes them much easier to distinguish. This suggests that performing subspace clustering on H is more appealing.

In [18], Patel et al. applied joint dimensionality reduction and subspace clustering on U^T X using a semi-orthogonal U, which is basically the same idea as FKM, but with subspace clustering instead of K-means. However, as we discussed before, when W and H are identifiable, taking advantage of the identifiability of the factors can further enhance performance and should therefore be preferred.

Many subspace clustering formulations, such as those in [37], [38], can be integrated into our framework, but we limit ourselves to simple K-subspace clustering, and assume that the number of subspaces and the subspace dimensions are known, for simplicity. Taking joint NMF and K-subspace clustering as an example, we can express the problem as

min_{W, H, S, {U_k}, {μ_k}, {y_l}}  ||X - W H||_F^2 + λ Σ_{l=1}^{N} Σ_{k=1}^{K} S(k, l) ||H(:, l) - U_k y_l - μ_k||_2^2 + η ||W||_F^2
s.t.  W ≥ 0,  H ≥ 0,  U_k^T U_k = I, ∀ k,  S(k, l) ∈ {0, 1},  1^T S(:, l) = 1, ∀ l,        (14)

where S is again a cluster assignment variable, U_k denotes an orthogonal basis of the columns of H which lie in the k-th cluster, μ_k is a mean vector of the k-th cluster, and y_l collects the coordinates of the l-th latent vector in its subspace. As in the VolMin case, subspace clustering is also insensitive to the scaling of H, since the metric for measuring distance to a cluster is the distance to a subspace. Therefore, the constraints that were added for normalizing H can be removed.

IV Optimization Algorithms

In this section, we provide algorithms for the various problems formulated in the previous section. The basic idea is alternating optimization: breaking the variables down into blocks and solving the partial problems one by one. Updating strategies and convergence properties are also discussed.

Iv-a Joint NMF and -means (JNKM)

We first consider (11) and (12). For ease of exposition, we use (11) as a working example; the generalization to Problem (12) is straightforward. Our basic strategy is to alternate between updating W, H, D, S, and M one at a time, while fixing the others. For the subproblems w.r.t. S and M, we propose to use the corresponding (alternating) steps of classical K-means [39]. The minimization w.r.t. H needs more effort, due to the unit-norm and nonnegativity constraints. Here, we propose to employ a variable-splitting strategy. Specifically, we consider the following optimization surrogate:

min_{W, Z, H, D, M, S}  ||X - W Z D||_F^2 + λ ||H - M S||_F^2 + η ||W||_F^2 + μ ||Z - H||_F^2
s.t.  ||H(:, l)||_2 = 1, ∀ l,  W ≥ 0,  Z ≥ 0,  D diagonal,
      S(k, l) ∈ {0, 1},  1^T S(:, l) = 1, ∀ l,        (15)

where μ > 0 and Z is a slack variable. Note that Z is introduced to 'split' the effort of dealing with the nonnegativity constraint and the unit-norm constraint in two different subproblems. Notice that when μ → ∞, (15) is equivalent to (11); in practice, a large μ can be employed to enforce Z ≈ H.

Problem (15) can be handled as follows. First, Z can be updated by solving

Z ← arg min_{Z ≥ 0}  ||X - W Z D||_F^2 + μ ||Z - H||_F^2,        (16)

which can be easily converted to a nonnegative least squares (NLS) problem, and efficiently solved to optimality by many existing methods. Here, we employ an alternating direction method of multipliers (ADMM)-based [40] algorithm to solve Problem (16). The update of W, i.e.,

W ← arg min_{W ≥ 0}  ||X - W Z D||_F^2 + η ||W||_F^2,        (17)

is also an NLS problem. The subproblem w.r.t. H admits a closed-form solution,

H(:, l) ← H̃(:, l) / ||H̃(:, l)||_2,  ∀ l,        (18)

where H̃ = λ M S + μ Z. The update of D is also closed-form,

D(l, l) ← (W Z(:, l))^T X(:, l) / ||W Z(:, l)||_2^2,  ∀ l.        (19)

The update of S comes from the K-means algorithm. Let k_l = arg min_k ||H(:, l) - M(:, k)||_2. Then

S(k, l) ← 1 if k = k_l,  and  S(k, l) ← 0 otherwise.        (20)

The update of M also comes from the K-means algorithm:

M(:, k) ← Σ_{l: S(k, l) = 1} H(:, l) / Σ_{l} S(k, l).        (21)

The overall algorithm alternates between updates (16)-(21).
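For concreteness, here is a rough NumPy sketch of one possible implementation of the alternating scheme (16)-(21). It is not the paper's implementation: the exact NLS/ADMM sub-solvers are replaced by unconstrained least squares followed by projection onto the nonnegative orthant, and every function and variable name is ours.

```python
import numpy as np

def jnkm_sketch(X, F, K, lam=1.0, eta=0.1, mu=100.0, n_iter=200, seed=0):
    """Approximate JNKM loop: Z, W, H, D, S, M updates in the spirit of (16)-(21)."""
    rng = np.random.default_rng(seed)
    Md, N = X.shape
    W = rng.random((Md, F))
    H = rng.random((F, N)); H /= np.linalg.norm(H, axis=0)
    Z = H.copy()
    d = np.ones(N)                                    # diagonal of D
    Mc = H[:, rng.choice(N, K, replace=False)]        # latent centroids
    labels = rng.integers(K, size=N)
    for _ in range(n_iter):
        # (16): Z-update (projected LS stand-in for the NLS subproblem)
        G = W.T @ W
        rhs = (W.T @ X) * d + mu * H                  # F x N
        lhs = d[:, None, None] ** 2 * G + mu * np.eye(F)
        Z = np.maximum(np.linalg.solve(lhs, rhs.T[:, :, None])[:, :, 0].T, 0.0)
        # (17): W-update (projected ridge LS stand-in)
        ZD = Z * d
        W = np.maximum(X @ ZD.T @ np.linalg.inv(ZD @ ZD.T + eta * np.eye(F)), 0.0)
        # (18): H-update -- closed form, then normalize columns
        H = lam * Mc[:, labels] + mu * Z
        H /= np.linalg.norm(H, axis=0) + 1e-12
        # (19): D-update -- per-column least-squares scale
        WZ = W @ Z
        d = (WZ * X).sum(axis=0) / ((WZ ** 2).sum(axis=0) + 1e-12)
        # (20)-(21): K-means assignment and centroid steps on H
        dist = ((H[:, :, None] - Mc[:, None, :]) ** 2).sum(axis=0)   # N x K
        labels = dist.argmin(axis=1)
        for k in range(K):
            if np.any(labels == k):
                Mc[:, k] = H[:, labels == k].mean(axis=1)
    return W, H, d, Mc, labels
```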

IV-B Joint NTF and K-means (JTKM)

As in the JNKM case, we employ variable splitting and introduce a slack variable Z for A^T in (12):

min_{A, B, C, D, Z, M, S}  ||X_(1) - (C ⊙ B) Z D||_F^2 + λ ||A^T - M S||_F^2 + η (||B||_F^2 + ||C||_F^2) + μ ||Z - A^T||_F^2
s.t.  ||A(i, :)||_2 = 1, ∀ i,  Z ≥ 0,  B ≥ 0,  C ≥ 0,  D diagonal,
      S(k, i) ∈ {0, 1},  1^T S(:, i) = 1, ∀ i.        (22)

The algorithm for dealing with Problem (22) is similar to that of the NMF case. By treating X_(1) as X, (C ⊙ B) as W, and A^T as H, the updates of A, S, M, and D are the same as those in the previous section. To update B and C, we make use of the other two matrix unfoldings:

B ← arg min_{B ≥ 0}  ||X_(2) - (C ⊙ Ã) B^T||_F^2 + η ||B||_F^2,        (23a)
C ← arg min_{C ≥ 0}  ||X_(3) - (B ⊙ Ã) C^T||_F^2 + η ||C||_F^2,        (23b)

where Ã = (Z D)^T. These are nonnegative linear least squares problems, and thus can be efficiently solved.

IV-C Joint VolMin and K-means (JVKM)

For VolMin-based factorization, one major difficulty is dealing with the volume measure vol(W), which is usually defined via det(W^T W) [25, 19, 24]. If clustering is the ultimate objective, however, we can employ a cruder volume measure for the sake of computational simplicity. With this in mind, we propose to employ the following approximation of the simplex volume [41]:

V(W) = Σ_{f=1}^{F} Σ_{j=f+1}^{F} ||W(:, f) - W(:, j)||_2^2,

where F is the number of columns of W. The regularizer V(W) measures the volume of the simplex that is spanned by the columns of W by simply measuring the distances between its vertices. This approximation is coarse, but reasonable. Hence, Problem (13) can be tackled using a four-block BCD, i.e.,

W ← arg min_{W}  ||X - W H||_F^2 + η V(W),        (24a)
H ← arg min_{H ≥ 0, 1^T H(:, l) = 1, ∀ l}  ||X - W H||_F^2 + λ ||H - M S||_F^2.        (24b)

Note that Problem (24a) is a convex quadratic problem that has a closed-form solution, and Problem (24b) is a simplex-constrained least squares problem that can be solved efficiently by many solvers. The updates for S and M are still given by (20) and (21).
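Two pieces of this scheme are easy to write down explicitly: the pairwise-distance volume surrogate and the resulting closed-form W-step of (24a), since the surrogate can be written as tr(W (F I - 1 1^T) W^T). The snippet below is a sketch under the reconstruction used in this section; names are illustrative.

```python
import numpy as np

def volmin_surrogate(W):
    """Pairwise-distance volume surrogate: sum_{f < j} ||W(:,f) - W(:,j)||_2^2."""
    F = W.shape[1]
    G = W.T @ W
    return F * np.trace(G) - np.sum(G)

def jvkm_w_update(X, H, eta):
    """Closed-form W-step of (24a): the surrogate equals tr(W (F I - 1 1^T) W^T),
    so the first-order condition is linear in W."""
    F = H.shape[0]
    L = F * np.eye(F) - np.ones((F, F))
    return X @ H.T @ np.linalg.inv(H @ H.T + eta * L)
```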

IV-D Joint NMF and K-subspace (JNKS)

Similar to JNKM, the updates of W and H in (14) are both NLS problems, and can be easily handled. To update the subspaces and the coefficients, we need to solve a K-subspace clustering problem [21]:

min_{{U_k, μ_k}, {y_l}}  Σ_{l=1}^{N} Σ_{k=1}^{K} S(k, l) ||H(:, l) - U_k y_l - μ_k||_2^2
s.t.  U_k^T U_k = I, ∀ k.        (25)

Let H_k denote the matrix collecting the columns of H in cluster k, N_k the number of such columns, and Y_k the corresponding coordinates. We can equivalently write (25) as

min_{{U_k, μ_k, Y_k}}  Σ_{k=1}^{K} ||H_k - U_k Y_k - μ_k 1^T||_F^2
s.t.  U_k^T U_k = I, ∀ k.        (26)

It can be readily seen that the update of each subspace is independent of the others. For cluster k, we first remove its center, and then take an SVD:

μ_k ← (1/N_k) H_k 1,        (27a)
H̄_k ← H_k - μ_k 1^T,        (27b)
[Q_k, Σ_k, V_k] ← svd(H̄_k),        (27c)
U_k ← Q_k(:, 1:r_k),  Y_k ← U_k^T H̄_k.        (27d)

To update the subspace assignment S, we solve

min_{S}  Σ_{l=1}^{N} Σ_{k=1}^{K} S(k, l) ||(I - U_k U_k^T)(H(:, l) - μ_k)||_2^2
s.t.  S(k, l) ∈ {0, 1},  1^T S(:, l) = 1, ∀ l.        (28)

With d_{k,l} = ||(I - U_k U_k^T)(H(:, l) - μ_k)||_2 denoting the distance of H(:, l) to the k-th subspace, the update of S is given by [21]

S(k, l) ← 1 if k = arg min_{j} d_{j,l},  and  S(k, l) ← 0 otherwise.        (29)
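A compact sketch of one K-subspace alternation on the latent factor, mirroring (26)-(29): per-cluster mean removal and truncated SVD, followed by reassignment using the residual distance to each affine subspace. It assumes every cluster is currently non-empty and that all subspaces share the same dimension r; the function and variable names are ours.

```python
import numpy as np

def k_subspace_step(H, labels, K, r):
    """One K-subspace alternation on the columns of H: fit an affine subspace
    (mean + r-dim basis) per cluster via SVD, then reassign each column to the
    nearest subspace.  Assumes every cluster is currently non-empty."""
    F, N = H.shape
    bases, means = [], []
    for k in range(K):
        Hk = H[:, labels == k]
        mu = Hk.mean(axis=1, keepdims=True)
        U, _, _ = np.linalg.svd(Hk - mu, full_matrices=False)
        bases.append(U[:, :r]); means.append(mu)
    dists = np.empty((K, N))
    for k in range(K):
        R = H - means[k]                      # broadcast mean removal
        dists[k] = np.linalg.norm(R - bases[k] @ (bases[k].T @ R), axis=0)
    return dists.argmin(axis=0), bases, means
```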

IV-E Convergence Properties

The proposed algorithms, i.e., JNKM, JVKM, JTKM, and their subspace clustering counterparts, share some similar traits. Specifically, all algorithms solve the conditional updates (block minimization problems) optimally. From this it follows that

Property 1

JNKM, JTKM, and JVKM yield non-increasing cost sequences in terms of their respective cost functions in (15), (22), and (13). The same property is true for their subspace clustering counterparts.

Monotonicity is important in practice – it ensures that an algorithm makes progress in every iteration towards the corresponding design objective. In addition, it leads to convergence of the cost sequence when the cost function is bounded from below (as in our case), and such convergence can be used for setting up stopping criteria in practice.

In terms of convergence of the iterates (the sequence of 'interim' solutions), when using a cyclical block updating strategy all algorithms fall under the Gauss-Seidel block coordinate descent (BCD) framework; however, the related convergence results [42] cannot be applied, because some blocks involve nonconvex constraints. If one uses a more expensive (non-cyclical) update schedule, then the following holds.

Property 2

If the blocks are updated following the maximum block improvement (MBI) strategy (also known as the Gauss-Southwell rule), then every limit point of the solution sequence produced by JNKM, JTKM, and JVKM is a block-wise minimum of (15), (22), and (13), respectively. A block-wise minimum is a point where it is not possible to reduce the cost by changing any single block while keeping the others fixed.

The MBI update rule [43] is similar to BCD, but it does not update the blocks cyclically. Instead, it tries updating each block in each iteration, but actually updates only the block that brings the biggest cost reduction. The MBI result can be applied here since we solve each block subproblem to optimality. The drawback is that MBI is a lot more expensive per iteration. If one is interested in obtaining a converging subsequence, then a practical approach is to start with cyclical updates, and then only use MBI towards the end for ‘last mile’ refinement. We use Gauss-Seidel in our simulations, because it is much faster than MBI.
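The MBI rule itself is simple to express: tentatively apply every block update, evaluate the cost after each, and commit only the one with the largest decrease. A generic sketch follows (the dictionary-of-update-functions interface is hypothetical, purely for illustration).

```python
def mbi_step(blocks, cost, state):
    """Maximum block improvement: try every block update on a copy of the
    current state, keep only the update with the largest cost reduction."""
    base = cost(state)
    best_state, best_drop = state, 0.0
    for update in blocks.values():
        candidate = update(dict(state))       # tentative update of one block
        drop = base - cost(candidate)
        if drop > best_drop:
            best_state, best_drop = candidate, drop
    return best_state, best_drop
```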

V Synthetic Data Study

In this section, we use synthetic data to explore scenarios where the proposed algorithms show promising performance relative to the prior art. We will consider real datasets that are commonly used as benchmarks in the next section.

Our algorithms simultaneously estimate the latent factors and cluster the data, so we need a metric to assess estimation accuracy, and another for clustering accuracy. For the latter, we use the ratio of correctly clustered points over the total number of points (in percent; the higher the better) [15, 14]. Taking into account the inherent column permutation and scaling indeterminacy in estimating the latent factor matrices, estimation accuracy is measured via the following matched mean-square-error measure (MSE, for short):

MSE = min_{π ∈ Π_F, c_f ∈ {±1}}  (1/F) Σ_{f=1}^{F} || W(:, f)/||W(:, f)||_2 - c_f Ŵ(:, π(f))/||Ŵ(:, π(f))||_2 ||_2^2,

where W and Ŵ represent the ground-truth factor and the estimated factor, respectively, Π_F is the set of all permutations of the set {1, ..., F}, π is there to resolve the permutation ambiguity, and c_f is there to resolve the sign ambiguity when there is no nonnegativity constraint.
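The matched MSE above can be computed by brute force over column permutations when F is small. A sketch (the sign handling applies only when no nonnegativity constraint is enforced; the function name is ours):

```python
import numpy as np
from itertools import permutations

def matched_mse(W_true, W_hat, allow_sign_flip=True):
    """Matched MSE: normalize columns, then search over column permutations
    (and per-column signs, if allowed) for the best match."""
    F = W_true.shape[1]
    A = W_true / np.linalg.norm(W_true, axis=0)
    B = W_hat / np.linalg.norm(W_hat, axis=0)
    best = np.inf
    for perm in permutations(range(F)):
        Bp = B[:, list(perm)]
        if allow_sign_flip:
            signs = np.where((A * Bp).sum(axis=0) >= 0, 1.0, -1.0)
        else:
            signs = 1.0
        best = min(best, np.mean(((A - signs * Bp) ** 2).sum(axis=0)))
    return best
```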

V-A Joint Matrix Factorization and Latent K-means

We generate synthetic data according to

X = W H + V,   H = M S + E,        (30)

where X ∈ R^{M×N} is the data matrix, W ∈ R^{M×F} is the basis matrix, H ∈ R^{F×N} is the factor in which the clustering structure lies, M and S are the centroid matrix and the assignment matrix, respectively, and E and V denote the latent modeling errors and the measurement noise, respectively. We define the signal-to-noise ratio (SNR) in the data and latent space, respectively, as

SNR_data = 10 log10( ||W H||_F^2 / ||V||_F^2 )   and   SNR_latent = 10 log10( ||M S||_F^2 / ||E||_F^2 ).

SNR_data is the 'conventional' SNR that characterizes the data corruption level, and SNR_latent characterizes the latent-domain modeling errors. All the simulation results are obtained by averaging 100 Monte Carlo trials.
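For readers who want to reproduce the flavor of this setup, the following is a simplified generator for (30). It does not replicate the exact identifiability-driven construction described below (unit-vector centroid columns and the clipping-and-rescaling loop); it only produces data with roughly the prescribed data-domain and latent-domain SNRs, and all names are illustrative.

```python
import numpy as np

def synth_data(Mdim, N, F, K, snr_data_db, snr_latent_db, seed=0):
    """Simplified generator for the model in (30): X = W(MS + E) + V."""
    rng = np.random.default_rng(seed)
    W = np.maximum(rng.standard_normal((Mdim, F)), 0)          # nonnegative basis
    Mc = rng.random((F, K))                                     # latent centroids
    S = np.zeros((K, N)); S[rng.integers(K, size=N), np.arange(N)] = 1.0
    E = rng.standard_normal((F, N))
    E *= np.linalg.norm(Mc @ S) / np.linalg.norm(E) * 10 ** (-snr_latent_db / 20)
    H = np.maximum(Mc @ S + E, 0)                               # clipped to stay nonnegative
    V = rng.standard_normal((Mdim, N))
    V *= np.linalg.norm(W @ H) / np.linalg.norm(V) * 10 ** (-snr_data_db / 20)
    return W @ H + V, W, H, S
```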

We employ several clustering and factorization algorithms as baselines, namely, the original K-means (KM), reduced K-means (RKM), factorial K-means (FKM), and a simple two-stage combination of nonnegative matrix factorization and K-means (NMF-KM).

We first test the algorithms under an NMF model. Specifically, W is drawn from an i.i.d. standard Gaussian distribution, with all negative entries set to zero. H is generated with the following steps:

  1. Generate the centroid matrix M by setting its first F columns as unit vectors (to ensure identifiability of the NMF model), i.e., M(:, 1:F) = I, with the entries of the remaining columns randomly generated from an i.i.d. uniform distribution in [0, 1]; the assignment matrix S is drawn uniformly at random;

  2. Draw E from an i.i.d. standard Gaussian distribution and perform the following steps

     H ← M S + E,        (31a)
     H ← [H]_+,        (31b)
     E ← σ (H - M S),        (31c)
     H ← M S + E,        (31d)

     where the scaling constant σ in (31c) is determined by the desired SNR_latent, and [·]_+ takes the nonnegative part of its argument. We may need to repeat the above steps (31a)-(31d) several times, till we get a nonnegative H with the desired SNR_latent (in our experience, usually one repetition suffices).

We then multiply W and H and add V, where V is drawn from an i.i.d. standard Gaussian distribution and scaled for the desired SNR_data. With this process, W and H are sparse and identifiable (in the noiseless case) with very high probability [9, 10]. Finally, we replace 3% of the columns of X with all-ones vectors that act as outliers, which are common in practice.

Table I presents the clustering accuracies of the various algorithms, together with the MSEs of the estimated W for the factorization-based algorithms. The regularization parameters λ, η, and μ of the JNKM method are set to fixed values for all trials. Here, SNR_data is fixed to 15 dB and SNR_latent varies from 3 dB to 18 dB. JNKM is initialized with an NMF solution [44], and JVKM is initialized with the SISAL algorithm [25]. We see that, for all the SNRs under test, the proposed JNKM yields the best clustering accuracies. RKM and FKM give poor clustering results since they cannot resolve the distortion brought by W, as we discussed before. Our proposed method works better than NMF-KM, since JNKM estimates W more accurately (cf. the MSEs of the estimated factor W) – by making use of the cluster structure of H as prior information.

Note that in order to make the data fit the VolMin model, we normalized the columns of X to unit 1-norm, as suggested in [45]. Due to noise, such normalization cannot ensure that the data follow the VolMin model exactly; however, we observe that the proposed JVKM formulation still performs well in this case.

SNR_latent [dB] 3 6 9 12 15 18
AC[%] KM 77.43 81.5 82.9 81.47 82.68 84.5
RKM 77.51 76.62 73.71 72.43 71.35 71.63
FKM 15.12 15.68 16.6 17.14 37.5 59.74
JNKM 88.1 95.12 96.51 96.13 96.43 95.65
JVKM 75.84 83.87 87.87 89.96 90.27 89.36
NMF-KM 84.72 86.62 88.96 90.95 90.87 92.34
MSE[dB] JNKM -28.09 -27.82 -27.54 -26.59 -26.91 -26.26
JVKM -16.41 -16.98 -16.37 -15.61 -15.19 -14.9
NMF-KM -27.09 -26.75 -26.7 -25.58 -26.05 -25.31
TIME[s] KM 0.14 0.05 0.05 0.07 0.06 0.06
RKM 0.18 0.13 0.16 0.17 0.17 0.18
FKM 1.45 0.59 0.64 0.79 0.82 0.56
JNKM 3.37 2.99 3.13 3.22 3.06 3.36
JVKM 5.7 5.06 4.73 4.53 4.42 4.45
NMF-KM 0.76 0.68 0.73 0.83 0.75 0.86
TABLE I: Clustering and factorization accuracy for the identifiable NMF model vs. SNR_latent, with SNR_data = 15 dB.
SNR_data [dB] 5 10 15 20 25 30
AC[%] KM 78.65 77.89 82.89 84.53 88.43 86.97
RKM 79.54 72.84 72.87 71.15 71.37 72.06
FKM 17.91 17.3 16.76 16.68 16.44 16.51
JNKM 93.28 94.69 95.73 96.33 96.43 96.04
JVKM 71.78 82.74 87.43 91.68 92.43 93.17
NMF-KM 84.95 86.74 89.57 90.87 90.66 91.61
MSE[dB] JNKM -18.03 -23.95 -26.71 -27.17 -27.09 -26.19
JVKM -4.66 -12.19 -15.96 -20.21 -25.14 -31.05
NMF-KM -17.63 -23.36 -26.29 -27.48 -27.76 -27.13
TIME[s] KM 0.08 0.08 0.05 0.05 0.04 0.04
RKM 0.18 0.18 0.16 0.14 0.11 0.11
FKM 0.68 0.7 0.66