# Learning From Hidden Traits: Joint Factor Analysis and Latent Clustering

Dimensionality reduction techniques play an essential role in data analytics, signal processing and machine learning. Dimensionality reduction is usually performed in a preprocessing stage that is separate from subsequent data analysis, such as clustering or classification. Finding reduced-dimension representations that are well-suited for the intended task is more appealing. This paper proposes a joint factor analysis and latent clustering framework, which aims at learning cluster-aware low-dimensional representations of matrix and tensor data. The proposed approach leverages matrix and tensor factorization models that produce essentially unique latent representations of the data to unravel latent cluster structure -- which is otherwise obscured because of the freedom to apply an oblique transformation in latent space. At the same time, latent cluster structure is used as prior information to enhance the performance of factorization. Specific contributions include several custom-built problem formulations, corresponding algorithms, and discussion of associated convergence properties. Besides extensive simulations, real-world datasets such as Reuters document data and MNIST image data are also employed to showcase the effectiveness of the proposed approaches.

## I Introduction

Many signal processing and machine learning applications nowadays involve high-dimensional raw data that call for appropriate compaction before any further processing. Dimensionality reduction (DR) is often applied before clustering and classification, for example. Matrix and tensor factorization (or factor analysis) plays an important role in DR of matrix and tensor data, respectively. Traditional factorization models, such as the singular value decomposition (SVD) and principal component analysis (PCA), have proven successful in analyzing high-dimensional data -- e.g., PCA has been used for noise suppression, feature extraction, and subspace estimation in numerous applications. In recent years, alternative models such as nonnegative matrix factorization (NMF) [2, 3] have drawn considerable interest (also as DR tools), because they tend to produce unique and interpretable reduced-dimension representations. In parallel, tensor factorization for multi-way data continues to gain popularity in the machine learning community, e.g., for social network mining and latent variable modeling [4, 5, 6, 7, 8].

When performing DR or factor analysis, several critical questions frequently arise. First, what type of factor analysis should be considered for producing useful data representations for further processing, e.g., classification and clustering? Intuitively, if the data indeed coalesce in clusters in some low-dimensional representation, then DR should ideally map the input vectors to this particular representation; identifying the right subspace is not enough, since a linear transformation can distort cluster structure (cf. Fig. 1). Therefore, if the data follow a factor analysis model that is unique (e.g., NMF is unique under certain conditions [9, 10]) and the data form clusters in the latent domain, then fitting that factor analysis model will reveal those clusters.

The second question is: what kind of prior information can help produce better latent representations of the data? Using prior information is instrumental in combating noise and modeling errors in practice, and is thus well-motivated. To this end, various constraints and regularization priors have been considered for matrix and tensor factorization, e.g., sparsity, smoothness, unimodality, total variation, and nonnegativity [11, 12, 13], to name a few.

In this work, we consider using a new type of prior information to assist factor analysis, namely, the latent cluster structure. This is motivated by the fact that dimension-reduced data usually yield better performance in clustering tasks, which suggests that the cluster structure of the data is more pronounced in some latent domain relative to the data domain. Some evidence can be seen in Fig. 2, where we compare the average data-domain and latent-domain cosine distances (the cosine distance between two vectors $\mathbf{x}$ and $\mathbf{y}$ is defined as $1-\mathbf{x}^T\mathbf{y}/(\|\mathbf{x}\|_2\|\mathbf{y}\|_2)$) of data points from two clusters of image data from the Yale B face image database (online available: http://web.mit.edu/emeyers/www/face_databases.html) and the MNIST handwritten digit image database (online available: http://yann.lecun.com/exdb/mnist/), where the latent representations are produced via NMF. We see that the average latent-domain distance between data from two different clusters is significantly larger than the corresponding data-domain distance in both cases. This observation motivates us to make use of this property to enhance the performance of both factor analysis and clustering.

Using clustering to aid NMF-based DR was considered in [14, 15], where a distance graph of the data points was constructed and used as regularization for NMF -- which essentially forces the reduced-dimension representations to be far (resp. close) in the latent domain if the high-dimensional vectors are far (resp. close) in the data domain. However, the data-domain distance and the latent-domain distance are not necessarily proportional to each other, as is evident from Fig. 2.

To see this clearly, consider a matrix factorization model $\mathbf{X}=\mathbf{W}\mathbf{H}$, where each column of $\mathbf{X}$ represents a data point, and the corresponding column of $\mathbf{H}$ is its latent representation. Consider the squared distance between the first two columns of $\mathbf{X}$, i.e., $\|\mathbf{X}(:,1)-\mathbf{X}(:,2)\|_2^2=\|\mathbf{W}(\mathbf{H}(:,1)-\mathbf{H}(:,2))\|_2^2$, where the colon stands for all values of the respective argument. On the other hand, the distance between the latent representations of the first two columns of $\mathbf{X}$ is given by $\|\mathbf{H}(:,1)-\mathbf{H}(:,2)\|_2^2$. Notice how the matrix $\mathbf{W}$ weighs the latent-domain distance to produce the data-domain distance; also see Fig. 1 for an illustration.

An earlier line of work [16, 17] that is not as well-known in the signal processing community considered latent-domain clustering on the factor $\mathbf{H}$, while taking the other factor ($\mathbf{W}$) to be a semi-orthogonal matrix. However, this cannot mitigate the weighting effect brought by $\mathbf{W}$, since an orthogonal projection cannot cancel the distorting effect of the left factor.

Aiming to make full use of the latent cluster structure, we propose a novel joint factor analysis and latent clustering framework in this paper. Our goal is to identify the unique latent representation of the data in some domain that is discriminative for clustering purposes, and at the same time to use the latent cluster structure to produce more accurate factorization results. We propose several pertinent problem formulations to exemplify the generality and flexibility of the proposed framework, and devise corresponding computational algorithms that build upon alternating optimization, i.e., alternating between factorization and clustering until convergence. Identifiability of the latent factors plays an essential role in our approach, as it counteracts the distance distortion effect illustrated in Fig. 1. This is a key difference from relevant prior work such as [16, 17, 18].

We begin with $K$-means latent clustering using several identifiable factorization models of matrix and tensor data, namely, nonnegative matrix factorization, the convex geometry (CG)-based matrix factorization model [19], and low-rank tensor factorization or parallel factor analysis (PARAFAC) [20]. Carefully designed optimization criteria are proposed, detailed algorithms are derived, and convergence properties are discussed. We next consider an extension to joint factorization and subspace clustering, which is motivated by the popularity of subspace clustering [21]. The proposed algorithms are carefully examined in judiciously designed simulations. Real experiments using document, handwritten digit, and three-way E-mail datasets are also used to showcase the effectiveness of the proposed approach.

Notation: Capital letters with an underscore denote tensors, e.g., $\underline{\mathbf{X}}$; bold capital letters denote matrices; $\odot$ denotes the Khatri-Rao (column-wise Kronecker) product; $k_{\mathbf{A}}$ denotes the Kruskal rank of matrix $\mathbf{A}$; $\mathbf{A}^T$ denotes the transpose of $\mathbf{A}$ and $\mathbf{A}^\dagger$ the pseudo-inverse of $\mathbf{A}$; $\mathbf{A}(i,:)$ and $\mathbf{A}(:,j)$ denote the $i$th row and the $j$th column of $\mathbf{A}$, respectively; $\underline{\mathbf{X}}(:,:,k)$ denotes the $k$th matrix slab of the three-way tensor $\underline{\mathbf{X}}$ taken perpendicular to the third mode, and likewise for slabs taken perpendicular to the second and first modes, $\underline{\mathbf{X}}(:,j,:)$ and $\underline{\mathbf{X}}(i,:,:)$, respectively; $\|\mathbf{x}\|_0$ counts the non-zero elements in the vector $\mathbf{x}$; calligraphic letters denote sets, such as $\mathcal{J}$ and $\mathcal{F}$; $\|\cdot\|_F$, $\|\cdot\|_1$, and $\|\cdot\|_2$ denote the Frobenius norm, $\ell_1$-norm, and $\ell_2$-norm, respectively; $\mathcal{R}(\mathbf{A})$ denotes the range space of matrix $\mathbf{A}$.

## II Background

In this section, we briefly review the pertinent prior art in latent clustering and factor analysis.

### II-A Clustering and Latent Clustering

Let us begin with the classical $K$-means problem [22]: Given a data matrix $\mathbf{X}\in\mathbb{R}^{I\times J}$, we wish to group the columns of $\mathbf{X}$ into $K$ clusters; i.e., we wish to assign the column index $j$ to cluster $\mathcal{J}_k$, $k\in\{1,\ldots,K\}$, such that $\mathcal{J}_1\cup\cdots\cup\mathcal{J}_K=\{1,\ldots,J\}$, $\mathcal{J}_k\cap\mathcal{J}_l=\emptyset$ for $k\neq l$, and the sum of intra-cluster dispersions is minimized. The $K$-means problem can be written in optimization form as follows:

$$\begin{aligned} \min_{\mathbf{S}\in\mathbb{Z}^{K\times J},\,\mathbf{M}\in\mathbb{R}^{I\times K}}\ & \|\mathbf{X}-\mathbf{M}\mathbf{S}\|_F^2 \qquad (1)\\ \text{s.t.}\ & \|\mathbf{S}(:,j)\|_0=1,\ \mathbf{S}(k,j)\in\{0,1\}, \end{aligned}$$

where the matrix $\mathbf{S}$ is an assignment matrix, $\mathbf{S}(k,j)=1$ means that $\mathbf{X}(:,j)$ belongs to the $k$th cluster, and $\mathbf{M}(:,k)$ denotes the centroid of the $k$th cluster. When $I$ is very large and/or there are redundant features (e.g., highly correlated rows of $\mathbf{X}$), it makes sense to perform DR either before or together with clustering. Reduced $K$-means (RKM) [16] is a notable joint DR and latent clustering method that is based on the following formulation:

$$\begin{aligned} \min_{\mathbf{S}\in\mathbb{Z}^{K\times J},\,\mathbf{M}\in\mathbb{R}^{F\times K},\,\mathbf{P}\in\mathbb{R}^{I\times F}}\ & \|\mathbf{X}-\mathbf{P}\mathbf{M}\mathbf{S}\|_F^2 \qquad (2)\\ \text{s.t.}\ & \|\mathbf{S}(:,j)\|_0=1,\ \mathbf{S}(k,j)\in\{0,1\},\ \mathbf{P}^T\mathbf{P}=\mathbf{I}, \end{aligned}$$

where $\mathbf{P}$ is a tall semi-orthogonal matrix. Essentially, RKM aims at factoring $\mathbf{X}$ into $\mathbf{P}$ and $\mathbf{M}\mathbf{S}$, while enforcing a cluster structure on the columns of $\mathbf{M}\mathbf{S}$ -- which is conceptually joint factorization and latent clustering. However, such a formulation loses generality, since $K$ (the number of clusters) cannot be smaller than $F$ (the rank of the model); otherwise, the whole problem is ill-posed. Note that in practice the number of clusters and the rank of $\mathbf{X}$ are not necessarily related; forcing a relationship between them (e.g., $K\geq F$) can be problematic from an application modeling perspective. In addition, the cluster structure is imposed as a straitjacket in latent space (no residual deviation $\mathbf{E}$, as in $\mathbf{H}=\mathbf{M}\mathbf{S}+\mathbf{E}$, is allowed in (2)), which is too rigid in some applications.
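Before moving on, the factorization view of $K$-means in (1) can be checked numerically. The sketch below (illustrative only; sizes and data are arbitrary) verifies that the sum of intra-cluster squared distances equals $\|\mathbf{X}-\mathbf{M}\mathbf{S}\|_F^2$ when $\mathbf{S}$ has one-hot columns:

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, K = 5, 12, 3                         # feature dim, number of points, clusters
X = rng.standard_normal((I, J))
M = rng.standard_normal((I, K))            # centroids
labels = rng.integers(0, K, size=J)        # cluster index of each column

# Assignment matrix S: one-hot columns, S(k, j) = 1 iff column j is in cluster k.
S = np.zeros((K, J))
S[labels, np.arange(J)] = 1.0

# K-means cost written two ways: sum of intra-cluster dispersions vs. ||X - M S||_F^2.
cost_direct = sum(np.linalg.norm(X[:, j] - M[:, labels[j]])**2 for j in range(J))
cost_factor = np.linalg.norm(X - M @ S, 'fro')**2
assert np.isclose(cost_direct, cost_factor)
```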

Another notable formulation that is related to, but distinct from, RKM is factorial $K$-means (FKM) [17]:

$$\begin{aligned} \min_{\mathbf{S}\in\mathbb{Z}^{K\times J},\,\mathbf{M}\in\mathbb{R}^{F\times K},\,\mathbf{P}\in\mathbb{R}^{I\times F}}\ & \|\mathbf{P}^T\mathbf{X}-\mathbf{M}\mathbf{S}\|_F^2 \qquad (3)\\ \text{s.t.}\ & \|\mathbf{S}(:,j)\|_0=1,\ \mathbf{S}(k,j)\in\{0,1\},\ \mathbf{P}^T\mathbf{P}=\mathbf{I}. \end{aligned}$$

FKM seeks a 'good projection' of the data such that the projected data can be better clustered, and essentially performs clustering on $\mathbf{P}^T\mathbf{X}$ if $\mathbf{X}$ admits a low-dimensional factorization. FKM does not force a coupling between $F$ and $K$, and takes the latent modeling error into consideration. On the other hand, FKM ignores the part of $\mathbf{X}$ that is outside the chosen subspace: it seeks some discriminative subspace where the projections cluster well, but ignores variation in the orthogonal complement of $\mathbf{P}$, since

$$\begin{aligned} \|\mathbf{X}-\mathbf{P}\mathbf{M}\mathbf{S}\|_F^2 &= \|[\mathbf{P}\ \mathbf{P}_\perp]^T(\mathbf{X}-\mathbf{P}\mathbf{M}\mathbf{S})\|_F^2\\ &= \|\mathbf{P}^T\mathbf{X}-\mathbf{M}\mathbf{S}\|_F^2 + \|\mathbf{P}_\perp^T\mathbf{X}\|_F^2, \end{aligned}$$

where $\mathbf{P}_\perp$ is a basis for the orthogonal complement of $\mathcal{R}(\mathbf{P})$. So the cost of RKM equals the cost of FKM plus a penalty on the part of $\mathbf{X}$ in the orthogonal complement space. FKM was later adapted in [18], where a similar formulation was proposed to combine orthogonal factorization and sparse subspace clustering.

Seeking an orthogonal projection may not be helpful in terms of revealing the cluster structure of $\mathbf{H}$, since $\mathbf{P}^T\mathbf{X}=(\mathbf{P}^T\mathbf{W})\mathbf{H}$ is still a general (oblique) linear transformation acting on the columns of $\mathbf{H}$, potentially distorting cluster structure, even if $\mathbf{P}$ is a basis for $\mathcal{R}(\mathbf{W})$.
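The RKM/FKM cost identity above is easy to verify numerically; in the sketch below (arbitrary dimensions), a semi-orthogonal $\mathbf{P}$ and its orthogonal complement are obtained from one QR decomposition:

```python
import numpy as np

rng = np.random.default_rng(1)
I, F, K, J = 8, 3, 4, 20
# Semi-orthogonal P via QR, plus a basis P_perp for its orthogonal complement.
Q, _ = np.linalg.qr(rng.standard_normal((I, I)))
P, P_perp = Q[:, :F], Q[:, F:]

X = rng.standard_normal((I, J))
M = rng.standard_normal((F, K))
S = np.zeros((K, J))
S[rng.integers(0, K, J), np.arange(J)] = 1.0

rkm_cost = np.linalg.norm(X - P @ M @ S, 'fro')**2      # cost of RKM, eq. (2)
fkm_cost = np.linalg.norm(P.T @ X - M @ S, 'fro')**2    # cost of FKM, eq. (3)
residual = np.linalg.norm(P_perp.T @ X, 'fro')**2       # energy outside R(P)
assert np.isclose(rkm_cost, fkm_cost + residual)
```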

### Ii-B Identifiable Factorization Models

Unlike RKM and FKM, which seek an orthogonal factor or projection matrix $\mathbf{P}$, in this work we propose to perform latent clustering using identifiable low-rank factorization models for matrices and tensors. The main difference in our approach is that we exploit identifiability of the latent factors to help unravel the hidden cluster structure and, in return, improve factorization accuracy at the same time. This is sharply different from orthogonal projection models such as RKM and FKM. Some important factorization models are succinctly reviewed next.

#### II-B1 Nonnegative Matrix Factorization (NMF)

Low-rank matrix factorization models are in general non-unique, but nonnegativity can help in this regard [9], [10]. Loosely speaking, if $\mathbf{X}=\mathbf{W}\mathbf{H}$, where $\mathbf{W}$ and $\mathbf{H}$ are (element-wise) nonnegative and have sufficiently sparse columns and sufficiently spread rows (over the nonnegative orthant), then any nonnegative $(\tilde{\mathbf{W}},\tilde{\mathbf{H}})$ satisfying $\mathbf{X}=\tilde{\mathbf{W}}\tilde{\mathbf{H}}$ can be expressed as $\tilde{\mathbf{W}}=\mathbf{W}\boldsymbol{\Pi}\boldsymbol{\Lambda}$ and $\tilde{\mathbf{H}}=\boldsymbol{\Lambda}^{-1}\boldsymbol{\Pi}^T\mathbf{H}$, where $\boldsymbol{\Lambda}$ is a full-rank diagonal nonnegative matrix and $\boldsymbol{\Pi}$ is a permutation matrix -- i.e., $\mathbf{W}$ and $\mathbf{H}$ are essentially unique, or identifiable up to a common column permutation and scaling/counter-scaling. In practice, NMF is posed as a bilinear optimization problem,

$$\begin{aligned} \min_{\mathbf{W},\mathbf{H}}\ & \|\mathbf{X}-\mathbf{W}\mathbf{H}\|_F^2 \qquad (4)\\ \text{s.t.}\ & \mathbf{W}\geq\mathbf{0},\ \mathbf{H}\geq\mathbf{0}. \end{aligned}$$

NMF is an NP-hard optimization problem. Nevertheless, many algorithms give satisfactory results in practice; see [23]. Notice that the plain NMF formulation (4) may yield arbitrary nonnegative scaling of the columns of $\mathbf{W}$ and the rows of $\mathbf{H}$ due to the inherent scaling/counter-scaling ambiguity, which can distort latent distances. This can be avoided by adding a norm-balancing penalty:

$$\begin{aligned} \min_{\mathbf{W},\mathbf{H}}\ & \|\mathbf{X}-\mathbf{W}\mathbf{H}\|_F^2 + \eta\left(\|\mathbf{W}\|_F^2+\|\mathbf{H}\|_F^2\right) \qquad (5)\\ \text{s.t.}\ & \mathbf{W}\geq\mathbf{0},\ \mathbf{H}\geq\mathbf{0}. \end{aligned}$$

It can be easily shown [11] that an optimal solution of (5) must satisfy $\|\mathbf{W}\|_F=\|\mathbf{H}\|_F$.
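The scaling/counter-scaling ambiguity and the balancing effect can be illustrated with a few lines of numpy (a minimal illustration, not the NMF algorithm itself): diagonal rescaling leaves $\mathbf{W}\mathbf{H}$ unchanged, and the per-component optimal rescaling equalizes $\|\mathbf{W}\|_F$ and $\|\mathbf{H}\|_F$.

```python
import numpy as np

rng = np.random.default_rng(2)
I, F, J = 6, 3, 10
W = rng.random((I, F))
H = rng.random((F, J))

# Scaling/counter-scaling: (W D, D^{-1} H) leaves the product (and the fit) unchanged.
d = rng.random(F) + 0.5
W2, H2 = W * d, H / d[:, None]
assert np.allclose(W @ H, W2 @ H2)

# Per component f, d_f^2 ||w_f||^2 + ||h_f||^2 / d_f^2 is minimized when the two
# norms are equal, i.e. d_f = sqrt(||h_f|| / ||w_f||); then ||W D||_F = ||D^{-1} H||_F.
d_opt = np.sqrt(np.linalg.norm(H, axis=1) / np.linalg.norm(W, axis=0))
W3, H3 = W * d_opt, H / d_opt[:, None]
assert np.isclose(np.linalg.norm(W3, 'fro'), np.linalg.norm(H3, 'fro'))
```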

#### II-B2 Volume Minimization (VolMin)-Based Factorization

NMF has been widely used because it works well in many applications. If $\mathbf{W}$ or $\mathbf{H}$ is dense, however, then the NMF criterion in (4) cannot identify the true factors, which is a serious drawback for several applications. Recent research has shown that this challenge can be overcome using volume minimization (VolMin)-based structured matrix factorization methods [19, 24, 25, 26]. In the VolMin model, the columns of $\mathbf{H}$ are assumed to live in the unit simplex, i.e., $\mathbf{H}(:,j)\geq\mathbf{0}$ and $\mathbf{1}^T\mathbf{H}(:,j)=1$ for all $j$, which is meaningful in applications like document clustering [24], hyperspectral imaging [27], and probability mixture models [28]. Under this structural constraint, $\mathbf{W}$ and $\mathbf{H}$ are sought by finding a minimum-volume simplex that encloses all the data columns $\mathbf{X}(:,j)$. In optimization form, the VolMin problem can be expressed as follows:

$$\begin{aligned} \min_{\mathbf{W}\in\mathbb{R}^{I\times F},\,\mathbf{H}\in\mathbb{R}^{F\times J}}\ & \mathrm{vol}(\mathbf{W}) \qquad (6)\\ \text{s.t.}\ & \mathbf{X}=\mathbf{W}\mathbf{H},\ \mathbf{H}\geq\mathbf{0},\ \mathbf{1}^T\mathbf{H}=\mathbf{1}^T, \end{aligned}$$

where $\mathrm{vol}(\mathbf{W})$ measures the volume of the simplex spanned by the columns of $\mathbf{W}$ and is usually a function associated with the determinant of $\mathbf{W}$ or $\mathbf{W}^T\mathbf{W}$ [19, 24, 25, 26]. Notably, it was proven in [19] that the optimal solution of (6) is $\tilde{\mathbf{W}}=\mathbf{W}\boldsymbol{\Pi}$ and $\tilde{\mathbf{H}}=\boldsymbol{\Pi}^T\mathbf{H}$, where $\boldsymbol{\Pi}$ is again a permutation matrix, if the columns of $\mathbf{H}$ are sufficiently spread in the probability simplex and $\mathbf{W}$ is full column rank. Notice that $\mathbf{W}$ can be dense, and even contain negative or complex elements, and uniqueness can still be guaranteed via VolMin.

#### II-B3 Parallel Factor Analysis (PARAFAC)

For tensor data (i.e., data indexed by more than two indices), low-rank factorization is unique under mild conditions [29, 30]. Low-rank tensor decomposition is known as parallel factor analysis (PARAFAC) or canonical polyadic decomposition (CANDECOMP/CPD). For example, for a three-way tensor $\underline{\mathbf{X}}\in\mathbb{R}^{I\times J\times K}$ with $\underline{\mathbf{X}}(i,j,k)=\sum_{f=1}^{F}\mathbf{A}(i,f)\mathbf{B}(j,f)\mathbf{C}(k,f)$, if

$$k_{\mathbf{A}}+k_{\mathbf{B}}+k_{\mathbf{C}} \geq 2F+2, \qquad (7)$$

where $k_{\mathbf{A}}$ is the Kruskal rank of $\mathbf{A}$, then if $(\tilde{\mathbf{A}},\tilde{\mathbf{B}},\tilde{\mathbf{C}})$ is such that $\underline{\mathbf{X}}(i,j,k)=\sum_{f=1}^{F}\tilde{\mathbf{A}}(i,f)\tilde{\mathbf{B}}(j,f)\tilde{\mathbf{C}}(k,f)$, we have $\tilde{\mathbf{A}}=\mathbf{A}\boldsymbol{\Pi}\boldsymbol{\Lambda}_1$, $\tilde{\mathbf{B}}=\mathbf{B}\boldsymbol{\Pi}\boldsymbol{\Lambda}_2$, $\tilde{\mathbf{C}}=\mathbf{C}\boldsymbol{\Pi}\boldsymbol{\Lambda}_3$, where $\boldsymbol{\Pi}$ is a permutation matrix and $\boldsymbol{\Lambda}_1,\boldsymbol{\Lambda}_2,\boldsymbol{\Lambda}_3$ are diagonal scaling matrices such that $\boldsymbol{\Lambda}_1\boldsymbol{\Lambda}_2\boldsymbol{\Lambda}_3=\mathbf{I}$. Note that $\mathbf{A}$, $\mathbf{B}$, and $\mathbf{C}$ do not need to be full column rank to ensure identifiability.

Making use of the Khatri-Rao product, the tensor factorization problem can be written as

$$\min_{\mathbf{A},\mathbf{B},\mathbf{C}}\ \|\mathbf{X}^{(1)}-(\mathbf{C}\odot\mathbf{B})\mathbf{A}^T\|_F^2, \qquad (8)$$

where $\mathbf{X}^{(1)}$ is a matrix unfolding of the tensor $\underline{\mathbf{X}}$. There are three matrix unfoldings of this three-way tensor that admit similar model expressions (because one can permute the roles of $\mathbf{A}$, $\mathbf{B}$, and $\mathbf{C}$ accordingly):

$$\begin{aligned} \mathbf{X}^{(1)} &= [\mathrm{vec}(\underline{\mathbf{X}}(1,:,:)),\ \cdots,\ \mathrm{vec}(\underline{\mathbf{X}}(I,:,:))]\\ \mathbf{X}^{(2)} &= [\mathrm{vec}(\underline{\mathbf{X}}(:,1,:)),\ \cdots,\ \mathrm{vec}(\underline{\mathbf{X}}(:,J,:))]\\ \mathbf{X}^{(3)} &= [\mathrm{vec}(\underline{\mathbf{X}}(:,:,1)),\ \cdots,\ \mathrm{vec}(\underline{\mathbf{X}}(:,:,K))]. \end{aligned}$$

As in the NMF case, PARAFAC is NP-hard in general, but there exist many algorithms offering good performance and flexibility to incorporate constraints, e.g., [31], [32].
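The unfolding conventions above can be checked with a short numpy sketch; `khatri_rao` below is a small helper implementing the column-wise Kronecker product (the data and sizes are arbitrary, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(3)
I, J, K, F = 4, 5, 6, 3
A = rng.random((I, F))
B = rng.random((J, F))
C = rng.random((K, F))

# Build the rank-F tensor X(i,j,k) = sum_f A(i,f) B(j,f) C(k,f).
X = np.einsum('if,jf,kf->ijk', A, B, C)

def khatri_rao(U, V):
    """Column-wise Kronecker product: column f is U(:,f) kron V(:,f)."""
    return np.einsum('kf,jf->kjf', U, V).reshape(U.shape[0] * V.shape[0], -1)

# Mode-1 unfolding: i-th column is vec(X(i,:,:)) (column-major vectorization).
X1 = np.stack([X[i].flatten(order='F') for i in range(I)], axis=1)
assert np.allclose(X1, khatri_rao(C, B) @ A.T)   # matches eq. (8)'s model
```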

Our work brings these factor analysis models together with a variety of clustering tools, ranging from $K$-means to $K$-subspace [33, 21] clustering, to devise novel joint factorization and latent clustering formulations and companion algorithms that outperform the prior art -- including both two-step and joint approaches such as RKM and FKM.

## III Proposed Approach

### III-A Problem Formulation

Suppose that $\mathbf{X}\approx\mathbf{W}\mathbf{H}$ for some element-wise nonnegative $\mathbf{W}\in\mathbb{R}^{I\times F}$ and $\mathbf{H}\in\mathbb{R}^{F\times J}$, where the columns of $\mathbf{H}$ are clustered around $K$ centroids. A natural problem formulation is then

$$\begin{aligned} \min_{\substack{\mathbf{W}\in\mathbb{R}^{I\times F},\,\mathbf{H}\in\mathbb{R}^{F\times J},\\ \mathbf{S}\in\mathbb{Z}^{K\times J},\,\mathbf{M}\in\mathbb{R}^{F\times K}}}\ & \|\mathbf{X}-\mathbf{W}\mathbf{H}\|_F^2 + \lambda\|\mathbf{H}-\mathbf{M}\mathbf{S}\|_F^2 \qquad (9)\\ \text{s.t.}\ & \mathbf{W}\geq\mathbf{0},\ \mathbf{H}\geq\mathbf{0},\ \|\mathbf{S}(:,j)\|_0=1,\ \mathbf{S}(k,j)\in\{0,1\}, \end{aligned}$$

where the second term is a $K$-means penalty that enforces the clustering prior on the columns of $\mathbf{H}$, and the tuning parameter $\lambda\geq 0$ balances data fidelity and the clustering prior. This formulation admits a maximum a posteriori (MAP) interpretation if $\mathbf{X}=\mathbf{W}(\mathbf{M}\mathbf{S}+\mathbf{E}_2)+\mathbf{E}_1$, where the data-domain noise $\mathbf{E}_1$ and the latent-domain noise $\mathbf{E}_2$ are both drawn from i.i.d. (independent and identically distributed) Gaussian distributions, independent of each other, with variances $\sigma_1^2$ and $\sigma_2^2$, respectively, and $\lambda=\sigma_1^2/\sigma_2^2$.

Assuming that $(\mathbf{W},\mathbf{H})$ satisfy the NMF identifiability conditions and that the noise is negligible (i.e., the NMF is exact), $\mathbf{H}$ will be exactly recovered, and thus clustering will be successful as well. In practice, of course, the factorization model (DR) will be imperfect, and so the clustering penalty will help obtain a more accurate factorization, and in turn better clustering. Note that this approach decouples $F$ and $K$, because it uses a clustering penalty instead of the hard constraint that RKM uses, which results in rank deficiency when $K<F$.

Formulation (9) may seem intuitive and well-motivated from a MAP point of view, but there are some caveats. These are discussed next.

### III-B Design Considerations

The first problem is scaling. In (9), the regularization on $\mathbf{H}$ implicitly favors a small-norm $\mathbf{H}$: if $\mathbf{H}$ is scaled down toward zero, then simply taking $\mathbf{M}=\mathbf{0}$ makes the second term small. On the other hand, the first term is invariant with respect to scaling of $\mathbf{H}$, so long as the scaling is compensated in $\mathbf{W}$. To prevent this, we introduce norm regularization for $\mathbf{W}$, resulting in

$$\begin{aligned} \min_{\mathbf{W},\mathbf{H},\mathbf{S},\mathbf{M}}\ & \|\mathbf{X}-\mathbf{W}\mathbf{H}\|_F^2 + \lambda\|\mathbf{H}-\mathbf{M}\mathbf{S}\|_F^2 + \eta\|\mathbf{W}\|_F^2 \qquad (10)\\ \text{s.t.}\ & \mathbf{W}\geq\mathbf{0},\ \mathbf{H}\geq\mathbf{0},\ \|\mathbf{S}(:,j)\|_0=1,\ \mathbf{S}(k,j)\in\{0,1\}. \end{aligned}$$

Note that $\|\mathbf{W}\|_1$ can be used instead of $\|\mathbf{W}\|_F^2$ to encourage sparsity, if desired.

Another consideration is more subtle. In many applications, such as document clustering, it has been observed [34, 35] that the correlation coefficient or cosine similarity is more appropriate for clustering than Euclidean distance. We have observed that this is also true for latent clustering, which speaks to the need for normalization in the latent domain. Corroborating evidence is provided in Fig. 3, which shows the latent representations of two document clusters from the Reuters-21578 dataset, extracted by plain NMF. In Fig. 3, the latent representations on the left are difficult to cluster, especially those close to the origin; but after projection onto the unit $\ell_2$-norm ball (equivalent to using cosine similarity to cluster the points on the left), the cluster structure becomes much more evident on the right.
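The connection between $\ell_2$ normalization and cosine similarity that underlies this observation is elementary: for unit-norm vectors, squared Euclidean distance is an affine function of the cosine. A minimal check:

```python
import numpy as np

rng = np.random.default_rng(4)
u, v = rng.random(5), rng.random(5)
u_n, v_n = u / np.linalg.norm(u), v / np.linalg.norm(v)

# After l2 normalization, ||u_n - v_n||^2 = 2 - 2 cos(u, v); hence K-means on
# normalized columns effectively uses cosine dissimilarity as its distance.
cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
assert np.isclose(np.linalg.norm(u_n - v_n)**2, 2 - 2 * cos)
```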

If $K$-means is applied in the data domain, the cosine similarity metric can be incorporated easily: by normalizing the data columns by their $\ell_2$-norms in advance, $K$-means effectively uses cosine dissimilarity as its distance measure. In our context, however, naive adoption of cosine similarity for the clustering part complicates things, since $\mathbf{H}$ changes in every iteration. To accommodate this, we reformulate the problem as follows:

$$\begin{aligned} \min_{\mathbf{W},\mathbf{H},\mathbf{S},\mathbf{M},\{d_j\}_{j=1}^J}\ & \|\mathbf{X}-\mathbf{W}\mathbf{H}\mathbf{D}\|_F^2 + \lambda\|\mathbf{H}-\mathbf{M}\mathbf{S}\|_F^2 + \eta\|\mathbf{W}\|_F^2 \qquad (11)\\ \text{s.t.}\ & \mathbf{W}\geq\mathbf{0},\ \mathbf{H}\geq\mathbf{0},\ \|\mathbf{H}(:,j)\|_2=1\ \forall j,\\ & \mathbf{D}=\mathrm{Diag}(d_1,\ldots,d_J),\ \|\mathbf{S}(:,j)\|_0=1,\ \mathbf{S}(k,j)\in\{0,1\}. \end{aligned}$$

Introducing the diagonal matrix $\mathbf{D}$ is crucial: it allows us to fix the columns of $\mathbf{H}$ on the unit $\ell_2$-norm ball without loss of generality of the factorization model.

The formulation in (11) can be generalized to tensor factorization models. Consider a three-way tensor $\underline{\mathbf{X}}$ with loading factors $\mathbf{A}\in\mathbb{R}^{I\times F}$, $\mathbf{B}\in\mathbb{R}^{J\times F}$, and $\mathbf{C}\in\mathbb{R}^{K\times F}$. Assuming that the rows of $\mathbf{A}$ can be clustered into groups, the joint tensor factorization and first-mode latent clustering problem can be formulated as

$$\begin{aligned} \min_{\mathbf{A},\mathbf{B},\mathbf{C},\mathbf{S},\mathbf{M},\{d_i\}_{i=1}^I}\ & \|\mathbf{X}^{(1)}-(\mathbf{C}\odot\mathbf{B})(\mathbf{D}\mathbf{A})^T\|_F^2 + \lambda\|\mathbf{A}-\mathbf{S}\mathbf{M}\|_F^2 \qquad (12)\\ & + \eta\|\mathbf{B}\|_F^2 + \eta\|\mathbf{C}\|_F^2\\ \text{s.t.}\ & \mathbf{A},\mathbf{B},\mathbf{C}\geq\mathbf{0},\ \|\mathbf{A}(i,:)\|_2=1\ \forall i,\\ & \mathbf{D}=\mathrm{Diag}(d_1,\ldots,d_I),\ \|\mathbf{S}(i,:)\|_0=1,\ \mathbf{S}(i,k)\in\{0,1\}\ \forall i,k, \end{aligned}$$

where the regularization terms $\eta\|\mathbf{B}\|_F^2$ and $\eta\|\mathbf{C}\|_F^2$ are there to control scaling. If one wishes to perform latent clustering in more modes, then the norm regularization can be replaced by $K$-means regularization for the $\mathbf{B}$ and/or $\mathbf{C}$ modes as well. It is still important to retain norm regularization for those modes that do not have $K$-means regularization.

An interesting point worth mentioning is that if one adopts VolMin as the factorization criterion, then introducing $\mathbf{D}$ is not necessary, since VolMin already confines the columns of $\mathbf{H}$ to the unit $\ell_1$-norm ball. We also do not need to regularize with the norm of $\mathbf{W}$, since in this case the scaling of $\mathbf{H}$ cannot be arbitrary. This yields

$$\begin{aligned} \min_{\mathbf{W},\mathbf{H},\mathbf{S},\mathbf{M}}\ & \|\mathbf{X}-\mathbf{W}\mathbf{H}\|_F^2 + \beta\cdot\mathrm{vol}(\mathbf{W}) + \lambda\|\mathbf{H}-\mathbf{M}\mathbf{S}\|_F^2 \qquad (13)\\ \text{s.t.}\ & \mathbf{H}\geq\mathbf{0},\ \mathbf{1}^T\mathbf{H}=\mathbf{1}^T,\ \|\mathbf{S}(:,j)\|_0=1,\ \mathbf{S}(k,j)\in\{0,1\}. \end{aligned}$$

### III-C Extension: Joint Factor Analysis and Subspace Clustering

In addition to Problems (11)-(13), we also consider their subspace clustering counterparts, i.e., with the $K$-means penalties replaced by subspace clustering ones. Subspace clustering deals with data that come from a union of subspaces [36]. Specifically, consider $\mathbf{H}(:,\mathcal{J}_k)$, where $\mathcal{J}_k$ denotes an index set of a subset of $\mathbf{H}$'s columns, and $r_k=\dim(\mathcal{R}(\mathbf{H}(:,\mathcal{J}_k)))$. Also assume that $\dim\big(\sum_{k=1}^K\mathcal{R}(\mathbf{H}(:,\mathcal{J}_k))\big)=\sum_{k=1}^K r_k$, which is the independent subspace model [21]. Then, in an appropriate basis of the latent space, it is evident that

$$\mathbf{H}(f,\mathcal{J}_k)=\mathbf{0},\ \forall f\notin\mathcal{F}_k; \qquad \mathbf{H}(\mathcal{F}_k,j)=\mathbf{0},\ \forall j\notin\mathcal{J}_k,$$

where $\mathcal{J}_k$ denotes the set of indices of the columns of $\mathbf{H}$ in cluster $k$, and $\mathcal{F}_k$ the corresponding set of row (latent dimension) indices.

Under this data structure, consider a simple example: if $K=2$, $\mathcal{J}_1=\{1,\ldots,j_1\}$, $\mathcal{J}_2=\{j_1+1,\ldots,J\}$, $\mathcal{F}_1=\{1,\ldots,f_1\}$, and $\mathcal{F}_2=\{f_1+1,\ldots,F\}$, then $\mathbf{H}$ is a block-diagonal matrix whose nonzero diagonal blocks are $\mathbf{H}(\mathcal{F}_1,\mathcal{J}_1)$ and $\mathbf{H}(\mathcal{F}_2,\mathcal{J}_2)$. From this illustrative example, it is easy to see that the columns of $\mathbf{H}$ also come from different subspaces. The difference is that the subspaces spanned by $\mathbf{H}(:,\mathcal{J}_1)$ and $\mathbf{H}(:,\mathcal{J}_2)$ are not only independent, but also orthogonal to each other, which makes them much easier to distinguish. This suggests that performing subspace clustering on $\mathbf{H}$ is more appealing.

In [18], Patel et al. applied joint dimensionality reduction and subspace clustering on $\mathbf{P}^T\mathbf{X}$ using a semi-orthogonal $\mathbf{P}$, which is basically the same idea as FKM, but with subspace clustering instead of $K$-means. However, as discussed before, when $\mathbf{W}$ and $\mathbf{H}$ are identifiable, taking advantage of the identifiability of the factors can further enhance performance, and should therefore be preferred.

Many subspace clustering formulations, such as those in [37], [38], can be integrated into our framework, but for simplicity we limit ourselves to $K$-subspace clustering, and assume that the number of subspaces and the subspace dimensions are known. Taking joint NMF and $K$-subspace clustering as an example, we can express the problem as

$$\begin{aligned} \min_{\mathbf{W},\mathbf{H},\mathbf{S},\{\boldsymbol{\theta}_j,\boldsymbol{\mu}_k,\mathbf{U}_k\}}\ & \|\mathbf{X}-\mathbf{W}\mathbf{H}\|_F^2 + \eta\|\mathbf{W}\|_F^2 \qquad (14)\\ & + \lambda\sum_{j=1}^J\sum_{k=1}^K \mathbf{S}(k,j)\,\|\mathbf{H}(:,j)-\boldsymbol{\mu}_k-\mathbf{U}_k\boldsymbol{\theta}_j\|_2^2\\ \text{s.t.}\ & \mathbf{W}\geq\mathbf{0},\ \mathbf{H}\geq\mathbf{0},\ \mathbf{U}_k^T\mathbf{U}_k=\mathbf{I}\ \forall k,\\ & \|\mathbf{S}(:,j)\|_0=1,\ \mathbf{S}(k,j)\in\{0,1\}, \end{aligned}$$

where $\mathbf{S}$ is again a cluster assignment variable, $\mathbf{U}_k$ denotes an orthogonal basis of the columns of $\mathbf{H}$ that lie in the $k$th cluster, $\boldsymbol{\mu}_k$ is the mean vector of the $k$th cluster, and $\boldsymbol{\theta}_j$ holds the coordinates of the $j$th vector in its subspace. As in the VolMin case, subspace clustering is also insensitive to the scaling of $\mathbf{H}$, since the metric for measuring distance to a cluster is the distance to a subspace. Therefore, the constraints that were added to normalize $\mathbf{H}$ can be removed.

## IV Optimization Algorithms

In this section, we provide algorithms for the various problems formulated in the previous section. The basic idea is alternating optimization: breaking the variables down into blocks and solving the partial problems one by one. Updating strategies and convergence properties are also discussed.

### IV-A Joint NMF and K-means (JNKM)

We first consider (11) and (12). For ease of exposition, we use (11) as a working example; the generalization to Problem (12) is straightforward. Our basic strategy is to alternate between updating $\mathbf{W}$, $\mathbf{H}$, $\mathbf{D}$, $\mathbf{M}$, and $\mathbf{S}$ one at a time, while fixing the others. For the subproblems w.r.t. $\mathbf{M}$ and $\mathbf{S}$, we use the corresponding (alternating) steps of classical $K$-means [39]. The minimization w.r.t. $\mathbf{H}$ needs more effort, due to the unit-norm and nonnegativity constraints. Here, we propose to employ a variable-splitting strategy. Specifically, we consider the following optimization surrogate:

$$\begin{aligned} \min_{\mathbf{W},\mathbf{H},\mathbf{Z},\mathbf{S},\mathbf{M},\{d_j\}_{j=1}^J}\ & \|\mathbf{X}-\mathbf{W}\mathbf{H}\mathbf{D}\|_F^2 + \lambda\|\mathbf{H}-\mathbf{M}\mathbf{S}\|_F^2 \qquad (15)\\ & + \eta\|\mathbf{W}\|_F^2 + \mu\|\mathbf{H}-\mathbf{Z}\|_F^2\\ \text{s.t.}\ & \mathbf{W}\geq\mathbf{0},\ \mathbf{H}\geq\mathbf{0},\ \|\mathbf{Z}(:,j)\|_2=1\ \forall j,\\ & \mathbf{D}=\mathrm{Diag}(d_1,\ldots,d_J),\ \|\mathbf{S}(:,j)\|_0=1,\ \mathbf{S}(k,j)\in\{0,1\}\ \forall k,j, \end{aligned}$$

where $\mu>0$ and $\mathbf{Z}$ is a slack variable. Note that $\mathbf{Z}$ is introduced to 'split' the effort of dealing with the nonnegativity and unit-norm constraints on $\mathbf{H}$ into two different subproblems. Notice that when $\mu=+\infty$, (15) is equivalent to (11); in practice, a large $\mu$ can be employed to enforce $\mathbf{H}\approx\mathbf{Z}$.

Problem (15) can be handled as follows. First, $\mathbf{H}$ can be updated by solving

$$\mathbf{H}\leftarrow\arg\min_{\mathbf{H}\geq\mathbf{0}}\ \|\mathbf{X}-\mathbf{W}\mathbf{H}\mathbf{D}\|_F^2 + \lambda\|\mathbf{H}-\mathbf{M}\mathbf{S}\|_F^2 + \mu\|\mathbf{H}-\mathbf{Z}\|_F^2, \qquad (16)$$

which can be easily converted to a nonnegative least squares (NLS) problem and efficiently solved to optimality by many existing methods; here, we employ an alternating direction method of multipliers (ADMM)-based [40] algorithm to solve Problem (16). The update of $\mathbf{W}$, i.e.,

$$\mathbf{W}\leftarrow\arg\min_{\mathbf{W}\geq\mathbf{0}}\ \|\mathbf{X}-\mathbf{W}\mathbf{H}\mathbf{D}\|_F^2 + \eta\|\mathbf{W}\|_F^2, \qquad (17)$$

is also an NLS problem. The subproblem w.r.t. $d_j$ admits a closed-form solution,

$$d_j \leftarrow \mathbf{b}_j^T\mathbf{X}(:,j)/(\mathbf{b}_j^T\mathbf{b}_j), \qquad (18)$$

where $\mathbf{b}_j=\mathbf{W}\mathbf{H}(:,j)$. The update of $\mathbf{Z}$ is also closed-form,

$$\mathbf{Z}(:,j)\leftarrow \frac{\mathbf{H}(:,j)}{\|\mathbf{H}(:,j)\|_2}. \qquad (19)$$
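As a sanity check on the closed-form update (18): the scalar it produces is the least-squares optimal scaling of $\mathbf{b}_j$ toward $\mathbf{X}(:,j)$, which a quick grid search confirms (arbitrary random data, for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
I = 20
b = rng.standard_normal(I)           # b_j = W H(:, j), the fitted direction
x = rng.standard_normal(I)           # X(:, j), the data column

d_closed = (b @ x) / (b @ b)         # closed-form update (18)

# Compare against a fine grid search over the scalar least-squares problem
# min_d ||x - d b||_2^2.
grid = np.linspace(d_closed - 1.0, d_closed + 1.0, 2001)
costs = [np.linalg.norm(x - d * b)**2 for d in grid]
assert np.isclose(grid[int(np.argmin(costs))], d_closed, atol=1e-3)
```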

The update of $\mathbf{M}$ comes from the $K$-means algorithm. Let $\mathcal{J}_k=\{j\,|\,\mathbf{S}(k,j)=1\}$. Then

$$\mathbf{M}(:,k)\leftarrow \frac{\sum_{j\in\mathcal{J}_k}\mathbf{H}(:,j)}{|\mathcal{J}_k|}. \qquad (20)$$

The update of $\mathbf{S}$ also comes from the $K$-means algorithm:

$$\mathbf{S}(\ell,j)\leftarrow\begin{cases}1, & \ell=\arg\min_k\ \|\mathbf{H}(:,j)-\mathbf{M}(:,k)\|_2^2\\ 0, & \text{otherwise.}\end{cases} \qquad (21)$$

The overall algorithm alternates between updates (16)-(21).
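For concreteness, the following is a simplified, illustrative sketch of this alternating loop. It is not the paper's implementation: the NLS subproblems are solved with SciPy's `nnls` rather than the ADMM solver described above, empty clusters are handled by re-seeding, and all parameter values are arbitrary.

```python
import numpy as np
from scipy.optimize import nnls

def jnkm(X, F, K, lam=1.0, eta=0.1, mu=10.0, n_iter=30, seed=0):
    """Simplified JNKM sketch: alternate over M, S, H, W, d, Z as in (16)-(21).
    Variable names follow the surrogate formulation (15)."""
    rng = np.random.default_rng(seed)
    I, J = X.shape
    W = rng.random((I, F))
    H = rng.random((F, J))
    d = np.ones(J)
    Z = H / np.linalg.norm(H, axis=0)
    labels = rng.integers(0, K, J)
    for _ in range(n_iter):
        # (20): centroids; re-seed a centroid if its cluster is empty.
        M = np.stack([H[:, labels == k].mean(axis=1) if np.any(labels == k)
                      else H[:, rng.integers(J)] for k in range(K)], axis=1)
        # (21): reassign columns to nearest centroids.
        labels = np.argmin(((H[:, :, None] - M[:, None, :])**2).sum(axis=0), axis=1)
        # (16): stack the three quadratic terms into one NLS per column of H.
        for j in range(J):
            Aj = np.vstack([d[j] * W, np.sqrt(lam) * np.eye(F), np.sqrt(mu) * np.eye(F)])
            bj = np.concatenate([X[:, j], np.sqrt(lam) * M[:, labels[j]],
                                 np.sqrt(mu) * Z[:, j]])
            H[:, j], _ = nnls(Aj, bj)
        # (17): ridge-regularized NLS, one row of W at a time.
        HD = H * d
        Aw = np.vstack([HD.T, np.sqrt(eta) * np.eye(F)])
        for i in range(I):
            W[i], _ = nnls(Aw, np.concatenate([X[i], np.zeros(F)]))
        # (18): closed-form scalar updates with b_j = W H(:, j).
        B = W @ H
        d = np.einsum('ij,ij->j', B, X) / np.maximum(np.einsum('ij,ij->j', B, B), 1e-12)
        # (19): project H's columns onto the unit l2-norm ball.
        Z = H / np.maximum(np.linalg.norm(H, axis=0), 1e-12)
    return W, H, d, labels
```

The loop order and the empty-cluster re-seeding are pragmatic choices for this sketch; the monotonicity argument in Section IV-E applies to the exact block updates, not to this simplified variant.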

### IV-B Joint NTF and K-means (JTKM)

As in the JNKM case, we employ variable splitting and introduce a slack variable $\mathbf{Z}$ to (12):

$$\begin{aligned} \min_{\mathbf{A},\mathbf{B},\mathbf{C},\mathbf{Z},\mathbf{S},\mathbf{M},\{d_i\}_{i=1}^I}\ & \|\mathbf{X}^{(1)}-(\mathbf{C}\odot\mathbf{B})(\mathbf{D}\mathbf{A})^T\|_F^2 + \lambda\|\mathbf{A}-\mathbf{S}\mathbf{M}\|_F^2 \qquad (22)\\ & + \eta\left(\|\mathbf{B}\|_F^2+\|\mathbf{C}\|_F^2\right) + \mu\|\mathbf{A}-\mathbf{Z}\|_F^2\\ \text{s.t.}\ & \mathbf{A},\mathbf{B},\mathbf{C}\geq\mathbf{0},\ \|\mathbf{Z}(i,:)\|_2=1\ \forall i,\\ & \mathbf{D}=\mathrm{Diag}(d_1,\ldots,d_I),\ \|\mathbf{S}(i,:)\|_0=1,\ \mathbf{S}(i,k)\in\{0,1\}\ \forall i,k. \end{aligned}$$

The algorithm for Problem (22) is similar to that of the NMF case. By treating $\mathbf{X}^{(1)}$ as $\mathbf{X}$, $(\mathbf{C}\odot\mathbf{B})$ as $\mathbf{W}$, and $\mathbf{A}^T$ as $\mathbf{H}$, the updates of $\mathbf{A}$, $\mathbf{D}$, $\mathbf{Z}$, $\mathbf{S}$, and $\mathbf{M}$ are the same as those in the previous section. To update $\mathbf{B}$ and $\mathbf{C}$, we make use of the other two matrix unfoldings:

$$\begin{aligned} \mathbf{B} &\leftarrow \arg\min_{\mathbf{B}\geq\mathbf{0}}\ \|\mathbf{X}^{(2)}-(\mathbf{C}\odot\mathbf{D}\mathbf{A})\mathbf{B}^T\|_F^2 + \eta\|\mathbf{B}\|_F^2, \qquad (23a)\\ \mathbf{C} &\leftarrow \arg\min_{\mathbf{C}\geq\mathbf{0}}\ \|\mathbf{X}^{(3)}-(\mathbf{B}\odot\mathbf{D}\mathbf{A})\mathbf{C}^T\|_F^2 + \eta\|\mathbf{C}\|_F^2. \qquad (23b) \end{aligned}$$

These are nonnegative linear least squares problems, and thus can be efficiently solved.

### IV-C Joint VolMin and K-means (JVKM)

For VolMin-based factorization, one major difficulty is dealing with the volume measure $\mathrm{vol}(\mathbf{W})$, which is usually defined via $\det(\mathbf{W}^T\mathbf{W})$ [25, 19, 24]. If clustering is the ultimate objective, however, we can employ a cruder volume measure for the sake of computational simplicity. With this in mind, we propose to employ the following approximation of the simplex volume [41]: $\mathrm{vol}(\mathbf{W})\approx\sum_{f=1}^{F-1}\sum_{f'=f+1}^{F}\|\mathbf{W}(:,f)-\mathbf{W}(:,f')\|_2^2$. This regularizer measures the volume of the simplex spanned by the columns of $\mathbf{W}$ simply through the distances between its vertices. The approximation is coarse, but reasonable. Hence, Problem (13) can be tackled using a four-block BCD, i.e.,

$$\begin{aligned} \mathbf{W} &\leftarrow \arg\min_{\mathbf{W}}\ \|\mathbf{X}-\mathbf{W}\mathbf{H}\|_F^2 + \beta\cdot\mathrm{vol}(\mathbf{W}), \qquad (24a)\\ \mathbf{H} &\leftarrow \arg\min_{\mathbf{1}^T\mathbf{H}=\mathbf{1}^T,\,\mathbf{H}\geq\mathbf{0}}\ \|\mathbf{X}-\mathbf{W}\mathbf{H}\|_F^2 + \lambda\|\mathbf{H}-\mathbf{M}\mathbf{S}\|_F^2. \qquad (24b) \end{aligned}$$

Note that Problem (24a) is a convex quadratic problem with a closed-form solution, and Problem (24b) is a simplex-constrained least squares problem that can be solved efficiently by many solvers. The updates for $\mathbf{M}$ and $\mathbf{S}$ are still given by (20) and (21).
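The pairwise-distance surrogate behaves like a volume measure should: shrinking the simplex toward its centroid shrinks the surrogate quadratically. A minimal sketch (`vol_surrogate` is a helper name introduced here, not from the paper):

```python
import numpy as np

def vol_surrogate(W):
    """Coarse simplex-volume surrogate: sum of squared pairwise distances
    between the columns (vertices) of W."""
    F = W.shape[1]
    return sum(np.linalg.norm(W[:, f] - W[:, g])**2
               for f in range(F) for g in range(f + 1, F))

rng = np.random.default_rng(6)
W = rng.random((4, 3))
Wbar = W.mean(axis=1, keepdims=True)
# Scaling all vertex offsets by 0.5 scales every squared distance by 0.25.
W_shrunk = Wbar + 0.5 * (W - Wbar)
assert np.isclose(vol_surrogate(W_shrunk), 0.25 * vol_surrogate(W))
```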

### IV-D Joint NMF and K-subspace (JNKS)

Similar to JNKM, the updates of $\mathbf{W}$ and $\mathbf{H}$ in (14) are both NLS problems, and can be easily handled. To update the subspaces and the coefficients, we need to solve a $K$-subspace clustering problem [21]:

$$\begin{aligned} \min_{\{\mathbf{U}_k\},\{\boldsymbol{\mu}_k\},\{\boldsymbol{\theta}_j\}}\ & \sum_{j=1}^J\sum_{k=1}^K \mathbf{S}(k,j)\,\|\mathbf{H}(:,j)-\boldsymbol{\mu}_k-\mathbf{U}_k\boldsymbol{\theta}_j\|_2^2 \qquad (25)\\ \text{s.t.}\ & \mathbf{U}_k^T\mathbf{U}_k=\mathbf{I}\ \forall k. \end{aligned}$$

Let $\mathbf{H}(:,\mathcal{J}_k)$ denote the data points in cluster $k$, and let $\boldsymbol{\Theta}_k$ collect the corresponding coefficient vectors $\boldsymbol{\theta}_j$, $j\in\mathcal{J}_k$. We can equivalently write (25) as

$$\begin{aligned} \min_{\{\mathbf{U}_k\},\{\boldsymbol{\mu}_k\},\{\boldsymbol{\Theta}_k\}}\ & \sum_{k=1}^K \|\mathbf{H}(:,\mathcal{J}_k)-\boldsymbol{\mu}_k\mathbf{1}^T-\mathbf{U}_k\boldsymbol{\Theta}_k\|_F^2 \qquad (26)\\ \text{s.t.}\ & \mathbf{U}_k^T\mathbf{U}_k=\mathbf{I}\ \forall k. \end{aligned}$$

It can be readily seen that the update of each subspace is independent of the others. For cluster $k$, we first remove its center and then take an SVD:

$$\begin{aligned} \boldsymbol{\mu}_k &\leftarrow \frac{1}{|\mathcal{J}_k|}\sum_{j\in\mathcal{J}_k}\mathbf{H}(:,j), \qquad (27a)\\ (\hat{\mathbf{U}},\hat{\boldsymbol{\Sigma}},\hat{\mathbf{V}}) &\leftarrow \mathrm{svd}(\mathbf{H}(:,\mathcal{J}_k)-\boldsymbol{\mu}_k\mathbf{1}^T), \qquad (27b)\\ \mathbf{U}_k &\leftarrow \hat{\mathbf{U}}(:,1{:}r_k), \qquad (27c)\\ \boldsymbol{\Theta}_k &\leftarrow \hat{\boldsymbol{\Sigma}}(1{:}r_k,1{:}r_k)\,\hat{\mathbf{V}}(:,1{:}r_k)^T. \qquad (27d) \end{aligned}$$

To update the subspace assignment $\mathbf{S}$, we solve

$$\begin{aligned} \min_{\mathbf{S}}\ & \sum_{j=1}^J\sum_{k=1}^K \mathbf{S}(k,j)\,\|\mathbf{H}(:,j)-\boldsymbol{\mu}_k-\mathbf{U}_k\boldsymbol{\theta}_j\|_2^2 \qquad (28)\\ \text{s.t.}\ & \|\mathbf{S}(:,j)\|_0=1,\ \mathbf{S}(k,j)\in\{0,1\}. \end{aligned}$$

With $\mathrm{dist}(j,k)=\|(\mathbf{I}-\mathbf{U}_k\mathbf{U}_k^T)(\mathbf{H}(:,j)-\boldsymbol{\mu}_k)\|_2^2$, the update of $\mathbf{S}$ is given by [21]

$$\mathbf{S}(\ell,j)\leftarrow\begin{cases}1, & \ell=\arg\min_k\ \mathrm{dist}(j,k)\\ 0, & \text{otherwise.}\end{cases} \qquad (29)$$
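One $K$-subspace iteration, i.e., the fit (27) followed by the reassignment (28)-(29), can be sketched in a few lines of numpy (illustrative only; the number of subspaces and their dimensions are assumed known, and the demo data are synthetic):

```python
import numpy as np

def k_subspace_step(H, labels, K, r):
    """One K-subspace iteration: fit (mean, orthonormal basis) per cluster
    via SVD as in (27), then reassign each column to its nearest affine
    subspace as in (28)-(29)."""
    mus, Us = [], []
    for k in range(K):
        Hk = H[:, labels == k]
        mu = Hk.mean(axis=1, keepdims=True)
        U, _, _ = np.linalg.svd(Hk - mu, full_matrices=False)
        mus.append(mu)
        Us.append(U[:, :r])
    J = H.shape[1]
    dist = np.empty((K, J))
    for k in range(K):
        R = H - mus[k]                                    # remove cluster mean
        dist[k] = np.linalg.norm(R - Us[k] @ (Us[k].T @ R), axis=0) ** 2
    return np.argmin(dist, axis=0), mus, Us

# Demo: two 1-D affine subspaces in R^3; starting from the true labels,
# one step is a fixed point and own-cluster residuals vanish.
rng = np.random.default_rng(7)
t = rng.standard_normal(30)
H = np.hstack([np.outer([1.0, 0, 0], t[:15]),
               np.outer([0, 1.0, 0], t[15:]) + np.array([[5.0], [0.0], [0.0]])])
labels0 = np.repeat([0, 1], 15)
labels1, mus, Us = k_subspace_step(H, labels0, K=2, r=1)
```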

### IV-E Convergence Properties

The proposed algorithms, i.e., JNKM, JVKM, JTKM, and their subspace clustering counterparts, share some similar traits. Specifically, all of them solve each conditional update (block minimization problem) optimally. From this, it follows that:

###### Property 1

JNKM, JTKM, and JVKM yield non-increasing cost sequences in terms of their respective cost functions in (15), (22), and (13). The same property holds for their subspace clustering counterparts.

Monotonicity is important in practice – it ensures that an algorithm makes progress in every iteration towards the corresponding design objective. In addition, it leads to convergence of the cost sequence when the cost function is bounded from below (as in our case), and such convergence can be used for setting up stopping criteria in practice.

In terms of convergence of the iterates (the sequence of 'interim' solutions), when using a cyclical block updating strategy, all algorithms fall under the Gauss-Seidel block coordinate descent (BCD) framework; however, the related convergence results [42] cannot be applied, because some blocks involve nonconvex constraints. If one uses a more expensive (non-cyclical) update schedule, then the following holds.

###### Property 2

If the blocks are updated following the maximum block improvement (MBI) strategy (also known as the Gauss-Southwell rule), then every limit point of the solution sequence produced by JNKM, JTKM, and JVKM is a block-wise minimum of (15), (22), and (13), respectively. A block-wise minimum is a point where it is not possible to reduce the cost by changing any single block while keeping the others fixed.

The MBI update rule [43] is similar to BCD, but it does not update the blocks cyclically. Instead, it tries updating each block in each iteration, but actually updates only the block that brings the biggest cost reduction. The MBI result can be applied here since we solve each block subproblem to optimality. The drawback is that MBI is a lot more expensive per iteration. If one is interested in obtaining a converging subsequence, then a practical approach is to start with cyclical updates, and then only use MBI towards the end for ‘last mile’ refinement. We use Gauss-Seidel in our simulations, because it is much faster than MBI.

## V Synthetic Data Study

In this section, we use synthetic data to explore scenarios where the proposed algorithms show promising performance relative to the prior art. We will consider real datasets that are commonly used as benchmarks in the next section.

Our algorithms estimate the latent factors and cluster the data simultaneously, so we need one metric to assess estimation accuracy and another for clustering accuracy. For the latter, we use the ratio of correctly clustered points to the total number of points (a value in [0, 1]; the higher the better) [15, 14]. Taking into account the inherent column permutation and scaling indeterminacy in estimating the latent factor matrices, estimation accuracy is measured via the following matched mean-square-error measure (MSE, for short)

$$\mathrm{MSE}=\min_{\pi\in\Pi,\;c_1,\ldots,c_F\in\{\pm 1\}}\frac{1}{F}\sum_{f=1}^{F}\left\|\frac{\mathbf{W}(:,f)}{\|\mathbf{W}(:,f)\|_2}-c_f\frac{\widehat{\mathbf{W}}(:,\pi_f)}{\|\widehat{\mathbf{W}}(:,\pi_f)\|_2}\right\|_2^2,$$

where $\mathbf{W}$ and $\widehat{\mathbf{W}}$ represent the ground-truth factor and the estimated factor, respectively, $\Pi$ is the set of all permutations of the set $\{1,\ldots,F\}$, the permutation $\pi$ resolves the permutation ambiguity, and $c_f$ resolves the sign ambiguity when there is no nonnegativity constraint.
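The matched MSE above can be computed by enumerating column permutations, which is affordable for small F (a numpy sketch; the function name `matched_mse` is ours):

```python
import itertools
import numpy as np

def matched_mse(W, What):
    """Permutation- and sign-matched MSE between two factor matrices."""
    F = W.shape[1]
    # normalize columns to remove the scaling ambiguity
    Wn = W / np.linalg.norm(W, axis=0, keepdims=True)
    Whn = What / np.linalg.norm(What, axis=0, keepdims=True)
    best = np.inf
    for perm in itertools.permutations(range(F)):
        err = 0.0
        for f, pf in enumerate(perm):
            # sign c_f in {+1, -1} chosen to minimize each column's error
            e_pos = np.sum((Wn[:, f] - Whn[:, pf]) ** 2)
            e_neg = np.sum((Wn[:, f] + Whn[:, pf]) ** 2)
            err += min(e_pos, e_neg)
        best = min(best, err / F)
    return best
```

By construction, an estimate that equals the ground truth up to column permutation, scaling, and sign yields an MSE of zero.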

### V-a Joint Matrix Factorization and Latent K-means

We generate synthetic data according to

$$\mathbf{X}=\mathbf{W}(\mathbf{M}\mathbf{S}+\mathbf{E}_2)+\mathbf{E}_1=\mathbf{W}\mathbf{H}+\mathbf{E}_1, \tag{30}$$

where $\mathbf{X}$ is the data matrix, $\mathbf{W}$ is the basis matrix, $\mathbf{H}=\mathbf{M}\mathbf{S}+\mathbf{E}_2$ is the factor in which the clustering structure lies, $\mathbf{M}$ and $\mathbf{S}$ are the centroid matrix and the assignment matrix, respectively, and $\mathbf{E}_2$ and $\mathbf{E}_1$ denote the modeling errors and the measurement noise, respectively. We define the signal-to-noise ratio (SNR) in the data and latent domains as $\mathrm{SNR}_1=10\log_{10}\left(\|\mathbf{W}\mathbf{H}\|_F^2/\|\mathbf{E}_1\|_F^2\right)$ and $\mathrm{SNR}_2=10\log_{10}\left(\|\mathbf{M}\mathbf{S}\|_F^2/\|\mathbf{E}_2\|_F^2\right)$, respectively. $\mathrm{SNR}_1$ is the 'conventional' SNR that characterizes the data corruption level, while $\mathrm{SNR}_2$ characterizes the latent-domain modeling errors. All simulation results are obtained by averaging 100 Monte Carlo trials.
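In code, scaling a noise term to meet a target SNR in dB amounts to the following (a sketch assuming Frobenius norms in the SNR definitions; the helper name is ours):

```python
import numpy as np

def scale_noise_to_snr(signal, noise, snr_db):
    """Scale `noise` so that 10*log10(||signal||_F^2 / ||noise||_F^2) = snr_db."""
    sig_pow = np.linalg.norm(signal) ** 2
    noise_pow = np.linalg.norm(noise) ** 2
    target_pow = sig_pow / (10.0 ** (snr_db / 10.0))  # desired noise power
    return noise * np.sqrt(target_pow / noise_pow)
```

The same helper serves both SNR levels: scaling E1 against WH for the data-domain SNR, and E2 against MS for the latent-domain SNR.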

We employ several clustering and factorization algorithms as baselines, namely, the original $K$-means, reduced $K$-means (RKM), factorial $K$-means (FKM), and a simple two-stage combination of nonnegative matrix factorization and $K$-means (NMF-KM).

We first test the algorithms under an NMF model. Specifically, $\mathbf{W}$ is drawn from an i.i.d. standard Gaussian distribution, with all negative entries set to zero. $\mathbf{H}$ is generated with the following steps:

1. Generate $\mathbf{M}$ by setting its first $F$ columns to unit vectors (to ensure identifiability of the NMF model), i.e., $\mathbf{M}(:,1:F)=\mathbf{I}_F$, and drawing the entries of the remaining columns from an i.i.d. uniform distribution in $[0,1]$; set each column of $\mathbf{S}$ to a randomly chosen unit vector, so that it assigns each data point to a cluster;

2. Draw $\mathbf{E}_2$ from an i.i.d. standard Gaussian distribution, set $\mathbf{H}=\mathbf{M}\mathbf{S}+\mathbf{E}_2$, and perform the following steps

$$\begin{aligned}
\mathbf{H} &\leftarrow (\mathbf{H})_+, &\text{(31a)}\\
\mathbf{E}_2 &\leftarrow \mathbf{H}-\mathbf{M}\mathbf{S}, &\text{(31b)}\\
\mathbf{E}_2 &\leftarrow \gamma\mathbf{E}_2, &\text{(31c)}\\
\mathbf{H} &\leftarrow \mathbf{M}\mathbf{S}+\mathbf{E}_2, &\text{(31d)}
\end{aligned}$$

where $\gamma$ in (31c) is a scaling constant determined by the desired $\mathrm{SNR}_2$, and $(\cdot)_+$ takes the nonnegative part of its argument. We may need to repeat steps (31a)–(31d) several times until we obtain a nonnegative $\mathbf{H}$ with the desired $\mathrm{SNR}_2$ (in our experience, one repetition usually suffices).
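Steps (31a)–(31d) can be sketched in numpy as follows (our own helper name; the scaling constant is computed directly from the target latent SNR, and the loop repeats the projection until the result is nonnegative):

```python
import numpy as np

def make_nonneg_factor(M, S, snr2_db, max_rounds=10):
    """Generate a nonnegative H = MS + E2 with E2 scaled to the target latent SNR."""
    rng = np.random.default_rng(0)
    MS = M @ S
    H = MS + rng.standard_normal(MS.shape)
    for _ in range(max_rounds):
        H = np.maximum(H, 0.0)                   # (31a) project onto the nonnegative orthant
        E2 = H - MS                              # (31b) implied modeling error
        gamma = np.sqrt((np.linalg.norm(MS) ** 2 /
                         np.linalg.norm(E2) ** 2) / 10.0 ** (snr2_db / 10.0))
        E2 = gamma * E2                          # (31c) scale to the desired SNR_2
        H = MS + E2                              # (31d) rebuild H
        if np.all(H >= 0.0):
            break
    return H
```

After (31a), the error satisfies E2 >= -MS elementwise, so whenever the scaling constant is at most one the rebuilt H stays nonnegative, which explains why a single repetition usually suffices.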

We then multiply $\mathbf{W}$ and $\mathbf{H}$ and add $\mathbf{E}_1$, where $\mathbf{E}_1$ is drawn from an i.i.d. standard Gaussian distribution and scaled for the desired $\mathrm{SNR}_1$. With this process, $\mathbf{W}$ and $\mathbf{H}$ are sparse, and the model is identifiable with very high probability [9, 10]. Finally, we replace 3% of the columns of $\mathbf{X}$ with all-ones vectors that act as outliers, which are common in practice.

Table I presents the clustering accuracies of the various algorithms, along with the MSEs of the estimated $\mathbf{W}$ for the factorization-based algorithms. Here, $\mathrm{SNR}_1$ is fixed at 15 dB and $\mathrm{SNR}_2$ varies from 3 dB to 18 dB. The JNKM is initialized with an NMF solution [44], and the JVKM is initialized with the SISAL algorithm [25]. We see that, for all SNRs under test, the proposed JNKM yields the best clustering accuracy. RKM and FKM give poor clustering results since they cannot resolve the distortion introduced by $\mathbf{W}$, as discussed before. The proposed method also outperforms NMF-KM, since JNKM estimates $\mathbf{W}$ more accurately (cf. the MSEs of the estimated factors) by exploiting the cluster structure of $\mathbf{H}$ as prior information.

Note that, in order to make the data fit the VolMin model, we normalize the columns of $\mathbf{X}$ to unit norm as suggested in [45]. Due to noise, such normalization cannot ensure that the data follow the VolMin model exactly; nevertheless, we observe that the proposed JVKM formulation still performs well in this case.