High-dimensional data are ubiquitous in the learning community, and it has become increasingly challenging to learn from such data. For example, information retrieval, one of the most important tasks in multimedia and data mining, has drawn considerable attention in recent years [47, 18, 46] and often requires handling high-dimensional data. It is often desirable, and demanding, to seek a data representation that reveals the latent structures of high-dimensional data, which is usually helpful for further data processing. It is thus a critical problem to find a suitable representation of the data [4, 20, 22, 37] in many learning tasks, such as single image super-resolution, image reconstruction, image clustering, foreground-background separation in surveillance video, matrix completion, etc. To this end, a number of methods for finding proper representations have been developed, among which matrix factorization has been widely used to handle high-dimensional data. Matrix factorization seeks two or more low-dimensional matrices to approximate the original data such that the high-dimensional data can be represented with reduced dimensions [23, 35].
For some types of data, such as images and documents that are widely used in real-world learning problems, the entries are naturally nonnegative. For such data, nonnegative matrix factorization (NMF) was proposed to seek two nonnegative factor matrices for approximation. In fact, seeking a nonnegative factorization of nonnegative data naturally leads to learning parts-based representations of the data. Parts-based representation is believed to commonly exist in the human brain, with psychological and physiological evidence [33, 39, 25]. It overcomes the drawback of latent semantic indexing (LSI), for which the interpretation of basis vectors is difficult due to mixed signs. When the number of basis vectors is large, NMF has been proven to be NP-hard; moreover, recent work has given conditions under which NMF is solvable. Recent studies have shown a close relationship between NMF and K-means, and further study has shown that both spectral clustering and kernel K-means are particular cases of clustering with NMF under a doubly stochastic constraint. This implies that NMF is especially suitable for clustering such data. In this paper, we develop a novel NMF method that focuses on clustering capability.
Many variants of NMF have been developed over the past decades, which can be mainly categorized into four types: basic NMF, constrained NMF, structured NMF, and generalized NMF. A fairly comprehensive review can be found in the literature. Among these methods, Semi-NMF removes the nonnegative constraint on the data and basis vectors, such that its applications can be expanded to more fields; convex NMF (CNMF) restricts the basis vectors to lie in the feature space of the input data so that they can be represented as convex combinations of data vectors; orthogonal NMF (ONMF) imposes orthogonality constraints on factor matrices, which leads to a clustering interpretation. The classic NMF only considers the linear structures of the data by finding new data points with respect to the new basis and ignores the nonlinear structures of the data, which are usually important for many applications such as clustering. To learn the latent nonlinear structures of the data, graph regularized nonnegative matrix factorization (GNMF) considers the intrinsic geometrical structures of the data on a manifold by incorporating a Laplacian regularization. By modeling the data space as a manifold embedded in an ambient space and performing NMF on this manifold, GNMF considers both linear and nonlinear relationships of the data points in the original instance space, and thus it is also more discriminating than ordinary NMF, which only considers the Euclidean structure of the data. This renders GNMF more suitable for clustering purposes than the original NMF. Based on GNMF, robust manifold nonnegative matrix factorization (RMNMF) constructs a structured sparsity-inducing norm-based robust formulation. With an $\ell_{2,1}$-norm, RMNMF is insensitive to between-sample data outliers and improves the robustness of NMF. Moreover, the relaxed requirement on the signs of the data makes it a nonlinear version of Semi-NMF.
The main contributions of this paper are as follows:
For the first time, in an effective yet simple way, local similarity learning is embedded into matrix factorization, which allows our method to learn both global and local structures of the data. The learned basis and representations well preserve the inherent structures of the data and are more representative;
To the best of our knowledge, we are the first to integrate the orthogonality-constrained coefficient matrix into local similarity adaptation, such that local similarity and clustering mutually enhance each other and are learned simultaneously;
Nonlinear extension is developed from a kernel perspective, which can be further expanded to cope with the multiple-kernel scenario;
Efficient multiplicative update rules are constructed to solve the proposed model, and comprehensive theoretical analysis is provided to guarantee convergence;
Lastly, extensive experimental results have verified the effectiveness of our method.
The rest of this paper is organized as follows: In Section II, we briefly review some methods that are closely related to our research. Then we introduce our method in Section III. Regarding the proposed method, we provide an efficient alternating optimization procedure in Section IV, and then provide detailed theoretical results for the convergence analysis in Section V. Next, we conduct comprehensive experiments and show the results in Section VI. Finally, we conclude the paper in Section VII.
Notation: For a matrix $M$, $m_{ij}$, $m_j$, and $m^i$ denote the $(i,j)$-th element, the $j$-th column, and the $i$-th row of $M$, respectively. $\mathrm{Tr}(\cdot)$ is the trace operator, and $\|\cdot\|_F$ and $\|\cdot\|_2$ are the Frobenius and $\ell_2$ norms, respectively.
$I$ denotes the identity matrix of appropriate size, and $\mathrm{Diag}(\cdot)$ is an operator that returns a diagonal matrix whose diagonal elements are identical to those of the input matrix.
II Related Work
In this section, we briefly review some methods that are closely related to our research.
II-A NMF
Given nonnegative data $X \in \mathbb{R}_{+}^{d \times n}$, with $d$ being the dimension and $n$ the sample size, NMF factors $X$ into $W \in \mathbb{R}_{+}^{d \times k}$ (basis) and $G \in \mathbb{R}_{+}^{n \times k}$ (coefficients) with the following optimization problem:
$$\min_{W \ge 0,\, G \ge 0} \|X - W G^{\top}\|_F^2, \qquad (1)$$
where $k \le \min\{d, n\}$ enforces a low-rank approximation of the original data.
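As a concrete reference point, the objective above can be minimized with the standard Lee–Seung multiplicative updates. The following is a minimal NumPy sketch; the random initialization, iteration count, and smoothing constant are illustrative choices, not part of the paper's algorithm:

```python
import numpy as np

def nmf(X, k, n_iter=200, eps=1e-10, seed=0):
    """Standard Lee-Seung multiplicative updates for
    min ||X - W G^T||_F^2 s.t. W >= 0, G >= 0 (a reference sketch)."""
    rng = np.random.default_rng(seed)
    d, n = X.shape
    W = rng.random((d, k)) + eps   # nonnegative initialization
    G = rng.random((n, k)) + eps
    for _ in range(n_iter):
        W *= (X @ G) / (W @ (G.T @ G) + eps)    # W <- W * (XG) / (W G^T G)
        G *= (X.T @ W) / (G @ (W.T @ W) + eps)  # G <- G * (X^T W) / (G W^T W)
    return W, G

# Toy nonnegative data; the reconstruction error decreases monotonically.
X = np.abs(np.random.default_rng(1).normal(size=(20, 30)))
W, G = nmf(X, k=5)
recon_err = np.linalg.norm(X - W @ G.T)
```

Each update multiplies the current factor by a nonnegative ratio, so nonnegativity is preserved automatically without projection.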
II-B Graph Laplacian
Graph Laplacian is defined as
$$L = D - S, \qquad (2)$$
where $S$ is the weight matrix that measures the pair-wise similarities of the original data points, and $D$ is a diagonal matrix with $d_{ii} = \sum_{j} s_{ij}$. The Laplacian is widely used to incorporate the geometrical structure of the data on a manifold. In particular, the manifold regularization enforces the smoothness of the data in linear and nonlinear spaces by minimizing $\mathrm{Tr}(G^{\top} L G) = \frac{1}{2} \sum_{i,j} s_{ij} \|g^i - g^j\|_2^2$, which leads to the effect that if two data points are close in the intrinsic geometry of the data distribution, then their new representations with respect to the new basis, $g^i$ and $g^j$, are also close. This is closely related to spectral clustering (SC) [36, 27] and its further developments [31, 30].
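The Laplacian and the smoothness identity it induces can be checked numerically. A small sketch, assuming a symmetric nonnegative weight matrix $S$ (randomly generated here purely for illustration):

```python
import numpy as np

def graph_laplacian(S):
    """Graph Laplacian L = D - S, with D diagonal and d_ii = sum_j s_ij."""
    D = np.diag(S.sum(axis=1))
    return D - S

rng = np.random.default_rng(0)
S = rng.random((6, 6)); S = (S + S.T) / 2   # symmetric nonnegative weights
G = rng.random((6, 3))                      # rows = new representations
L = graph_laplacian(S)

# Smoothness identity: Tr(G^T L G) = 1/2 * sum_ij s_ij ||g^i - g^j||^2
lhs = np.trace(G.T @ L @ G)
rhs = 0.5 * sum(S[i, j] * np.sum((G[i] - G[j]) ** 2)
                for i in range(6) for j in range(6))
```

By construction, every row of $L$ sums to zero, which is why the constant vector is always in its null space.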
III Proposed Method
As aforementioned, existing NMF methods do not fully exploit local geometric structures, nor do they exploit close interaction between local similarity and clustering. In this section, we will propose an effective, yet simple, new method to overcome these two drawbacks.
CNMF restricts the basis of NMF to convex combinations of the columns of the data, i.e., $W = XH$ with $H \ge 0$, which gives rise to the following:
$$\min_{H \ge 0,\, G \ge 0} \|X - X H G^{\top}\|_F^2. \qquad (3)$$
By restricting $H \ge 0$, (3) has the advantage that the columns of $XH$ can be interpreted as weighted sums of certain data points, and these columns correspond to centroids. It is natural to see that $g_{ij}$ reveals the importance of basis $Xh_j$ to $x_i$.
It is noted that (3) is closely related to subspace clustering [23, 15]. The observation is that high-dimensional data usually reside in low-dimensional subspaces, and recovering such subspaces usually requires a self-expressiveness assumption, which refers to the property that the data can be approximately self-expressed as $X \approx XZ$ with a representation matrix $Z$. Local structures of the data are shown to be important, and it is necessary to take local similarity into consideration in learning tasks. A natural assumption is that if two data points $x_i$ and $x_j$ are close to each other, then their similarity, $z_{ij}$, should be large; otherwise, small. This assumption leads to the following minimization:
$$\min_{Z} \sum_{i,j} \|x_i - x_j\|_2^2\, z_{ij}, \qquad (4)$$
or in matrix form,
$$\min_{Z} \mathrm{Tr}\big(X \mathrm{Diag}(Z \mathbf{1}) X^{\top}\big) + \mathrm{Tr}\big(X \mathrm{Diag}(Z^{\top} \mathbf{1}) X^{\top}\big) - 2\, \mathrm{Tr}\big(X Z X^{\top}\big),$$
with $\mathbf{1}$ being a length-$n$ vector of 1s. It is noted that the minimization of (4) directly enforces $Z$ to reflect the pair-wise similarity information of the examples. Noticing that $H$ and $G$ are nonnegative and inspired by the self-expressiveness assumption, we take $G G^{\top}$ as the similarity matrix, such that $z_{ij} = g^i (g^j)^{\top}$. Here, $g^i$ is the score vector of example $x_i$ on the basis vectors, and $g^j$ is the coefficient vector of the $j$-th sample with respect to the new basis. If $x_i$ and $x_j$ are close on the data manifold or grouped into the same cluster, then it is natural that $g^i$ and $g^j$ have higher similarity, and vice versa. This close relationship between the geometry of $x_i$ and $x_j$ on the data manifold and the similarity of $g^i$ and $g^j$ suggests that using $G G^{\top}$ as $Z$ in (4) is indeed meaningful. To encourage the interaction between similarity learning and clustering, we incorporate (4) into (3) with $Z = G G^{\top}$, obtaining the Local Similarity NMF (LS-NMF):
$$\min_{H \ge 0,\, G \ge 0} \|X - X H G^{\top}\|_F^2 + \lambda \sum_{i,j} \|x_i - x_j\|_2^2\, (G G^{\top})_{ij}, \qquad (5)$$
where $\lambda \ge 0$ is a balancing parameter. Now, it is seen that the first term in the above model captures the global structure of the data by exploiting a linear representation of each example with respect to the overall data, while the second term exploits the local structure of the data through the connection between local geometric structure and pair-wise similarity.
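With a symmetric similarity matrix $Z = G G^{\top}$, the element-wise local-similarity penalty in (4) admits an equivalent trace form, which can be verified numerically. A small sketch with random data (all shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((5, 8))     # d x n data, columns are samples
G = rng.random((8, 3))     # n x k nonnegative coefficients
Z = G @ G.T                # similarity matrix Z = G G^T (symmetric)

# Element-wise form: sum_ij ||x_i - x_j||^2 z_ij
elem = sum(Z[i, j] * np.sum((X[:, i] - X[:, j]) ** 2)
           for i in range(8) for j in range(8))

# Trace form for symmetric Z:
# sum_ij ||x_i - x_j||^2 z_ij = 2 Tr(X (Diag(Z 1) - Z) X^T)
one = np.ones(8)
mat = 2.0 * np.trace(X @ (np.diag(Z @ one) - Z) @ X.T)
```

The trace form is what makes the penalty convenient to carry through matrix-calculus derivations of the update rules.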
To allow for an immediate interpretation of clustering from the coefficient matrix, we impose an orthogonality constraint on $G$, i.e., $G^{\top} G = I$, leading to
$$\min_{H \ge 0,\, G \ge 0,\, G^{\top} G = I} \|X - X H G^{\top}\|_F^2 + \lambda \sum_{i,j} \|x_i - x_j\|_2^2\, (G G^{\top})_{ij}. \qquad (6)$$
Note that by enforcing $G^{\top} G = I$, the problem of NMF is directly connected with clustering, in that $G$ can be regarded as a relaxed cluster indicator matrix. More importantly, similarity learning and clustering are connected through this matrix and can be mutually promoted through an iterative optimization process. At the end of the iterations, the optimized clustering results are directly given by $G$.
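Reading cluster assignments off a nonnegative coefficient matrix with (near-)orthogonal columns amounts to a row-wise argmax: each row acts as a relaxed indicator, and its largest entry picks the cluster. A minimal illustration (the matrix below is made up):

```python
import numpy as np

# Rows = samples, columns = clusters; each row is a relaxed indicator.
G = np.array([[0.9, 0.1, 0.0],
              [0.8, 0.2, 0.1],
              [0.0, 0.7, 0.2],
              [0.1, 0.1, 0.9]])

labels = G.argmax(axis=1)   # cluster index per sample
```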
Model (6) only learns linear relationships of the data and omits the nonlinear ones, which usually exist and are important. To take nonlinear relationships of the data into consideration, a common approach is to seek data relationships in a kernel space.
We define a kernel mapping $\phi$, which maps the data points from the input space $\mathbb{R}^{d}$ to $\phi(x) \in \mathbb{R}^{D}$ in a reproducing kernel Hilbert space $\mathcal{H}$, where $D$ is an arbitrary positive integer. After kernel mapping, we obtain the mapped data points $\phi(X) = [\phi(x_1), \cdots, \phi(x_n)]$. The similarity between each pair of data points is defined as the inner product of the mapped data in the Hilbert space, i.e., $\langle \phi(x_i), \phi(x_j) \rangle = \kappa(x_i, x_j)$, where $\kappa(\cdot, \cdot)$ is a reproducing kernel function. In the kernel space, (6) is reduced to
$$\min_{H \ge 0,\, G \ge 0,\, G^{\top} G = I} \|\phi(X) - \phi(X) H G^{\top}\|_F^2 + \lambda \sum_{i,j} \|\phi(x_i) - \phi(x_j)\|_2^2\, (G G^{\top})_{ij}, \qquad (7)$$
where the pair-wise distance in (6) is extended from the instance space to the kernel space, defined as
$$\|\phi(x_i) - \phi(x_j)\|_2^2 = \kappa(x_i, x_i) - 2\kappa(x_i, x_j) + \kappa(x_j, x_j). \qquad (8)$$
We expand (7) and replace $\phi(X)^{\top} \phi(X)$ with $K$, the kernel matrix induced by the kernel function $\kappa$ associated with the mapping $\phi$, giving rise to the Kernel LS-NMF (KLS-NMF):
$$\min_{H \ge 0,\, G \ge 0,\, G^{\top} G = I} \mathrm{Tr}\big(K - 2 K H G^{\top} + G H^{\top} K H G^{\top}\big) + \lambda \sum_{i,j} \big(k_{ii} - 2 k_{ij} + k_{jj}\big)\, (G G^{\top})_{ij}. \qquad (9)$$
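Everything in the model depends on the mapped data only through the kernel matrix $K$. A sketch computing one common choice, the RBF kernel; the kernel type and bandwidth here are illustrative assumptions, not prescribed by the method:

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """RBF kernel matrix with K_ij = exp(-gamma * ||x_i - x_j||^2),
    where the columns of X are samples."""
    sq = np.sum(X ** 2, axis=0)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X.T @ X)  # pairwise squared dists
    return np.exp(-gamma * np.maximum(d2, 0.0))       # clip tiny negatives

X = np.random.default_rng(0).random((4, 10))
K = rbf_kernel(X, gamma=0.5)

# Kernel-space squared distance via the inner-product identity:
# ||phi(x_0) - phi(x_1)||^2 = K_00 - 2 K_01 + K_11
dist2_01 = K[0, 0] - 2.0 * K[0, 1] + K[1, 1]
```

A valid kernel matrix is symmetric positive semi-definite, which the test below checks via its eigenvalues.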
In this paper, we aim at providing a new NMF method that takes both local and global nonlinear relationships of the data into consideration. It is also worth mentioning that our method can be extended to the multiple-kernel scenario. Since such an extension is beyond the scope of this paper, we do not further explore it here.
We solve (9) using an iterative update algorithm that updates $H$ and $G$ element-wise as follows:
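While the exact forms of the update rules (10) and (11) follow from the KKT analysis in Section V, the generic principle behind multiplicative updates can be sketched independently: split the gradient into two nonnegative parts and multiply the current iterate by their ratio. Below is an illustration on the plain NMF subproblem, not the paper's exact rules:

```python
import numpy as np

def multiplicative_step(H, grad_pos, grad_neg, eps=1e-10):
    """Generic multiplicative update H <- H * grad_neg / grad_pos, where the
    gradient w.r.t. H is grad_pos - grad_neg with both parts nonnegative.
    Zeros stay zero and nonnegativity is preserved without projection."""
    return H * grad_neg / (grad_pos + eps)

# Plain-NMF illustration: min ||X - W H^T||_F^2 over H >= 0 has gradient
# 2 (H W^T W - X^T W), so grad_pos = H W^T W and grad_neg = X^T W.
rng = np.random.default_rng(0)
X = rng.random((5, 8)); W = rng.random((5, 3)); H = rng.random((8, 3))
H0 = H.copy()
for _ in range(50):
    H = multiplicative_step(H, H @ (W.T @ W), X.T @ W)
```

At a fixed point the ratio equals one element-wise, which is exactly the complementary-slackness condition derived in the next section.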
V Correctness and Convergence
V-A Correctness and Convergence of (10)
We present two results regarding the update rule of (10): 1) when convergent, the limiting solution of (10) satisfies the KKT condition; 2) the iteration of (10) converges. These two results are established in Theorems V.1 and V.2, respectively.
Fixing $G$, the limiting solution of the update rule in (10) satisfies the KKT condition.
Fixing $G$, the subproblem for $H$ is
Imposing the non-negativity constraint $H \ge 0$, we introduce the Lagrange multipliers and the Lagrangian function
The gradient of the Lagrangian with respect to $H$ gives
For ease of notation, we introduce shorthand symbols for the matrix products appearing in the gradient. By the complementary slackness condition, we obtain
Note that (15) provides the fixed-point condition that the limiting solution should satisfy. It is easy to see that the limiting solution of (10) satisfies (15), as described below. At convergence, (10) gives
which reduces to
Next, we prove the convergence of the iterative update as stated in Theorem V.2.
In this proof, we use the auxiliary function approach, with the relevant definition and propositions given below.
Definition V.1. A function $Q(h, h')$ is called an auxiliary function of $F(h)$ if, for any $h$ and $h'$, the following are satisfied:
$$Q(h, h') \ge F(h), \qquad Q(h, h) = F(h).$$
Proposition V.1. Given a function $F(h)$ and its auxiliary function $Q(h, h')$, if we define a variable sequence $\{h^{(t)}\}$ with
$$h^{(t+1)} = \arg\min_{h} Q(h, h^{(t)}), \qquad (19)$$
then the value sequence $\{F(h^{(t)})\}$ is non-increasing, due to the following chain of inequalities:
$$F(h^{(t+1)}) \le Q(h^{(t+1)}, h^{(t)}) \le Q(h^{(t)}, h^{(t)}) = F(h^{(t)}).$$
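The monotone-decrease argument can be made concrete on a toy scalar problem; the objective and the majorizer below are our own illustrative choices, not objects from the paper. We minimize $F(x) = (x-2)^2/2 + |x|$ (minimizer $x^* = 1$) using the majorization $|x| \le x^2/(2|y|) + |y|/2$, which is tight at $x = y$:

```python
# Auxiliary-function (majorize-minimize) toy demo.
# Q(x, y) = (x - 2)^2 / 2 + x^2 / (2|y|) + |y| / 2 majorizes F;
# minimizing Q over x yields the closed-form update x <- 2|y| / (|y| + 1).
def F(x):
    return (x - 2.0) ** 2 / 2.0 + abs(x)

x = 3.0
values = [F(x)]
for _ in range(30):
    x = 2.0 * abs(x) / (abs(x) + 1.0)   # x^{t+1} = argmin_x Q(x, x^t)
    values.append(F(x))
```

Exactly as in the proposition, each step minimizes the auxiliary function at the current point, so the objective values form a non-increasing sequence converging to the minimizer.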
Proposition V.2. For any nonnegative matrices $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{k \times k}$, $S \in \mathbb{R}^{n \times k}$, and $S' \in \mathbb{R}^{n \times k}$, with $A$ and $B$ being symmetric, the following inequality holds:
$$\sum_{i=1}^{n} \sum_{p=1}^{k} \frac{(A S' B)_{ip}\, S_{ip}^{2}}{S'_{ip}} \ge \mathrm{Tr}\big(S^{\top} A S B\big).$$
Proof of Theorem V.2.
For fixed $G$, the objective function in (12) can be written as
First, we show that the function defined in (21) is an auxiliary function of the objective:
To show this, we find upper-bounds and lower-bounds for the positive and negative terms in the objective, respectively. For the positive terms, we use Proposition V.2 and the inequality $a \le \frac{a^2 + b^2}{2b}$ for $a, b > 0$ to get the following upper-bounds:
For the negative term, we use the inequality $z \ge 1 + \log z$ for $z > 0$ to get the following lower-bound:
Combining these bounds, we get the auxiliary function for the objective. Next, we will show that the update of (10) essentially follows (19); then, according to Proposition V.1, we can conclude the proof. To show this, the remaining problem is to find the global minimum of (21). For this, we first prove that (21) is convex.
The first-order derivative of the auxiliary function is
Then the Hessian of the auxiliary function can be obtained element-wise as
where $\delta_{ij}$ is the delta function that returns 1 if $i = j$ and 0 otherwise. It is seen that the Hessian matrix has zero elements off the diagonal and positive elements on the diagonal, and thus it is positive definite. Therefore, the auxiliary function is convex and achieves its global optimum at its first-order optimality condition, i.e., (24) = 0, which gives rise to
(26) can be further reduced to
V-B Correctness and Convergence of (11)
Fixing $H$, we need to solve the following optimization problem for $G$:
where the involved diagonal matrix is nonnegative. We introduce the Lagrange multiplier matrix for the orthogonality constraint, which is symmetric and of size $k \times k$. Then the Lagrangian function to be minimized gives rise to
where, for easier notation, we define shorthand symbols for the involved matrix products, and, for any matrix $A$, we define $A^{+} = \frac{|A| + A}{2}$ and $A^{-} = \frac{|A| - A}{2}$ to be two nonnegative matrices such that $A = A^{+} - A^{-}$. The gradient of the Lagrangian is
Then the KKT complementarity condition gives
which is a fixed-point relation that the local minimum for $G$ must satisfy. Following the previous subsection, and noting that
we give an update as follows:
To show that the update of (32) will converge to a local minimum, we will show two results: the convergence of the update algorithm and the correctness of the converged solution.
From (32), it is easy to show that, at convergence, the solution satisfies the following condition:
which is the fixed point condition in (31). Hence, the correctness of the converged solution can be verified.
The convergence is assured by the following theorem.
For fixed $H$, the Lagrangian function is monotonically decreasing under the update rule in (32).
To prove Theorem V.3, we use the auxiliary function approach. For ease of notation, we denote the Lagrangian with $H$ fixed as a function of $G$.
First, we find upper-bounds for each positive term in the Lagrangian. By the inequality $a \le \frac{a^2 + b^2}{2b}$ for $a, b > 0$, we get
Then, according to Proposition V.2, by setting one of its symmetric matrices to be the identity, we get the following two upper-bounds:
Then, by the inequality $z \ge 1 + \log z$ for $z > 0$, we get the following lower-bounds for the negative terms:
Hence, combining the above bounds, we construct an auxiliary function for the Lagrangian:
Taking the first-order derivative of (37), we get
Further, we can get the Hessian of (37) by taking the second-order derivative:
It is easy to verify that the Hessian matrix has zero elements off the diagonal and nonnegative values on the diagonal. Therefore, (37) is convex in $G$, and its global minimum is obtained by its first-order optimality condition, (38) = 0, which gives rise to
So far, we can conclude that by alternately updating $H$ and $G$, the objective function in (9) decreases and the value sequence converges. Let $Y = (H, G)$, and regard the updates of (10) and (11) as a mapping $Y^{(t+1)} = \mathcal{M}(Y^{(t)})$; then at convergence we have $Y^* = \mathcal{M}(Y^*)$. Following [13, 42], with the non-negativity constraint enforced, we expand $Y^{(t+1)} \approx \mathcal{M}(Y^*) + \nabla \mathcal{M}(Y^*)\,(Y^{(t)} - Y^*)$, which indicates that $\|Y^{(t+1)} - Y^*\| \le \|\nabla \mathcal{M}(Y^*)\| \cdot \|Y^{(t)} - Y^*\|$ under an appropriate matrix norm. In general, $\nabla \mathcal{M}(Y^*) \ne 0$; hence the updates of (10) and (11) roughly have a first-order convergence rate.
In this section, we conduct experiments to verify the effectiveness of the proposed KLS-NMF. We present the evaluation metrics, benchmark datasets, compared algorithms, and experimental results in detail.
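For reference, clustering accuracy (ACC) is a metric commonly reported in NMF clustering experiments: predicted cluster indices are matched one-to-one to ground-truth labels via the Hungarian algorithm before counting agreements. A sketch assuming SciPy is available (whether this exact metric set is used here is not shown in this excerpt):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """ACC: best one-to-one matching between predicted cluster indices and
    ground-truth labels, found with the Hungarian algorithm."""
    y_true = np.asarray(y_true); y_pred = np.asarray(y_pred)
    k = int(max(y_true.max(), y_pred.max())) + 1
    count = np.zeros((k, k), dtype=int)        # count[t, p] = co-occurrences
    for t, p in zip(y_true, y_pred):
        count[t, p] += 1
    row, col = linear_sum_assignment(-count)   # negate to maximize matches
    return count[row, col].sum() / len(y_true)
```

Because cluster indices are arbitrary, a clustering that is a pure relabeling of the ground truth scores a perfect 1.0 under this metric.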