1 Introduction
Nonnegative data are pervasive. Consider the following four important applications, each of which gives rise to nonnegative data matrices.

In document collections, documents are stored as vectors. Each element of a document vector is a count (possibly weighted) of the number of times a corresponding term appears in that document. Stacking document vectors one after the other creates a nonnegative term-by-document matrix that represents the entire document collection numerically.

Similarly, in image collections, each image is represented by a vector, and each element of the vector corresponds to a pixel. The intensity and color of the pixel are given by nonnegative numbers, thereby creating a nonnegative pixel-by-image matrix.

For item sets and recommendation systems, information such as customers' purchase histories or ratings on a subset of items is stored in a nonnegative sparse matrix.

In gene expression analysis, gene-by-experiment matrices are formed from observing the gene sequences produced under various experimental conditions.
These are but four of the many interesting applications that create nonnegative data matrices (and tensors) [32]. Three common goals in mining information from such matrices are: (1) to automatically cluster similar items into groups, (2) to retrieve items most similar to a user's query, and (3) to identify interpretable critical dimensions within the collection. For the past decade, a technique called Latent Semantic Indexing (LSI) [4], originally conceived for the information retrieval problem and later extended to more general text mining problems, was a popular means of achieving these goals. LSI uses a well-known factorization of the term-by-document matrix, thereby creating a low rank approximation of the original matrix. This factorization, the singular value decomposition (SVD) [21, 40], is a classic technique in numerical linear algebra. The SVD is easy to compute and works well for goals (1) and (2) above, but not (3). The SVD does not provide users with any interpretation of its mathematical factors or of why it works so well. A common question from users is: do the SVD factors reveal anything about the data collection? Unfortunately, for the SVD, the answer to this question is no, as explained in the next section. However, an alternative and much newer matrix factorization, known as the nonnegative matrix factorization (NMF), allows the question to be answered affirmatively. As a result, the NMF works nearly as well as the SVD on goals (1) and (2) and, further, can also achieve goal (3).
Most examples and applications of the NMF in this paper refer to text mining because this is the area with which we are most familiar. However, the phrase "term-by-document matrix," which we use frequently throughout this paper, can just as easily be replaced with gene-by-observation matrix, purchase-by-user matrix, etc., depending on the application area.
2 Low Rank Approximations
Applications such as text processing, data mining, and image processing store pertinent information in a huge matrix. This matrix A is large, sparse, and often nonnegative. In the last few decades, researchers realized that the data matrix could be replaced with a related matrix of much lower rank. The low rank approximation to the data matrix A brought several advantages. The rank-k approximation, denoted A_k, sometimes required less storage than A. But most importantly, the low rank matrix seemed to give a much cleaner, more efficient representation of the relationships between data elements. The low rank approximation identified the most essential components of the data by ignoring inessential components attributed to noise, pollution, or inconsistencies. Several low rank approximations are available for a given matrix: QR, URV, SVD, SDD, PCA, ICA, NMF, CUR, etc. [29, 40, 55, 17]. In this section, we focus on two such approximations, the SVD and the NMF, that have been applied to data mining problems.
2.1 The Singular Value Decomposition
In 1991, Susan Dumais [19] used the singular value decomposition (SVD) to build a low rank approximation to the term-by-document matrix of information retrieval. In fact, to build a rank-k approximation A_k to the rank-r term-by-document matrix A, simply use the k most significant singular components, where k < r. That is,

A_k = Σ_{i=1}^{k} σ_i u_i v_i^T,

where σ_i is the i-th singular value of A, and u_i and v_i are the corresponding left and right singular vectors [21]. The technique of replacing A with the truncated A_k is called Latent Semantic Indexing (LSI) because the low rank approximation reveals meanings and connections between documents that were hidden, or latent, in the original noisy data matrix A.
Mathematically, the truncated SVD has one particularly appealing property: of all possible rank-k approximations, A_k is the best in the sense that ||A − A_k||_F is as small as possible [4, 5]. Thus, the truncated SVD provides a nice baseline against which all other low rank approximations can be judged for quantitative accuracy. This optimality property is also nice in practice. Algorithms for computing the k most significant singular components are fast, accurate, well-defined, and robust [2, 4, 21]. Two different algorithms will produce the same results up to roundoff error. Such uniqueness and computational robustness are comforting. Another advantage of the truncated SVD concerns building successive low rank approximations. Once A_k has been computed, no further computation is required if, for example, for sensitivity analysis or comparison purposes, other lower rank approximations are needed. That is, once A_k is available, then A_j is available for any j ≤ k, simply by truncating the stored singular components.
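This nesting of approximations is easy to verify numerically. The sketch below (NumPy; the matrix sizes and names are illustrative) computes the truncated SVD once and recovers every lower rank approximation A_j by discarding singular components:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((30, 20))          # stand-in for a term-by-document matrix

# Compute the SVD once and keep the k most significant components.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 10
Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k, :]

def truncate(Uk, sk, Vtk, j):
    """Rank-j approximation A_j obtained by truncating the stored rank-k factors."""
    return Uk[:, :j] * sk[:j] @ Vtk[:j, :]

# A_j built from the stored rank-k factors matches a freshly computed rank-j SVD.
j = 4
A_j_direct = U[:, :j] * s[:j] @ Vt[:j, :]
assert np.allclose(truncate(Uk, sk, Vtk, j), A_j_direct)
```

No re-factorization is ever needed: each A_j for j ≤ k is a free by-product of the single rank-k computation.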
LSI and the truncated SVD dominated text mining research in the 1990s [1, 3, 4, 6, 5, 7, 9, 10, 14, 19, 25, 27, 26, 36, 59, 61, 62]. However, LSI is not perfect. For instance, while it first appeared that the low rank approximation A_k would save storage over the original matrix A, experiments showed that this was not the case. A is generally very sparse for text mining problems because only a small subset of the terms in the collection is used in any particular document. No matter how sparse the original term-by-document matrix is, the truncated SVD produces singular components that are almost always completely dense. In many cases, A_k can require more (sometimes much more) storage than A.
Furthermore, A is always a nonnegative matrix, yet the singular components of A_k are mixed in sign. The SVD's loss of the nonnegative structure of the term-by-document matrix means that the factors of the truncated SVD provide no interpretability. To understand this statement, consider a particular document vector, say, column 1 of A. The truncated SVD represents document 1, a_1, as

a_1 ≈ Σ_{i=1}^{k} σ_i v_i(1) u_i,

which reveals that document 1 is a linear combination of the singular vectors u_i, also called the basis vectors. The scalar weight σ_i v_i(1), where v_i(1) is the first element of v_i, represents the contribution of basis vector u_i to document 1. Unfortunately, the mixed signs in the u_i and v_i preclude interpretation.
Clearly, the interpretability issues with the SVD's basis vectors are caused by the mixed signs in the singular vectors. Thus, researchers proposed an alternative low rank approximation that maintained the nonnegative structure of the original term-by-document matrix. As a result, the nonnegative matrix factorization (NMF) was created [35, 45]. The NMF replaces the role played by the singular value decomposition (SVD). Rather than factoring A as A_k = U_k Σ_k V_k^T, the NMF factors A as A ≈ WH, where W and H are nonnegative.
2.2 The Nonnegative Matrix Factorization
Recently, the nonnegative matrix factorization (NMF) has been used to create a low rank approximation to A that contains nonnegative factors, called W and H. The NMF of a data matrix A is created by solving the following nonlinear optimization problem:

(1)   min_{W,H} ||A − WH||_F²   subject to   W ≥ 0, H ≥ 0.

The Frobenius norm is often used to measure the error between the original matrix A and its low rank approximation WH, but there are other possibilities [16, 35, 43]. The rank of the approximation, k, is a parameter that must be set by the user: W is m × k and H is k × n.
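As a quick sanity check of the formulation in (1), the objective is a one-line computation once candidate factors are in hand. The sketch below (NumPy; sizes and names are illustrative) evaluates it:

```python
import numpy as np

def nmf_objective(A, W, H):
    """The quantity minimized in (1): ||A - WH||_F^2 over nonnegative W, H."""
    assert (W >= 0).all() and (H >= 0).all(), "factors must satisfy the constraints"
    return np.linalg.norm(A - W @ H, "fro") ** 2

rng = np.random.default_rng(0)
A = rng.random((8, 6))          # nonnegative data matrix
k = 3                           # rank chosen by the user
W = rng.random((8, k))          # candidate term-by-topic factor
H = rng.random((k, 6))          # candidate topic-by-document factor
err = nmf_objective(A, W, H)
```

Every NMF algorithm discussed below is, at heart, a scheme for driving this quantity down while keeping W and H nonnegative.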
The NMF is used in place of other low rank factorizations, such as the singular value decomposition (SVD) [40], because of its two primary advantages: storage and interpretability. Due to the nonnegativity constraints, the NMF produces a so-called "additive parts-based" representation [35] of the data. One consequence of this is that the factors W and H are generally naturally sparse, thereby saving a great deal of storage when compared with the SVD's dense factors.
The NMF also has impressive benefits in terms of interpretation of its factors, which is, again, a consequence of the nonnegativity constraints. For example, consider a text processing application that requires the factorization of a term-by-document matrix A. In this case, k can be considered the number of (hidden) topics present in the document collection, and W becomes a term-by-topic matrix whose columns are the NMF basis vectors. The nonzero elements of column 1 of W (denoted w_1), which is sparse and nonnegative, correspond to particular terms. By considering the highest weighted terms in this vector, one can assign a label or topic to basis vector 1. Figure 1 shows four basis vectors for one particular term-by-document matrix, the medlars dataset of medical abstracts, available at http://www.cs.utk.edu/~lsi/.
For those familiar with the domain of this dataset, the NMF allows users to interpret the basis vectors. For instance, a user might attach the label "heart" to basis vector w_1 of Figure 1. A similar interpretation holds for the other factor H, which becomes a topic-by-document matrix with sparse nonnegative columns. Element j of column 1 of H measures the strength to which topic j appears in document 1.
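Operationally, labeling a basis vector amounts to sorting one column of W. A minimal sketch (the vocabulary and factor values below are invented purely for illustration):

```python
import numpy as np

terms = ["heart", "valve", "artery", "rain", "crop", "wheat"]

# Hypothetical term-by-topic factor W (6 terms x 2 topics), sparse and nonnegative.
W = np.array([
    [0.9, 0.0],
    [0.7, 0.0],
    [0.5, 0.1],
    [0.0, 0.8],
    [0.0, 0.6],
    [0.1, 0.9],
])

def top_terms(W, terms, topic, n=3):
    """Return the n highest weighted terms for one basis vector (column of W)."""
    order = np.argsort(W[:, topic])[::-1][:n]
    return [terms[i] for i in order]

print(top_terms(W, terms, 0))   # -> ['heart', 'valve', 'artery']
print(top_terms(W, terms, 1))   # -> ['wheat', 'rain', 'crop']
```

Reading off the top-weighted terms is exactly how one would attach a label such as "heart" to a basis vector.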
Another fascinating application of the NMF is image processing. Figure 2 clearly demonstrates two advantages of the NMF over the SVD. First, notice that the NMF basis vectors, represented as individual blocks in the factor W, are very sparse (i.e., there is much white space). Similarly, the weights, represented as individual blocks in the coefficient vector h, are also sparse. On the other hand, the SVD factors are nearly completely dense. Second, the basis vectors of the NMF, in the factor W, have a nice interpretation as individual components of the structure of the face: ears, noses, mouths, hairlines. The SVD basis vectors do not create an additive parts-based representation. In addition, the gains in storage and interpretability do not come at a loss in performance. The NMF and the SVD perform equally well in reconstructing an approximation to the original image.
Of course, the NMF has its disadvantages too. Other popular factorizations, especially the SVD, have strengths concerning uniqueness and robust computation, yet these become problems for the NMF. There is no unique global minimum for the NMF. The optimization problem of Equation (1) is convex in W alone or in H alone, but not in both W and H, which means that algorithms can guarantee convergence, if at all, only to a local minimum. In practice, NMF users often compare the local minima from several different starting points, using the results of the best local minimum found. However, this is prohibitive on large, realistically sized problems. Not only will different NMF algorithms (and there are many now [32]) produce different NMF factors; the same NMF algorithm, run with slightly different parameters, can produce different NMF factors.
2.3 Summary of SVD vs. NMF
We pause to briefly summarize the advantages of these two competing low rank approximations. The properties and advantages of the SVD include: (1) an optimality property: the truncated SVD produces the best rank-k approximation (in terms of squared distances); (2) speedy and robust computation; (3) unique factorization: initialization does not affect SVD algorithms; and (4) orthogonality: the resulting basis vectors are orthogonal and allow conceptualization of the original data as vectors in space. On the other hand, the advantages of the NMF are: (1) sparsity and nonnegativity: the factorization maintains these properties of the original matrix; (2) reduction in storage: the factors are sparse, which also results in easier application to new data; and (3) interpretability: the basis vectors naturally correspond to conceptual properties of the data.
The strengths of one approximation become the weaknesses of the other. The most severe weakness of the NMF is its convergence issues. Unlike the SVD with its unique factorization, there is no unique NMF factorization. Because different NMF algorithms can converge to different local minima (and even this convergence to local minima is not guaranteed), initialization of the algorithm becomes critical. In practice, knowledge of the application area can help guide initialization choices. We return to such initialization issues in Section 4.
3 ALS Algorithms for the NMF
Several currently popular NMF algorithms [13, 35, 34, 45, 43, 60] do not create sparse factors, which are desired for storage, accuracy, and interpretability reasons. Even with adjustments to create sparse factors, the improved algorithms [23, 34, 46, 53] exhibit an undesirable locking phenomenon, as explained below. Thus, in this section, we propose two new NMF algorithms [30], called ACLS and AHCLS, that produce sparse factors and avoid the so-called locking problem.
Both algorithms are modifications of the simple Alternating Least Squares (ALS) algorithm [45], wherein W is fixed and H is computed using least squares, then H is fixed and W is computed using least squares, and so on, in alternating fashion. The method of alternating variables is a well-known technique in optimization [42]. One problem with the first ALS algorithm applied to the NMF problem (done by Paatero and Tapper in 1994 [45]) was the lack of sparsity restrictions. To address this, the ACLS algorithm adds a reward for sparse factors of the NMF. The user sets the two parameters λ_W and λ_H to positive values. Increasing these values increases the sparsity of the two NMF factors. However, because there are no upper bounds on these parameters, a user must resort to trial and error to find the best values for λ_W and λ_H. The more advanced AHCLS [30], presented in Section 3.2, provides better sparsity parameters with more intuitive bounds.
3.1 The ACLS Algorithm
The ACLS (Alternating Constrained Least Squares) algorithm is implemented differently than the original ALS algorithm [45] because nonnegativity constraints arise at each alternating step, where a constrained least squares problem of the following form

(2)   min_{h_j} ||W h_j − a_j||₂² + λ_H ||h_j||₂²   subject to   h_j ≥ 0

must be solved. The vectors h_j and a_j are columns of H and A, respectively. Notice that the decision variable h_j must be nonnegative. There are algorithms specifically designed for this nonnegative constrained least squares problem. In fact, the NNLS algorithm of Lawson and Hanson [8, 33] is so common that it appears as a built-in function in MATLAB. Unfortunately, the NNLS algorithm is very slow, as it is an "active set" method, meaning it can swap only one variable out of the basis at a time. Even the faster version of the NNLS algorithm by Bro and de Jong [11] is still not fast enough, and the NNLS step remains the computational bottleneck. As a result, in practice, compromises are made. For example, a standard (unconstrained) least squares step is run [8] and all negative elements in the solution vector are set to 0. This ad hoc enforcement of nonnegativity, while not theoretically appealing, works quite well in practice. The practical ACLS algorithm is shown below.
Practical ACLS Algorithm for NMF
input λ_W, λ_H
W = rand(m,k); % initialize W as random dense matrix or use another initialization from Section 4
for i = 1 : maxiter
   (cls) Solve for H in the matrix equation (W^T W + λ_H I) H = W^T A. % for W fixed, find H
   (nonneg) Set all negative elements in H to 0.
   (cls) Solve for W^T in the matrix equation (H H^T + λ_W I) W^T = H A^T. % for H fixed, find W
   (nonneg) Set all negative elements in W to 0.
end
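The pseudocode above maps almost line for line onto NumPy. The sketch below is a minimal rendering of practical ACLS; the regularization values, matrix sizes, and iteration count are illustrative, and negatives are clamped ad hoc exactly as in the listing:

```python
import numpy as np

def acls(A, k, lam_W=0.1, lam_H=0.1, maxiter=50, seed=0):
    """Practical ACLS: alternating ridge-regularized least squares,
    with negative entries clamped to zero after each solve."""
    m, n = A.shape
    rng = np.random.default_rng(seed)
    W = rng.random((m, k))                        # random dense initialization
    I = np.eye(k)
    for _ in range(maxiter):
        # (cls) solve (W^T W + lam_H I) H = W^T A, then clamp negatives
        H = np.linalg.solve(W.T @ W + lam_H * I, W.T @ A)
        H[H < 0] = 0
        # (cls) solve (H H^T + lam_W I) W^T = H A^T, then clamp negatives
        W = np.linalg.solve(H @ H.T + lam_W * I, H @ A.T).T
        W[W < 0] = 0
    return W, H

rng = np.random.default_rng(1)
A = rng.random((20, 15))
W, H = acls(A, k=5)
err = np.linalg.norm(A - W @ H, "fro") / np.linalg.norm(A, "fro")
```

Note that each solve involves only a small k × k coefficient matrix, which is the source of the speed claims made in Section 3.3.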
3.2 The AHCLS Algorithm
ACLS uses a crude measure, ||x||₂², to approximate the sparsity of a vector x. AHCLS replaces this with a more sophisticated measure,

spar(x) = (√n − ||x||₁/||x||₂) / (√n − 1),

where n is the length of x, which was invented by Hoyer [24].
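Hoyer's measure returns 0 for a vector with all entries equal and 1 for a vector with a single nonzero, which is what makes it a natural percentage-style sparsity target. A small sketch:

```python
import numpy as np

def spar(x):
    """Hoyer sparsity: (sqrt(n) - ||x||_1 / ||x||_2) / (sqrt(n) - 1)."""
    n = x.size
    return (np.sqrt(n) - np.abs(x).sum() / np.linalg.norm(x)) / (np.sqrt(n) - 1)

dense = np.ones(16)                    # all entries equal
lone = np.zeros(16)
lone[3] = 5.0                          # a single nonzero entry

print(spar(dense))   # -> 0.0
print(spar(lone))    # -> 1.0
```

Intermediate vectors fall strictly between these extremes, so a user can ask for, say, 80% sparsity directly.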
In AHCLS (Alternating Hoyer-Constrained Least Squares), the user defines two scalars α_W and α_H in addition to λ_W and λ_H of ACLS. For AHCLS, the two additional scalars represent a user's desired sparsity in each column of the factors. These scalars, because they range from 0 to 1, match nicely with a user's notion of sparsity as a percentage. Recall that λ_W and λ_H are positive weights associated with the penalties assigned to the density of W and H. Thus, in AHCLS, they measure how important it is to the user that spar(w_j) = α_W and spar(h_j) = α_H. Our experiments show that AHCLS does a better job of enforcing sparsity than ACLS does, and the four AHCLS parameters are easier to set. As a guideline, we recommend modest positive values for λ_W and λ_H, with, of course, 0 ≤ α_W, α_H ≤ 1. The practical AHCLS algorithm, using matrix systems and ad hoc enforcement of nonnegativity, is below. E is the matrix of all ones.
Practical AHCLS Algorithm for NMF
input λ_W, λ_H, α_W, α_H
β_W = ((1 − α_W)√k + α_W)², β_H = ((1 − α_H)√k + α_H)²
W = rand(m,k); % initialize W as random dense matrix or use another initialization from Section 4
for i = 1 : maxiter
   (hcls) Solve for H in the matrix equation (W^T W + λ_H β_H I − λ_H E) H = W^T A.
   (nonneg) Set all negative elements in H to 0.
   (hcls) Solve for W^T in the matrix equation (H H^T + λ_W β_W I − λ_W E) W^T = H A^T.
   (nonneg) Set all negative elements in W to 0.
end
3.3 Advantages and Disadvantages of ACLS and AHCLS
3.3.1 Speed
These algorithms have many advantages. For instance, rather than computing the columns of H one by one (as is done in [53]), thereby solving n sequential least squares problems of the form of Equation (2), one matrix system solve can be executed. Further, because each CLS step solves a small k × k matrix system, ACLS and AHCLS are the fastest NMF algorithms available (and faster than current truncated SVD algorithms). See Section 3.4 for comparative run times. They converge quickly and give very accurate NMF factors.
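The "one matrix system" trick works because every column of H shares the same small coefficient matrix, so the n right-hand sides can be handled in a single solve. A quick check (NumPy; sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, k = 30, 25, 5
A, W = rng.random((m, n)), rng.random((m, k))
lam = 0.1
G = W.T @ W + lam * np.eye(k)    # shared k x k coefficient matrix

# Column-by-column: n separate solves, as in serial CLS implementations.
H_cols = np.column_stack([np.linalg.solve(G, W.T @ A[:, j]) for j in range(n)])

# One matrix system: a single solve with n right-hand sides.
H_mat = np.linalg.solve(G, W.T @ A)

assert np.allclose(H_cols, H_mat)
```

Both routes give identical answers; the matrix-system form simply factors G once instead of n times.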
3.3.2 Sparsity
Only W must be initialized, and sparsity is incorporated for both NMF factors. We believe that avoidance of the so-called locking phenomenon is one reason why the class of ALS algorithms works well in practice. Nearly all other NMF algorithms, especially those of the multiplicative update class [23, 24, 35, 34, 47, 52, 53], lock elements when they become 0. That is, during the iterative process, once an element in either W or H becomes 0, it must remain 0. For the basis vectors in the text mining problem, which are stored in W, this means that in order to improve the objective function, the algorithm can only remove terms from, not add terms to, topic basis vectors. As a result, once the algorithm starts down a path toward a particular topic vector, it must continue in that direction. On the other hand, ALS algorithms do not lock elements, and thus provide greater flexibility, allowing them to escape from a path heading toward a poor local minimum.
3.3.3 Convergence
It has been proven that ALS algorithms converge to a fixed point, but this fixed point may be a local extremum or a saddle point [20, 22, 38]. The ACLS and AHCLS algorithms with properly enforced nonnegativity, for example, by the NNLS algorithm, are known to converge to a local minimum [16, 38]. However, our ad hoc enforcement of nonnegativity, which drastically speeds up the algorithm (and improves sparsity), means there are no proofs claiming convergence to a local minimum; saddle points are now possible. (Actually, this is not so damning for our two ALS algorithms because most NMF algorithms suffer this same problem. The few NMF algorithms believed to guarantee convergence to a local minimum have been proven otherwise [20, 22, 38].) Our experiments [30, 31] and others [28, 44, 45, 43, 48] have shown that the ALS fixed points can be superior to the results of other NMF algorithms.
3.3.4 Nonnegativity
Clearly, ad hoc enforcement of nonnegativity is theoretically unattractive. There are some alternatives. For instance, one could convert from an alternating least squares approach to an alternating linear programming approach, whereby nonnegativity of variables is enforced in a natural way by the simple constraints of the linear programming formulation. Yet this has the same problem as the NNLS algorithm: lengthy execution time. A second alternative to ad hoc enforcement of nonnegativity is to add negativity penalties in the form of logarithmic functions to the NMF objective function [39]. This is a focus of future work.
3.4 Numerical Experiments
Figure 3 compares our ACLS and AHCLS algorithms with the popular Lee-Seung mean squared error algorithm [35] and the GDCLS algorithm [53]. We use our own implementation of GDCLS, which is much faster than the implementation presented in [53]. The speed improvement results from our use of one matrix system, rather than serial vector systems, to solve the CLS step. This implementation trick was described above for the ACLS and AHCLS algorithms.
To create Figure 3, we used the medlars dataset of medical abstracts and the cisi dataset of library science abstracts. These figures clearly show how the ALS-type algorithms outperform the Lee-Seung multiplicative update algorithms in terms of accuracy and speed. While the ALS-type algorithms provide accuracy similar to the GDCLS algorithm, they are much faster. This speed advantage continues to hold for much larger collections, like the reuters10 collection used in Section 4. (Details on the reuters10 dataset appear in Section 4.3.) On average, the ACLS and AHCLS algorithms require roughly 2/3 the time of the GDCLS algorithm. Figure 3 also reports the error in the optimal rank-10 approximation given by the SVD. Notice how close all NMF algorithms come to the optimal factorization error. Also notice that ACLS and AHCLS require less time than the SVD to produce such good, sparse, nonnegative factorizations.
4 Initializations
All NMF algorithms are iterative, and it is well-known that they are sensitive to the initialization of W and H [56]. Some algorithms require that both W and H be initialized [23, 24, 35, 34, 46], while others require initialization of only W [45, 43, 52, 53]. In all cases, a good initialization can improve the speed and accuracy of the algorithms, as it can produce faster convergence to an improved local minimum [55]. A good initialization can also sidestep some of the convergence problems mentioned above, which is precisely why initializations are so important. In this section, we compare several initialization procedures (two old and four new) by testing them on the ALS algorithms presented in Section 3. We choose the ACLS and AHCLS algorithms because they produce sparse, accurate factors and require about the same time as the SVD. Most other NMF algorithms require much more time than the SVD, often orders of magnitude more.
4.1 Two Existing Initializations
Nearly all NMF algorithms use simple random initialization, i.e., W and H are initialized as dense matrices of random numbers between 0 and 1. It is well-known that random initialization does not generally provide a good first estimate for NMF algorithms [55], especially those of the ALS type [12, 37, 49, 51]. Wild et al. [56, 57, 58] have shown that the centroid initialization, built from the centroid decomposition [15], is a much better alternative to random initialization. Unfortunately, this decomposition is expensive as a preprocessing step for the NMF. Another advantage of ALS algorithms, such as our ACLS and AHCLS, is that they require initialization of only W. In ALS algorithms, once W is known, H is computed quickly by a least squares computation. As a result, we only discuss techniques for computing a good initial W.
4.2 Four New Initializations
Some text mining software produces the SVD factors for other text tasks. Thus, in the event that the SVD factor V_k is available, we propose an SVD-centroid initialization [31], which initializes W with a centroid decomposition of the low dimensional SVD factor V_k [54]. While the centroid decomposition of A can be too time-consuming, the centroid decomposition of V_k is fast because V_k is much smaller than A. When the SVD factors are not available, we propose a very inexpensive procedure called random Acol initialization. Random Acol forms an initialization of each column of the basis matrix W by averaging random columns of A. It makes more sense to build basis vectors from the given data, the sparse document vectors themselves, than to form completely dense random basis vectors, as random initialization does. Random Acol initialization is very inexpensive, and lies between random initialization and centroid initialization in terms of performance [30, 31].
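Random Acol initialization takes only a few lines. The sketch below averages p random columns of A per basis column (p = 20 matches the setting reported in Table 2; the function name and sizes are ours):

```python
import numpy as np

def random_acol(A, k, p=20, seed=0):
    """Random Acol: column j of the initial W is the average of p randomly
    chosen columns of A, so W inherits A's sparsity and nonnegativity."""
    m, n = A.shape
    rng = np.random.default_rng(seed)
    W0 = np.empty((m, k))
    for j in range(k):
        cols = rng.choice(n, size=p, replace=False)
        W0[:, j] = A[:, cols].mean(axis=1)
    return W0

# Toy sparse nonnegative "term-by-document" matrix.
rng = np.random.default_rng(1)
A = rng.random((50, 100)) * (rng.random((50, 100)) < 0.1)
W0 = random_acol(A, k=5)
```

Because each initial basis vector is a mixture of actual document vectors, it starts in the nonnegative cone spanned by the data rather than at an arbitrary dense point.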
We also present two more initialization ideas, one inspired by the C matrix of the CUR decomposition [17], and another by the term co-occurrence matrix [50]. We refer to these last two methods as random C initialization and co-occurrence initialization, respectively. The random C initialization is similar to the random Acol method, except it chooses columns at random from the longest (in the 2-norm) columns of A, which generally means the densest columns, since our text matrices are so sparse. The idea is that these columns might be more likely to be centroid centers. The co-occurrence method first forms a term co-occurrence matrix C. Next, the method for forming the columns of W described as Algorithm 2 of [50] is applied to C. The co-occurrence method is very expensive for two reasons. First, for text mining datasets, such as reuters10, the number of terms m is large, which means the m × m matrix C is very large and often very dense too. Second, the algorithm of [50] for finding W is extremely expensive, making this method impractical. All six initialization methods are summarized in Table 1. The two existing methods appear first, followed by our four suggested methods.
Name | Proposed by | Pros | Cons
Random | Lee, Seung [34] | easy, cheap to compute | dense matrices, no intuitive basis
Centroid | Wild et al. [56] | reduces # NMF iterations; firm, intuitive foundation | expensive, must run clustering algorithm on cols of A
SVD-Centroid | Langville [31] | inexpensive, reduces # NMF iterations | SVD factor must be available
Random Acol | Langville [30] | cheap, sparse matrices built from original data | only slight decrease in number of NMF iterations
Random C | Langville adapts from Drineas [17] | cheap, sparse | not very effective
Co-occurrence | Langville adapts from Sandler [50] | uses term-term similarities | large, dense co-occurrence matrix; very expensive computation
4.3 Initialization Experiments with Reuters10 dataset
The reuters10 collection is our subset of the Reuters-21578 version of the Reuters benchmark document collection of business newswire posts. The Reuters-21578 version contains over 20,000 documents categorized into 118 different categories, and is available online at http://www.daviddlewis.com/resources/testcollections/reuters21578/. Our subset, the reuters10 collection, is derived from the set of documents that have been classified into the top ten most frequently occurring categories. The collection contains 9248 documents from the training data of the "ModApte split" (details of the split are also available at the website above).
The numbers reported in Table 2 were generated by applying the alternating constrained least squares (ACLS) algorithm of Section 3 with k = 10 to the reuters10 dataset. The error measure in this table is relative to the optimal rank-10 approximation A_10 given by the singular value decomposition. Thus, for example, the error at iteration 10 is computed as

Error(10) = (||A − W^(10) H^(10)||_F − ||A − A_10||_F) / ||A − A_10||_F.
Method | Time | Storage | Error(0) | Error(10) | Error(20) | Error(30)
Random | .09 sec | 726K | 4.28% | .28% | .15% | .15%
Centroid | 27.72 | 46K | 2.02% | .27% | .18% | .18%
SVD-Centroid* | | 56K | 2.08% | .06% | .06% | .06%
Random Acol** | .05 | 6K | 2.01% | .21% | .16% | .15%
Random C*** | .11 | 22K | 3.35% | .29% | .20% | .19%
Co-occurrence | 3287 | 45K | 3.38% | .37% | .27% | .25%
ACLS time | | | .37 sec | 3.42 | 6.78 | 10.29

* provided the SVD factor is already available
** each column of W formed by averaging 20 random columns of A
*** each column of W formed by averaging 20 of the longest columns of A
We distinguish between quantitative accuracy, as reported in Table 2, and qualitative accuracy, as reported in Tables 3 through 9. For text mining applications, it is often not essential that the low rank approximation be terribly precise; suboptimal solutions are often "good enough." After reviewing Tables 3–9, it is easy to see why some initializations give better accuracy and converge more quickly: they start with basis vectors in W that are much closer to the best basis vectors found, as reported in Table 3, which was generated from the basis vectors associated with the best minimum found for the reuters10 dataset using 500 random restarts. In fact, the relative error for this minimum is .009%, showing remarkable closeness to the optimal rank-10 approximation. By comparing each subsequent table with Table 3, it is clear why one initialization method is better than another. The best method, SVD-centroid initialization, starts with basis vectors very close to the "optimal" basis vectors of Table 3. On the other hand, random and random Acol initializations are truly random. Nevertheless, random Acol does maintain one clear advantage over random initialization, as it creates a very sparse W. The random C and co-occurrence initializations suffer from a lack of diversity: many of the longest documents in the reuters10 collection appear to be on similar topics, thus not allowing W to cover many of the reuters topics.
Because the algorithms did not always produce the "wheat" vector in column one of W, we have reordered the resulting basis vectors to make comparisons easier. We also note that the nonnegative matrix factorization did produce basis vectors that cover 8 of the 10 "correct" reuters classifications, which appear on the last line of Table 3. The two missing reuters classifications are corn and grain, both of which are lumped into the first basis vector, labeled wheat. This first basis vector does break into two separate vectors, one pertaining to wheat and grain and another to corn, when the number of basis vectors is increased beyond k = 10. We note that these categories have been notoriously difficult to classify, as previously reported in [18].
Table 3. Highest weighted terms in the ten basis vectors of W at the best minimum found for reuters10 (the last line gives the topic labels).

tonne | billion | share | stg | mlnmln | gulf | dollar | oil | loss | trade
wheat | year | offer | bank | cts | iran | rate | opec | profit | japan
grain | earn | company | money | mln | attack | curr. | barrel | oper | japanese
crop | qrtr | stock | bill | shr | iranian | bank | bpd | exclude | tariff
corn | rise | sharehol. | market | net | ship | yen | crude | net | import
agricul. | pct | common | england | avg | tanker | monetary | price | dlrs | reagan

Labels: wheat, earn, acquisition, interest, ship, frgn-exch., oil, trade

Tables 4–9. Highest weighted terms in the initial basis vectors produced by the six initialization methods of Table 1 (the captions matching each method to its table were lost in extraction).

Table 4.
announce | wpp | formality | bulletin | matthews | dramatic | squibb | wag | cochran | erik
medtec | reflagging | simply | awfully | nyt | boca raton | kuwaiti | oils | mln | support
pac | kwik | moonie | blair | barrel | clever | dacca | hears | barriers | sale oil
purina | tilbury | tmg | fresno | purina | billion | democrat | bwtr | deluxe | direct
mezzanine | capacitor | bushnell | farm | june | bkne | induce | nestle | mkc | wheat
foreign | grain | country | leutwiler | trend | clever | rate | fed. | econ. | aid

Table 5.
tonne | bank | share | medar | cts | iran | rate | oil | stg | strike
wheat | rate | company | mdxr | mmln | gulf | dollar | trade | bill | port
grain | dollar | offer | mlx | loss | attack | bank | price | takeup | union
corn | billion | pct | mlxx | net | iranian | currency | barrel | drain | seaman
crop | pct | stock | mich | shr | missile | market | japan | mature | worker
agriculture | trade | dlrs | troy | dlrs | tanker | monetary | opec | money | ship

Table 6.
tonne | billion | share | bank | cts | iran | dollar | oil | loss | trade
wheat | year | offer | money | shr | gulf | rate | barrel | oper | japan
grain | earn | company | rate | mln | attack | curr. | opec | profit | japanese
corn | qrtr | stock | stg | net | iranian | yen | crude | cts | tariff
crop | rise | pct | market | mlnmln | missile | japan | bpd | mln | import
agricul. | pct | common | pct | rev | ship | economic | price | net | country

Table 7.
mln | fee | agl | mln | mark | loss | official | dlrs | bank | trade
denman | mortg. | tmoc | dlrs | mannes. | mln | piedmont | oper | bancaire | viermetz
dlrs | billion | bank | share | dividend | cts | dollar | billion | austral | mln
ecuador | winley | pct | seipp | mln | maki | interest | loss | neworld | nwa
venezuela | mln | company | billion | dieter | name | tokyo | texaco | datron | cts
revenue | fed | maki | dome | gpu | kato | japanese | pennzoil | share | builder

Table 8.
analyst | dollar | econ. | bank | market | analyst | analyst | analyst | trade | rate
lawson | rate | policy | rate | bank | market | industry | bank | dollar | trade
market | econ. | pct | market | analyst | trade | price | currency | japan | official
trade | mark | cost | currency | price | pct | market | japan | price | bank
sterling | bank | growth | dollar | mark | last | believe | billion | japanese | market
dollar | rise | trade | trade | good | official | last | cut | pct | econ.

Table 9.
dept. | average | agricul. | national | farmer | ratex | aver price | plywood | wash. | trade
wheat | pct | wheat | bank | ratex | natl | average | aqtn | trade | japan
agricul. | rate | tonne | rate | natl | avge | price | aequitron | japan | billion
tonne | price | grain | pct | avge | farmer | yield | medical | official | market
usda | billion | farm | oil | cwt | cwt | billion | enzon | reagan | japanese
corn | oil | dept. | gov. | wheat | wheat | bill | enzon | pct | import
5 Convergence Criterion
Nearly all NMF algorithms use the simplest possible convergence criterion: run for a fixed number of iterations, denoted maxiter. This criterion is used so often because the natural criterion, stop when $\|A - WH\|_F$ drops below some tolerance, requires more expense than most users are willing to incur, even occasionally. Notice that maxiter was the convergence criterion used in the ACLS and AHCLS algorithms of Section 3. However, a fixed number of iterations is not a mathematically appealing way to control the number of iterations executed, because the most appropriate value for maxiter is problem-dependent.
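To make the trade-off concrete, the following sketch is our own minimal ALS-style driver, not the ACLS listing of Section 3; the parameter names and the choice of clipping negative entries are our assumptions. It combines a maxiter cap with an occasional check of the natural criterion:

```python
import numpy as np

def als_nmf(A, k, maxiter=50, tol=1e-4, check_every=5, burnin=10):
    """Basic alternating least squares NMF driver.
    Runs for at most maxiter iterations, but also stops early once the
    relative Frobenius residual ||A - WH||_F / ||A||_F falls below tol."""
    m, n = A.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, k))
    H = np.zeros((k, n))
    normA = np.linalg.norm(A, "fro")
    for it in range(maxiter):
        # Solve the two least squares problems and clip negatives (ALS step).
        H = np.clip(np.linalg.lstsq(W, A, rcond=None)[0], 0, None)
        W = np.clip(np.linalg.lstsq(H.T, A.T, rcond=None)[0].T, 0, None)
        # Check the expensive criterion only occasionally, after a burn-in.
        if it >= burnin and it % check_every == 0:
            if np.linalg.norm(A - W @ H, "fro") / normA < tol:
                break
    return W, H
```

The point of the sketch is the stopping logic, not the factorization itself: the residual is evaluated only every check_every iterations, so the cost of the natural criterion is amortized while maxiter still bounds the worst case.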
In this section, we use the ACLS algorithm applied to the cisi dataset to compare two convergence criteria: the natural but more expensive Frobenius norm measure, and our proposed angular measure, which we describe below. We note that we used the most efficient implementation of the Frobenius measure [32], which exploits the trace form of the Frobenius norm:

$$\|A - WH\|_F^2 = \operatorname{tr}(A^TA) - 2\operatorname{tr}(H^TW^TA) + \operatorname{tr}(H^TW^TWH).$$

In this equation, $\operatorname{tr}(A^TA)$ is a constant that does not change throughout the iterations, and thus is computed only once and stored. At each iteration, $\operatorname{tr}(H^TW^TA)$ and $\operatorname{tr}(H^TW^TWH)$ must be computed. However, some quantities required by these traces, such as $W^TA$ and $W^TW$, are already available from the least squares steps, and hence need not be recomputed.
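For concreteness, the trace-form bookkeeping described above can be sketched as follows. This is our own illustration; the variable names trAtA, WtA, and WtW are ours, standing for $\operatorname{tr}(A^TA)$, $W^TA$, and $W^TW$, the last two of which an ALS step has already formed:

```python
import numpy as np

def frobenius_measure_sq(trAtA, H, WtA, WtW):
    """||A - W H||_F^2 via the trace form.
    trAtA = trace(A^T A) is computed once up front; WtA = W^T A and
    WtW = W^T W are byproducts of the least squares solve for H."""
    # trace(H^T W^T A) = sum of elementwise products of H and W^T A.
    t2 = np.sum(H * WtA)
    # trace(H^T W^T W H) = sum of elementwise products of (W^T W H) and H.
    t3 = np.sum((WtW @ H) * H)
    return trAtA - 2.0 * t2 + t3
```

Note that the residual matrix $A - WH$ is never formed, which is the entire savings: the dominant cost is the small $k \times k$ times $k \times n$ product $W^TWH$.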
Our angular convergence measure is moderately inexpensive in storage and computation, and is intuitively appealing. Simply measure the angle $\theta_j$ between successive topic vectors, i.e., $\mathbf{w}_j^{(t-1)}$ and $\mathbf{w}_j^{(t)}$ at iterations $t-1$ and $t$. Once $\theta_j \le \epsilon$ for each $j = 1, \dots, k$, stop, because the topic vectors have converged satisfactorily. Mitchell and Burdick [41] have shown that, in a different context, a similar measure converges simultaneously with the expensive convergence criterion based on the objective function $\|A - WH\|_F$ [55]. However, Figure 4 clearly shows one problem with the angular convergence measure: it does not maintain continual descent, because the basis vectors compared from one iteration to the next are not required to maintain any fixed column order. After several iterations, the column ordering of the basis vectors in $W$ is less likely to change, making the angular measure more useful in later iterations of the algorithm. The angular convergence measure is much less expensive to compute than the Frobenius measure, but it does require additional storage of the matrix $W$ from the previous iteration. We note that in practice, regardless of the chosen convergence criterion, it is wise to compute the measure only every five or so iterations, after some burn-in period.
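A minimal sketch of the angular measure follows. This is our own illustration; the default tolerance of one degree is an arbitrary assumption, and the columns of the two iterates are assumed to be nonzero and in the same order:

```python
import numpy as np

def angles_converged(W_old, W_new, tol_degrees=1.0):
    """Stop when every topic vector moves by less than tol_degrees
    between successive iterations (columns assumed to keep their order)."""
    # Normalize columns so inner products are cosines of the angles.
    U = W_old / np.linalg.norm(W_old, axis=0)
    V = W_new / np.linalg.norm(W_new, axis=0)
    # Cosine of the angle between corresponding columns; clip guards
    # against roundoff pushing values slightly outside [-1, 1].
    cosines = np.clip(np.sum(U * V, axis=0), -1.0, 1.0)
    angles = np.degrees(np.arccos(cosines))
    return np.all(angles < tol_degrees)
```

Because the measure only compares $W$ against its previous iterate, it costs $O(mk)$ per check, versus the matrix products required by the trace-form Frobenius measure.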
The recent 2005 reference by Lin [38] raises the related convergence issue of stationarity. The fact that $\|A - WH\|_F$ (or some similar objective function) levels off does not guarantee stationarity. Lin advocates a stationarity check once an algorithm has stopped; for instance, the stationarity checks of Chu and Plemmons [13] may be used. Lin [38] also proposes a convergence criterion that simultaneously checks for stationarity and fits nicely into his projected gradients algorithm. We agree that a stationarity check should be conducted on termination.
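As one concrete possibility (a generic KKT-based check of our own, not Lin's exact criterion), stationarity of $\min \tfrac{1}{2}\|A - WH\|_F^2$ subject to $W, H \ge 0$ can be measured by the norm of the projected gradient, which vanishes exactly at stationary points:

```python
import numpy as np

def projected_gradient_norm(A, W, H):
    """KKT-based stationarity measure for min (1/2)||A - WH||_F^2, W, H >= 0.
    At a stationary point the gradient must vanish wherever a variable is
    positive and be nonnegative wherever a variable sits at zero."""
    R = W @ H - A
    gW = R @ H.T            # gradient with respect to W
    gH = W.T @ R            # gradient with respect to H
    # Project: keep the full gradient at positive entries; at zero entries
    # only a negative gradient component signals non-stationarity.
    pgW = np.where(W > 0, gW, np.minimum(gW, 0.0))
    pgH = np.where(H > 0, gH, np.minimum(gH, 0.0))
    return np.sqrt(np.sum(pgW**2) + np.sum(pgH**2))
```

Running such a check after termination distinguishes a genuinely stationary point from an objective value that has merely leveled off.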
6 Conclusion
The two new NMF algorithms presented in this paper, ACLS and AHCLS, are among the fastest available, even faster than truncated SVD algorithms. However, while the algorithms converge to a stationary point, they cannot guarantee that this stationary point is a local minimum. If a local minimum must be achieved, then we recommend using the results from a fast ALS-type algorithm as the initialization for one of the slower algorithms [38] that do guarantee convergence to a local minimum. In this paper we also presented several alternatives to the common, but poor, technique of random initialization. Lastly, we proposed an alternative stopping criterion that practical implementations of NMF code should consider. The common stopping criterion of running for a fixed number of iterations should be replaced with a criterion that fits the context of the users and their data. For many applications, iterating until $\|A - WH\|_F$ reaches some small level is unnecessary, especially in cases where one is most interested in the qualitative results produced by the vectors in $W$. In such cases, our proposed angular convergence measure is more appropriate.
References
 [1] Ricardo Baeza-Yates and Berthier Ribeiro-Neto. Modern Information Retrieval. ACM Press, New York, 1999.
 [2] R. Barrett, M. Berry, T. F. Chan, J. Demmel, J. Donato, J. Dongarra, V. Eijkhout, R. Pozo, C. Romine, and H. Van der Vorst. Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods. SIAM, Philadelphia, 2nd edition, 1994.
 [3] Michael W. Berry, editor. Computational Information Retrieval. SIAM, Philadelphia, 2001.
 [4] Michael W. Berry and Murray Browne. Understanding Search Engines: Mathematical Modeling and Text Retrieval. SIAM, Philadelphia, 2nd edition, 2005.
 [5] Michael W. Berry, Zlatko Drmac, and Elizabeth R. Jessup. Matrices, vector spaces and information retrieval. SIAM Review, 41:335–362, 1999.
 [6] Michael W. Berry and R. D. Fierro. Low-rank orthogonal decompositions for information retrieval applications. Journal of Numerical Linear Algebra with Applications, 1(1):1–27, 1996.
 [7] Michael W. Berry and Gavin W. O’Brien. Using linear algebra for intelligent information retrieval. SIAM Review, 37:573–595, 1998.
 [8] Åke Björck. Numerical Methods for Least Squares Problems. SIAM, Philadelphia, 1996.
 [9] Katarina Blom. Information retrieval using the singular value decomposition and Krylov subspaces. PhD thesis, University of Chalmers, January 1999.
 [10] Katarina Blom and Axel Ruhe. Information retrieval using very short Krylov sequences. In Computational Information Retrieval, pages 41–56, 2001.
 [11] Rasmus Bro and Sijmen de Jong. A fast nonnegativity constrained linear least squares algorithm. Journal of Chemometrics, 11:393–401, 1997.
 [12] Donald S. Burdick, Xin M. Tu, Linda B. McGown, and David W. Millican. Resolution of multicomponent fluorescent mixtures by analysis of the excitation-emission-frequency array. Journal of Chemometrics, 4:15–28, 1990.
 [13] Moody Chu, Fasma Diele, Robert J. Plemmons, and Stefania Ragni. Optimality, computation, and interpretations of nonnegative matrix factorizations. SIAM Journal on Matrix Analysis, 2004. submitted.
 [14] Anirban Dasgupta, Ravi Kumar, Prabhakar Raghavan, and Andrew Tomkins. Variable latent semantic indexing. In Proceedings of the Eleventh ACM SIGKDD International Conference on Knowledge Discovery in Data Mining. ACM Press, 2005.
 [15] Inderjit S. Dhillon. Concept decompositions for large sparse text data using clustering. Machine Learning, 42(1/2):143–175, 2001.
 [16] Inderjit S. Dhillon and Suvrit Sra. Generalized nonnegative matrix approximations with Bregman divergences. In Proceedings of the Neural Information Processing Systems (NIPS) Conference, Vancouver, B.C., 2005.
 [17] Petros Drineas, Ravi Kannan, and Michael W. Mahoney. Fast Monte Carlo algorithms for matrices III: Computing a compressed approximate matrix decomposition. SIAM Journal on Computing, 2006. to appear.
 [18] Susan Dumais, John Platt, David Heckerman, and Mehran Sahami. Inductive learning algorithms and representations for text categorization. In CIKM ’98: Proceedings of the Seventh International Conference on Information and Knowledge Management, pages 148–155. ACM Press, 1998.
 [19] Susan T. Dumais. Improving the retrieval of information from external sources. Behavior Research Methods, Instruments and Computers, 23:229–236, 1991.
 [20] Lorenzo Finesso and Peter Spreij. Approximate nonnegative matrix factorization via alternating minimization. In Sixteenth International Symposium on Mathematical Theory of Networks and Systems, Leuven, 2004.
 [21] Gene H. Golub and Charles F. Van Loan. Matrix Computations. Johns Hopkins University Press, Baltimore, 1996.
 [22] Edward F. Gonzalez and Yin Zhang. Accelerating the Lee-Seung algorithm for nonnegative matrix factorization. Technical Report TR-05-02, Rice University, March 2005.
 [23] Patrik O. Hoyer. Non-negative sparse coding. In Neural Networks for Signal Processing XII (Proc. IEEE Workshop on Neural Networks for Signal Processing), pages 557–565, 2002.
 [24] Patrik O. Hoyer. Nonnegative matrix factorization with sparseness constraints. Journal of Machine Learning Research, 5:1457–1469, 2004.
 [25] M. K. Hughey and Michael W. Berry. Improved query matching using kd-trees, a latent semantic indexing enhancement. Information Retrieval, 2:287–302, 2000.
 [26] Eric P. Jiang and Michael W. Berry. Solving total least squares problems in information retrieval. Linear Algebra and its Applications, 316:137–156, 2000.
 [27] Fan Jiang and Michael L. Littman. Approximate dimension equalization in vector-based information retrieval. In The Seventeenth International Conference on Machine Learning, pages 423–430, 2000.
 [28] Mika Juvela, Kimmi Lehtinen, and Pentti Paatero. The use of positive matrix factorization in the analysis of molecular line spectra. Monthly Notices of the Royal Astronomical Society, 280:616–626, 1996.
 [29] Tamara G. Kolda and Dianne P. O’Leary. A semidiscrete matrix decomposition for latent semantic indexing in information retrieval. ACM Transactions on Information Systems, 16:322–346, 1998.
 [30] Amy N. Langville. Algorithms for the nonnegative matrix factorization in text mining, April 2005. Slides from SAS Meeting.
 [31] Amy N. Langville. Experiments with the nonnegative matrix factorization and the reuters10 dataset, February 2005. Slides from SAS Meeting.
 [32] Amy N. Langville, Michael W. Berry, Murray Browne, V. Paul Pauca, and Robert J. Plemmons. Algorithms and applications for approximate nonnegative matrix factorization. Computational Statistics and Data Analysis, 52(1):155–173, 2007.
 [33] Charles L. Lawson and Richard J. Hanson. Solving Least Squares Problems. SIAM, 1995.
 [34] D. Lee and H. Seung. Algorithms for non-negative matrix factorization. Advances in Neural Information Processing Systems, 13:556–562, 2001.
 [35] Daniel D. Lee and H. Sebastian Seung. Learning the parts of objects by non-negative matrix factorization. Nature, 401:788–791, 1999.
 [36] Todd A. Letsche and Michael W. Berry. Large-scale information retrieval with LSI. Informatics and Computer Science, pages 105–137, 1997.
 [37] Shousong Li and Paul J. Gemperline. Eliminating complex eigenvectors and eigenvalues in multiway analyses using the direct trilinear decomposition method. Journal of Chemometrics, 7:77–88, 1993.
 [38] Chih-Jen Lin. Projected gradient methods for nonnegative matrix factorization. Technical Report ISSTECH-95-013, Department of Computer Science, National Taiwan University, 2005.
 [39] Jianhang Lu and Laosheng Wu. Technical details and programming guide for a general two-way positive matrix factorization algorithm. Journal of Chemometrics, 18:519–525, 2004.
 [40] Carl D. Meyer. Matrix Analysis and Applied Linear Algebra. SIAM, Philadelphia, 2000.
 [41] Ben C. Mitchell and Donald S. Burdick. Slowly converging PARAFAC sequences: Swamps and two-factor degeneracies. Journal of Chemometrics, 8:155–168, 1994.
 [42] Jorge Nocedal and Stephen J. Wright. Numerical Optimization. Springer, 1999.
 [43] Pentti Paatero. Least squares formulation of robust nonnegative factor analysis. Chemometrics and Intelligent Laboratory Systems, 37:23–35, 1997.
 [44] Pentti Paatero. The multilinear engine—a table-driven least squares program for solving multilinear problems, including the n-way parallel factor analysis model. Journal of Computational and Graphical Statistics, 8(4):1–35, 1999.
 [45] Pentti Paatero and U. Tapper. Positive matrix factorization: a nonnegative factor model with optimal utilization of error estimates of data values. Environmetrics, 5:111–126, 1994.
 [46] V. Paul Pauca, Jon Piper, and Robert J. Plemmons. Nonnegative matrix factorization for spectral data analysis. 2005.
 [47] V. Paul Pauca, Farial Shahnaz, Michael W. Berry, and Robert J. Plemmons. Text mining using non-negative matrix factorizations. In Proceedings of the Fourth SIAM International Conference on Data Mining, April 22–24, Lake Buena Vista, FL, 2004. SIAM.
 [48] Robert J. Plemmons, 2005. private communication.
 [49] E. Sanchez and B. R. Kowalski. Tensorial resolution: A direct trilinear decomposition. Journal of Chemometrics, 4:29–45, 1990.
 [50] Mark Sandler. On the use of linear programming for unsupervised text classification. In The Eleventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Chicago, IL, 2005.
 [51] R. Sands and Forrest W. Young. Component models for three-way data: an alternating least squares algorithm with optimal scaling features. Psychometrika, 45:39–67, 1980.
 [52] Farial Shahnaz. A clustering method based on nonnegative matrix factorization for text mining. Master’s thesis, University of Tennessee, Knoxville, 2004.
 [53] Farial Shahnaz, Michael W. Berry, V. Paul Pauca, and Robert J. Plemmons. Document clustering using nonnegative matrix factorization. Information Processing & Management, 42(2):373–386, 2006.
 [54] David B. Skillicorn, S. M. McConnell, and E. Y. Soong. Handbooks of data mining using matrix decompositions. 2003.
 [55] Age Smilde, Rasmus Bro, and Paul Geladi. Multiway Analysis. Wiley, West Sussex, England, 2004.
 [56] Stefan Wild. Seeding nonnegative matrix factorizations with spherical k-means clustering. Master’s thesis, University of Colorado, 2003.
 [57] Stefan Wild, James Curry, and Anne Dougherty. Motivating nonnegative matrix factorizations. In Eighth SIAM Conference on Applied Linear Algebra, Philadelphia, 2003. SIAM.
 [58] Stefan Wild, James Curry, and Anne Dougherty. Improving nonnegative matrix factorizations through structured initialization. Pattern Recognition, 37(11):2217–2232, 2004.
 [59] Dian I. Witter and Michael W. Berry. Downdating the latent semantic indexing model for conceptual information retrieval. The Computer Journal, 41(1):589–601, 1998.
 [60] Wei Xu, Xin Liu, and Yihong Gong. Document clustering based on nonnegative matrix factorization. In ACM SIGIR Conference on Research and Development in Information Retrieval, pages 267–273, New York, 2003. ACM Press.
 [61] Hongyuan Zha, Osni Marques, and Horst D. Simon. A subspacebased model for information retrieval with applications in latent semantic indexing. Lecture Notes in Computer Science, 1457:29–42, 1998.
 [62] Xiaoyan Zhang, Michael W. Berry, and Padma Raghavan. Level search schemes for information filtering and retrieval. Information Processing and Management, 37:313–334, 2001.