Minimum description length as an objective function for non-negative matrix factorization

02/05/2019, by Steven Squires, et al.

Non-negative matrix factorization (NMF) is a dimensionality reduction technique which tends to produce a sparse representation of data. Commonly, the error between the actual and recreated matrices is used as an objective function, but this method may not produce the type of representation we desire as it allows for the complexity of the model to grow, constrained only by the size of the subspace and the non-negativity requirement. If additional constraints, such as sparsity, are imposed the question of parameter selection becomes critical. Instead of adding sparsity constraints in an ad-hoc manner we propose a novel objective function created by using the principle of minimum description length (MDL). Our formulation, MDL-NMF, automatically trades off between the complexity and accuracy of the model using a principled approach with little parameter selection or the need for domain expertise. We demonstrate our model works effectively on three heterogeneous data-sets and on a range of semi-synthetic data showing the broad applicability of our method.


1 Introduction

Non-negative matrix factorisation (NMF) is a popular linear dimensionality reduction technique. Its ability to produce a sparse and parts based representation, in contrast to principal component analysis (PCA) which is variance preserving, has been exploited in a range of applications devarajan2006nonnegative ; lee1999learning ; wang2011community . The aim of NMF is to take a non-negative data matrix V with m dimensions and n data points and find two non-negative matrices, W and H, such that $V \approx WH$, where W is $m \times q$, H is $q \times n$ and (usually) $q \ll m$ and $q \ll n$. The columns of W represent the axes of the new subspace and each column of H holds the coefficients of each data-point in the new subspace. The standard method for finding the matrices W and H is to choose an objective function and then attempt to minimise it. One of the most common choices is the Frobenius norm, $\|V - WH\|_F^2$, but other options such as measures similar to the Kullback-Leibler divergence have also been used lee2001algorithms .
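For concreteness, a minimal sketch of standard NMF under the Frobenius norm objective, using the multiplicative updates of Lee and Seung lee2001algorithms , is given below; the variable names V, W, H and q match the notation above, while the iteration count and the small constant eps are illustrative choices rather than details from this paper.

```python
import numpy as np

def nmf_frobenius(V, q, n_iter=500, eps=1e-9, seed=0):
    """Standard NMF: minimise ||V - WH||_F^2 with multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, q))                      # non-negative random initialisation
    H = rng.random((q, n))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)    # update H; eps avoids division by zero
        W *= (V @ H.T) / (W @ H @ H.T + eps)    # update W; both factors stay non-negative
    return W, H
```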

One of the useful features of NMF is that it can produce a sparse representation of the data which, amongst other benefits, makes it easier to interpret. This sparsity can manifest in either, or both, of the factorised matrices. Sparsity arises in NMF from the non-negativity constraint: all the latent structure is positive, and so is the summation of that structure used to recreate each data-point. The columns of the W matrix can often be interpreted as parts of the data, especially as the features occupy the same positive space as the input matrix. Again, this is in contrast to PCA, where the subspace the data is projected onto tends not to be purely positive and is therefore difficult to interpret in terms of features of a non-negative input matrix. Nor does PCA normally produce sparse solutions; instead it produces holistic representations which preserve variance.

The seminal paper by Lee and Seung lee1999learning , which sparked the surge of interest in NMF (although other authors had conducted significant work in the field previously paatero1994positive ), relied on the constraints from the small subspace size and non-negativity to provide sparse solutions. However, other authors, in particular Hoyer hoyer2004non , showed that for some problems NMF does not produce sparse solutions and that it can be useful to impose sparsity explicitly.

Hoyer hoyer2004non introduced additional sparsity parameters through a projection operator which constrains the W and H matrices to have a defined level of sparseness. Hoyer's formulation works well if there is a specific goal in mind: if you want to extract understandable features you might impose significant levels of sparsity on W, while if you are more interested in a representation closer to clustering, imposing sparsity constraints on H can be effective. These choices are domain dependent and ad-hoc. However, if there is no prior knowledge or domain expertise, how do you decide the level of sparsity to impose on the factorised matrices, and how that sparsity should be distributed between the two? At a more fundamental level, we ask whether minimisation of the error alone is an optimal strategy.

With no additional constraints, NMF algorithms attempt to minimise the error between the recreated and original matrices or, for other objective functions such as the Kullback-Leibler divergence, to maximise the similarity in distribution between the actual and recreated matrices. In many situations we would expect this to result in over-fitting of the data - modelling noise rather than signal - and a model which generalises poorly. The algorithm aims to maximise the similarity between the actual and recreated matrices with no regard for how complex the created model is. As this is unsupervised learning, the algorithm sees all the data and cannot easily be tested on unseen data.

There are two features of the formulation that contend with overfitting: 1) the small size of the subspace, q, which means that it is likely real latent structure will be uncovered; 2) the non-negativity constraint, which works against overfitting by preventing the subtraction of structure, so providing an emphasis on sparse models which do not include unwanted structure. The choice of q has been investigated in the literature squires2017rank and its importance is clear when considering that if $q = m$ or $q = n$ then the model can perfectly recreate the actual data. The quality of the fit increases continuously with increasing q. The non-negativity constraint can produce sparse solutions which act against overfitting but, without additional constraints, may not be sufficient, as demonstrated by Hoyer hoyer2004non . The objective function without sparsity constraints may be biased in favour of improving accuracy with little consideration of the complexity of the model (which causes the overfitting). The imposition of sparsity constraints is an effective but ad-hoc and arbitrary method of preventing the model from becoming too complex. A more fundamental approach is desirable.

A Bayesian approach to NMF has been demonstrated by, for example, schmidt2009bayesian , which overcomes some of these related issues, especially the ad-hoc nature of additional sparsity constraints. Other groups have also applied a Bayesian formulation blei2010bayesian , including to the choice of q. Bayesian approaches have great value in NMF but they still require knowledge of the domain to enable prior distributions to be chosen. Other related work has been performed using particle filtering ideas wang2013online .

We propose an alternative objective function using a formulation based on minimum description length (MDL). This formulation allows an automatic and natural trade-off between accuracy and complexity of the model without the need for considerable domain expertise. Previous work has used MDL ideas to estimate good choices of q in NMF squires2017rank but, to our knowledge, no previous work has used MDL principles to guide the formation of the W and H matrices.

The remainder of this paper is organised as follows. In Section 2 we present the principles behind our application of MDL to the NMF objective function; in Section 3 we explain our methodology and algorithm; in Section 4 we demonstrate the empirical effectiveness of our model on real and semi-synthetic data and finally in Section 5 we draw conclusions and propose future directions of research.

2 Minimum description length

MDL is a method which produces an automatic trade-off between the complexity and accuracy of a model rissanen1978modeling ; wallace1968information ; barron1998minimum . Considering a message sent to a receiver, the MDL principle is that the encoding of the message that can be described by the shortest code is the best choice, as long as it enables the message to be recreated with perfect (to a pre-agreed precision) accuracy by the receiver. This is essentially equivalent to the best compression of the data, as a maximally compressed data-set will also be cheaper to transmit as a message than a more poorly compressed data-set. The best compression should come about when we extract features and use those to recreate the original data. The MDL principle is not used here to actually encode a message, instead our interest is in how long the message would be if we did encode it.

The message length under the MDL formulation, $L$, consists of two parts mackay2003information :

$L = L(\mathcal{H}) + L(D \mid \mathcal{H})$   (1)

where $L(\mathcal{H})$ is the length of the hypothesis and $L(D \mid \mathcal{H})$ encodes the accuracy of the model. $L(\mathcal{H})$ can be interpreted as representing the complexity of the model (or hypothesis): a simple model would have a lower cost (message length) than a more complex model. $L(D \mid \mathcal{H})$ then provides a measure of how accurately the model can represent the data: the better the model can reproduce the original data, the lower the cost, and the shorter the message required to encode the model errors.

To apply MDL to NMF, we regard the data matrix V as a message to be communicated and the factors, W and H, as a reduced representation that can be more efficiently transmitted. The combined number of elements of W and H is usually much smaller than the number of elements in V ($q(m+n) \ll mn$) so they should, usually, have a shorter required code length. In addition, we usually expect W and H to be fairly sparse compared to V which, again, makes them cheaper to encode, as will be discussed below. We need to reproduce the data matrix V to some pre-agreed precision, so we also need a correction matrix, $E = V - WH$, which encodes the accuracy of the model, with required code length $L(E)$.

In this MDL formulation the objective function, which we denote $L_T$, becomes

$L_T = L(W) + L(H) + L(E)$   (2)

where $L(W)$, $L(H)$ and $L(E)$ are the lengths of the code required for W, H and E respectively. The $L(W)$ and $L(H)$ terms can be viewed as penalty terms similar to Tikhonov tikhonov1977solutions and lasso tibshirani1996regression regularisers in classic regression problems. Models of high complexity require correspondingly long codes. However, with a rise in model complexity we expect to see a model which can more accurately recreate the original data matrix, and so would expect the values of the elements in E to fall, which should cause a reduction in $L(E)$. Therefore, an algorithm that finds a local minimum of Equation (2) should find a pair of matrices W and H which are automatically regulated to trade off between complexity and accuracy.

To understand why smaller errors lead to a shorter code length, consider what happens to a Gaussian distribution as the terms move closer to zero - the distribution becomes tighter. As we need to send the message to a pre-agreed precision, more terms will be close to zero, with many having the same value within the pre-agreed precision. This increasing number of terms can be encoded using the same, short, code. The elements far from the Gaussian mean will be more expensive to encode. Increasing accuracy therefore decreases the length of the correction, $L(E)$. If model complexity decreases, for example by an increase in sparsity, we see a similar result, but for the $L(W)$ and $L(H)$ terms - the code required becomes much cheaper as there are many terms near to zero. Therefore, sparse and non-complex model matrices will require a short message to encode them, but will require a longer code for the correction matrix, providing the trade-off between model complexity and accuracy.
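To make this concrete, consider a Gaussian with mean $\mu$ and standard deviation $\sigma$ and a pre-agreed precision (bin width) $\Delta$. Approximating the probability of an element by its density times the bin width (our own expansion, not an equation from the paper), the code length of a single element $e_{ij}$ is approximately

$-\log_2\!\big[p(e_{ij})\,\Delta\big] \;\approx\; \log_2\frac{\sigma\sqrt{2\pi}}{\Delta} \;+\; \frac{(e_{ij}-\mu)^2}{2\sigma^2\ln 2}$

so elements near the mean of a tight distribution (small $\sigma$ and small $|e_{ij}-\mu|$) cost few bits, while elements far out in the tails are expensive to encode.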

In MDL there is no need to know how to encode the matrices; all we need to do is estimate the length the code would need to be. We do this using the Shannon information content shannon1948 , which is defined as $-\log_2 p(x)$, where $x$ is the value of an element in the relevant matrix and $p(x)$ is the probability of that value. Including this estimate of the length of the code gives us an objective function:

$L_T = L(W) + L(H) + L(E)$   (3)
$\;\;\;\, = -\sum_{i,a} \log_2 p(w_{ia}) - \sum_{a,j} \log_2 p(h_{aj}) - \sum_{i,j} \log_2 p(e_{ij})$   (4)

where $e_{ij}$, $w_{ia}$ and $h_{aj}$ represent the $(i,j)$, $(i,a)$ and $(a,j)$ elements of E, W and H respectively. We have now specified the outline of our model (MDL-NMF) and why it should result in an automatic trade-off between complexity and accuracy. In the next section we will describe our method, which estimates the probabilities and then finds appropriate W and H matrices.
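As an illustration of Equations (3)-(4), a minimal sketch of a code-length estimate is given below; the bin-width parameter delta standing in for the pre-agreed precision, the helper name, and the density-times-bin-width approximation of the probability are assumptions made for this sketch, not details of our implementation.

```python
import numpy as np

def code_length_bits(values, pdf, delta=0.01):
    """Shannon code length: sum of -log2 p over all matrix elements (Eqs. (3)-(4)).

    The probability of an element is approximated by its density times the
    pre-agreed precision (bin width) delta.
    """
    p = pdf(np.asarray(values).ravel()) * delta
    return float(-np.sum(np.log2(np.maximum(p, 1e-300))))

# Total description length, given element-wise density estimates pdf_W, pdf_H, pdf_E:
#   L_T = code_length_bits(W, pdf_W) + code_length_bits(H, pdf_H) \
#         + code_length_bits(V - W @ H, pdf_E)
```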

3 Methods and algorithm

We propose two methods of estimating the probabilities, in the same manner as the previous work on using MDL to select the subspace size squires2017rank . The first is a non-parametric method of sorting the elements of the matrix into bins and estimating the probability as $p = n_b / N$, where $n_b$ is the number of elements in the bin and $N$ is the total number of elements of that matrix. The other, parametric, method fits probability distributions to the E, W and H matrices to extract the probability density of each value.
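A minimal sketch of the non-parametric estimate is given below; the bin width delta and the helper name are illustrative assumptions.

```python
import numpy as np

def binned_probabilities(values, delta=0.01):
    """Non-parametric estimate: probability of each element as n_b / N,
    where n_b is the count in the element's bin of width delta."""
    x = np.asarray(values).ravel()
    bins = np.floor(x / delta).astype(int)          # assign each element to a bin
    _, inverse, counts = np.unique(bins, return_inverse=True, return_counts=True)
    return counts[inverse] / x.size                 # probability for every element
```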

All the results in this paper were produced using the parametric model as it is computationally faster than the non-parametric approach because the objective function can be differentiated, allowing us to apply gradient descent methods to approach a minimum. Finding a fast non-parametric method might be an interesting future avenue of research.

To find the probability distributions we first put the elements of the W, H and E matrices, individually, into bins. We then fit a Gaussian distribution, with mean $\mu$ and standard deviation $\sigma$, to the E matrix, and two different gamma distributions to the W (with shape and scale parameters $k_W$ and $\theta_W$) and H (with parameters $k_H$ and $\theta_H$) matrices.

We chose the gamma distribution ($p(x) = x^{k-1} e^{-x/\theta} / (\Gamma(k)\,\theta^{k})$) because it fulfils several important criteria: it falls to zero as the value becomes large; it remains positive, either falling to zero or rising towards infinity as its argument approaches zero; and it is a flexible distribution. While we illustrate the idea here using the gamma and Gaussian distributions, our MDL-NMF method is not dependent on the type of distributions applied, and other options may be more suitable for specific applications.
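These fits could be obtained by maximum likelihood as in the sketch below; the helper name, the small positive offset used to keep the gamma fit away from exact zeros, and fixing the gamma location parameter at zero are assumptions of this sketch rather than details reported here.

```python
import numpy as np
from scipy import stats

def fit_densities(V, W, H):
    """Fit a Gaussian to the correction matrix E = V - WH and gamma
    distributions to W and H by maximum likelihood; return element-wise pdfs."""
    E = V - W @ H
    mu, sigma = stats.norm.fit(E.ravel())                          # Gaussian for E
    k_W, _, theta_W = stats.gamma.fit(W.ravel() + 1e-12, floc=0)   # shape, loc, scale
    k_H, _, theta_H = stats.gamma.fit(H.ravel() + 1e-12, floc=0)
    pdf_E = lambda x: stats.norm.pdf(x, mu, sigma)
    pdf_W = lambda x: stats.gamma.pdf(x + 1e-12, k_W, scale=theta_W)
    pdf_H = lambda x: stats.gamma.pdf(x + 1e-12, k_H, scale=theta_H)
    return pdf_E, pdf_W, pdf_H
```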

As the objective function of the distribution method is differentiable, we seek a solution by following a gradient descent approach. We therefore need to find $\partial L_T / \partial W_{ia}$ and $\partial L_T / \partial H_{aj}$, noting that E is a function of W and H. As with most NMF techniques gillis2014and we update W and H separately, making two separate convex functions to optimise as opposed to the original non-convex problem if we try to update both W and H together. We therefore apply:

$W_{ia} \leftarrow W_{ia} - \eta_W \frac{\partial L_T}{\partial W_{ia}}, \qquad H_{aj} \leftarrow H_{aj} - \eta_H \frac{\partial L_T}{\partial H_{aj}}$   (5)

where $\eta_W$ and $\eta_H$ are learning rate parameters,

$\frac{\partial L_T}{\partial W_{ia}} = -\sum_j \frac{\partial L(E)}{\partial E_{ij}} H_{aj} + \frac{\partial L(W)}{\partial W_{ia}}$   (6)

and

$\frac{\partial L_T}{\partial H_{aj}} = -\sum_i W_{ia} \frac{\partial L(E)}{\partial E_{ij}} + \frac{\partial L(H)}{\partial H_{aj}}$   (7)

We can also consider the updates from a matrix multiplication perspective, which can significantly speed up computation. In this case we have:

$W \leftarrow W - \eta_W \frac{\partial L_T}{\partial W}, \qquad H \leftarrow H - \eta_H \frac{\partial L_T}{\partial H}$   (8)

where

$\frac{\partial L_T}{\partial W} = -G_E H^{\mathsf{T}} + G_W, \qquad \frac{\partial L_T}{\partial H} = -W^{\mathsf{T}} G_E + G_H$   (9)

and we have defined three new matrices, $G_E$, $G_W$ and $G_H$, with elements:

$[G_E]_{ij} = \frac{\partial L(E)}{\partial E_{ij}}$   (10)
$[G_W]_{ia} = \frac{\partial L(W)}{\partial W_{ia}}, \qquad [G_H]_{aj} = \frac{\partial L(H)}{\partial H_{aj}}$   (11)

respectively.
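The matrix-form gradients could be computed as in the sketch below; the element-wise derivative expressions follow from the Gaussian and gamma densities assumed above (working in natural logarithms, which rescales the bit-length objective by a constant factor only) and are our own working for illustration, not expressions quoted from the derivation above.

```python
import numpy as np

def mdl_gradients(V, W, H, mu, sigma, k_W, theta_W, k_H, theta_H):
    """Gradients of the description length (natural-log units) w.r.t. W and H,
    assuming a Gaussian N(mu, sigma) on E = V - WH and gamma(k, theta) on W, H."""
    E = V - W @ H
    G_E = (E - mu) / sigma**2                                    # d(-ln p(e)) / de
    G_W = -(k_W - 1.0) / np.maximum(W, 1e-12) + 1.0 / theta_W    # d(-ln p(w)) / dw
    G_H = -(k_H - 1.0) / np.maximum(H, 1e-12) + 1.0 / theta_H    # d(-ln p(h)) / dh
    grad_W = -G_E @ H.T + G_W         # Equation (9): chain rule through E = V - WH
    grad_H = -W.T @ G_E + G_H
    return grad_W, grad_H
```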

As the W and H matrices change, the distributions will also shift, so we need to update the parameters of the distributions in response. In Algorithm 1 we specify the algorithm for finding the W and H matrices in the MDL-NMF formulation. The precision term depends on the data and sets the bin width. We do not impose a specific stopping criterion; generally we run the algorithm until there appears to be no further fall in the objective function or it begins to rise.

1: Input: data matrix V, subspace size q.
2: Output: W, H
3: Initialise: W, H.
4: Define: learning rates $\eta_W$, $\eta_H$; precision.
5: Calculate: parameters of the distributions, initial objective function $L_T$.
6: for t = 1, 2, ... until stopping criterion reached
7:         if $W_{ia} < 0$ or $H_{aj} < 0$, set that element to 0
8:         Update: $W \leftarrow W - \eta_W\, \partial L_T / \partial W$
9:         Update: $H \leftarrow H - \eta_H\, \partial L_T / \partial H$
10:        Calculate $L_T$
11:        if $L_T$ has increased, reduce $\eta_W$ and $\eta_H$, revert W and H to previous values
12:        Recalculate: parameters of distributions using maximum likelihood estimates.
13: end for
Algorithm 1: MDL-NMF implementation
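A compact end-to-end sketch of Algorithm 1, assembled from the pieces above, is given below; the refitting schedule, the clamping of negative elements immediately after each update, the learning-rate halving factor and all default parameter values are assumptions made for illustration, not the exact settings used for the results reported here.

```python
import numpy as np
from scipy import stats

def mdl_nmf(V, q, n_iter=1000, eta=1e-4, delta=0.01, seed=0):
    """Sketch of Algorithm 1: gradient descent on the MDL objective L_T,
    reverting W, H and shrinking the learning rates whenever L_T rises."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W, H = rng.random((m, q)), rng.random((q, n))
    eta_W = eta_H = eta

    def fit_params(W, H):
        E = V - W @ H
        mu, sigma = stats.norm.fit(E.ravel())
        k_W, _, th_W = stats.gamma.fit(W.ravel() + 1e-12, floc=0)
        k_H, _, th_H = stats.gamma.fit(H.ravel() + 1e-12, floc=0)
        return mu, sigma, k_W, th_W, k_H, th_H

    def objective(W, H, p):
        mu, sigma, k_W, th_W, k_H, th_H = p
        bits = lambda x, pdf: -np.sum(np.log2(np.maximum(pdf(x) * delta, 1e-300)))
        return (bits(V - W @ H, lambda x: stats.norm.pdf(x, mu, sigma))
                + bits(W, lambda x: stats.gamma.pdf(x + 1e-12, k_W, scale=th_W))
                + bits(H, lambda x: stats.gamma.pdf(x + 1e-12, k_H, scale=th_H)))

    params = fit_params(W, H)
    L_prev = objective(W, H, params)
    for _ in range(n_iter):
        mu, sigma, k_W, th_W, k_H, th_H = params
        E = V - W @ H
        G_E = (E - mu) / sigma**2
        grad_W = -G_E @ H.T - (k_W - 1) / np.maximum(W, 1e-12) + 1 / th_W
        grad_H = -W.T @ G_E - (k_H - 1) / np.maximum(H, 1e-12) + 1 / th_H
        W_new = np.maximum(W - eta_W * grad_W, 0.0)    # update then clamp negatives
        H_new = np.maximum(H - eta_H * grad_H, 0.0)
        L_new = objective(W_new, H_new, params)
        if L_new > L_prev:                             # objective rose: revert, shrink step
            eta_W, eta_H = eta_W / 2.0, eta_H / 2.0
        else:                                          # accept step, refit distributions
            W, H = W_new, H_new
            params = fit_params(W, H)
            L_prev = objective(W, H, params)
    return W, H
```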

Perhaps the most widely used NMF algorithm is the multiplicative updates (MU) method of Lee and Seung lee1999learning ; lee2001algorithms which is, effectively, gradient descent of the Frobenius norm error with an automatically changing learning rate which ensures that the objective function monotonically decreases. In contrast, due to the static learning rate, we have no guarantee that the MDL-NMF objective function will monotonically decrease. We compensate for this by reducing the learning rate if the objective function is not falling.

If different distributions were chosen, the only change required would be in the $\partial L(W)/\partial W_{ia}$ and $\partial L(H)/\partial H_{aj}$ terms. These would need to be worked out again for the new distributions; otherwise the method and algorithm would remain the same.

4 Results and Discussion

In the previous sections we have given an explanation of why our MDL-NMF method should work and the potential advantages over methods that minimise an error alone, or those that impose sparsity with ad-hoc parameter tuning. We now present empirical results to demonstrate the efficacy of our approach. In Table 1 we display the three data-sets we tested our MDL-NMF method on, including the type of data, the number of dimensions and number of data-points. These data-sets were chosen for their considerable heterogeneity to show the breadth of application of our method.

Name           Type        m     n     Source
Faces          Image       361   2429  http://cbcl.mit.edu/software-datasets/FaceData2.html
Transcriptome  Biological  5000  38    http://www.broadinstitute.org/cgi-bin/cancer/datasets.cgi
FTSE 100       Financial   1305  94    University Bloomberg information terminal

Table 1: Data-sets, the type of data, the number of dimensions, m, and number of data-points, n.

The faces data is a set of 19-by-19 grey-scale images of faces which are individually converted to 361-dimensional vectors and then stacked to make the 361-by-2429 matrix. The transcriptome golub1999molecular data set is gene expression data for 38 samples, from people with two different types of leukemia. Each column of the FTSE 100 data represents the daily closing price of one stock over a five year period; these are stacked with the other companies (94 rather than 100 due to companies dropping into and out of the FTSE 100 over time) to make our data-set. These data-sets are from a wide range of applications, all have previously been studied using NMF techniques lee1999learning ; devarajan2006nonnegative ; squires2017non , and provide evidence for the large range of effective use of MDL-NMF. They are from very different domains, with significantly different m to n ratios.

Making a direct comparison between dimensionality reduction techniques is a challenge. Unlike in supervised learning there is no direct model to test held-out data on, so we cannot easily use a generalisation error to compare our results to other methods. To compensate for this we have also tested MDL-NMF on semi-synthetic data.

The use of synthetic data has two main problems: 1) that it is difficult to know what the underlying structure should look like; 2) that synthetic data can be created and adjusted to provide data that produce favourable results for any particular technique instead of a realistic portrayal of the effectiveness of the method. Therefore, instead of using completely synthetic data we used a form of semi-synthetic data based upon the three data-sets in Table 1.

Our general principle to create semi-synthetic data is to perform NMF on the real data, producing W and H. From there we know what the underlying structure of the semi-synthetic data is - assuming standard NMF uncovers real structure. We then rebuild the data, add noise and use our MDL-NMF technique to attempt to uncover the known true results.
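The sketch below shows one way this construction could be coded, using scikit-learn's standard NMF as the factorisation step; the noise scale, the Gaussian noise model and the clamping of the noisy matrix to stay non-negative are assumptions of this sketch rather than the exact recipe used for the results below.

```python
import numpy as np
from sklearn.decomposition import NMF

def make_semi_synthetic(V, q, noise_std=0.1, seed=0):
    """Build semi-synthetic data: factorise the real data with standard NMF,
    treat W_true @ H_true as the known underlying structure, then add noise."""
    model = NMF(n_components=q, init='random', random_state=seed, max_iter=500)
    W_true = model.fit_transform(V)
    H_true = model.components_
    V_true = W_true @ H_true                       # known ground-truth structure
    rng = np.random.default_rng(seed)
    noise = noise_std * V_true.std() * rng.standard_normal(V_true.shape)
    V_noisy = np.maximum(V_true + noise, 0.0)      # keep the data non-negative
    return V_true, V_noisy, W_true, H_true

# True and noisy errors (Section 4.2): ||V_true - WH||_F and ||V_noisy - WH||_F.
```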

We have not provided extensive results for how long the algorithm takes to run - if using the matrix multiplication method a full run from a randomised starting point to final solution for the largest and slowest dataset (faces) takes less than three minutes on an ordinary desktop computer. We would suggest the MDL-NMF method may be better used as a tuning method, with initial W and H matrices set as similar to the results from standard NMF, in which case MDL-NMF runs even faster. Either way MDL-NMF, while slower than standard NMF, takes an insignificant length of time to run on these data-sets. The scalability of this work to much larger datasets would require further research.

In the following subsections we demonstrate the potential value of MDL-NMF by investigating: 1) how well it reproduces the original data; 2) the process by which MDL-NMF learns the representation; 3) the ability of MDL-NMF to extract signal rather than noise from the semi-synthetic data.

4.1 Recreation of the original data

Any NMF algorithm must be able to reproduce the original data with a reasonable degree of similarity. The precise level of similarity desired is not straight-forward to specify because part of the purpose of NMF is to suppress noise in the data. In any real data set we do not know what is signal and what is noise. Therefore, if an NMF algorithm perfectly recreates a real world data-set we would not necessarily consider that a success, as it is likely to be modelling noise. On the other hand, if we are using data-sets with a reasonably high signal to noise ratio we would expect to be able to recreate much of the data using NMF techniques.

In Figure 1, examples of the final recreated output from each of the three data-sets are shown. Figure 1(a) shows one randomly selected sample from the images of faces data-set; the left face is the original and the recreated face is on the right. Figure 1(b) shows plots of original (red dashed line) and recreated (blue dotted line) prices against time for one stock from the FTSE 100 dataset. The top and bottom plots show the most and least accurately recreated stocks respectively; even the stock that is recreated least accurately is still a good reproduction of the actual prices. In Figure 1(c), for ease of visualisation of the genomic data, we randomly sampled 5000 of the elements of the data matrix V (x-axis) and compared their values to the same elements of the recreated matrix WH (y-axis). For all three data-sets our MDL-NMF method produces an effective recreation of the data.

Figure 1: Plots showing the quality of the final recreated data. (a) One randomly selected sample from the images of faces dataset; the left image is the original and the right image is the recreated face. (b) Actual (red dashed line) and recreated (blue dotted line) prices against time for one stock; the top plot is the stock with most similarity between actual and recreated, while the bottom plot is the least similar. (c) 5000 randomly selected elements of the data matrices, with the recreated value plotted against the original value, for the transcriptome data. MDL-NMF effectively recreates all three datasets.

4.2 The learning process

We now observe the learning dynamics by taking snapshots of the reconstruction during the learning process. Our MDL-NMF objective function is not purely based on minimising the error and may therefore follow a somewhat different trajectory to a minimum, so it is useful to demonstrate what happens as we reduce the objective function. In Figure 2 we show how the reproduced data looks as the algorithm proceeds. Due to the differences between the data-sets they take different numbers of iterations to arrive at a reasonable solution, and the snapshots were chosen at appropriate iteration intervals to demonstrate how the quality of the fit improves. Figure 2(a) is an image from the faces dataset; Figure 2(b) is a FTSE 100 stock with the real (red dashed) and recreated (blue dotted) prices changing with iterations; Figure 2(c) is 5000 of the elements of the recreated matrix versus the real values from the transcriptome data. All three plots show the clear trend from a randomised starting point to a reasonable final recreation of the original data.

Figure 2: The reproduced data recorded at different iterations of the training process. (a) A sample from the faces dataset; (b) a randomly selected stock with the real (red dashed) and recreated (blue dotted) prices; (c) 5000 recreated results versus original values for the transcriptome data. All three show the change from a randomised starting point to a reasonable recreation of the original data.

In Figure 3 we display the changes in both the description lengths and the errors with iteration, for a low and a high noise variant of the semi-synthetic transcriptome data. Five repeats of the MDL-NMF algorithm are displayed on the same graph for the same data with different initialisations of the W and H matrices. For these plots we initialised the matrices for MDL-NMF by running standard NMF on the semi-synthetic data and then adding Gaussian noise to each element of the two matrices. The true error is defined as $\|V_{\mathrm{true}} - WH\|_F$, where $V_{\mathrm{true}}$ is the underlying data matrix before we added noise, while the noisy error is $\|V_{\mathrm{noisy}} - WH\|_F$, where $V_{\mathrm{noisy}}$ is the semi-synthetic data-set after noise is added. All the plots are normalised so the description lengths and errors all start at one.

Figure 3: The changes in description lengths and errors with iteration for the semi-synthetic transcriptome data for a low noise variant and a high noise variant.

There are some interesting features of the graphs. For the high noise variant, MDL-NMF significantly reduces the true error while the noisy error is not reduced much. At the low noise level most of the reduction in description length is achieved by reducing the noisy error, which also reduces the true error. With the high noise level the $L(E)$ term does not reduce much; instead we get a much larger relative reduction in the $L(W)$ and $L(H)$ values, with commensurate significant falls in the true error. This may imply that the algorithm is reducing the complexity of the model ($L(W)$ and $L(H)$) in preference to the error term ($L(E)$) when there is a high level of noise. These types of plots may be useful analysis tools when investigating new data sets.

4.3 Representing true data over noise

A good NMF technique should be able to extract true data from noisy data. For our semi-synthetic data we can look at how well our MDL-NMF technique does with extracting the true data from the noise we added.

In Figure 4 we show the true error against the noisy error for MDL-NMF. Any point below the straight line shows a variant where MDL-NMF finds a lower error on the true data than on the noisy data. These results are from a range of variants of the semi-synthetic data: varying additional noise levels; varying values of q used to create the semi-synthetic data; varying values of q used to try to extract the true representations; true W and H matrices that are smoothed; and using sparsity induced NMF (sNMF) to create the true matrices. The different symbols represent the three different types of semi-synthetic data. Almost all the results show a lower true error than noisy error: MDL-NMF is able to cut through the added noise to represent the true data better than the noisy data.

In Figure 4b we show the same results but for standard NMF using multiplicative updates lee1999learning . We note that we originally made the data using an NMF method, so we might expect NMF to effectively find the real data beneath, which it does for most of the data points. However, there are several points where standard NMF produces a better fit to the noisy data than to the true data: it is overfitting to the noise.

Figure 4: The true error against the noisy error shows that MDL-NMF almost always finds a better fit to the real data than to the noise-added data. Results for a range of semi-synthetic data-set types are shown. (a) For MDL-NMF. (b) For NMF.

A direct comparison between NMF and MDL-NMF may not be completely fair, as NMF was used to create the semi-synthetic data in the first place. Also we are assuming that the original creation of the semi-synthetic data removed all the noise from the original data - if it did not then there is still noise left in the true error. Accepting these limitations, if we see any improvement in representing real data using MDL-NMF over standard NMF then it implies that MDL-NMF can be a useful tool to use alongside NMF.

In Figure 5a we show the real error against the level of noise for the three semi-synthetic datasets for both MDL-NMF and NMF. In Figure 5b we display the same but this time using sparsity induced NMF (sNMF) from Hoyer hoyer2004non , with the induced sparsity chosen to be the actual sparsity of the known semi-synthetic W and H. This favours sNMF over MDL-NMF, which has no direct information about the level of sparsity of the W and H matrices; we would expect sNMF to produce significantly better results as the noise increases.

The comparison between MDL-NMF and NMF shows very little difference between the two methods at low levels of noise, but as the noise level increases we get an improved representation with MDL-NMF. This is true for all three data-sets, although it is most apparent for the faces. This finding is sensible: at low levels of additional noise there is little need to regularise, but as the noise level increases we would expect to see increased levels of over-fitting. The comparison with sNMF is instructive here: when we set the sparseness level to that of the underlying matrices, we get much less overfitting - sNMF does much better than either MDL-NMF or NMF. So the sparseness constraint is removing much of the overfitting. MDL-NMF shows a promising ability to compensate, at least partially, for the increases in noise. We initialised the MDL-NMF matrices using the final NMF matrices with a little noise added, so it is possible that with better initialisation choices we could do even better than the small improvements shown here.

Figure 5: A comparison of the real error found for MDL-NMF against NMF (three plots on the left) and sNMF (three plots on the right) for the three semi-synthetic data sets.

5 Conclusions

The standard NMF objective function, which minimises the error, may produce matrices which are too complex and overfit the data. We provide an alternative, based upon an MDL approach, which provides an automatic and natural trade-off between the accuracy and complexity of the representation.

We demonstrated the effectiveness of MDL-NMF on three heterogeneous data-sets, producing representations which match well with the real data. We also tested our model on semi-synthetic data, producing representations which represent the real data significantly better than the noise. In addition, studying the changes in the description lengths with iteration may be a useful analysis tool for investigating a dataset.

MDL-NMF is superior to NMF at extracting real data from noise for some semi-synthetic data, especially when the added noise is high. It would be worthwhile to run MDL-NMF after a standard NMF run, using the NMF matrices as initialisations for MDL-NMF, to see how the two representations compare.

We have proposed using an MDL based objective function for NMF and demonstrated that it works effectively. However, the details are highly flexible. The choice of distributions to model the matrices and the optimisation scheme for finding the W and H matrices may both be open to considerable improvement. There may also be significantly faster and more efficient methods of finding the W and H matrices in the MDL-NMF formulation than those proposed in this paper. The choice of the learning rates $\eta_W$ and $\eta_H$ could potentially be improved, as could the initialisation of the W and H matrices. Our aim is to introduce the method and background and to demonstrate that it has potential to be a useful tool.

References

  • (1) K. Devarajan, Nonnegative matrix factorization—a new paradigm for large-scale biological data analysis, in: Proceedings of the Joint Statistical Meetings, 2006, pp. 6–10.
  • (2) D. D. Lee, H. S. Seung, Learning the parts of objects by non-negative matrix factorization, Nature 401 (6755) (1999) 788–791.
  • (3) F. Wang, T. Li, X. Wang, S. Zhu, C. Ding, Community discovery using nonnegative matrix factorization, Data Mining and Knowledge Discovery 22 (3) (2011) 493–521.
  • (4) D. D. Lee, H. S. Seung, Algorithms for non-negative matrix factorization, in: Advances in Neural Information Processing Systems, 2001, pp. 556–562.
  • (5) P. Paatero, U. Tapper, Positive matrix factorization: A non-negative factor model with optimal utilization of error estimates of data values, Environmetrics 5 (2) (1994) 111–126.
  • (6) P. O. Hoyer, Non-negative matrix factorization with sparseness constraints, The Journal of Machine Learning Research 5 (2004) 1457–1469.
  • (7) S. Squires, A. Prügel-Bennett, M. Niranjan, Rank selection in nonnegative matrix factorization using minimum description length, Neural Computation.
  • (8) M. N. Schmidt, O. Winther, L. K. Hansen, Bayesian non-negative matrix factorization, in: International Conference on Independent Component Analysis and Signal Separation, Springer, 2009, pp. 540–547.
  • (9) D. M. Blei, P. R. Cook, M. Hoffman, Bayesian nonparametric matrix factorization for recorded music, in: Proceedings of the 27th International Conference on Machine Learning (ICML-10), 2010, pp. 439–446.
  • (10) N. Wang, J. Wang, D.-Y. Yeung, Online robust non-negative dictionary learning for visual tracking, in: Proceedings of the IEEE International Conference on Computer Vision, 2013, pp. 657–664.
  • (11) J. Rissanen, Modeling by shortest data description, Automatica 14 (5) (1978) 465–471.
  • (12) C. S. Wallace, D. M. Boulton, An information measure for classification, The Computer Journal 11 (2) (1968) 185–194.
  • (13) A. Barron, J. Rissanen, B. Yu, The minimum description length principle in coding and modeling, IEEE Transactions on Information Theory 44 (6) (1998) 2743–2760.
  • (14) D. J. MacKay, Information Theory, Inference and Learning Algorithms, Cambridge University Press, 2003.
  • (15) A. N. Tikhonov, V. I. Arsenin, F. John, Solutions of Ill-posed Problems, Vol. 14, Winston, Washington, DC, 1977.
  • (16) R. Tibshirani, Regression shrinkage and selection via the lasso, Journal of the Royal Statistical Society. Series B (Methodological) (1996) 267–288.
  • (17) C. E. Shannon, A mathematical theory of communication, The Bell System Technical Journal 27 (3) (1948) 379–423.
  • (18) N. Gillis, The why and how of nonnegative matrix factorization, Regularization, Optimization, Kernels, and Support Vector Machines 12 (2014) 257.
  • (19) T. R. Golub, D. K. Slonim, P. Tamayo, C. Huard, M. Gaasenbeek, J. P. Mesirov, H. Coller, M. L. Loh, J. R. Downing, M. A. Caligiuri, et al., Molecular classification of cancer: class discovery and class prediction by gene expression monitoring, Science 286 (5439) (1999) 531–537.
  • (20) S. Squires, L. Montesdeoca, A. Prügel-Bennett, M. Niranjan, Non-negative matrix factorization with exogenous inputs for modeling financial data, in: International Conference on Neural Information Processing, Springer, 2017, pp. 873–881.