Obtaining error-minimizing estimates and universal entry-wise error bounds for low-rank matrix completion

We propose a general framework for reconstructing and denoising single entries of incomplete and noisy matrices. We describe: effective algorithms for deciding if an entry can be reconstructed and, if so, for reconstructing and denoising it; and a priori bounds on the error of each entry, individually. In the noiseless case our algorithm is exact. For rank-one matrices, the new algorithm is fast, admits a highly parallel implementation, and produces an error-minimizing estimate that is qualitatively close to our theoretical analysis and to the state-of-the-art Nuclear Norm and OptSpace methods.






1 Introduction

Matrix completion is the task of reconstructing a low-rank matrix from a subset of its entries; it occurs naturally in many practically relevant problems, such as missing feature imputation, multi-task learning (Argyriou et al., 2008), transductive learning (Goldberg et al., 2010), or collaborative filtering and link prediction (Srebro et al., 2005; Acar et al., 2009; Menon and Elkan, 2011).

Almost all known methods performing matrix completion are optimization methods, such as the max-norm and nuclear norm heuristics (Srebro et al., 2005; Candès and Recht, 2009; Tomioka et al., 2010), or OptSpace (Keshavan et al., 2010), to name a few amongst many.

These methods have in common that, in general, (a) they reconstruct the whole matrix and (b) error bounds are given for the whole matrix, not for single entries. These two properties of existing methods are particularly unsatisfactory in the scenario where one is interested in predicting (resp. imputing) only one single missing entry, or a set of interesting missing entries, instead of all of them - which for real data is a more natural task than imputing all missing entries, in particular in the presence of large-scale data (resp. big data). (While the existing methods may be applied to a submatrix, this always comes at the cost of accuracy if the data is sparse, and they do not yield statements on single entries.)

Indeed the design of such a method is not only desirable but also feasible, as the results of Király et al. (2012) suggest by relating algebraic combinatorial properties and the low-rank setting to the reconstructability of the data. Namely, the authors provide algorithms which can decide for one entry whether it can - in principle - be reconstructed or not, thus yielding a statement of trustability for the output of any algorithm. (The authors also provide an algorithm for reconstructing some missing entries in the arbitrary-rank case, but without obtaining global or entry-wise error bounds, or a strategy to reconstruct all reconstructible entries.)

In this paper, we demonstrate for the first time how algebraic combinatorial techniques, combined with stochastic error minimization, can be applied to (a) reconstruct single missing entries of a matrix and (b) provide lower bounds on the variance of any algorithm (resp. estimator) for that particular entry - where the error bound can be obtained without actually reconstructing the entry in question. In detail, our contributions include:

  • the construction of a variance-minimal and unbiased estimator for any fixed missing entry of a rank-one matrix, under the assumption of known noise variances

  • an explicit form for the variance of that estimator, which is a lower bound for the variance of any unbiased estimator of any fixed missing entry and thus yields a quantitative measure of the trustability of that entry as reconstructed by any algorithm

  • the description of a strategy to generalize the above to any rank

  • comparison of the estimator with two state-of-the-art optimization algorithms (OptSpace and nuclear norm), and error assessment of the three matrix completion methods with the variance bound

Note that most of the methods and algorithms presented in this paper are restricted to rank one. This is not, however, inherent in the overall scheme, which is general. We depend on rank one only in the sense that we understand the combinatorial-algebraic structure of rank-one matrix completion exactly, whereas the behavior in higher rank is not yet as well understood. Nonetheless, it is in principle accessible and, once available, can be “plugged in” to the results here without changing the complexity much.

2 The Algebraic Combinatorics of Matrix Completion

2.1 A review of known facts

In Király et al. (2012), an intricate connection between the algebraic combinatorial structure, the asymptotics of graphs, and analytical reconstruction bounds was exposed. We will refine some of the theoretical concepts presented in that paper, which will allow us to construct the entry-wise estimator.

Definition 2.1

A matrix $M \in \{0,1\}^{m \times n}$ is called a mask. If $A$ is a partially known matrix, then the mask of $A$ is the mask which has $1$-s in exactly the positions which are known in $A$, and $0$-s otherwise.

Definition 2.2

Let $M$ be an $m \times n$ mask. We will call the unique bipartite graph $G(M)$ which has $M$ as bipartite adjacency matrix the completion graph of $M$. We will refer to the vertices of $G(M)$ corresponding to the rows of $M$ as blue vertices, and to the vertices corresponding to the columns as red vertices. If $e = (i,j)$ is an edge in $K_{m,n}$ (where $K_{m,n}$ is the complete bipartite graph with $m$ blue and $n$ red vertices), we will also write $A_e$ instead of $A_{ij}$, for any matrix $A$.

A fundamental result (Király et al., 2012, Theorem 2.3.5) says that identifiability and reconstructability are, up to a null set, graph properties.

Theorem 2.3

Let $A$ be a generic, partially known matrix of rank $r$ (in particular, if $A$ is sampled from a continuous density, then the set of non-generic $A$ is a null set), let $M$ be the mask of $A$, and let $i, j$ be integers. Whether $A_{ij}$ is reconstructible (uniquely, or up to finite choice) depends only on $M$ and the true rank $r$; in particular, it does not depend on the true $A$.

For rank one, as opposed to higher rank, the set of reconstructible entries is easily obtainable from $M$ by combinatorial means:

Theorem 2.4 ((Király et al., 2012, Theorem 2.5.36 (i)))

Let $G(M)$ be the completion graph of a partially known matrix $A$. Then the set of uniquely reconstructible entries of $A$ is exactly the set $\{A_e : e \in \overline{G(M)}\}$, where $\overline{G(M)}$ denotes the transitive closure of $G(M)$. In particular, all of $A$ is reconstructible if and only if $G(M)$ is connected.
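As an illustration of the theorem, reconstructibility in rank one reduces to a connectivity computation on the completion graph. The following sketch (our own code, not from the paper) finds all uniquely reconstructible positions of a mask with a union-find over the blue (row) and red (column) vertices:

```python
# Rank-one reconstructibility via Theorem 2.4: entry (i, j) is uniquely
# reconstructible iff row-vertex i and column-vertex j lie in the same
# connected component of the completion graph G(M).
from itertools import product

def reconstructible_entries(mask):
    """mask: list of lists with 1 = known, 0 = unknown."""
    m, n = len(mask), len(mask[0])
    # union-find over m blue (row) vertices and n red (column) vertices
    parent = list(range(m + n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for i, j in product(range(m), range(n)):
        if mask[i][j]:
            parent[find(i)] = find(m + j)  # known entry = edge (i, j)
    return {(i, j) for i, j in product(range(m), range(n))
            if find(i) == find(m + j)}

mask = [[1, 1, 0],
        [0, 1, 0],
        [0, 0, 1]]
rec = reconstructible_entries(mask)
# (1, 0) is recoverable through the path row1 - col1 - row0 - col0,
# while column 2 forms its own component, so (0, 2) is not recoverable
```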

2.2 Reconstruction on the transitive closure

We extend Theorem 2.4’s theoretical reconstruction guarantee by describing an explicit, algebraic algorithm for actually performing the reconstruction. This algorithm will be the basis of an entry-wise, variance-optimal estimator in the noisy case. In any rank, such a reconstruction rule can be obtained by exposing equations which explicitly give known and unknown entries in terms of only known entries; this is possible because the set of low-rank matrices is an irreducible variety (the common vanishing locus of finitely many polynomial equations). We derive the reconstruction equations for rank one.

Definition 2.5

Let $P$ (resp. $C$) be a path (resp. cycle), with a fixed start and end (resp. traversal order). We will denote by $E^{+}(P)$ (resp. $E^{+}(C)$) the set of edges traversed from a blue vertex to a red one, and by $E^{-}(P)$ (resp. $E^{-}(C)$) the set of edges traversed from a red vertex to a blue one. (This is equivalent to fixing the orientation of $K_{m,n}$ that directs all edges from blue to red, and then taking $E^{+}$ to be the set of edges traversed forwards and $E^{-}$ the set of edges traversed backwards. This convention is convenient notationally, but any initial orientation of $K_{m,n}$ will give the same result.) From now on, when we speak of “oriented paths” or “oriented cycles”, we mean with this sign convention and some fixed traversal order.

Let $A$ be a matrix of rank one, and identify its entries with the edges of $K_{m,n}$. For an oriented cycle $C$, we define the polynomials
$$P_C(A) = \prod_{e \in E^{+}(C)} A_e - \prod_{e \in E^{-}(C)} A_e \quad\text{and}\quad L_C(A) = \sum_{e \in E^{+}(C)} \log A_e - \sum_{e \in E^{-}(C)} \log A_e,$$
where for negative entries of $A$, we fix a branch of the complex logarithm.

Theorem 2.6

Let $A$ be a generic matrix of rank one, and let $C$ be an oriented cycle. Then $P_C(A) = 0$ and $L_C(A) = 0$.

Proof: The determinantal ideal of rank one is a binomial ideal generated by the $2 \times 2$ minors of $A$ (where the entries of $A$ are considered as variables). The minor equations are exactly the $P_C$, where $C$ is an elementary oriented four-cycle; if $C$ is an elementary $4$-cycle, denote its edges by $e_1, e_2, e_3, e_4$, with $e_1, e_3 \in E^{+}(C)$ and $e_2, e_4 \in E^{-}(C)$. Let $\mathcal{E}$ be the collection of the elementary $4$-cycles, and define $L_{\mathcal{E}} = \{L_C : C \in \mathcal{E}\}$ and $P_{\mathcal{E}} = \{P_C : C \in \mathcal{E}\}$.

By sending each term $\log A_e$ to a formal variable $x_e$, we see that the free $\mathbb{Z}$-group generated by the $L_C$ is isomorphic to the oriented cycle space of $K_{m,n}$. With this equivalence, it is straightforward that, for any oriented cycle $C$, $L_C$ lies in the $\mathbb{Z}$-span of elements of $L_{\mathcal{E}}$ and, therefore, formally,
$$L_C = \sum_{D \in \mathcal{E}} \alpha_D\, L_D$$
with the $\alpha_D \in \mathbb{Z}$. Thus $L_C$ vanishes when $A$ is rank one, since the r.h.s. does. Exponentiating, we see that
$$\prod_{e \in E^{+}(C)} A_e \Big/ \prod_{e \in E^{-}(C)} A_e \;=\; \prod_{D \in \mathcal{E}} \Big( \prod_{e \in E^{+}(D)} A_e \Big/ \prod_{e \in E^{-}(D)} A_e \Big)^{\alpha_D}.$$
If $A$ is generic and rank one, the r.h.s. evaluates to one, implying that $P_C(A)$ vanishes.

Corollary 2.7

Let $A$ be a matrix of rank one. Let $i, j$ be two vertices in $K_{m,n}$, and let $P_1, P_2$ be two oriented paths in $K_{m,n}$ starting at $i$ and ending at $j$. Then, for all generic such $A$, it holds that $L_{P_1}(A) = L_{P_2}(A)$, where $L_P$ is defined for oriented paths exactly as for cycles.

Remark 2.8

It is possible to prove that the set of the $P_C$ forms the set of polynomials vanishing on the entries of $A$ which is minimal with respect to certain properties. Namely, the $P_C$ form a universal Gröbner basis for the determinantal ideal of rank one, which implies the converse of Theorem 2.6. From this, one can deduce that the estimators presented in Section 3.2 are variance-minimal amongst all unbiased ones.

3 A Combinatorial Algebraic Estimate for Missing Entries and Their Error

In this section, we will construct an estimator for matrix completion which (a) is able to complete single missing entries and (b) gives universal error estimates for that entry that are independent of the reconstruction algorithm.

3.1 The sampling model

In all of the following, we will assume that the observations arise from the following sampling process:

Assumption 3.1

There is an unknown, fixed, generic rank-one $m \times n$ matrix $A$, and an $m \times n$ mask $M$ which is known. There is a (stochastic) noise matrix $E$ whose entries are uncorrelated and multiplicatively centered with finite, non-zero variance; i.e., $\mathbb{E}(\log E_{ij}) = 0$ and $0 < \operatorname{Var}(\log E_{ij}) < \infty$ for all $i$ and $j$. (The zero-variance case corresponds to exact reconstruction, which is handled already by Theorem 2.4.)

The observed data is the matrix $B = M \circ A \circ E$, where $\circ$ denotes the Hadamard (i.e., component-wise) product. That is, the observation is a matrix $B$ with entries $B_{ij} = A_{ij} \cdot E_{ij}$ at the known positions where $M_{ij} = 1$.
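A minimal simulation of this sampling process (a sketch: the variable names and the log-normal noise law are our illustrative choices, and “multiplicatively centered” is read here as the log-noise having mean zero):

```python
# Simulate Assumption 3.1: a generic rank-one matrix A = u v^T, a 0/1 mask,
# and entry-wise independent multiplicative noise with E[log eps] = 0.
import math, random

random.seed(0)
m, n, sigma = 4, 5, 0.05

u = [math.exp(random.gauss(0, 1)) for _ in range(m)]
v = [math.exp(random.gauss(0, 1)) for _ in range(n)]
A = [[u[i] * v[j] for j in range(n)] for i in range(m)]      # rank one

mask = [[1 if random.random() < 0.6 else 0 for _ in range(n)]
        for _ in range(m)]

# multiplicatively centered noise: the logarithm is centered Gaussian
eps = [[math.exp(random.gauss(0, sigma)) for _ in range(n)]
       for _ in range(m)]

# observation B = M o A o E (Hadamard products; zero where unobserved)
B = [[A[i][j] * eps[i][j] if mask[i][j] else 0.0 for j in range(n)]
     for i in range(m)]
```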

The assumption of multiplicative noise is a necessary precaution in order for the presented estimator (and, in fact, any estimator) of the missing entries to have bounded variance, as shown in Example 3.2 below. In practice this is not a restriction, since an infinitesimal additive error on an entry of $A$ is equivalent to an infinitesimal multiplicative error, and additive variances can be directly translated into multiplicative variances if the density function of the noise is known. (The multiplicative noise assumption causes the observed entries and the true entries to have the same sign. A change of sign can be modeled by adding a further multiplicative binary random variable to the model which takes values $\pm 1$; this adds an independent combinatorial problem for the estimation of the sign, which can be done by maximum likelihood. In order to keep the exposition short and easy, we did not include this.) The previous observation implies that the multiplicative noise model is as powerful as any additive one that allows bounded-variance estimates.

Example 3.2

Consider a generic $2 \times 2$ matrix $A$ of rank one. The unique equation between its entries is $A_{11} A_{22} = A_{12} A_{21}$. Solving for any entry puts another entry in the denominator; for example,
$$A_{22} = \frac{A_{12}\, A_{21}}{A_{11}}.$$
Thus we get an estimator for $A_{22}$ when substituting observed and noisy entries for $A_{11}, A_{12}, A_{21}$. When $A_{11}$ approaches zero, the estimation error for $A_{22}$ approaches infinity. In particular, if the density function of the error of $A_{11}$ is too dense around the value $-A_{11}$, then the estimate for $A_{22}$ given by the equation will have unbounded variance. In such a case, one can show that no estimator for $A_{22}$ has bounded variance.
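The blow-up in Example 3.2 can be checked numerically. In this sketch (the numbers are our own toy choices), the same small additive perturbation on $A_{11}$ produces an estimation error for $A_{22}$ that grows without bound as $A_{11}$ shrinks:

```python
# Additive noise on A11 breaks the estimator A22 = A12 * A21 / A11
# as A11 -> 0; the rank-one matrix here is [[a11, 2], [3, 6/a11]].
def abs_error(a11, noise=0.01):
    true_a22 = 2.0 * 3.0 / a11            # rank one forces A22 = A12*A21/A11
    est_a22 = 2.0 * 3.0 / (a11 + noise)   # same additive perturbation on A11
    return abs(est_a22 - true_a22)

errs = [abs_error(a11) for a11 in (1.0, 0.1, 0.001)]
# the error grows without bound as A11 approaches zero
```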

3.2 Estimating entries and error bounds

In this section, we construct the variance-minimal unbiased estimator for the entries of a rank-one matrix. First, we define some notation to ease the exposition:

Notations 3.3

We will denote by $a_{ij} = \log A_{ij}$ and $\varepsilon_{ij} = \log E_{ij}$ the logarithmic entries and noise. Thus, for some path $P$ in $G(M)$ from $i$ to $j$ we obtain
$$\sum_{e \in E^{+}(P)} a_e - \sum_{e \in E^{-}(P)} a_e = a_{ij}.$$
Denote by $b_{ij} = a_{ij} + \varepsilon_{ij}$ the logarithmic (observed) entries, and by $b$ the (incomplete) matrix which has the (observed) $b_{ij}$ as entries. Denote by $\sigma_{ij}^2 = \operatorname{Var}(\varepsilon_{ij})$ the noise variances.

The components of the estimator will be built from the $b_{ij}$:

Lemma 3.4

Let $G(M)$ be the graph of the mask $M$. Let $e = (i,j)$ be any edge of $K_{m,n}$ with $j$ red. Let $P$ be an oriented path in $G(M)$ starting at $i$ and ending at $j$ (if $e \in G(M)$, then $P$ can also be the path consisting of the single edge $e$). Then,
$$\hat b_e(P) = \sum_{f \in E^{+}(P)} b_f - \sum_{f \in E^{-}(P)} b_f$$
is an unbiased estimator for $a_e$ with variance
$$\sigma^2(P) = \sum_{f \in P} \sigma_f^2.$$

Proof: By linearity of expectation and centeredness of the $\varepsilon_f$, it follows that
$$\mathbb{E}\big(\hat b_e(P)\big) = \sum_{f \in E^{+}(P)} a_f - \sum_{f \in E^{-}(P)} a_f = a_e,$$
the last equality holding by Corollary 2.7; thus $\hat b_e(P)$ is unbiased. Since the $\varepsilon_f$ are uncorrelated, the $b_f$ also are; thus, by Bienaymé’s formula, we obtain
$$\operatorname{Var}\big(\hat b_e(P)\big) = \sum_{f \in P} \operatorname{Var}(b_f) = \sum_{f \in P} \sigma_f^2,$$
and the statement follows from the definition of $\sigma^2(P)$.
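The path estimator and its variance can be sketched in a few lines (the dict-based representation is ours: observed log-entries keyed by edge, with sign $+1$ on forward and $-1$ on backward edges):

```python
# Path estimator of Lemma 3.4: a signed sum of observed log-entries along
# an oriented path; by Bienaymé, the per-entry variances simply add up.
import math

def path_estimate(logB, edges, signs):
    """logB: dict edge -> observed log-entry; signs: +1 forward, -1 backward."""
    return sum(s * logB[e] for e, s in zip(edges, signs))

def path_variance(sigma2, edges):
    """Sum of the (uncorrelated) log-noise variances on the path."""
    return sum(sigma2[e] for e in edges)

# noiseless rank-one 2x2 example: A = [[1, 2], [3, 6]], entry (1, 1) missing
logB = {(0, 0): math.log(1), (0, 1): math.log(2), (1, 0): math.log(3)}
path = [(1, 0), (0, 0), (0, 1)]
est = path_estimate(logB, path, [+1, -1, +1])   # = log 3 - log 1 + log 2
var = path_variance({e: 0.01 for e in logB}, path)
```

In the noiseless case the signed sum along the path recovers the missing log-entry $\log 6$ exactly, and with variance $0.01$ on each traversed entry the path variance is simply $3 \times 0.01$.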

In the following, we will consider the following parametric estimator as a candidate for estimating $a_e$:

Notations 3.5

Fix an edge $e = (i,j)$. Let $P_1, \dots, P_s$ be a basis for the set of all oriented paths starting at $i$ and ending at $j$ (this is the set of words equal to $e$ in the free abelian group generated by the edges, subject to the relations $L_C = 0$ for all cycles $C$ in $G(M)$; independence can be taken as linear independence of the coefficient vectors of the paths), and denote by $\hat b_k = \hat b_e(P_k)$. For $\alpha \in \mathbb{R}^s$, set
$$\hat a_e(\alpha) = \sum_{k=1}^{s} \alpha_k\, \hat b_k.$$
Furthermore, we will denote by $\mathbf{1}$ the $s$-vector of ones.

The following Lemma follows immediately from Lemma 3.4 and Theorem 2.6:

Lemma 3.6

$\mathbb{E}\big(\hat a_e(\alpha)\big) = \big(\mathbf{1}^{\top}\alpha\big)\, a_e$; in particular, $\hat a_e(\alpha)$ is an unbiased estimator for $a_e$ if and only if $\mathbf{1}^{\top}\alpha = 1$.

We will now show that minimizing the variance of $\hat a_e(\alpha)$ can be formulated as a quadratic program with coefficients entirely determined by the variances $\sigma_f^2$, the measurements, and the graph $G(M)$. In particular, we will expose an explicit formula for the $\alpha$ minimizing the variance. Before stating the theorem, we define a suitable kernel:

Definition 3.7

Let $e = (i,j)$ be an edge. For an edge $f$ and a path $P$, set $s_f(P) = \pm 1$ if $f \in E^{\pm}(P)$, and $s_f(P) = 0$ otherwise. Let $P_1, \dots, P_s$ be any fixed oriented paths. Define the (weighted) path kernel $k$ by
$$k(P_a, P_b) = \sum_{f} \sigma_f^2\, s_f(P_a)\, s_f(P_b).$$

Under our assumption that $\sigma_f^2 > 0$ for all $f$, the path kernel is positive definite, since it is a sum of independent positive semi-definite functions; in particular, its kernel matrix has full rank. Here is the variance-minimizing unbiased estimator:

Proposition 3.8

Let $i, j$ be a pair of vertices, and let $P_1, \dots, P_s$ be a basis for the path space between $i$ and $j$ with $s$ elements. Let $K$ be the kernel matrix of the path kernel with respect to that basis. For any $\alpha \in \mathbb{R}^s$,
$$\operatorname{Var}\big(\hat a_e(\alpha)\big) = \alpha^{\top} K \alpha.$$
Moreover, under the condition $\mathbf{1}^{\top}\alpha = 1$, the variance is minimized by
$$\alpha^{*} = \frac{K^{-1}\mathbf{1}}{\mathbf{1}^{\top} K^{-1}\mathbf{1}}.$$

Proof: By inserting definitions, we obtain
$$\operatorname{Var}\big(\hat a_e(\alpha)\big) = \operatorname{Var}\Big(\sum_{k=1}^{s} \alpha_k\, \hat b_k\Big).$$
Writing the $\hat b_k$ as signed vectors over the edges and the paths as matrices, using that $\operatorname{Var}(cX) = c^2 \operatorname{Var}(X)$ for any scalar $c$, and using the independence of the $\varepsilon_f$, an elementary calculation yields
$$\operatorname{Var}\big(\hat a_e(\alpha)\big) = \alpha^{\top} K \alpha.$$
In order to determine the minimum of the variance in $\alpha$, consider the Lagrangian
$$\mathcal{L}(\alpha, \lambda) = \alpha^{\top} K \alpha + \lambda\,\big(\mathbf{1}^{\top}\alpha - 1\big),$$
where the slack term models the condition $\mathbf{1}^{\top}\alpha = 1$. An elementary calculation yields
$$\alpha^{*} = \frac{K^{-1}\mathbf{1}}{\mathbf{1}^{\top} K^{-1}\mathbf{1}},$$
where $\mathbf{1}$ is the vector of ones. Due to positive definiteness of $K$, the function $\alpha \mapsto \alpha^{\top} K \alpha$ is convex, thus $\alpha^{*}$ is the unique $\alpha$ minimizing the variance while satisfying $\mathbf{1}^{\top}\alpha = 1$.
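The closed form for the minimizer can be illustrated directly. This is a sketch under the assumption that the kernel matrix is positive definite; the explicit $2 \times 2$ inversion and the numbers are our own:

```python
# Variance-minimizing weights alpha* = K^{-1} 1 / (1^T K^{-1} 1); the
# attained minimal variance is 1 / (1^T K^{-1} 1).
def optimal_alpha(K):
    """K: positive-definite 2x2 kernel matrix as nested lists."""
    (a, b), (c, d) = K
    det = a * d - b * c
    Kinv = [[d / det, -b / det], [-c / det, a / det]]
    w = [sum(row) for row in Kinv]          # K^{-1} 1
    s = sum(w)                              # 1^T K^{-1} 1
    return [wi / s for wi in w], 1.0 / s    # (alpha*, minimal variance)

alpha, var = optimal_alpha([[2.0, 0.0], [0.0, 1.0]])
# two uncorrelated path estimates with variances 2 and 1 receive
# inverse-variance weights 1/3 and 2/3; the combined variance is 2/3
```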

Remark 3.9

The above setup works in wider generality: (i) if $\sigma_f^2 = 0$ is allowed and there is a path between $i$ and $j$ consisting entirely of zero-variance edges, the path kernel becomes positive semi-definite; (ii) similarly, if the basis is replaced with any set of paths at all, the same may occur. In both cases, we may replace $K^{-1}$ with the Moore-Penrose pseudo-inverse, and the proposition still holds: (i) reduces to the exact reconstruction case of Theorem 2.4; (ii) produces the optimal estimator with respect to the chosen set of paths, which is optimal provided that the set is spanning, and adding paths to the set does not make the estimate worse.

3.3 Rank two and higher

An estimator for rank two and higher, together with a variance analysis, can be constructed similarly once all polynomials relating the entries to one another are known. The main difficulty lies in the fact that these polynomials are no longer parameterized by cycles, but by specific subgraphs of $K_{m,n}$; see (Király et al., 2012, Section 2.5). Were these polynomials known, an estimator similar to the one in Notation 3.5 could be constructed, and a subsequent variance (resp. perturbation) analysis performed.

3.4 The algorithms

In this section, we describe the two main algorithms which calculate the variance-minimizing estimate for any fixed entry $A_{ij}$ of an $m \times n$ matrix $A$ which is observed with noise, and the variance bound for that estimate. It is important to note that $A_{ij}$ does not necessarily need to be an entry which is missing in the observation; it can also be an entry which has been observed. In the latter case, Algorithm 3 will give an improved estimate of the observed entry, and Algorithm 4 will give the trustworthiness bound on this estimate.

Since the path matrix $P$, the path kernel matrix $K$, and the optimal $\alpha^{*}$ are required for both, we first describe Algorithm 1, which determines these.

1:   Find a linearly independent set of paths $P_1, \dots, P_s$ in the graph $G(M)$, starting from $i$ and ending at $j$.
2:   Determine the matrix $P$ with $P_{kf} = \pm 1$ if $f \in E^{\pm}(P_k)$, and $P_{kf} = 0$ otherwise.
3:   Define a diagonal matrix $S$ with $S_{ff} = \sigma_f^2$ for each observed edge $f$.
4:   Compute the kernel matrix $K = P S P^{\top}$.
5:   Calculate $\alpha^{*} = K^{-1}\mathbf{1} \big/ \big(\mathbf{1}^{\top} K^{-1}\mathbf{1}\big)$.
6:   Output $P$, $K$ and $\alpha^{*}$.
Algorithm 1 Calculates the path kernel and $\alpha^{*}$.
Input: an index $(i,j)$, an $m \times n$ mask $M$, variances $\sigma_f^2$.
Output: path matrix $P$, path kernel matrix $K$, and minimizer $\alpha^{*}$.
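Steps 2-4 assemble the kernel matrix $K = P S P^{\top}$; a plain-Python sketch (the dict-based path representation is our own):

```python
# Path kernel of Definition 3.7: each path is a dict edge -> +1/-1 giving
# its signed incidence vector, and K[a][b] = sum_f sigma_f^2 s_f(Pa) s_f(Pb),
# i.e. K = P S P^T with S = diag(sigma^2).
def path_kernel(paths, sigma2):
    edges = sorted(sigma2)
    return [[sum(sigma2[f] * p.get(f, 0) * q.get(f, 0) for f in edges)
             for q in paths] for p in paths]

paths = [{"a": +1, "b": -1},   # two oriented paths sharing edge "a"
         {"a": +1, "c": +1}]
K = path_kernel(paths, {"a": 0.1, "b": 0.2, "c": 0.3})
# a shared edge traversed with the same sign contributes positively
# to the off-diagonal entry of K
```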

The steps of the algorithm follow the exposition in Section 3.2, and correctness follows from the statements presented there. The only task in Algorithm 1 that is not straightforward is the computation of a linearly independent set of paths in step 1. We can do this in time linear in the number of observed entries in the mask with the following method. To keep the notation manageable, we will conflate formal sums of edges, cycles in $G(M)$, and their representations as vectors, since there is no chance of confusion.

1:   If $(i,j)$ is not an edge of $G(M)$, and $i$ and $j$ are in different connected components of $G(M)$, then the path space is empty. Output the empty set.
2:   Otherwise, if $(i,j)$ is not an edge of $G(M)$, add a “dummy” copy $e = (i,j)$.
3:   Compute a spanning forest $T$ of $G(M)$ that does not contain $e$, if possible.
4:   For each edge $f \notin T$, compute the fundamental cycle $C_f$ of $f$ in $T$.
5:   If $e$ is an edge in $G(M)$, output $\{e\} \cup \{C_f : f \notin T\}$.
6:   Otherwise, let $P_0 = C_e - e$. Output $\{P_0\} \cup \{C_f : f \notin T,\, f \neq e\}$.
Algorithm 2 Calculates a basis of the path space.
Input: an index $(i,j)$, an $m \times n$ mask $M$.
Output: a basis for the space of oriented paths from $i$ to $j$.
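Steps 3 and 4 can be sketched as follows (our own graph representation: a BFS spanning forest, then the fundamental cycle closed by each non-tree edge):

```python
# Spanning forest plus fundamental cycles, the combinatorial core of
# Algorithm 2 steps 3-4.
from collections import deque

def spanning_forest(vertices, edges):
    """BFS spanning forest: returns a parent map and the set of tree edges."""
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    parent, tree = {}, set()
    for root in vertices:
        if root in parent:
            continue
        parent[root] = None
        q = deque([root])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in parent:
                    parent[w] = u
                    tree.add(frozenset((u, w)))
                    q.append(w)
    return parent, tree

def fundamental_cycle(parent, edge):
    """Vertices of the unique cycle closed by a non-tree edge (u, v)."""
    u, v = edge
    def to_root(x):
        path = [x]
        while parent[x] is not None:
            x = parent[x]
            path.append(x)
        return path
    pu, pv = to_root(u), to_root(v)
    common = set(pu) & set(pv)
    cu = next(i for i, x in enumerate(pu) if x in common)  # lowest common
    cv = next(i for i, x in enumerate(pv) if x in common)  # ancestor cut
    return pu[:cu + 1] + pv[:cv][::-1]

verts = ["r0", "c0", "r1", "c1"]                      # blue rows, red columns
obs = [("r0", "c0"), ("r0", "c1"), ("r1", "c0"), ("r1", "c1")]
parent, tree = spanning_forest(verts, obs)
extra = next(e for e in obs if frozenset(e) not in tree)
cycle = fundamental_cycle(parent, extra)
# the single non-tree edge closes the elementary four-cycle r0-c0-r1-c1
```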

We prove the correctness of Algorithm 2 in Appendix A.

Algorithms 3 and 4 can then make use of the calculated $\alpha^{*}$ to determine an estimate for any entry and its minimum variance bound. The algorithms follow the exposition in Section 3.2, from which correctness follows; Algorithm 3 additionally provides treatment for the sign of the entries.

1:   Calculate $P$ and $\alpha^{*}$ with Algorithm 1.
2:   Store $\log |B|$ as a vector $b$, and a sign vector $s$ with $s_f = \operatorname{sign}(B_f)$.
3:   Calculate $\hat a_{ij} = \alpha^{*\top} P\, b$. The sign is $\varsigma = -1$ if each column of $P \circ s$ (component-wise) contains an odd number of entries $-1$, else $\varsigma = +1$.
4:   Return $\varsigma \cdot \exp(\hat a_{ij})$.
Algorithm 3 Estimates the entry $A_{ij}$.
Input: an index $(i,j)$, an $m \times n$ mask $M$, log-variances $\sigma_f^2$, the partially observed and noisy matrix $B$.
Output: The variance-minimizing estimate $\hat A_{ij}$ for $A_{ij}$.
1:   Calculate $K$ and $\alpha^{*}$ with Algorithm 1.
2:   Return $\alpha^{*\top} K\, \alpha^{*}$.
Algorithm 4 Determines the variance of the entry $A_{ij}$.
Input: an index $(i,j)$, an $m \times n$ mask $M$, log-variances $\sigma_f^2$.
Output: The variance lower bound $\sigma_{ij}^2$ for $\hat A_{ij}$.

Note that even if observations are not available, Algorithm 4 can be used to obtain the variance bound. The variance bound is relative, due to its multiplicativity, and can be used to approximate absolute bounds when any reconstruction estimate is available - which does not necessarily need to be the one from Algorithm 3, but can be the estimation result of any reconstruction. Namely, if $\sigma^2$ is the estimated variance of the log of the entry and $\hat A_{ij}$ any estimate, we obtain an upper confidence (resp. deviation) bound $\hat A_{ij} \cdot \exp(c\sigma)$ and a lower confidence (resp. deviation) bound $\hat A_{ij} \cdot \exp(-c\sigma)$, corresponding to the log-confidence $c$. Also note that if $A_{ij}$ is not reconstructible from the mask (i.e., if the edge $(i,j)$ is not in the transitive closure of $G(M)$, see Theorem 2.4), then the deviation bounds will be infinite.
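Converting the multiplicative (log-scale) variance bound into entry-wise confidence bounds is then a one-liner (a sketch with illustrative numbers; `z` plays the role of the log-confidence parameter):

```python
# Multiplicative confidence bounds from a log-scale variance: the band is
# a_hat * exp(+/- z * sqrt(sigma2)), symmetric in the logarithm.
import math

def multiplicative_bounds(a_hat, sigma2, z=2.0):
    """a_hat: any reconstruction of the entry; sigma2: its log-variance."""
    d = math.exp(z * math.sqrt(sigma2))
    return a_hat / d, a_hat * d

lo, hi = multiplicative_bounds(6.0, 0.01, z=2.0)
# a two-sigma band in the log corresponds to the multiplicative factor e^0.2
```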

4 Experiments

4.1 Universal error estimates

For three different masks, we calculated the predicted minimum variance for each entry. The multiplicative noise was assumed to have the same variance for each entry. Figure 1 shows the predicted a-priori minimum variances for each of the masks. Notice how the structure of the mask affects the expected error; known entries generally have the least variance, and it is interesting to note that their predicted variance is in general less than the starting variance of the noise. I.e., tracking back through the paths can successfully be used even to denoise known entries. The particular structure of the mask is mirrored in the pattern of the predicted errors; a diffuse mask gives a similar error on each missing entry, while the more structured masks have structured error which is determined by combinatorial properties of the completion graph and the paths therein.

Figure 1: The figure shows three pairs of masks and predicted variances. A pair consists of two adjacent squares. The left half is the mask which is depicted by red/blue heatmap with red entries known and blue unknown. The right half is a multicolor heatmap with color scale, showing the predicted variance of the completion. Variances were calculated by our implementation of Algorithm 4.
(a) mean squared errors
(b) error vs. predicted variance
Figure 2: For randomly chosen masks and a random true matrix, matrix completions were performed with Nuclear Norm (green), OptSpace (red), and Algorithm 3 (blue) under multiplicative noise with variance increasing in fixed increments. For each completed entry, minimum variances were predicted by Algorithm 4. 2(a) shows the mean squared error of the three algorithms for each noise level, coded by the algorithms’ respective colors. 2(b) shows a bin-plot of errors (y-axis) versus predicted variances (x-axis) for each of the three algorithms: for each completed entry, a pair (predicted error, true error) was calculated, the predicted error being the predicted variance and the actual prediction error measured as the log of the prediction minus the log of the true entry. The points were then binned into bins with equal numbers of points. The figure shows the mean of the errors (second coordinate) of the value pairs with predicted variance (first coordinate) in each of the bins; the color corresponds to the particular algorithm, and each group of bars is centered on the minimum value of the associated bin.

4.2 Influence of noise level

We generated a random mask with entries sampled uniformly, and a random matrix of rank one. The multiplicative noise was chosen entry-wise independent, with the same variance for each entry. Figure 2(a) compares the Mean Squared Error (MSE) of three algorithms: Nuclear Norm (using the implementation of Tomioka et al. (2010)), OptSpace (Keshavan et al., 2010), and Algorithm 3. It can be seen that on this particular mask, Algorithm 3 is competitive with the other methods and even outperforms them for low noise.

4.3 Prediction of estimation errors

The data are the same as in Section 4.2, as are the compared algorithms. Figure 2(b) compares the error of each of the methods with the variance predicted by Algorithm 4 at each noise level. The figure shows that for each of the algorithms, the mean of the actual error increases with the predicted error, showing that the error estimate is useful for a-priori prediction of the actual error - independently of the particular algorithm. Note that by construction of the data this statement holds in particular for entry-wise predictions. Furthermore, in quantitative comparison, Algorithm 3 also outperforms the other two in each of the bins.

5 Conclusion

In this paper, we have introduced a method based on algebraic combinatorics for reconstructing and denoising single entries of an incomplete and noisy matrix, and for calculating confidence bounds of single-entry estimates for arbitrary algorithms. We have evaluated these methods against state-of-the-art matrix completion methods. The results of Section 4 show that our reconstruction method is competitive and that - for the first time - our variance estimate provides a reliable prediction of the error on each single entry; it is an a-priori estimate, i.e., it depends only on the noise model and the positions of the known entries. Furthermore, our method makes it possible to obtain the reconstruction and the error estimate for a single entry, which existing methods are not capable of, possibly using only a small subset of neighboring entries - a property which makes our method unique and particularly attractive for application to large-scale data. We thus argue that the investigation of the algebraic combinatorial properties of matrix completion, in particular in rank two and higher where these are not yet completely understood, is crucial for the future understanding and practical treatment of big data.


  • Acar et al. (2009) E. Acar, D.M. Dunlavy, and T.G. Kolda. Link prediction on evolving data using matrix and tensor factorizations. In Data Mining Workshops, 2009. ICDMW’09. IEEE International Conference on, pages 262–269. IEEE, 2009.
  • Argyriou et al. (2008) A. Argyriou, C. A. Micchelli, M. Pontil, and Y. Ying. A spectral regularization framework for multi-task structure learning. In J.C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in NIPS 20, pages 25–32. MIT Press, Cambridge, MA, 2008.
  • Candès and Recht (2009) Emmanuel J. Candès and Benjamin Recht. Exact matrix completion via convex optimization. Found. Comput. Math., 9(6):717–772, 2009. ISSN 1615-3375. doi: 10.1007/s10208-009-9045-5. URL http://dx.doi.org/10.1007/s10208-009-9045-5.
  • Goldberg et al. (2010) A. Goldberg, X. Zhu, B. Recht, J. Xu, and R. Nowak. Transduction with matrix completion: Three birds with one stone. In J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R.S. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 757–765. 2010.
  • Keshavan et al. (2010) Raghunandan H. Keshavan, Andrea Montanari, and Sewoong Oh. Matrix completion from a few entries. IEEE Trans. Inform. Theory, 56(6):2980–2998, 2010. ISSN 0018-9448. doi: 10.1109/TIT.2010.2046205. URL http://dx.doi.org/10.1109/TIT.2010.2046205.
  • Király et al. (2012) Franz J. Király, Louis Theran, Ryota Tomioka, and Takeaki Uno. The algebraic combinatorial approach for low-rank matrix completion. Preprint, arXiv:1211.4116v3, 2012. URL http://arxiv.org/abs/1211.4116.
  • Menon and Elkan (2011) A. Menon and C. Elkan. Link prediction via matrix factorization. Machine Learning and Knowledge Discovery in Databases, pages 437–452, 2011.
  • Srebro et al. (2005) N. Srebro, J. D. M. Rennie, and T. S. Jaakkola. Maximum-margin matrix factorization. In Lawrence K. Saul, Yair Weiss, and Léon Bottou, editors, Advances in NIPS 17, pages 1329–1336. MIT Press, Cambridge, MA, 2005.
  • Tomioka et al. (2010) Ryota Tomioka, Kohei Hayashi, and Hisashi Kashima. On the extension of trace norm to tensors. In NIPS Workshop on Tensors, Kernels, and Machine Learning, 2010.

Appendix A Correctness of Algorithm 2

We adopt the conventions of Section 2, so that $G = G(M)$ is a bipartite graph with $m$ blue vertices, $n$ red ones, and edges oriented from blue to red. Recall the isomorphism, observed in the proof of Theorem 2.6, of the $\mathbb{Z}$-group of the polynomials $L_C$ and the oriented cycle space of $G$.

Define $\beta = \#E(G) - \#V(G) + c(G)$ (the first Betti number of the graph, with $c(G)$ the number of connected components). Some standard facts are that: (i) the rank of the cycle space is $\beta$; (ii) we can obtain a basis for the cycle space consisting only of simple cycles by picking any spanning forest $T$ of $G$ and then using as basis elements the fundamental cycles of the edges not in $T$. This justifies step 4.

Let $e = (i,j)$ be an edge of $K_{m,n}$. Define the path space of $e$ to be the set of subgraphs $P \subseteq G$ such that, for generic rank one $A$, the entry $A_e$ is determined by the entries on $P$. By Theorem 2.6, we can write these as $\mathbb{Z}$-linear combinations of $e$ and oriented cycles. From this, we see that the rank of the path space is $\beta + 1$, and we obtain the graph-theoretic identification of elements of the path space with subgraphs that have even degree at every vertex except $i$ and $j$. Thus, if $e$ is an edge of $G$, step 5 is justified, completing the proof of correctness in this case.

If $e$ was not an edge of $G$, step 1 guarantees that the dummy copy of $e$ that we added is not in the spanning forest computed in step 3. Thus, the element computed in step 6 is a simple path from $i$ to $j$. The collection of elements generated in step 6 is independent by the same rank argument as above; it has rank $\beta + 1$ and does not put a positive coefficient on the dummy generator $e$.