1 Introduction
Matrix factorization plays an important role in machine learning, due to its wide applications, including collaborative filtering [1], parts-based learning [2] and clustering [3]. Given a data matrix $\mathbf{X}\in\mathbb{R}^{d\times n}$ (where $n$ denotes the number of data samples and $d$ denotes the dimension of each data sample), one aims to find matrices $\mathbf{D}$ and $\mathbf{W}$ such that $\mathbf{X}\approx\mathbf{D}\mathbf{W}$, where usually the inner dimension $K$ of the factorization satisfies $K\ll\min\{d,n\}$. Many existing algorithms in the literature find $\mathbf{D}$ and $\mathbf{W}$ by minimizing $\|\mathbf{X}-\mathbf{D}\mathbf{W}\|_F^2$ (possibly with regularizations on $\mathbf{D}$ or $\mathbf{W}$ to induce structural properties). However, when the number of data samples becomes large, solving this problem using batch algorithms can be highly inefficient, in terms of both computation and storage. To improve efficiency, online (stochastic) matrix factorization algorithms [4, 5, 6, 7, 8, 9, 10, 11, 12] have been proposed to learn the dictionary (matrix) $\mathbf{D}$ from a sequence of randomly drawn data samples (possibly in mini-batches).^1 Extensive numerical evidence has shown that stochastic matrix factorization algorithms can learn significantly faster than their batch counterparts, in terms of (empirical) convergence rate. In addition, these algorithms consume only $\mathcal{O}(Kd)$ storage, in contrast to the $\mathcal{O}(nd)$ storage required by batch methods. Due to these advantages, stochastic matrix factorization algorithms have gained much popularity in recent years.

^1 We use “online” and “stochastic” interchangeably. In the literature, online algorithms also cover the setting where the number of data samples is infinite. However, we only consider $n$ to be finite, though possibly very large.

From an optimization point of view, previous stochastic matrix factorization algorithms [4, 5, 6, 7, 8, 9, 10, 11, 12] aim to solve a nonconvex stochastic program involving the dictionary $\mathbf{D}$ (see (4) in Remark 1). They mainly leverage two optimization frameworks, namely stochastic majorization-minimization (SMM) [13] and stochastic (sub)gradient descent (SGD) [14]. Based on either framework, almost sure asymptotic convergence to stationary points of the stochastic program has been shown.
However, the asymptotic nature of this convergence analysis cannot provide insights into the dictionary learning process at any finite time instant. Thus, we wish to understand the non-asymptotic convergence of dictionary learning, as well as the sample complexity for learning a “reasonable” dictionary $\mathbf{D}$. Moreover, we desire improved stochastic methods that yield faster convergence, at least for large but finite datasets.
Main Contributions. Inspired by recent advances in variance reduction techniques (particularly for nonconvex problems) [15, 16, 17], we propose a unified framework to speed up existing stochastic matrix factorization algorithms via variance reduction (VR). As shown in Section 2, our framework is general and subsumes eight well-known SMF formulations as special instances, including the robust variants that explicitly model outliers in their objective functions. Within this framework, we derive a non-asymptotic convergence rate, measured in terms of how “fast” the algorithm converges to a stationary point in expectation (see Section 4.1 for precise statements). Accordingly, we derive sample and computational complexities for our algorithm to converge to an $\epsilon$-stationary point in expectation. To the best of our knowledge, this is the first time that a non-asymptotic convergence analysis (together with a sample complexity) has been established for SMF algorithms. To further improve the efficiency of our framework, we also provide an inexact convergence analysis, where in each step the coefficient (and outlier) vectors are only learned approximately. In the context of stochastic (variance-reduced) nonconvex proximal gradient algorithms, this is the first inexact analysis thus far. We show, via extensive experiments on various SMF formulations and datasets, that our variance-reduced framework consistently outperforms state-of-the-art frameworks (including SMM and SGD), in terms of both convergence rate and accuracy of the learned dictionaries.

Related works. Another line of works [18, 19, 20] considers using SMF algorithms in a matrix completion setting, where the matrices $\mathbf{D}$ and $\mathbf{W}$ are jointly learned from a random sequence of entries (or columns) of a data matrix $\mathbf{X}$. By leveraging low-rankness and the restricted isometry property (RIP) of $\mathbf{X}$, the product of $\mathbf{D}$ and $\mathbf{W}$ can exactly recover $\mathbf{X}$ (with high probability). However, their problem setting differs from ours in two aspects. First, we do not store or explicitly learn $\mathbf{W}$, due to space constraints. Second, our convergence results do not rely on low-rankness or RIP. Therefore, similar to [4, 5, 6, 7, 8, 9, 10, 11, 12], our results are stated in terms of stationary points, which are oftentimes appealing in real applications [4, 11]. In the context of stochastic PCA, VR techniques have been applied to yield a linear convergence rate [21]. Again, the problem setting therein vastly differs from ours.

Notations. We use boldface capital letters, boldface lowercase letters and lowercase letters to denote matrices, column vectors and scalars, respectively. For any $n\in\mathbb{N}$, define $[n]\triangleq\{1,\ldots,n\}$. For a matrix $\mathbf{X}$, denote its $i$th column as $\mathbf{x}_i$. We use $\|\cdot\|$ to denote both the $\ell_2$ norm of a vector and the Frobenius norm of a matrix. For a convex and closed set $\mathcal{X}$, denote its indicator function as $\iota_{\mathcal{X}}$, such that $\iota_{\mathcal{X}}(\mathbf{x})=0$ if $\mathbf{x}\in\mathcal{X}$ and $\iota_{\mathcal{X}}(\mathbf{x})=+\infty$ otherwise. All the sections, lemmas and algorithms beginning with ‘S’ appear in the supplemental material.
2 Problem Formulation
Let $\{\mathbf{x}_i\}_{i\in[n]}$ denote the set of data samples. We first define a loss function $\ell$ w.r.t. a data sample $\mathbf{x}$ and a dictionary $\mathbf{D}$ as

$\ell(\mathbf{x},\mathbf{D}) \triangleq \min_{\boldsymbol{\alpha}\in\mathcal{A}}\ \tfrac{1}{2}\|\mathbf{x}-\mathbf{D}\boldsymbol{\alpha}\|_2^2 + \widetilde{h}(\boldsymbol{\alpha}),\qquad(1)$

where $\mathcal{A}$ and $\widetilde{h}$ denote the constraint set and regularizer of the coefficient vector $\boldsymbol{\alpha}$, respectively. Since large-scale datasets may contain outliers, we can define a “robust” version of $\ell$ as

$\ell'(\mathbf{x},\mathbf{D}) \triangleq \min_{\boldsymbol{\alpha}\in\mathcal{A},\,\mathbf{r}\in\mathcal{R}}\ \tfrac{1}{2}\|\mathbf{x}-\mathbf{D}\boldsymbol{\alpha}-\mathbf{r}\|_2^2 + \widetilde{h}(\boldsymbol{\alpha}) + \widetilde{g}(\mathbf{r}),\qquad(2)$

where $\mathcal{R}$ and $\widetilde{g}$ denote the constraint set and regularizer of the outlier vector $\mathbf{r}$, respectively. For convenience, let $\widetilde{\ell}$ denote either $\ell$ or $\ell'$. Based on $\widetilde{\ell}$, we formulate our problem as

$\min_{\mathbf{D}\in\mathcal{D}}\ f(\mathbf{D}) \triangleq \frac{1}{n}\sum_{i=1}^{n}\widetilde{\ell}(\mathbf{x}_i,\mathbf{D}) + h(\mathbf{D}),\qquad(3)$

where $\mathcal{D}$ and $h$ denote the constraint set and regularizer of the dictionary $\mathbf{D}$, respectively. Our targeted problem (3) is general and flexible, in the sense that by choosing the constraint sets $\mathcal{A}$, $\mathcal{R}$ and $\mathcal{D}$ and the regularizers $\widetilde{h}$, $\widetilde{g}$ and $h$ in different manners, (3) encompasses many important examples in the literature of SMF. For the loss function $\ell$, we have:

Online Structured Sparse Learning (OSSL) [5]: , , and , where is a subvector of , and .

Smooth Sparse ODL (SSODL) [10]: , , and , where and is positive semidefinite.
For the loss function $\ell'$, we have:
Remark 1.
In almost all the works cited in this section, the optimization problem is formulated as a stochastic program that minimizes the expected risk of dictionary training. It has the form
$\min_{\mathbf{D}\in\mathcal{D}}\ \mathbb{E}_{\mathbf{x}\sim\mathbb{P}}\big[\widetilde{\ell}(\mathbf{x},\mathbf{D})\big] + h(\mathbf{D}),\qquad(4)$

where $\mathbb{P}$ is a probability distribution over the data samples. The reasons that we consider (3) rather than (4) are threefold. First, (3) can be regarded as the sample average approximation [22] of (4). Solving this approximation has been a popular and efficient approach for solving a general stochastic program like (4) [23]. Second, (3) can also be interpreted as minimizing the empirical risk incurred by any large-scale finite training dataset. Solving (3) efficiently can greatly speed up the dictionary learning process. Last but not least, since (3) is a finite-sum minimization problem, we can employ the recently developed variance reduction techniques [15, 16] to solve it in an efficient manner. See Section 3 for details.

For the algorithm and convergence analysis, we will focus on the loss function $\ell'$, which contains $\ell$ as a special case (by constraining the outlier vector $\mathbf{r}$ to be $\mathbf{0}$).
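To make the loss in (1) concrete, the following minimal sketch evaluates it for one common instantiation (our assumption here: $\widetilde h = \lambda\|\cdot\|_1$ and $\mathcal{A}=\mathbb{R}^K$, as in sparse coding) by solving the inner problem with the iterative soft-thresholding algorithm (ISTA). The function names are illustrative, not from the paper:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def loss_l1(x, D, lam=0.1, n_iter=300):
    # Evaluate the loss in (1) for the assumed instantiation
    # h_tilde = lam * ||.||_1, A = R^K, by running ISTA on the inner problem.
    alpha = np.zeros(D.shape[1])
    eta = 1.0 / (np.linalg.norm(D, 2) ** 2 + 1e-12)  # step size 1/L, L = ||D||_2^2
    for _ in range(n_iter):
        grad = D.T @ (D @ alpha - x)
        alpha = soft_threshold(alpha - eta * grad, eta * lam)
    val = 0.5 * np.linalg.norm(x - D @ alpha) ** 2 + lam * np.abs(alpha).sum()
    return val, alpha

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 5))
x = D @ np.array([1.0, 0.0, 0.0, -2.0, 0.0]) + 0.01 * rng.standard_normal(20)
val, alpha = loss_l1(x, D)
```

Since ISTA with step size $1/L$ is monotone, the returned value never exceeds the objective at $\boldsymbol{\alpha}=\mathbf{0}$, i.e., $\tfrac12\|\mathbf{x}\|_2^2$.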
3 Algorithm
We present the pseudocode for our stochastic matrix factorization algorithm with variance reduction techniques in Algorithm 1. To develop our algorithm, we first make two mild assumptions. We then describe the algorithm in detail, with a focus on learning the coefficient and outlier vectors.
Assumptions. The following two assumptions are satisfied by all of Problems 1 to 4.

$\widetilde h$, $\widetilde g$ and $h$ are convex, proper and closed functions on $\mathbb{R}^{K}$, $\mathbb{R}^{d}$ and $\mathbb{R}^{d\times K}$ respectively, with proximal operators that can be evaluated efficiently.^2

^2 For a function $f$, the “efficient” evaluation of its proximal operator means that it can be computed in a small number of arithmetic operations (typically linear in the dimension of its argument). The same notion applies to the “efficient projections” in ii.

The sets $\mathcal{A}$, $\mathcal{R}$ and $\mathcal{D}$ are convex and admit efficient (Euclidean) projections onto them.
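As illustrations of the two assumptions above (these particular choices are our own examples, not an exhaustive list of the cases covered by the framework), the proximal operator of the $\ell_1$ norm and the projections onto the nonnegative orthant and the $\ell_2$ ball can all be evaluated in time linear in the length of the input vector:

```python
import numpy as np

def prox_l1(v, t):
    # prox of t * ||.||_1: soft-thresholding, linear time in len(v).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def project_nonneg(v):
    # Euclidean projection onto the nonnegative orthant, linear time.
    return np.maximum(v, 0.0)

def project_l2_ball(v, radius=1.0):
    # Euclidean projection onto the l2 ball of the given radius, linear time.
    nrm = np.linalg.norm(v)
    return v if nrm <= radius else v * (radius / nrm)

v = np.array([1.5, -0.2, 0.7])
```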
Algorithm Description. Variance-reduced stochastic optimization algorithms typically contain outer and inner loops [15, 24]. In Algorithm 1, we use $t$ and $k$ to denote the indices of the outer and inner iterations, respectively. At the beginning of each outer iteration $t$, we solve $n$ (regularized) least-squares regression problems to learn the set of coefficient and outlier vectors at the current (snapshot) dictionary. Based on the learned vectors, we compute a full gradient, which will be justified in Section 4. In each inner iteration $k$, we solve a (random) mini-batch of regression problems at the current and the snapshot dictionaries, respectively. The regression results, together with the full gradient, are used to calculate a search direction $\mathbf{g}$, which acts as a variance-reduced gradient estimate of the loss function at the current dictionary. The proximal operator of $h$ either has a closed form [25] or can be obtained using Douglas–Rachford splitting or the alternating direction method of multipliers [26], both in a number of (arithmetic) operations linear in $Kd$. We choose the final dictionary via option I or II (in lines 15 and 16 respectively).^3

^3 In this work, line numbers always refer to those in Algorithm 1.

In practice, we choose the final dictionary as the last iterate, i.e., option II. However, to streamline the analysis, we output the dictionary using option I. This is consistent with many works on variance-reduced SGD algorithms, e.g., [15].

Learning coefficient and outlier vectors. It can be seen that the computational complexity of Algorithm 1 crucially depends on solving problem (2), which appears in lines 4, 8 and 9 of Algorithm 1. To solve this problem, we leverage the block successive upper-bound minimization framework [27], by alternating between the following two steps, i.e.,
$\boldsymbol{\alpha}^{+} \in \operatorname*{arg\,min}_{\boldsymbol{\alpha}\in\mathcal{A}}\ \tfrac{1}{2}\|\mathbf{x}-\mathbf{D}\boldsymbol{\alpha}-\mathbf{r}\|_2^2 + \widetilde h(\boldsymbol{\alpha}),\qquad(5)$

$\mathbf{r}^{+} \in \operatorname*{arg\,min}_{\mathbf{r}\in\mathcal{R}}\ \tfrac{1}{2}\|\mathbf{x}-\mathbf{D}\boldsymbol{\alpha}^{+}-\mathbf{r}\|_2^2 + \widetilde g(\mathbf{r}),\qquad(6)$

where $\boldsymbol{\alpha}^{+}$ and $\mathbf{r}^{+}$ denote the updated values of $\boldsymbol{\alpha}$ and $\mathbf{r}$ respectively (in one iteration). The proximal operators of $\widetilde h$ and $\widetilde g$ can be obtained in the same way as that of $h$. Overall, the computational complexity of solving (2) is linear in $Kd$.
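A minimal sketch of this alternating scheme, under our assumption that both $\widetilde h$ and $\widetilde g$ are $\ell_1$ penalties with unconstrained $\mathcal{A}$ and $\mathcal{R}$ (one possible instantiation): step (5) is approximated by a few ISTA steps, and step (6) then has a closed-form soft-thresholding solution.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def learn_coeff_outlier(x, D, lam_a=0.1, lam_r=0.2, n_alt=50, n_ista=5):
    # Alternate between (5) and (6): update alpha with r fixed (a few ISTA
    # steps serve as the upper-bound minimization), then update r with alpha
    # fixed, which is a closed-form soft-threshold for the assumed l1 penalty.
    d, K = D.shape
    alpha, r = np.zeros(K), np.zeros(d)
    eta = 1.0 / (np.linalg.norm(D, 2) ** 2 + 1e-12)
    for _ in range(n_alt):
        for _ in range(n_ista):                      # step (5)
            grad = D.T @ (D @ alpha + r - x)
            alpha = soft_threshold(alpha - eta * grad, eta * lam_a)
        r = soft_threshold(x - D @ alpha, lam_r)     # step (6)
    return alpha, r

rng = np.random.default_rng(1)
D = rng.standard_normal((30, 8))
x = D @ rng.standard_normal(8)
x[3] += 5.0  # plant one large outlier entry
alpha, r = learn_coeff_outlier(x, D)
```

Both steps monotonically decrease the objective of (2), so the final value never exceeds the value at the all-zero initialization.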
Remark 2.
In our implementation, we do not store all the learned coefficient and outlier vectors before computing the full gradient. Instead, after obtaining the vectors for each sample, we add their contribution to the running sum in line 5, and then discard them. The same procedure applies to computing the mini-batch gradients. Thus the storage complexity of Algorithm 1 is $\mathcal{O}(Kd)$. This complexity is linear in the dimension of the optimization variable $\mathbf{D}$, which has $Kd$ entries.
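The following sketch illustrates the SVRG-style search direction together with the running-sum trick of Remark 2. For simplicity we assume an unconstrained, unregularized inner problem (so the coefficient vector is a least-squares solution and there is no outlier vector); the per-sample gradient formula mirrors (7). This is an illustration under our simplifying assumptions, not the full Algorithm 1:

```python
import numpy as np

def grad_loss(x, D):
    # Per-sample gradient of the loss w.r.t. D, cf. (7), under the simplifying
    # assumption that alpha* is an unregularized least-squares solution.
    alpha = np.linalg.lstsq(D, x, rcond=None)[0]
    return np.outer(D @ alpha - x, alpha)

def full_gradient(X, D):
    # Running-sum computation of the full gradient (Remark 2): each sample's
    # contribution is added and then discarded, so storage stays O(K d).
    mu = np.zeros_like(D)
    for x in X:
        mu += grad_loss(x, D)
    return mu / len(X)

def vr_gradient(X_batch, D, D_snap, mu_snap):
    # SVRG-style estimate: mini-batch gradient at D, minus the same batch's
    # gradient at the snapshot D_snap, plus the stored full gradient mu_snap.
    g = np.zeros_like(D)
    for x in X_batch:
        g += grad_loss(x, D) - grad_loss(x, D_snap)
    return g / len(X_batch) + mu_snap

rng = np.random.default_rng(2)
X = rng.standard_normal((10, 12))          # 10 samples of dimension 12 (rows)
D_snap = rng.standard_normal((12, 4))
D_cur = D_snap + 0.1 * rng.standard_normal((12, 4))
mu_snap = full_gradient(X, D_snap)
g_full = vr_gradient(X, D_cur, D_snap, mu_snap)
```

As a sanity check, taking the full dataset as the mini-batch recovers the exact full gradient at the current dictionary, and at the snapshot dictionary the estimate reduces to the stored full gradient.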
4 Convergence Analysis
Additional assumptions. For analysis purposes, we make the following three additional assumptions:

The data samples lie in a compact set.

For any data sample $\mathbf{x}$ and any $\mathbf{D}\in\mathcal{D}$, the objective function in (2) is jointly strongly convex in $(\boldsymbol{\alpha},\mathbf{r})$.

$\mathcal{D}$ is compact; $\mathcal{A}$ is compact or $\widetilde h$ is coercive on $\mathcal{A}$; $\mathcal{R}$ is compact or $\widetilde g$ is coercive on $\mathcal{R}$.
Remark 3.
Assumption i is natural since all real data are bounded. Assumption ii is common in the literature [4, 11] and can be satisfied by adding Tikhonov regularizers (see [11, Remark 4] for a detailed discussion). Assumption iii is satisfied by all of Problems 1 to 4, except that in some of them the constraint set $\mathcal{D}$ is not bounded. However, for each such problem, it was proven that the sequence of basis matrices generated by the corresponding algorithm in [7], [8] or [9] belongs to a compact set almost surely. Thus the compactness of $\mathcal{D}$ is a reasonable assumption.
4.1 Key Lemmas and Main Results
We first present some definitions that are used in the statements of our results. We then state our key lemmas (Lemmas 1 and 2) and main convergence theorems (Theorems 1 and 2), together with the derived sample and computational complexities (Corollary 1). The proof sketches of all the results are shown in Section 4.2, with complete proofs deferred to Section S1 to Section S4.
Definitions. Define a filtration $\{\mathcal{F}_k\}$ such that $\mathcal{F}_k$ is the $\sigma$-algebra generated by the random mini-batches drawn up to (inner) iteration $k$. It is easy to see that the sequence of dictionaries produced by Algorithm 1 is adapted to $\{\mathcal{F}_k\}$. Consider a composite function $\phi = \bar\phi + g$ defined on a Euclidean space $\mathcal{E}$, where $\bar\phi$ is differentiable and $g$ is proper, closed and convex. For any $\mathbf{z}\in\mathcal{E}$ and $\eta>0$, define the (proximal) gradient mapping [28] of $\phi$ at $\mathbf{z}$ with step size $\eta$ as $\mathcal{G}_{\eta}(\mathbf{z}) \triangleq \frac{1}{\eta}\big(\mathbf{z} - \mathrm{prox}_{\eta g}(\mathbf{z} - \eta\nabla\bar\phi(\mathbf{z}))\big)$. A point $\mathbf{z}$ is stationary w.r.t. the problem $\min_{\mathbf{z}'\in\mathcal{E}}\phi(\mathbf{z}')$ if $\mathcal{G}_{\eta}(\mathbf{z}) = \mathbf{0}$. Finally, define $\bar f(\mathbf{D}) \triangleq \frac{1}{n}\sum_{i=1}^{n}\widetilde{\ell}(\mathbf{x}_i,\mathbf{D})$, so that the objective in (3) is $\bar f + h$.
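The gradient mapping is easy to compute and check numerically. Here is a sketch on an assumed toy composite problem ($\bar\phi(\mathbf{z}) = \tfrac12\|\mathbf{z}-\mathbf{b}\|^2$ plus an $\ell_1$ term, our example only), whose minimizer is known in closed form:

```python
import numpy as np

def prox_l1(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def grad_mapping(z, grad_f, prox_g, eta):
    # Proximal gradient mapping: G_eta(z) = (z - prox_{eta*g}(z - eta*grad_f(z))) / eta.
    # z is stationary for min f + g iff G_eta(z) = 0.
    return (z - prox_g(z - eta * grad_f(z), eta)) / eta

# Toy instance: f(z) = 0.5*||z - b||^2, g = lam*||.||_1, whose unique
# minimizer is the soft-thresholded point prox_l1(b, lam).
b = np.array([2.0, 0.05, -1.0])
lam = 0.1
grad_f = lambda z: z - b
prox_g = lambda v, eta: prox_l1(v, eta * lam)
z_star = prox_l1(b, lam)
```

At the minimizer the mapping vanishes for any step size, consistent with the stationarity characterization above, while at a non-stationary point it is nonzero.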
Lemma 1 (Smoothness of $\ell'$).

Let $(\boldsymbol{\alpha}^{*}(\mathbf{x},\mathbf{D}), \mathbf{r}^{*}(\mathbf{x},\mathbf{D}))$ denote the (unique) optimal solution of (2). The loss function $\ell'$ in (2) is differentiable on $\mathcal{D}$ and

$\nabla_{\mathbf{D}}\,\ell'(\mathbf{x},\mathbf{D}) = \big(\mathbf{D}\boldsymbol{\alpha}^{*}(\mathbf{x},\mathbf{D}) + \mathbf{r}^{*}(\mathbf{x},\mathbf{D}) - \mathbf{x}\big)\,\boldsymbol{\alpha}^{*}(\mathbf{x},\mathbf{D})^{\top}.\qquad(7)$

In addition, $\nabla_{\mathbf{D}}\,\ell'$ is continuous in $(\mathbf{x},\mathbf{D})$. Moreover, for any $\mathbf{x}$, $\mathbf{D}\mapsto\nabla_{\mathbf{D}}\,\ell'(\mathbf{x},\mathbf{D})$ is Lipschitz on $\mathcal{D}$ with a Lipschitz constant independent of $\mathbf{x}$.
Lemma 2 (Variance bound of $\mathbf{g}$).

In Algorithm 1, the search direction $\mathbf{g}$ satisfies $\mathbb{E}\,[\mathbf{g}\,|\,\mathcal{F}] = \nabla\bar f(\mathbf{D})$ and

$\mathbb{E}\big[\|\mathbf{g} - \nabla\bar f(\mathbf{D})\|^2\,\big|\,\mathcal{F}\big] \;\le\; \frac{L^2}{b}\,\big\|\mathbf{D} - \widetilde{\mathbf{D}}\big\|^2,\qquad(8)$

where $L$ is the Lipschitz constant from Lemma 1, $b$ is the mini-batch size, $\mathbf{D}$ is the current dictionary and $\widetilde{\mathbf{D}}$ is the snapshot dictionary.
Measuring convergence rate. Since the objective function in (3) is nonconvex (due to the bilinearity between the dictionary and the coefficients), obtaining global minima of (3) is in general out of reach. Thus, in almost all previous works [4, 5, 6, 7, 8, 9, 10, 11, 12], convergence to stationary points of (3) was studied instead. Define $f \triangleq \bar f + h$. Following [29, 16], for the sequence of (random) dictionaries generated in Algorithm 1, we propose to use the expected squared norm of the gradient mapping, $\mathbb{E}\big[\|\mathcal{G}_{\eta}(\mathbf{D})\|^2\big]$, to measure its convergence rate (to stationary points).

To see the validity of this measure, define the directional derivative of $f$ at $\mathbf{D}$ along $\boldsymbol{\Delta}$ as $f'(\mathbf{D};\boldsymbol{\Delta}) \triangleq \lim_{t\downarrow 0}\big(f(\mathbf{D}+t\boldsymbol{\Delta}) - f(\mathbf{D})\big)/t$. In previous works [4, 5, 6, 7, 8, 9, 10, 11, 12], $\mathbf{D}$ is characterized as a stationary point of (3) if and only if $f'(\mathbf{D};\boldsymbol{\Delta}) \ge 0$ for any feasible direction $\boldsymbol{\Delta}$. It can be shown that this condition is equivalent to $\mathcal{G}_{\eta}(\mathbf{D}) = \mathbf{0}$, for any $\eta > 0$. In particular, if $h = \iota_{\mathcal{D}}$, we can verify that both conditions are equivalent to $-\nabla\bar f(\mathbf{D})$ lying in the normal cone of $\mathcal{D}$ at $\mathbf{D}$.
Equipped with the proper convergence measure, we now state our main convergence theorem.
Theorem 1.
Note that the denominator in (9) is the total number of inner iterations. Thus (9) states a sublinear convergence rate of Algorithm 1 towards the set of stationary points of (3). In addition, the step size, the mini-batch size and the number of inner iterations can always be chosen jointly so as to satisfy the conditions in Theorem 1.
In practice, for efficiency considerations, it is important that we learn the coefficient and outlier vectors (in lines 4, 8 and 9) approximately, by only finding inexact solutions to the subproblem (2). The errors in these inexact solutions are propagated to the variance-reduced gradient estimate as a whole. In the next theorem, we show that if the (additive) error in this estimate is properly controlled (i.e., decreases at a certain rate), then we can achieve the same convergence rate as in Theorem 1. Note that although inexact analyses have been done for batch [30] and incremental [31] proximal gradient (PG) algorithms, our result is the first one for a stochastic (variance-reduced) PG algorithm.
Remark 4 (Bounding the inexactness error).

Denote an approximate solution of (2) as $(\widehat{\boldsymbol{\alpha}}, \widehat{\mathbf{r}})$. From the definition of the error and the compactness of the data domain and $\mathcal{D}$, we see that the error can be bounded by the learning errors in the coefficient and outlier vectors. For each sample, these learning errors can in turn be bounded by the (infimum of the) norm of the (sub)gradient of the objective of (2) at $(\widehat{\boldsymbol{\alpha}}, \widehat{\mathbf{r}})$, due to the first-order optimality conditions. Therefore, we are able to bound the overall error in terms of the approximate solutions $(\widehat{\boldsymbol{\alpha}}, \widehat{\mathbf{r}})$.
4.2 Proof Sketches
Proof Sketch of Lemma 1. By Assumption iii, if $\mathcal{A}$ is not compact, then $\widetilde h$ is coercive. This implies the boundedness of the optimal coefficient vector $\boldsymbol{\alpha}^{*}$. A similar argument also applies to $\mathbf{r}^{*}$. Thus minimizing (2) is equivalent to minimizing over a compact set. In addition, Assumptions i and iii ensure the compactness of the data domain and $\mathcal{D}$, respectively. Since Assumption ii ensures the uniqueness of $(\boldsymbol{\alpha}^{*}, \mathbf{r}^{*})$ for any $(\mathbf{x}, \mathbf{D})$, we can invoke Danskin’s theorem (see Lemma S1) to guarantee the differentiability of $\ell'$, and compute $\nabla_{\mathbf{D}}\,\ell'$ as in (7). Additionally, we can invoke the Maximum theorem (see Lemma S2) to ensure the continuity of $(\boldsymbol{\alpha}^{*}, \mathbf{r}^{*})$ in $(\mathbf{x}, \mathbf{D})$. This implies the continuity of $\nabla_{\mathbf{D}}\,\ell'$. Based on this, we can assert the Lipschitz continuity of both $\boldsymbol{\alpha}^{*}$ and $\mathbf{r}^{*}$ on $\mathcal{D}$. The preceding arguments, together with the compactness of the relevant sets, allow us to conclude the Lipschitz continuity of $\nabla_{\mathbf{D}}\,\ell'$. ∎
Proof Sketch of Lemma 2. To show (8), we first apply Lemma S4 to the search direction and then make use of the Lipschitz continuity of $\nabla_{\mathbf{D}}\,\ell'$ in Lemma 1. ∎
Proof Sketch of Theorem 1. From Lemma 1, we know the loss function $\ell'$ is differentiable. Write $\bar f(\mathbf{D}) \triangleq \frac{1}{n}\sum_{i=1}^{n}\ell'(\mathbf{x}_i,\mathbf{D})$ and $f \triangleq \bar f + h$. Using Lemma S5 and the Lipschitz continuity of $\nabla\bar f$ (see Lemma 1), we have the recursion
(10) 
Conditioning on the filtration, we then take expectations w.r.t. the mini-batch sampling on both sides of (10). Using Lemma 2, we can bound the variance of the search direction. Then, instead of directly telescoping (10), we construct a surrogate (Lyapunov) function whose auxiliary sequence is given by a backward recursion. Using this recursive relation, together with the step-size conditions in Theorem 1, we manage to arrive at
(11) 
We now telescope (11) over the inner iterations to obtain
(12) 
Finally, we telescope (12) over the outer iterations and use option I to choose the final dictionary. ∎
Proof Sketch of Theorem 2. The proof of Theorem 2 is modified from that of Theorem 1. Due to the inexactness error, the exact search direction in (10) is replaced by its inexact counterpart. We decouple the inexactness error from the variance-reduced estimate, then take expectations on both sides and carefully follow the telescoping procedure (over the inner and outer iterations) used in proving Theorem 1. After some algebraic manipulations, we arrive at the claimed bound. Since the additional error terms vanish under the condition of Theorem 2, by using option I to choose the final dictionary, we complete the proof. ∎
Proof of Corollary 1. By Theorem 1, $\mathcal{O}(1/\epsilon)$ inner iterations are needed to reach an $\epsilon$-stationary point. By evenly distributing the data samples drawn at the start of each outer iteration over the inner iterations, we obtain the number of samples consumed per inner iteration, and hence the sample complexity. Based on it, we can also derive the computational complexity. This is because both subproblem (2) (in lines 4, 8 and 9) and the proximal operator of $h$ (in line 7) can be solved or evaluated in a number of arithmetic operations linear in $Kd$ (see Section 3). The step-size condition in Theorem 1 then fixes the orders of the sample and computational complexities. To optimize both costs, we balance the resulting terms, which determines the choice of the mini-batch size. (Note that this also implies the condition on the number of inner iterations.) ∎
5 Numerical Experiments
5.1 Tested SMF Formulations and Datasets
We tested the performance of our proposed framework (Algorithm 1) on four representative SMF formulations in Section 2, namely ODL, ONMF, ORPCA and ORNMF. Among them, ODL and ONMF are non-robust SMF formulations, whereas ORPCA and ORNMF are robust ones, i.e., they explicitly model outliers. Therefore, for ODL and ONMF, we tested their algorithms on the CBCL [2] and MNIST [32] datasets, while for ORPCA and ORNMF, we tested their algorithms on the Synth and YaleB [33] datasets. The CBCL and MNIST datasets are commonly used in testing (non-robust) matrix factorization algorithms, e.g., [2]. The YaleB dataset consists of face images taken under various lighting conditions. The shadow and gloss in these images caused by imperfect illumination can be regarded as outliers. The Synth dataset was generated synthetically in a similar fashion to those in [7, 8, 11]. Specifically, we first generated a ground-truth dictionary matrix and a coefficient matrix, whose entries were drawn i.i.d. from a normal distribution with mean 0.5. We then generated an outlier matrix by first uniformly randomly selecting a fraction of its entries to be zero, the remaining fraction being the outlier density; the nonzero entries were then generated i.i.d. from a uniform distribution. Finally, the data matrix was formed as the product of the dictionary and coefficient matrices plus the outlier matrix.

5.2 Benchmarking Frameworks and Choice of Parameters
For each tested SMF formulation, we compared our variance-reduced SMF framework (denoted as VR) against two other optimization frameworks commonly used in previous works on SMF, namely SMM and SGD. For completeness, the pseudocodes of these two methods are shown in Algorithms S2 and S3, respectively. We next describe the parameter settings of our method. Following our analysis in Corollary 1, we set the mini-batch size and the number of inner iterations accordingly; we also set the number of outer iterations and the parameter in Theorem 2. For each tested SMF formulation on each dataset, we chose the step size such that our method yielded the best (empirical) convergence rate. The plots of objective values resulting from other step sizes are shown in Figure S31. For the latent dimension $K$, following the convention in [2, 4, 11], we set it to 49. This parameter can also be chosen from domain knowledge or via a Bayesian approach, e.g., [34]. Since in practice we found the performance of all three frameworks (VR, SMM and SGD) was not sensitive to $K$, we fixed it for simplicity. For both SMM and SGD, we used the same mini-batch size as in VR. The step sizes in SGD were chosen in the classical (diminishing) manner [14]. Similar to VR, we chose the step-size constants such that SGD achieved the best empirical convergence rate. For simplicity, we initialized the dictionary such that all its entries are equal to one. Finally, following [4, 6, 7, 11], we set the regularization weights in ODL, ONMF and ORNMF accordingly. In addition, we set