1 Introduction
In many large datasets, the relevant information often lies in a subspace of much lower dimension than the ambient space, and thus the goal of many learning algorithms can be broadly interpreted as trying to find or exploit this underlying “structure” present in the data. One structure that is particularly useful, both due to its wide-ranging applicability and its efficient computation, is the linear subspace model. Generally speaking, if one is given N data points from a D-dimensional ambient space, arranged as the columns of X ∈ R^{D×N}, a linear subspace model simply implies that there exist matrices U ∈ R^{D×r} and V ∈ R^{N×r} such that X ≈ UV^T. When one of the factors is known a priori, the problem of finding the other factor simplifies considerably, but if both factors are allowed to be arbitrary one can always find an infinite number of matrix pairs that yield the same product. As a result, to accomplish anything meaningful, one must impose some restrictions on the factors. This idea leads to a variety of common matrix factorization techniques. A few well-known examples are the following:

Principal Component Analysis (PCA): The number of columns, r, in (U, V) is typically constrained to be small, e.g., r ≪ min(D, N), and U is constrained to have orthonormal columns, i.e., U^T U = I.

Sparse Dictionary Learning (SDL): The number of columns in (U, V) is allowed to be larger than D (an overcomplete dictionary), the columns of U are required to have unit norm, and V is required to be sparse as measured by, e.g., the ℓ1 norm or the ℓ0 pseudo-norm [3, 4].¹

¹As a result, in SDL one does not assume that there exists a single low-dimensional subspace to model the data, but rather that the data lie in a union of a large number of low-dimensional subspaces.
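As an illustration of the PCA case above, the following sketch (not from the paper; the data and dimensions are arbitrary) computes a rank-r factorization X ≈ UV^T with orthonormal U via a truncated SVD:

```python
import numpy as np

# Illustrative sketch of the PCA model (hypothetical data/dimensions):
# factor X ≈ U V^T where U has r ≪ min(D, N) orthonormal columns.
rng = np.random.default_rng(0)
D, N, r = 50, 200, 5

# Synthetic data lying near an r-dimensional subspace, plus small noise.
X = rng.standard_normal((D, r)) @ rng.standard_normal((r, N))
X += 0.01 * rng.standard_normal((D, N))

P, s, Qt = np.linalg.svd(X, full_matrices=False)
U = P[:, :r]              # orthonormal columns: U^T U = I
V = Qt[:r].T * s[:r]      # chosen so that X ≈ U V^T

print(np.linalg.norm(X - U @ V.T) / np.linalg.norm(X))  # small relative residual
```

Here the "structure" on the factors (orthonormality of U, small r) is what makes the otherwise non-unique factorization meaningful.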
Mathematically, the general problem of recovering structured linear subspaces from a dataset can be captured by a structured matrix factorization problem of the form
(1)  min_{U,V} ℓ(X, UV^T) + λΘ(U, V)
where ℓ is a loss function that measures how well X is approximated by UV^T and Θ is a regularizer that encourages or enforces specific properties in (U, V). By taking an appropriate combination of ℓ and Θ,
one can formulate both unsupervised learning techniques, such as PCA, NMF, and SDL, and supervised learning techniques, such as discriminative dictionary learning [5, 6] and learning max-margin factorized classifiers
[7]. However, while structured matrix factorization methods have wide-ranging applications and have achieved good empirical success, the associated optimization problem (1) is nonconvex regardless of the choice of the ℓ and Θ functions, due to the presence of the matrix product UV^T. As a result, aside from a few special cases (such as PCA), finding solutions to (1) poses a significant challenge, and one must often settle for approximate solutions that depend on a particular choice of initialization and optimization method. Given the challenge of nonconvex optimization, one possible approach to matrix factorization is to relax the nonconvex problem into a problem which is convex on the product of the factorized matrices, Z = UV^T, and then recover the factors of Z after solving the convex relaxation. As a concrete example, in low-rank matrix factorization one might be interested in solving a problem of the form
(2)  min_Z ℓ(X, Z)  s.t.  rank(Z) ≤ r
which is equivalently defined as a factorization problem
(3)  min_{U ∈ R^{D×r}, V ∈ R^{N×r}} ℓ(X, UV^T)
where the rank constraint is enforced by limiting the number of columns in (U, V) to be less than or equal to r. However, aside from a few special choices of ℓ, solving (2) or (3) is in general an NP-hard problem. Instead, one can relax (2) into a convex problem by using a convex regularizer that promotes low-rank solutions, such as the nuclear norm ‖Z‖_*
(the sum of the singular values of Z), and then solve

(4)  min_Z ℓ(X, Z) + λ‖Z‖_*
which can often be done efficiently if ℓ is convex with respect to Z [8, 9]. Given a solution Z̃ to (4), it is then simple to find a low-rank factorization Z̃ = UV^T via a singular value decomposition. Unfortunately, while the nuclear norm provides a nice convex relaxation for low-rank matrix factorization problems, nuclear norm relaxation does not capture the full generality of problems such as (1), as it does not necessarily ensure that Z̃ can be factorized as Z̃ = UV^T for some pair (U, V) with the desired structure encouraged by Θ (e.g., in nonnegative matrix factorization we require U and V to be nonnegative), nor does it provide a means to find the desired factors.

Based on the above discussion, optimization problems in the factorized space, such as (1), and problems in the product space, with (4) as a particular example, each present various advantages and disadvantages. Factorized problems attempt to solve for the desired factors directly, provide significantly increased modeling flexibility by permitting one to model structure on the factors (sparsity, nonnegativity, etc.), and allow one to potentially work with a significantly reduced number of variables when the number of columns r in (U, V) satisfies r ≪ min(D, N); however, they suffer from the significant challenges associated with nonconvex optimization. On the other hand, problems in the product space can be formulated to be convex, which affords many practical algorithms and analysis techniques, but one is required to optimize over a potentially large number of variables and to solve a second factorization problem in order to recover the factors (U, V) from the solution Z̃. These various pros and cons are briefly summarized in Table I.
TABLE I

                     Product Space   Factorized Space
Convex               Yes             No
Problem Size         Large           Small
Structured Factors   No              Yes
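To make the product-space route concrete, here is a minimal sketch (an illustration under assumptions, not the paper's algorithm) for the special case ℓ(X, Z) = ½‖X − Z‖_F², where (4) is solved in closed form by singular value thresholding and the factors are then recovered from the SVD:

```python
import numpy as np

# Sketch: for the squared loss ½‖X − Z‖_F², the nuclear-norm problem (4) is
# solved in closed form by soft-thresholding the singular values of X, and a
# low-rank factorization Z = U V^T follows directly from the SVD.
def nuclear_norm_solve(X, lam):
    P, s, Qt = np.linalg.svd(X, full_matrices=False)
    s = np.maximum(s - lam, 0.0)       # shrink singular values by lam
    keep = s > 0
    U = P[:, keep] * np.sqrt(s[keep])  # balanced factors with Z = U V^T
    V = Qt[keep].T * np.sqrt(s[keep])
    return U, V

rng = np.random.default_rng(1)
X = rng.standard_normal((30, 8)) @ rng.standard_normal((8, 40))
U, V = nuclear_norm_solve(X, lam=5.0)
print(U.shape[1])  # the rank is selected by the regularization, not fixed a priori
```

Note that the recovered factors are "unstructured": nothing in this procedure can impose, e.g., nonnegativity or sparsity on U and V.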
To bridge this gap between the two classes of problems, here we explore the link between nonconvex matrix factorization problems, which have the general form
(5)  min_{U,V} ℓ(X, UV^T) + λ Σ_{i=1}^r θ(U_i, V_i)
and a closely related family of convex problems in the product space, given by
(6)  min_Z ℓ(X, Z) + λΩ_θ(Z)
where the function Ω_θ will be defined based on the choice of the regularization function θ and will have the desirable property of being a convex function of Z. Unfortunately, while the optimization problem in (6) is convex w.r.t. Z, it will typically be intractable to solve. Moreover, even if a solution to (6) could be found, solving a convex problem in the product space does not necessarily achieve our goal, as we still must solve another matrix factorization problem to recover the factors (U, V) with the desired properties encouraged by the function θ (sparsity, nonnegativity, etc.). Nevertheless, the two problems given by (5) and (6) are tightly coupled. Specifically, the convex problem in (6) will be shown to be a global lower bound to the nonconvex factorized problem in (5), and solutions to the factorized problem will yield solutions to the convex problem. As a result, we will tailor our results to the nonconvex factorization problem (5), using the convex problem (6) as an analysis tool. While the optimization problem in the factorized space is not convex, by analyzing this tight interconnection between the two problems, we will show that if the number of columns r in (U, V) is large enough and is adapted to the data instead of being fixed a priori, then local minima of the nonconvex factorized problem will be global minima of both the convex and nonconvex problems. This result will lead to a practical optimization strategy that is parallelizable and often requires a much smaller set of variables. Experiments in image processing applications will illustrate the effectiveness of the proposed approach.
2 Mathematical Background & Prior Work
As discussed before, relaxing low-rank matrix factorization problems via nuclear norm formulations fails to capture the full generality of factorized problems, as it does not yield “structured” factors (U, V) with the desired properties encouraged by Θ (sparseness, nonnegativity, etc.). To address this issue, several studies have explored a more general convex relaxation via the matrix norm ‖·‖_{u,v} given by
(7)  ‖Z‖_{u,v} = inf_r inf_{U,V : UV^T = Z} Σ_{i=1}^r ‖U_i‖_u ‖V_i‖_v
where U_i and V_i denote the i-th columns of U and V, respectively, ‖·‖_u and ‖·‖_v are arbitrary vector norms, and the number of columns, r, in the U and V matrices is allowed to be variable [10, 11, 12, 13, 14]. The norm in (7) has appeared under multiple names in the literature, including the projective tensor norm, decomposition norm, and atomic norm. It is worth noting that for particular choices of the ‖·‖_u and ‖·‖_v vector norms, ‖·‖_{u,v} reverts to several well-known matrix norms and thus provides a generalization of many commonly used regularizers. Notably, when the vector norms are both ℓ2 norms, the form in (7) becomes the well-known variational definition of the nuclear norm [9]:

(8)  ‖Z‖_* = inf_{U,V : UV^T = Z} Σ_i ‖U_i‖₂‖V_i‖₂ = inf_{U,V : UV^T = Z} ½(‖U‖_F² + ‖V‖_F²)
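The variational form (8) can be checked numerically. The sketch below (illustrative, with arbitrary random data) verifies that the balanced SVD factorization attains the nuclear norm, while another factorization of the same Z only gives an upper bound:

```python
import numpy as np

# Numeric check of (8): ‖Z‖_* equals Σ_i ‖U_i‖₂‖V_i‖₂ for the balanced SVD
# factorization, and lower-bounds the cost of any other factorization Z = U V^T.
rng = np.random.default_rng(2)
Z = rng.standard_normal((6, 4)) @ rng.standard_normal((4, 7))

P, s, Qt = np.linalg.svd(Z, full_matrices=False)
nuc = s.sum()                              # nuclear norm of Z

col_cost = lambda U, V: sum(
    np.linalg.norm(U[:, i]) * np.linalg.norm(V[:, i]) for i in range(U.shape[1]))

U, V = P * np.sqrt(s), Qt.T * np.sqrt(s)   # balanced factorization, attains ‖Z‖_*
R = rng.standard_normal((6, 6))            # invertible with probability 1
U2, V2 = U @ R, V @ np.linalg.inv(R).T     # another factorization of the same Z

print(np.isclose(col_cost(U, V), nuc), col_cost(U2, V2) >= nuc - 1e-8)
```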
Moreover, by replacing the column norms in (7) with gauge functions, one can incorporate additional regularization on (U, V), such as nonnegativity, while still obtaining a convex function of Z [12]. Recall that given a closed, convex set C containing the origin, a gauge function, σ_C, is defined as
(9)  σ_C(z) = inf{μ ≥ 0 : z ∈ μC}
Further, recall that all norms are gauge functions (as can be observed by choosing C to be the unit ball of a norm), but gauge functions are a slight generalization of norms, since they satisfy all the properties of a norm except that they are not required to be symmetric about the origin (i.e., it is not required that σ_C(z) be equal to σ_C(−z)). Finally, note that given a gauge function, σ_C, its polar, σ_C°, is defined as
(10)  σ_C°(z) = sup_x ⟨z, x⟩  s.t.  σ_C(x) ≤ 1
which itself is also a gauge function. In the case of a norm, its polar function is often referred to as the dual norm.
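As a small numeric illustration (not from the paper), take C to be the unit ℓ1 ball: the gauge σ_C is then the ℓ1 norm, and its polar is the ℓ∞ (dual) norm:

```python
import numpy as np
from itertools import product

# Illustration of (9)-(10) with C the unit ℓ1 ball: σ_C(z) = ‖z‖₁, and the
# polar σ°_C(z) = sup{⟨z, x⟩ : σ_C(x) ≤ 1} = ‖z‖_∞ (the dual norm).
rng = np.random.default_rng(3)
z = rng.standard_normal(5)

gauge = np.abs(z).sum()  # inf{μ ≥ 0 : z ∈ μC} for the ℓ1 ball is ‖z‖₁

# The sup defining the polar is attained at an extreme point of the ball,
# i.e., at a signed standard-basis vector.
candidates = [sgn * e for e, sgn in product(np.eye(5), [-1.0, 1.0])]
polar = max(c @ z for c in candidates)
print(np.isclose(polar, np.abs(z).max()))
```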
2.1 Matrix Factorization as Semidefinite Optimization
Due to the increased modeling opportunities it provides, several studies have explored structured matrix factorization formulations based on the ‖·‖_{u,v} norm in a way that allows one to work with a highly reduced set of variables while still providing some guarantees of global optimality. In particular, it is possible to explore optimization problems over factorized matrices of the form
(11)  min_{U,V} ℓ(X, UV^T) + λ Σ_{i=1}^r ‖U_i‖_u ‖V_i‖_v
While this problem is convex with respect to the product UV^T, it is still nonconvex with respect to (U, V) jointly due to the matrix product. However, if we define a matrix Γ to be the vertical concatenation of U and V,

(12)  Γ = [U; V] ∈ R^{(D+N)×r},

we see that UV^T is a submatrix of the positive semidefinite matrix M = ΓΓ^T. After defining the function

(13)  F(ΓΓ^T) = ℓ(X, UV^T) + λ Σ_{i=1}^r ‖U_i‖_u ‖V_i‖_v,
it is clear that the proposed formulation (11) can be recast as an optimization problem over a positive semidefinite matrix.
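The lifting in (12) is easy to verify numerically; this sketch (with arbitrary dimensions) confirms that UV^T appears as the off-diagonal block of M = ΓΓ^T and that M is positive semidefinite:

```python
import numpy as np

# Sketch of the lifting (12): Γ = [U; V] stacks the factors, and
# M = Γ Γ^T = [[U U^T, U V^T], [V U^T, V V^T]] is PSD with U V^T as a block.
rng = np.random.default_rng(4)
D, N, r = 5, 6, 3
U, V = rng.standard_normal((D, r)), rng.standard_normal((N, r))

Gamma = np.vstack([U, V])
M = Gamma @ Gamma.T

print(np.allclose(M[:D, D:], U @ V.T))        # U V^T is a submatrix of M
print(np.linalg.eigvalsh(M).min() >= -1e-10)  # M is positive semidefinite
```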
At first the above discussion seems circular since, while F(M) is a convex function of M, this says nothing about finding Γ (or U and V). However, results for semidefinite programs in standard form [15] show that one can minimize F(M) by solving for Γ directly, without introducing any additional local minima, provided that the number of columns in Γ is larger than the rank of the true solution, M̃. Further, if the rank of the solution is not known a priori and F is twice differentiable, then any local minimum w.r.t. Γ such that Γ is rank-deficient gives a global minimum of F(ΓΓ^T) [11].
Unfortunately, many objective functions (such as those involving the projective tensor norm) are not twice differentiable in general, so this result cannot be applied directly. Nonetheless, if F is the sum of a twice differentiable function and a non-differentiable convex function, then our prior work [13] has shown that it is still possible to guarantee that rank-deficient local minima w.r.t. Γ give global minima of F. In particular, the result from [11] can be extended to non-differentiable functions as follows.
Proposition 1
[13] Let F(M) = G(M) + H(M), where G is a twice differentiable convex function with compact level sets and H is a proper, lower semi-continuous convex function that is potentially non-differentiable. If Γ̃ is a rank-deficient local minimum of F(ΓΓ^T), then M̃ = Γ̃Γ̃^T is a global minimum of F(M).
These results allow one to solve (11) using a potentially highly reduced set of variables when the rank of the true solution is much smaller than the dimensionality of M.
However, while the above results from semidefinite programming suffice if we only wish to find general factors (U, V) such that UV^T = Z̃, for the purposes of solving structured matrix factorizations we are interested in finding factors that achieve the infimum in the definition of (7), and these are not provided by a solution to (11), since the (U, V) pair that one obtains is not necessarily the pair that minimizes (7). As a result, the results from semidefinite optimization are not directly applicable to problems such as (11), as they deal with different optimization problems. In the remainder of this paper, we will show that results regarding global optimality can still be derived for the nonconvex optimization problem given in (11), as well as for more general matrix factorization formulations.
3 Structured Matrix Factorization Problem
To develop our analysis, we will introduce a matrix regularization function, Ω_θ, that generalizes the ‖·‖_{u,v} norm and is similarly defined in the product space but allows one to enforce structure in the factorized space. We will establish basic properties of the proposed regularization function, such as its convexity, and discuss several practical examples.
3.1 Structured Matrix Factorization Regularizers
The proposed matrix regularization function, Ω_θ, will be constructed from a regularization function θ on rank-1 matrices, which can be defined as follows:
Definition 1
A function θ : R^D × R^N → R ∪ {+∞} is said to be a rank-1 regularizer if

θ is positively homogeneous with degree 2, i.e., θ(αu, αv) = α²θ(u, v) for all α ≥ 0 and all (u, v).

θ is positive semidefinite, i.e., θ(0, 0) = 0 and θ(u, v) ≥ 0 for all (u, v).

For any sequence (u_n, v_n) such that ‖u_n v_n^T‖ → ∞ we have that θ(u_n, v_n) → ∞.
The first two properties of the definition are straightforward, and their necessity will become apparent when we derive properties of Ω_θ. The final property is necessary to ensure that θ is well defined as a regularizer for rank-1 matrices. For example, taking θ(u, v) = ‖u‖₂² + δ_{R₊}(v), where δ_{R₊} denotes the indicator function of the nonnegative orthant, satisfies the first two properties of the definition but not the third, since θ can always be decreased by taking (u, v) → (αu, α⁻¹v) for some α ∈ (0, 1) without changing the value of uv^T. Note also that the third property implies that θ(u, v) > 0 for all (u, v) such that uv^T ≠ 0.
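The properties in Definition 1 are straightforward to check numerically for a concrete choice of θ. The sketch below (illustrative, not from the paper) verifies the first two for θ(u, v) = ‖u‖₂‖v‖₂ and shows how a θ depending only on u violates the third:

```python
import numpy as np

# Check Definition 1 for θ(u, v) = ‖u‖₂‖v‖₂ (a valid rank-1 regularizer).
theta = lambda u, v: np.linalg.norm(u) * np.linalg.norm(v)
rng = np.random.default_rng(5)
u, v, alpha = rng.standard_normal(4), rng.standard_normal(6), 2.7

print(np.isclose(theta(alpha * u, alpha * v), alpha**2 * theta(u, v)))  # property 1
print(theta(u, v) >= 0 and theta(0 * u, 0 * v) == 0)                    # property 2

# A θ that ignores v, e.g. θ(u, v) = ‖u‖₂², violates property 3:
# rescaling (u, v) → (u/n, n²v) sends ‖u v^T‖ → ∞ while θ → 0.
bad_theta = lambda u, v: np.linalg.norm(u) ** 2
n = 1e6
print(np.linalg.norm(np.outer(u / n, n**2 * v)) > 1e5)  # ‖u_n v_n^T‖ grows
print(bad_theta(u / n, n**2 * v) < 1e-6)                # θ(u_n, v_n) vanishes
```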
These three properties define a general set of requirements that are satisfied by a very wide range of rank-1 regularizers (see §3.3 for specific examples of regularizers that can be used for well-known problems). While we will prove our theoretical results using this general definition of a rank-1 regularizer, later, when discussing specific algorithms that can be used to solve structured matrix factorization problems in practice, we will require that θ satisfies a few additional requirements.
Using the notion of a rank-1 regularizer, we now define a regularization function on matrices of arbitrary rank:
Definition 2
Given a rank-1 regularizer θ, the matrix factorization regularizer Ω_θ : R^{D×N} → R ∪ {+∞} is defined as
(14)  Ω_θ(Z) = inf_r inf_{U ∈ R^{D×r}, V ∈ R^{N×r} : UV^T = Z} Σ_{i=1}^r θ(U_i, V_i)
3.2 Properties of the Matrix Factorization Regularizer
As long as θ satisfies the requirements of Definition 1, one can show that Ω_θ satisfies the following proposition:
Proposition 2
Given a rank-1 regularizer θ, the matrix factorization regularizer Ω_θ satisfies the following properties:

1. Ω_θ(0) = 0 and Ω_θ(Z) ≥ 0 for all Z.
2. Ω_θ(αZ) = αΩ_θ(Z) for all α ≥ 0.
3. Ω_θ(Z + Q) ≤ Ω_θ(Z) + Ω_θ(Q) for all Z, Q.
4. Ω_θ(Z) > 0 for all Z ≠ 0.
5. The infimum in (14) is achieved by some factorization Z = UV^T with a finite number of columns.
6. If θ(u, v) = θ(−u, v) for all (u, v) (or θ(u, v) = θ(u, −v) for all (u, v)), then Ω_θ is a norm.
7. The polar of Ω_θ is given by Ω_θ°(W) = sup_{u,v} u^T W v subject to θ(u, v) ≤ 1.
8. W ∈ ∂Ω_θ(Z) if and only if ⟨W, Z⟩ = Ω_θ(Z) and Ω_θ°(W) ≤ 1.
Proof. A full proof of the above Proposition can be found in Appendix C.1 and uses similar arguments to those found in [10, 11, 12, 14, 16] for related problems.
Note that the first 3 properties show that Ω_θ is a gauge function on R^{D×N} (and, further, it will be a norm if property 6 is satisfied). While this also implies that Ω_θ must be a convex function of Z, note that it can still be very challenging to evaluate or optimize functions involving Ω_θ, due to the fact that its definition requires solving a nonconvex optimization problem. However, by exploiting the convexity of Ω_θ, we are able to use it to study the optimality conditions of many associated nonconvex matrix factorization problems, some of which are discussed next.
3.3 Structured Matrix Factorization Problem Examples
The matrix factorization regularizer Ω_θ provides a natural bridge between convex formulations in the product space (6) and nonconvex formulations in the factorized space (5), due to the fact that Ω_θ is a convex function of Z, while from the definition (14) one can induce a wide range of properties in (U, V) by an appropriate choice of the θ function. In what follows, we give a number of examples which lead to variants of several structured matrix factorization problems that have been studied previously in the literature.
Low-Rank: The first example of note is the relaxation of low-rank constraints into nuclear norm regularized problems. Taking θ(u, v) = ‖u‖₂‖v‖₂ gives the well-known variational form of the nuclear norm, Ω_θ(Z) = ‖Z‖_*, and thus provides a means to solve problems in the factorized space where the size of the factorization is controlled by the regularization. In particular, we have the conversion
(15)  min_{U,V} ℓ(X, UV^T) + λ Σ_i ‖U_i‖₂‖V_i‖₂  ≡  min_{U,V} ℓ(X, UV^T) + (λ/2)(‖U‖_F² + ‖V‖_F²)  ≡  min_Z ℓ(X, Z) + λ‖Z‖_*
where the notation ≡ implies that all 3 objective functions will have identical values at their global minima, and any global minimum (Ũ, Ṽ) w.r.t. the factorized objectives will give a global minimum Z̃ = ŨṼ^T for the objective in the product space. While the above equivalence is well known for the nuclear norm [17, 9], the factorization is “unstructured” in the sense that the Euclidean norms do not bias the columns of U and V to have any particular properties. Therefore, to find factors with additional structure, such as nonnegativity, sparseness, etc., more general θ functions need to be considered.
Non-Negative Matrix Factorization: If we extend the previous example by adding nonnegativity constraints on (U, V), we get θ(u, v) = ‖u‖₂‖v‖₂ + δ_{R₊}(u) + δ_{R₊}(v), where δ_{R₊} denotes the indicator function that the argument is nonnegative. This choice of θ acts similarly to the variational form of the nuclear norm in the sense that it limits the number of nonzero columns in (U, V), but it also imposes the constraint that U and V must be nonnegative. As a result, one gets a convex relaxation of traditional nonnegative matrix factorization
(16)  min_{U,V} ℓ(X, UV^T)  s.t.  U ≥ 0, V ≥ 0   ≈   min_{U,V} ℓ(X, UV^T) + λ Σ_i ‖U_i‖₂‖V_i‖₂  s.t.  U ≥ 0, V ≥ 0
The notation ≈ is meant to imply that the two problems are not strictly equivalent, unlike in the nuclear norm example. The key difference between the two problems is that in the first one the number of columns in (U, V) is fixed a priori, while in the second one the number of columns in (U, V) is allowed to vary and is adapted to the data via the low-rank regularization induced by the norms on the columns of (U, V).
Row or Column Norms: Taking θ(u, v) = ‖u‖₁‖v‖₂ results in Ω_θ(Z) = Σ_i ‖Z_{i,:}‖₂, i.e., the sum of the ℓ2 norms of the rows of Z, while taking θ(u, v) = ‖u‖₂‖v‖₁ results in Ω_θ(Z) = Σ_j ‖Z_{:,j}‖₂, i.e., the sum of the ℓ2 norms of the columns of Z [11, 10]. As a result, the Ω_θ regularizer generalizes the ℓ2,1 and ℓ1,2 mixed norms. The reformulations into a factorized form give:

min_{U,V} ℓ(X, UV^T) + λ Σ_i ‖U_i‖₁‖V_i‖₂  ≡  min_Z ℓ(X, Z) + λ Σ_i ‖Z_{i,:}‖₂
min_{U,V} ℓ(X, UV^T) + λ Σ_i ‖U_i‖₂‖V_i‖₁  ≡  min_Z ℓ(X, Z) + λ Σ_j ‖Z_{:,j}‖₂

However, the factorization problems in this case are relatively uninteresting, as taking either U or V to be the identity (depending on whether the norm is on the rows or the columns of Z, respectively) and the other matrix to be Z^T (or Z) results in one of the possible optimal factorizations.
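The row-norm case can be verified with one such optimal factorization. This sketch (arbitrary data, a standard result rather than code from the paper) checks that U = I, V = Z^T achieves Σ_i θ(U_i, V_i) equal to the sum of the ℓ2 norms of the rows of Z when θ(u, v) = ‖u‖₁‖v‖₂:

```python
import numpy as np

# For θ(u, v) = ‖u‖₁‖v‖₂, the factorization U = I, V = Z^T gives
# Σ_i θ(U_i, V_i) = Σ_i ‖Z_{i,:}‖₂, the sum of the ℓ2 norms of the rows of Z.
rng = np.random.default_rng(6)
Z = rng.standard_normal((4, 7))

U, V = np.eye(4), Z.T                       # U V^T = Z
cost = sum(np.abs(U[:, i]).sum() * np.linalg.norm(V[:, i]) for i in range(4))
row_norm_sum = np.linalg.norm(Z, axis=1).sum()
print(np.isclose(cost, row_norm_sum))
```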
Sparse Dictionary Learning: Similar to the nonnegative matrix factorization case, convex relaxations of sparse dictionary learning can also be obtained by combining ℓ2 norms with sparsity-inducing regularization. For example, taking θ(u, v) = ‖u‖₂(‖v‖₂ + γ‖v‖₁) results in a relaxation

(17)  min_{U,V} ℓ(X, UV^T) + λ Σ_i ‖U_i‖₂(‖V_i‖₂ + γ‖V_i‖₁)
which was considered as a potential formulation for sparse dictionary learning in [11], where now the number of atoms in the dictionary is fit to the dataset via the low-rank regularization induced by the ℓ2 norms on the columns of (U, V). A similar approach would be to take θ(u, v) = (‖u‖₂ + γ‖u‖₁)‖v‖₂, which instead encourages sparse dictionary atoms.
Sparse PCA: If both the columns of U and the columns of V are regularized to be sparse, then one can obtain convex relaxations of sparse PCA [18]. One example of this is to take θ(u, v) = (‖u‖₂ + γ_u‖u‖₁)(‖v‖₂ + γ_v‖v‖₁). Alternatively, one can also place constraints on the number of elements in the nonzero support of each column of U via a rank-1 regularizer of the form θ(u, v) = ‖u‖₂‖v‖₂ + δ_{‖u‖₀ ≤ k}(u), where δ_{‖u‖₀ ≤ k} denotes the indicator function that u has k or fewer nonzero elements. Such a form was analyzed in [19] and gives a relaxation of sparse PCA that regularizes the number of sparse components via the ℓ2 column norms, while requiring that a given component have the specified level of sparseness.
General Structure: More generally, this theme of using a combination of ℓ2 norms and additional regularization on the factors can be used to model additional forms of structure on the factors. For example, one can take θ(u, v) = ‖u‖₂‖v‖₂ + γ g(u, v) or θ(u, v) = ‖u‖₂‖v‖₂ + δ_C(u, v), with g a function (or C a constraint set) that promotes the desired structure in (U, V), provided that θ satisfies the necessary properties in the definition of a rank-1 regularizer. Additional example problems can be found in [12, 13].
Symmetric Factorizations: Assuming that X is a square matrix, it is also possible to learn symmetric factorizations with this framework, as the indicator function that requires u and v to be equal is also positively homogeneous. As a result, one can use regularization such as θ(u, v) = ‖u‖₂‖v‖₂ + δ_{u=v}(u, v) to learn low-rank symmetric factorizations of the form X ≈ UU^T, and add additional regularization to encourage further structure. For example, θ(u, v) = ‖u‖₂‖v‖₂ + δ_{u=v}(u, v) + δ_{R₊}(u) + γ‖u‖₁ learns symmetric factorizations where the factors are required to be nonnegative and encouraged to be sparse.
4 Theoretical Analysis
In this section we provide a theoretical analysis of the link between convex formulations (6), which offer guarantees of global optimality, and factorized formulations (5), which offer additional flexibility in modeling the data structure and recovery of features that can be used in subsequent analysis. Using the matrix factorization regularizer introduced in §3.1, we consider the convex optimization problem
(18)  min_{Z,Q} ℓ(X, Z, Q) + λΩ_θ(Z)
Here the term Q can be used to model additional variables that will not be factorized. For example, in robust PCA (RPCA) [20] the term Q accounts for sparse outlying entries, and a formulation in which the data is corrupted by both large sparse corruptions and Gaussian noise is given by:

min_{Z,Q} ½‖X − Z − Q‖_F² + λ‖Z‖_* + γ‖Q‖₁
In addition to the convex formulation (18), we will also consider the closely related nonconvex factorized formulation
(19)  min_{U,V,Q} ℓ(X, UV^T, Q) + λ Σ_{i=1}^r θ(U_i, V_i)
Note that in (19) we consider problems where the number of columns, r, in (U, V) is held fixed. If we additionally optimize over the factorization size, r, in (19), then the problem becomes equivalent to (18), as we shall see. We will assume throughout that ℓ(X, Z, Q) is jointly convex w.r.t. (Z, Q) and once differentiable w.r.t. Z. Also, we will denote the objectives of (18) and (19) as F(Z, Q) and f(U, V, Q), respectively.
4.1 Conditions under which Local Minima Are Global
Given the nonconvex optimization problem (19), note from the definition (14) of Ω_θ that for all (U, V) such that UV^T = Z we must have Ω_θ(Z) ≤ Σ_{i=1}^r θ(U_i, V_i). This yields a global lower bound between the convex and nonconvex objective functions, i.e., for all (U, V, Q) such that UV^T = Z:
(20)  F(Z, Q) = ℓ(X, Z, Q) + λΩ_θ(Z) ≤ ℓ(X, UV^T, Q) + λ Σ_{i=1}^r θ(U_i, V_i) = f(U, V, Q)
From this, if (Z̃, Q̃) denotes an optimal solution to the convex problem (18), then any factorization (Ũ, Ṽ) such that ŨṼ^T = Z̃ and Σ_i θ(Ũ_i, Ṽ_i) = Ω_θ(Z̃) is also an optimal solution to the nonconvex problem (19). These properties lead to the following result.
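The lower-bound relation (20) is easy to observe numerically in the nuclear norm special case, where Ω_θ(Z) = ‖Z‖_* for θ(u, v) = ‖u‖₂‖v‖₂ (an illustrative check, not part of the proof):

```python
import numpy as np

# Check of the bound Ω_θ(U V^T) ≤ Σ_i θ(U_i, V_i) for θ(u, v) = ‖u‖₂‖v‖₂,
# where Ω_θ(Z) = ‖Z‖_*: the convex objective lower-bounds the factorized one.
rng = np.random.default_rng(7)
U, V = rng.standard_normal((8, 3)), rng.standard_normal((9, 3))

lhs = np.linalg.svd(U @ V.T, compute_uv=False).sum()  # Ω_θ(U V^T) = ‖U V^T‖_*
rhs = sum(np.linalg.norm(U[:, i]) * np.linalg.norm(V[:, i]) for i in range(3))
print(lhs <= rhs + 1e-10)
```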
Theorem 1

Let (Ũ, Ṽ, Q̃) be a local minimum of the nonconvex problem (19) such that some column pair is zero, i.e., (Ũ_i, Ṽ_i) = (0, 0) for some i. Then (Z̃, Q̃) = (ŨṼ^T, Q̃) is a global minimum of the convex problem (18), (Ũ, Ṽ, Q̃) is a global minimum of the nonconvex problem (19), and Σ_i θ(Ũ_i, Ṽ_i) = Ω_θ(Z̃).
Proof. Since F(Z, Q) provides a global lower bound for f(U, V, Q), the result follows from the fact that local minima of (19) that satisfy the conditions of the theorem also satisfy the conditions for global optimality of (18). More specifically, because F is a convex function, we have that (Z̃, Q̃) is a global minimum of F iff
(21)  0 ∈ ∂F(Z̃, Q̃), i.e., 0 ∈ ∂_Q ℓ(X, Z̃, Q̃) and −(1/λ)∇_Z ℓ(X, Z̃, Q̃) ∈ ∂Ω_θ(Z̃)
Since (Ũ, Ṽ, Q̃) is a local minimum of f, it is necessary that 0 ∈ ∂_Q ℓ(X, Z̃, Q̃). Moreover, from the characterization of the subgradient of Ω_θ given in Proposition 2, we also have that −(1/λ)∇_Z ℓ(X, Z̃, Q̃) ∈ ∂Ω_θ(Z̃) will be true for Z̃ = ŨṼ^T if the following conditions are satisfied
(22)  u^T (−(1/λ)∇_Z ℓ(X, Z̃, Q̃)) v ≤ θ(u, v)  for all (u, v)
(23)  ⟨−(1/λ)∇_Z ℓ(X, Z̃, Q̃), ŨṼ^T⟩ = Σ_{i=1}^r θ(Ũ_i, Ṽ_i)
To show (22), recall that the local minimum is such that one column pair of (Ũ, Ṽ) is 0. Assume without loss of generality that the final column pair of (Ũ, Ṽ) is 0, and let U_ε = [Ũ_1 ⋯ Ũ_{r−1} ε^{1/2}u] and V_ε = [Ṽ_1 ⋯ Ṽ_{r−1} ε^{1/2}v] for some ε > 0 and an arbitrary pair (u, v). Then, due to the fact that (Ũ, Ṽ, Q̃) is a local minimum, we have that for all (u, v) there exists δ > 0 such that for all ε ∈ (0, δ) we have

(24)  f(U_ε, V_ε, Q̃) ≥ f(Ũ, Ṽ, Q̃)
(25)  ℓ(X, Z̃ + εuv^T, Q̃) + λ Σ_{i=1}^{r−1} θ(Ũ_i, Ṽ_i) + λεθ(u, v) ≥ ℓ(X, Z̃, Q̃) + λ Σ_{i=1}^{r−1} θ(Ũ_i, Ṽ_i)
(26)  ℓ(X, Z̃ + εuv^T, Q̃) + λεθ(u, v) ≥ ℓ(X, Z̃, Q̃)
where the equivalence between (24) and (25) follows from the positive homogeneity of θ. Rearranging terms, we have

(27)  (ℓ(X, Z̃ + εuv^T, Q̃) − ℓ(X, Z̃, Q̃))/ε ≥ −λθ(u, v)
Since ℓ is differentiable w.r.t. Z, after taking the limit as ε → 0 we obtain u^T (−(1/λ)∇_Z ℓ(X, Z̃, Q̃)) v ≤ θ(u, v) for any vector pair (u, v), showing (22).
To show (23), let U_ε = (1+ε)^{1/2}Ũ and V_ε = (1+ε)^{1/2}Ṽ for some ε ∈ (−1, 1). Since (Ũ, Ṽ, Q̃) is a local minimum, there exists δ > 0 such that for all |ε| < δ we have

(28)  ℓ(X, (1+ε)Z̃, Q̃) + λ(1+ε) Σ_{i=1}^r θ(Ũ_i, Ṽ_i) ≥ ℓ(X, Z̃, Q̃) + λ Σ_{i=1}^r θ(Ũ_i, Ṽ_i)
Rearranging terms gives

(ℓ(X, (1+ε)Z̃, Q̃) − ℓ(X, Z̃, Q̃))/ε ≥ −λ Σ_{i=1}^r θ(Ũ_i, Ṽ_i)

for ε ∈ (0, δ), with the inequality reversed for ε ∈ (−δ, 0), and taking the limit as ε → 0 gives

⟨−(1/λ)∇_Z ℓ(X, Z̃, Q̃), Z̃⟩ = Σ_{i=1}^r θ(Ũ_i, Ṽ_i),

showing (23). The last two statements of the result follow from the discussion before the theorem, together with Proposition 2 part (8).
Note that the above proof provides sufficient conditions to guarantee the global optimality of local minima with specific properties, but in addition it also proves the following sufficient conditions for global optimality of any point.
Corollary 1

A point (Ũ, Ṽ, Q̃) is a global minimum of the nonconvex problem (19), and (ŨṼ^T, Q̃) is a global minimum of the convex problem (18), if the following three conditions are satisfied:

1. 0 ∈ ∂_Q ℓ(X, ŨṼ^T, Q̃);
2. ⟨−(1/λ)∇_Z ℓ(X, ŨṼ^T, Q̃), ŨṼ^T⟩ = Σ_{i=1}^r θ(Ũ_i, Ṽ_i);
3. u^T (−(1/λ)∇_Z ℓ(X, ŨṼ^T, Q̃)) v ≤ θ(u, v) for all (u, v).
Condition 1 is easy to verify, as one can hold (Ũ, Ṽ) constant and solve a convex optimization problem for Q. Likewise, condition 2 is simple to test (note that the condition is equivalent to (23), due to the bound in (22)), and if a point (Ũ, Ṽ) does not satisfy the equality, then one can decrease the objective function by scaling (Ũ, Ṽ) by a nonnegative constant. Further, for many problems it is possible to show that points that satisfy first-order optimality will satisfy conditions 1 and 2, such as in the following result.
Proposition 3

Any first-order optimal point (Ũ, Ṽ, Q̃) of the nonconvex problem (19), i.e., any point satisfying 0 ∈ ∂f(Ũ, Ṽ, Q̃), satisfies conditions 1 and 2 of Corollary 1.
Proof. See Appendix C.2.
As many optimization algorithms can guarantee convergence to first-order optimal points, from the above result and discussion it is clear that the primary challenge in verifying whether a given point is globally optimal is to test whether condition 3 of Corollary 1 holds true. This is known as the polar problem and is discussed in detail next.
4.2 Ensuring Local Minimality via the Polar Problem
Note that, because the optimization problem in (19) is nonconvex, first-order optimality is not sufficient to guarantee a local minimum. Thus, to apply the results from Section 4.1 in practice, one needs to verify that condition 3 from Corollary 1 is satisfied. This problem is known as the polar problem and generalizes the concept of a dual norm. In particular, the proof of Proposition 2 shows that the polar function of a given matrix factorization regularizer Ω_θ can be computed as
(29)  Ω_θ°(W) = sup_{u,v} u^T W v  s.t.  θ(u, v) ≤ 1
Therefore, condition 3 of Corollary 1 is equivalent to Ω_θ°(−(1/λ)∇_Z ℓ(X, Z̃, Q̃)) ≤ 1. Note that the difficulty of solving the polar problem depends heavily on the particular choice of the θ function. For example, for θ(u, v) = ‖u‖₁‖v‖₁ the polar problem reduces to simply finding the largest entry of W in absolute value, while for rank-1 regularizers that combine ℓ1 and ℓ2 norms, such as those used in the sparse PCA example above, solving the polar problem is known to be NP-hard [21].
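The first polar example can be checked by brute force over the extreme points of the constraint set (a small illustrative script, not from the paper):

```python
import numpy as np
from itertools import product

# For θ(u, v) = ‖u‖₁‖v‖₁, the polar Ω°_θ(W) = sup{u^T W v : θ(u, v) ≤ 1}
# is attained at signed standard-basis vectors, giving max_ij |W_ij|.
rng = np.random.default_rng(8)
W = rng.standard_normal((3, 4))

best = -np.inf
for i, j in product(range(3), range(4)):
    for su, sv in product([-1.0, 1.0], repeat=2):
        u, v = np.zeros(3), np.zeros(4)
        u[i], v[j] = su, sv              # ‖u‖₁ = ‖v‖₁ = 1, so θ(u, v) = 1
        best = max(best, u @ W @ v)
print(np.isclose(best, np.abs(W).max()))
```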
While for general θ functions it is not necessarily known how to efficiently solve the polar problem, given a point that satisfies conditions 1 and 2 of Corollary 1, the value of the polar at that point, together with how closely the polar problem can be approximated, provides a bound on how far the point is from being globally optimal. The bound is based on the following result.