# Structured Low-Rank Matrix Factorization: Global Optimality, Algorithms, and Applications

Recently, convex formulations of low-rank matrix factorization problems have received considerable attention in machine learning. However, such formulations often require solving for a matrix of the size of the data matrix, making it challenging to apply them to large scale datasets. Moreover, in many applications the data can display structures beyond simply being low-rank, e.g., images and videos present complex spatio-temporal structures that are largely ignored by standard low-rank methods. In this paper we study a matrix factorization technique that is suitable for large datasets and captures additional structure in the factors by using a particular form of regularization that includes well-known regularizers such as total variation and the nuclear norm as particular cases. Although the resulting optimization problem is non-convex, we show that if the size of the factors is large enough, under certain conditions, any local minimizer for the factors yields a global minimizer. A few practical algorithms are also provided to solve the matrix factorization problem, and bounds on the distance from a given approximate solution of the optimization problem to the global optimum are derived. Examples in neural calcium imaging video segmentation and hyperspectral compressed recovery show the advantages of our approach on high-dimensional datasets.


## 1 Introduction

In many large datasets, relevant information often lies in a subspace of much lower dimension than the ambient space, and thus the goal of many learning algorithms can be broadly interpreted as trying to find or exploit this underlying “structure” present in the data. One structure that is particularly useful, both due to its wide-ranging applicability and its efficient computation, is the linear subspace model. Generally speaking, if one is given $N$ data points from a $D$-dimensional ambient space, $Y \in \mathbb{R}^{D \times N}$, a linear subspace model simply implies that there exist matrices $U \in \mathbb{R}^{D \times r}$ and $V \in \mathbb{R}^{N \times r}$ such that $Y \approx UV^T$. When one of the factors is known a priori, the problem of finding the other factor simplifies considerably, but if both factors are allowed to be arbitrary one can always find an infinite number of matrices that yield the same product. As a result, to accomplish anything meaningful, one must impose some restrictions on the factors. This idea leads to a variety of common matrix factorization techniques. A few well known examples are the following:

• Principal Component Analysis (PCA): The number of columns, $r$, in $U$ and $V$ is typically constrained to be small, e.g., $r \ll \min\{D, N\}$, and $U$ is constrained to have orthonormal columns, i.e., $U^T U = I$.

• Nonnegative Matrix Factorization (NMF): The number of columns in $U$ and $V$ is constrained to be small, and $U$ and $V$ are required to be non-negative [1, 2].

• Sparse Dictionary Learning (SDL): The number of columns in $U$ is allowed to be larger than $D$ or $N$, the columns of $U$ are required to have unit norm, and $V$ is required to be sparse as measured by, e.g., the $\ell_1$ norm or the $\ell_0$ pseudo-norm [3, 4].¹

¹ As a result, in SDL, one does not assume that there exists a single low-dimensional subspace to model the data, but rather that the data lie in a union of a large number of low-dimensional subspaces.
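As an illustrative sketch (not from the paper), the unstructured low-rank model underlying all of these techniques can be computed with a truncated SVD; the matrix sizes and the NumPy usage below are assumptions made for the example.

```python
import numpy as np

# Sketch: factor a data matrix Y (D x N) into U (D x r) and V (N x r) with
# Y ~= U V^T via a truncated SVD. In the PCA-style model, U has orthonormal
# columns; the singular values are absorbed into V.
rng = np.random.default_rng(0)
D, N, r = 50, 200, 5
Y = rng.standard_normal((D, r)) @ rng.standard_normal((r, N))  # exactly rank r

Uf, s, Vt = np.linalg.svd(Y, full_matrices=False)
U = Uf[:, :r]                       # orthonormal columns: U^T U = I
V = (Vt[:r].T) * s[:r]              # scale right singular vectors by s
print(np.linalg.norm(Y - U @ V.T))  # ~0, since rank(Y) = r
```

Once either factor is fixed (here, the orthonormal $U$), recovering the other reduces to a least-squares problem, which is the simplification alluded to above.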

Mathematically, the general problem of recovering structured linear subspaces from a dataset can be captured by a structured matrix factorization problem of the form

$$\min_{U,V}\ \ell(Y, UV^T) + \lambda\,\Theta(U,V), \tag{1}$$

where $\ell$ is some loss function that measures how well $Y$ is approximated by $UV^T$, and $\Theta$ is a regularizer that encourages or enforces specific properties in $(U, V)$. By taking an appropriate combination of $\ell$ and $\Theta$, one can formulate both unsupervised learning techniques, such as PCA, NMF, and SDL, and supervised learning techniques, such as discriminative dictionary learning [5, 6] and learning max-margin factorized classifiers [7]. However, while structured matrix factorization methods have wide-ranging applications and have achieved good empirical success, the associated optimization problem (1) is non-convex regardless of the choice of the $\ell$ and $\Theta$ functions, due to the presence of the matrix product $UV^T$. As a result, aside from a few special cases (such as PCA), finding solutions to (1) poses a significant challenge, which often requires one to instead consider approximate solutions that depend on a particular choice of initialization and optimization method.

Given the challenge of non-convex optimization, one possible approach to matrix factorization is to relax the non-convex problem into a problem which is convex on the product of the factorized matrices, $X = UV^T$, and then recover the factors of $X$ after solving the convex relaxation. As a concrete example, in low-rank matrix factorization, one might be interested in solving a problem of the form

$$\min_{X}\ \ell(Y, X)\ \ \text{s.t.}\ \ \mathrm{rank}(X) \le r, \tag{2}$$

which is equivalently defined as a factorization problem

$$\min_{U,V}\ \ell(Y, UV^T), \tag{3}$$

where the rank constraint is enforced by limiting the number of columns in $U$ and $V$ to be less than or equal to $r$. However, aside from a few special choices of $\ell$, solving (2) or (3) is in general an NP-hard problem. Instead, one can relax (2) into a convex problem by using a convex regularizer that promotes low-rank solutions, such as the nuclear norm $\|X\|_*$ (the sum of the singular values of $X$), and then solve

$$\min_{X}\ \ell(Y, X) + \lambda\|X\|_*, \tag{4}$$

which can often be done efficiently if $\ell$ is convex with respect to $X$ [8, 9]. Given a solution $\hat X$ to (4), it is then simple to find a low-rank factorization $\hat X = UV^T$ via a singular value decomposition. Unfortunately, while the nuclear norm provides a nice convex relaxation for low-rank matrix factorization problems, it does not capture the full generality of problems such as (1), as it does not necessarily ensure that $\hat X$ can be factorized as $\hat X = UV^T$ for some pair $(U, V)$ with the desired structure encouraged by $\Theta$ (e.g., in non-negative matrix factorization we require $U$ and $V$ to be non-negative), nor does it provide a means to find the desired factors.
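For the special case of the squared loss, the relaxation (4) even has a closed-form solution; the following hedged sketch (dimensions and regularization weight are arbitrary choices for the example) shows the singular value thresholding operator, which is also the proximal step one would iterate for other convex losses.

```python
import numpy as np

# Hedged sketch: for the squared loss l(Y, X) = 0.5*||Y - X||_F^2, problem (4)
# is solved in closed form by singular value thresholding (SVT); for other
# convex losses one would instead iterate proximal gradient steps whose
# prox is this same shrinkage.
def svt(A, tau):
    """Prox of tau*||.||_*: shrink the singular values of A by tau."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(1)
Y = rng.standard_normal((40, 30))
lam = 5.0
X = svt(Y, lam)   # argmin_X 0.5*||Y - X||_F^2 + lam*||X||_*
print(np.linalg.matrix_rank(X) < np.linalg.matrix_rank(Y))  # True: rank drops
```

The factors of $X$ are then read off from the SVD already computed inside the prox, illustrating the two-stage "solve, then factorize" pipeline criticized in the text.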

Based on the above discussion, optimization problems in the factorized space, such as (1), and problems in the product space, with (4) as a particular example, both present various advantages and disadvantages. Factorized problems attempt to solve for the desired factors directly, provide significantly increased modeling flexibility by permitting one to model structure on the factors (sparsity, non-negativity, etc.), and allow one to potentially work with a significantly reduced number of variables if the number of columns in $U$ and $V$ is $\ll \min\{D, N\}$; however, they suffer from the significant challenges associated with non-convex optimization. On the other hand, problems in the product space can be formulated to be convex, which affords many practical algorithms and analysis techniques, but one is required to optimize over a potentially large number of variables and solve a second factorization problem in order to recover the factors $(U, V)$ from the solution $\hat X$. These various pros and cons are briefly summarized in Table I.

To bridge this gap between the two classes of problems, here we explore the link between non-convex matrix factorization problems, which have the general form

$$\text{Factorized problems:}\quad \min_{U,V}\ \ell(Y, UV^T) + \lambda\,\Theta(U,V), \tag{5}$$

and a closely related family of convex problems in the product space, given by

$$\text{Convex problems:}\quad \min_{X}\ \ell(Y, X) + \lambda\,\Omega_\Theta(X), \tag{6}$$

where the function $\Omega_\Theta$ will be defined based on the choice of the regularization function $\Theta$ and will have the desirable property of being a convex function of $X$. Unfortunately, while the optimization problem in (6) is convex w.r.t. $X$, it will typically be intractable to solve. Moreover, even if a solution to (6) could be found, solving a convex problem in the product space does not necessarily achieve our goal, as we still must solve another matrix factorization problem to recover the factors $(U, V)$ with the desired properties encouraged by the $\Theta$ function (sparsity, non-negativity, etc.). Nevertheless, the two problems given by (5) and (6) will be tightly coupled. Specifically, the convex problem in (6) will be shown to be a global lower bound to the non-convex factorized problem in (5), and solutions to the factorized problem will yield solutions to the convex problem. As a result, we will tailor our results to the non-convex factorization problem (5), using the convex problem (6) as an analysis tool. While the optimization problem in the factorized space is not convex, by analyzing this tight interconnection between the two problems, we will show that if the number of columns in $U$ and $V$ is large enough and can be adapted to the data instead of being fixed a priori, local minima of the non-convex factorized problem will be global minima of both the convex and non-convex problems. This result will lead to a practical optimization strategy that is parallelizable and often requires a much smaller set of variables. Experiments in image processing applications will illustrate the effectiveness of the proposed approach.

## 2 Mathematical Background & Prior Work

As discussed before, relaxing low-rank matrix factorization problems via nuclear norm formulations fails to capture the full generality of factorized problems, as it does not yield “structured” factors, $(U, V)$, with the desired properties encouraged by $\Theta$ (sparseness, non-negativity, etc.). To address this issue, several studies have explored a more general convex relaxation via the matrix norm $\|X\|_{u,v}$ given by

$$\|X\|_{u,v} \equiv \inf_{r \in \mathbb{N}_+}\ \inf_{U,V : UV^T = X}\ \sum_{i=1}^{r} \|U_i\|_u \|V_i\|_v \equiv \inf_{r \in \mathbb{N}_+}\ \inf_{U,V : UV^T = X}\ \sum_{i=1}^{r} \tfrac{1}{2}\left(\|U_i\|_u^2 + \|V_i\|_v^2\right), \tag{7}$$

where $U_i$ and $V_i$ denote the $i$th columns of $U$ and $V$, respectively, $\|\cdot\|_u$ and $\|\cdot\|_v$ are arbitrary vector norms, and the number of columns, $r$, in the $U$ and $V$ matrices is allowed to be variable [10, 11, 12, 13, 14]. The norm in (7) has appeared under multiple names in the literature, including the projective tensor norm, decomposition norm, and atomic norm. It is worth noting that for particular choices of the $\|\cdot\|_u$ and $\|\cdot\|_v$ vector norms, $\|X\|_{u,v}$ reverts to several well known matrix norms and thus provides a generalization of many commonly used regularizers. Notably, when the vector norms are both $\ell_2$ norms, the form in (7) becomes the well known variational definition of the nuclear norm [9]:

$$\|X\|_* = \|X\|_{2,2} \equiv \inf_{r \in \mathbb{N}_+}\ \inf_{U,V : UV^T = X}\ \sum_{i=1}^{r} \|U_i\|_2 \|V_i\|_2 \equiv \inf_{r \in \mathbb{N}_+}\ \inf_{U,V : UV^T = X}\ \sum_{i=1}^{r} \tfrac{1}{2}\left(\|U_i\|_2^2 + \|V_i\|_2^2\right). \tag{8}$$

Moreover, by replacing the column norms in (7) with gauge functions one can incorporate additional regularization on $(U, V)$, such as non-negativity, while still keeping $\|X\|_{u,v}$ a convex function of $X$ [12]. Recall that given a closed, convex set $C$ containing the origin, a gauge function, $\sigma_C$, is defined as

$$\sigma_C(x) \equiv \inf_{\mu \ge 0}\ \mu\ \ \text{s.t.}\ \ x \in \mu C. \tag{9}$$

Further, recall that all norms are gauge functions (as can be seen by choosing $C$ to be the unit ball of a norm), but gauge functions are a slight generalization of norms, since they satisfy all the properties of a norm except that they are not required to be invariant to negative scaling (i.e., it is not required that $\sigma_C(-x)$ be equal to $\sigma_C(x)$). Finally, note that given a gauge function, $\sigma_C$, its polar, $\sigma_C^\circ$, is defined as

$$\sigma_C^\circ(z) \equiv \sup_{x}\ \langle z, x\rangle\ \ \text{s.t.}\ \ \sigma_C(x) \le 1, \tag{10}$$

which itself is also a gauge function. In the case of a norm, its polar function is often referred to as the dual norm.
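As a toy sketch of (10) (not from the paper), consider the gauge whose set $C$ is the $\ell_1$ unit ball, i.e., the $\ell_1$ norm itself: the supremum of $\langle z, x\rangle$ over the ball is attained at a signed standard basis vector, so the polar (dual norm) is the $\ell_\infty$ norm.

```python
import numpy as np

# Polar of the l1 norm: sup of <z, x> over ||x||_1 <= 1 is attained at a
# signed standard basis vector, so the polar is the l-infinity norm.
def polar_of_l1(z):
    # candidates x = +/- e_i exhaust the extreme points of the l1 ball
    return np.max(np.abs(z))

z = np.array([0.3, -2.0, 1.1])
print(polar_of_l1(z))             # 2.0
print(np.linalg.norm(z, np.inf))  # 2.0
```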

### 2.1 Matrix Factorization as Semidefinite Optimization

Due to the increased modeling opportunities it provides, several studies have explored structured matrix factorization formulations based on the $\|X\|_{u,v}$ norm in a way that allows one to work with a highly reduced set of variables while still providing some guarantees of global optimality. In particular, it is possible to explore optimization problems over factorized matrices of the form

$$\min_{U,V}\ \ell(Y, UV^T) + \lambda\|UV^T\|_{u,v}. \tag{11}$$

While this problem is convex with respect to the product $X = UV^T$, it is still non-convex with respect to $(U, V)$ jointly due to the matrix product. However, if we define a matrix $\Gamma$ to be the vertical concatenation of $U$ and $V$,

$$\Gamma \equiv \begin{bmatrix} U \\ V \end{bmatrix} \implies \Gamma\Gamma^T = \begin{bmatrix} UU^T & UV^T \\ VU^T & VV^T \end{bmatrix}, \tag{12}$$

we see that $UV^T$ is a submatrix of the positive semidefinite matrix $\Gamma\Gamma^T$. After defining the function

$$H(\Gamma\Gamma^T) = \ell(Y, UV^T) + \lambda\|UV^T\|_{u,v}, \tag{13}$$

it is clear that the proposed formulation (11) can be recast as an optimization problem over a positive semidefinite matrix.

At first the above discussion seems to be a circular argument, since while $H$ is a convex function of the product $\Gamma\Gamma^T$, this says nothing about finding $\Gamma$ (or $U$ and $V$). However, results for semidefinite programs in standard form [15] show that one can minimize $H(\Gamma\Gamma^T)$ by solving for $\Gamma$ directly without introducing any additional local minima, provided that the number of columns of $\Gamma$ is larger than the rank of the true solution. Further, if the rank of the true solution is not known a priori and $H$ is twice differentiable, then any local minimum w.r.t. $\Gamma$ such that $\Gamma$ is rank-deficient gives a global minimum of $H(\Gamma\Gamma^T)$ [11].
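The lifting in (12) is easy to sanity-check numerically; in this small sketch (sizes are arbitrary choices for the example), the product $UV^T$ appears as the top-right block of the PSD matrix $\Gamma\Gamma^T$.

```python
import numpy as np

# Sketch of the lifting in (12): stacking U on top of V gives Gamma, and
# U V^T appears as the top-right block of the PSD matrix Gamma Gamma^T.
rng = np.random.default_rng(3)
D, N, r = 6, 4, 2
U = rng.standard_normal((D, r))
V = rng.standard_normal((N, r))

Gamma = np.vstack([U, V])    # (D + N) x r
M = Gamma @ Gamma.T          # PSD with rank <= r
print(np.allclose(M[:D, D:], U @ V.T))          # True: top-right block is U V^T
print(np.min(np.linalg.eigvalsh(M)) >= -1e-10)  # True: M is PSD
```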

Unfortunately, many objective functions (such as those involving the projective tensor norm) are not twice differentiable in general, so this result cannot be applied directly. Nonetheless, if $H$ is the sum of a twice differentiable function and a non-differentiable convex function, then our prior work [13] has shown that it is still possible to guarantee that rank-deficient local minima w.r.t. $\Gamma$ give global minima of $H(\Gamma\Gamma^T)$. In particular, the result from [11] can be extended to non-differentiable functions as follows.

###### Proposition 1

[13] Let $H(\Gamma\Gamma^T) = G(\Gamma\Gamma^T) + F(\Gamma\Gamma^T)$, where $G$ is a twice differentiable convex function with compact level sets and $F$ is a proper, lower semi-continuous convex function that is potentially non-differentiable. If $\tilde\Gamma$ is a rank-deficient local minimum of $H(\Gamma\Gamma^T)$, then $\tilde\Gamma$ is a global minimum of $H(\Gamma\Gamma^T)$.

These results allow one to solve (11) using a potentially highly reduced set of variables if the rank of the true solution is much smaller than the dimensionality of $Y$.

However, while the above results from semidefinite programming are sufficient if we only wish to find general factors $(U, V)$ such that $UV^T = \hat X$, for the purposes of solving structured matrix factorizations we are interested in finding factors that achieve the infimum in the definition of (7), which is not provided by a solution to (11), as the $(U, V)$ that one obtains is not necessarily the $(U, V)$ that minimizes (7). As a result, the results from semidefinite optimization are not directly applicable to problems such as (11), as they deal with different optimization problems. In the remainder of this paper, we will show that results regarding global optimality can still be derived for the non-convex optimization problem given in (11), as well as for more general matrix factorization formulations.

## 3 Structured Matrix Factorization Problem

To develop our analysis we will introduce a matrix regularization function, $\Omega_\theta$, that generalizes the $\|X\|_{u,v}$ norm and is similarly defined in the product space but allows one to enforce structure in the factorized space. We will establish basic properties of the proposed regularization function, such as its convexity, and discuss several practical examples.

### 3.1 Structured Matrix Factorization Regularizers

The proposed matrix regularization function, $\Omega_\theta$, will be constructed from a regularization function $\theta$ on rank-1 matrices, which is defined as follows:

###### Definition 1

A function $\theta : \mathbb{R}^D \times \mathbb{R}^N \to \mathbb{R}_+ \cup \{\infty\}$ is said to be a rank-1 regularizer if

1. $\theta(u, v)$ is positively homogeneous with degree 2, i.e., $\theta(\alpha u, \alpha v) = \alpha^2\theta(u, v)$ for all $\alpha \ge 0$.

2. $\theta(u, v)$ is positive semi-definite, i.e., $\theta(0, 0) = 0$ and $\theta(u, v) \ge 0$ for all $(u, v)$.

3. For any sequence $(u_n, v_n)$ such that $\|u_n v_n^T\| \to \infty$ we have that $\theta(u_n, v_n) \to \infty$.

The first two properties of the definition are straightforward, and their necessity will become apparent when we derive properties of $\Omega_\theta$. The final property is necessary to ensure that $\theta$ is well defined as a regularizer for rank-1 matrices. For example, taking $\theta(u, v) = \|u\|_2^2 + \delta_+(v)$, where $\delta_+$ denotes the non-negative indicator function, satisfies the first two properties of the definition, but not the third, since $\theta(u, v)$ can always be decreased by taking $(u, v) \to (\alpha u, \alpha^{-1}v)$, for some $\alpha \in (0, 1)$, without changing the value of $uv^T$. Note also that the third property implies that $\theta(u, v) > 0$ for all $(u, v)$ such that $uv^T \neq 0$.

These three properties define a general set of requirements that are satisfied by a very wide range of rank-1 regularizers (see §3.3 for specific examples of regularizers that can be used for well known problems). While we will prove our theoretical results using this general definition of a rank-1 regularizer, later, when discussing specific algorithms that can be used to solve structured matrix factorization problems in practice, we will require that $\theta$ satisfies a few additional requirements.

Using the notion of a rank-1 regularizer, we now define a regularization function on matrices of arbitrary rank:

###### Definition 2

Given a rank-1 regularizer $\theta$, the matrix factorization regularizer $\Omega_\theta : \mathbb{R}^{D \times N} \to \mathbb{R}_+ \cup \{\infty\}$ is defined as

$$\Omega_\theta(X) \equiv \inf_{r \in \mathbb{N}_+}\ \inf_{U \in \mathbb{R}^{D \times r},\ V \in \mathbb{R}^{N \times r}}\ \sum_{i=1}^{r}\theta(U_i, V_i)\ \ \text{s.t.}\ \ X = UV^T. \tag{14}$$

The function defined in (14) is very closely related to other regularizers that have appeared in the literature. In particular, taking $\theta(u, v) = \|u\|_u\|v\|_v$ or $\theta(u, v) = \tfrac{1}{2}(\|u\|_u^2 + \|v\|_v^2)$ for arbitrary vector norms gives the $\|X\|_{u,v}$ norm in (7). Note, however, that there is no requirement for $\theta$ to be convex w.r.t. $(u, v)$ or to be composed of norms.

### 3.2 Properties of the Matrix Factorization Regularizer

As long as $\theta$ satisfies the requirements from Definition 1, one can show that $\Omega_\theta$ satisfies the following proposition:

###### Proposition 2

Given a rank-1 regularizer $\theta$, the matrix factorization regularizer $\Omega_\theta$ satisfies the following properties:

1. $\Omega_\theta(0) = 0$ and $\Omega_\theta(X) > 0$ for all $X \neq 0$.

2. Positive homogeneity: $\Omega_\theta(\alpha X) = \alpha\,\Omega_\theta(X)$ for all $\alpha \ge 0$.

3. Subadditivity: $\Omega_\theta(X + Z) \le \Omega_\theta(X) + \Omega_\theta(Z)$.

4. $\Omega_\theta(X)$ is convex w.r.t. $X$.

5. The infimum in (14) is achieved with $r \le DN$.

6. If $\theta(u, v) = \theta(-u, v)$ or $\theta(u, v) = \theta(u, -v)$, then $\Omega_\theta$ is a norm on $X$.

7. The subgradient of $\Omega_\theta$ is given by:

$$\partial\Omega_\theta(X) = \{W : \langle W, X\rangle = \Omega_\theta(X),\ u^T W v \le \theta(u, v)\ \forall (u, v)\}.$$

8. Given a factorization $X = UV^T$, if there exists a matrix $W$ such that $\langle W, X\rangle = \sum_i \theta(U_i, V_i)$ and $u^T W v \le \theta(u, v)$ for all $(u, v)$, then $(U, V)$ is an optimal factorization of $X$, i.e., it achieves the infimum in (14), and $\Omega_\theta(X) = \sum_i \theta(U_i, V_i)$.

Proof. A full proof of the above Proposition can be found in Appendix C.1 and uses similar arguments to those found in [10, 11, 12, 14, 16] for related problems.

Note that the first three properties show that $\Omega_\theta$ is a gauge function on $X$ (and, further, it will be a norm if property 6 is satisfied). While this also implies that $\Omega_\theta$ must be a convex function of $X$, note that it can still be very challenging to evaluate or optimize functions involving $\Omega_\theta$, due to the fact that evaluating it requires solving a non-convex optimization problem by definition. However, by exploiting the convexity of $\Omega_\theta$, we are able to use it to study the optimality conditions of many associated non-convex matrix factorization problems, some of which are discussed next.

### 3.3 Structured Matrix Factorization Problem Examples

The matrix factorization regularizer $\Omega_\theta$ provides a natural bridge between convex formulations in the product space (6) and non-convex formulations in the factorized space (5), due to the fact that $\Omega_\theta$ is a convex function of $X$, while from the definition (14) one can induce a wide range of properties in $(U, V)$ by an appropriate choice of the $\theta$ function. In what follows, we give a number of examples which lead to variants of several structured matrix factorization problems that have been studied previously in the literature.

Low-Rank: The first example of note is to relax low-rank constraints into nuclear norm regularized problems. Taking $\theta(u, v) = \tfrac{1}{2}(\|u\|_2^2 + \|v\|_2^2)$ gives the well known variational form of the nuclear norm, $\Omega_\theta(X) = \|X\|_*$, and thus provides a means to solve problems in the factorized space where the size of the factorization is controlled by regularization. In particular, we have the conversion

$$\min_{X}\ \ell(Y, X) + \lambda\|X\|_* \iff \min_{r, U, V}\ \ell(Y, UV^T) + \frac{\lambda}{2}\sum_{i=1}^{r}\left(\|U_i\|_2^2 + \|V_i\|_2^2\right) \iff \min_{r, U, V}\ \ell(Y, UV^T) + \lambda\sum_{i=1}^{r}\|U_i\|_2\|V_i\|_2, \tag{15}$$

where the notation $\iff$ implies that solutions to all three objective functions will have identical values at the global minimum, and any global minimum w.r.t. $(U, V)$ will yield a global minimum $X = UV^T$. While the above equivalence is well known for the nuclear norm [17, 9], the factorization is “unstructured” in the sense that the Euclidean norms do not bias the columns of $U$ and $V$ to have any particular properties. Therefore, to find factors with additional structure, such as non-negativity, sparseness, etc., more general $\theta$ functions need to be considered.
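The factorized side of (15) can be attacked with simple alternating minimization; the following hedged sketch (one standard heuristic under the squared loss, not necessarily the paper's algorithm, with arbitrary sizes and weight) exploits that each subproblem in $U$ or $V$ is a ridge regression with a closed-form solution, so the objective decreases monotonically.

```python
import numpy as np

# Alternating minimization on the middle objective of (15) with the squared
# loss: each subproblem is a ridge regression solved in closed form.
rng = np.random.default_rng(4)
D, N, r, lam = 30, 40, 8, 2.0
Y = rng.standard_normal((D, 3)) @ rng.standard_normal((3, N))   # low-rank data
U = rng.standard_normal((D, r))
V = rng.standard_normal((N, r))

def objective(U, V):
    return (0.5 * np.linalg.norm(Y - U @ V.T, 'fro')**2
            + 0.5 * lam * (np.linalg.norm(U, 'fro')**2
                           + np.linalg.norm(V, 'fro')**2))

history = [objective(U, V)]
for _ in range(100):
    U = Y @ V @ np.linalg.inv(V.T @ V + lam * np.eye(r))    # ridge update for U
    V = Y.T @ U @ np.linalg.inv(U.T @ U + lam * np.eye(r))  # ridge update for V
    history.append(objective(U, V))
print(history[0] > history[-1])   # True: the objective decreased
```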

Non-Negative Matrix Factorization: If we extend the previous example to add non-negative constraints on $(U, V)$, we get $\theta(u, v) = \tfrac{1}{2}(\|u\|_2^2 + \|v\|_2^2) + \delta_+(u) + \delta_+(v)$, where $\delta_+$ denotes the indicator function that the argument is non-negative. This choice of $\theta$ acts similarly to the variational form of the nuclear norm in the sense that it limits the number of non-zero columns in $(U, V)$, but it also imposes the constraint that $U$ and $V$ must be non-negative. As a result, one gets a convex relaxation of traditional non-negative matrix factorization:

$$\min_{U,V}\ \ell(Y, UV^T)\ \ \text{s.t.}\ \ U \ge 0,\ V \ge 0 \implies \min_{r, U, V}\ \ell(Y, UV^T) + \frac{\lambda}{2}\sum_{i=1}^{r}\left(\|U_i\|_2^2 + \|V_i\|_2^2\right)\ \ \text{s.t.}\ \ U \ge 0,\ V \ge 0. \tag{16}$$

The notation $\implies$ is meant to indicate that the two problems are not strictly equivalent, unlike in the nuclear norm example. The key difference between the two problems is that in the first one the number of columns in $(U, V)$ is fixed a priori, while in the second one the number of columns in $(U, V)$ is allowed to vary and is adapted to the data via the low-rank regularization induced by the Frobenius norms on the columns of $U$ and $V$.
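A minimal sketch of the right-hand side of (16) (an assumed setup, not the paper's algorithm) is projected alternating gradient descent, enforcing non-negativity by clipping after each step:

```python
import numpy as np

# Projected alternating gradient descent on the regularized NMF objective in
# (16); non-negativity is enforced by clipping (projection onto the constraint).
rng = np.random.default_rng(5)
D, N, r, lam, step = 20, 25, 6, 0.1, 1e-3
Y = np.abs(rng.standard_normal((D, 2))) @ np.abs(rng.standard_normal((2, N)))
U = np.abs(rng.standard_normal((D, r)))
V = np.abs(rng.standard_normal((N, r)))

def obj(U, V):
    return (0.5 * np.linalg.norm(Y - U @ V.T, 'fro')**2
            + 0.5 * lam * (np.linalg.norm(U, 'fro')**2
                           + np.linalg.norm(V, 'fro')**2))

start = obj(U, V)
for _ in range(500):
    gU = (U @ V.T - Y) @ V + lam * U      # gradient of the objective in U
    U = np.maximum(U - step * gU, 0.0)    # project back onto U >= 0
    gV = (U @ V.T - Y).T @ U + lam * V    # gradient of the objective in V
    V = np.maximum(V - step * gV, 0.0)    # project back onto V >= 0
print(obj(U, V) < start)   # True: the objective decreased
```

Note that, consistent with the text, nothing here certifies global optimality; the theory of Section 4 is what supplies such guarantees when a column pair of $(U, V)$ vanishes at a local minimum.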

Row or Column Norms: Taking $\theta(u, v) = \|u\|_1\|v\|_v$ results in $\Omega_\theta(X) = \sum_i \|X^i\|_v$, i.e., the sum of the $\|\cdot\|_v$ norms of the rows $X^i$ of $X$, while taking $\theta(u, v) = \|u\|_u\|v\|_1$ results in $\Omega_\theta(X) = \sum_i \|X_i\|_u$, i.e., the sum of the $\|\cdot\|_u$ norms of the columns $X_i$ of $X$ [11, 10]. As a result, the $\Omega_\theta$ regularizer generalizes these mixed norms. The reformulations into a factorized form give:

$$\min_{X}\ \ell(Y, X) + \lambda\|X\|_{1,v} \Leftrightarrow \min_{r, U, V}\ \ell(Y, UV^T) + \lambda\sum_{i=1}^{r}\|U_i\|_1\|V_i\|_v$$
$$\min_{X}\ \ell(Y, X) + \lambda\|X\|_{u,1} \Leftrightarrow \min_{r, U, V}\ \ell(Y, UV^T) + \lambda\sum_{i=1}^{r}\|U_i\|_u\|V_i\|_1.$$

However, the factorization problems in this case are relatively uninteresting, as taking either $U$ or $V$ to be the identity matrix (depending on whether the norm is on the rows or columns of $X$, respectively) and the other matrix to be $X^T$ (or $X$) yields one of the possible optimal factorizations.

Sparse Dictionary Learning: Similar to the non-negative matrix factorization case, convex relaxations of sparse dictionary learning can also be obtained by combining $\ell_2$ norms with sparsity-inducing regularization. For example, taking $\theta(u, v) = \tfrac{1}{2}(\|u\|_2^2 + \|v\|_2^2 + \gamma\|v\|_1^2)$ results in a relaxation

$$\min_{U,V}\ \ell(Y, UV^T) + \lambda\|V\|_1\ \ \text{s.t.}\ \ \|U_i\|_2 = 1\ \ \forall i \implies \min_{r, U, V}\ \ell(Y, UV^T) + \frac{\lambda}{2}\sum_{i=1}^{r}\left(\|U_i\|_2^2 + \|V_i\|_2^2 + \gamma\|V_i\|_1^2\right), \tag{17}$$

which was considered as a potential formulation for sparse dictionary learning in [11], where now the number of atoms in the dictionary is fit to the dataset via the low-rank regularization induced by the $\ell_2$ norms. A similar approach would be to take $\theta(u, v) = \|u\|_2(\|v\|_2 + \gamma\|v\|_1)$.

Sparse PCA: If the columns of both $U$ and $V$ are regularized to be sparse, then one can obtain convex relaxations of sparse PCA [18]. One example of this is to take $\theta(u, v) = (\|u\|_2 + \gamma_u\|u\|_1)(\|v\|_2 + \gamma_v\|v\|_1)$. Alternatively, one can also place constraints on the number of elements in the non-zero support of each column of $(U, V)$ via a rank-1 regularizer of the form $\theta(u, v) = \|u\|_2\|v\|_2 + \delta_{\|u\|_0 \le k_u}(u) + \delta_{\|v\|_0 \le k_v}(v)$, where $\delta_{\|u\|_0 \le k}(u)$ denotes the indicator function that $u$ has $k$ or fewer non-zero elements. Such a form was analyzed in [19] and gives a relaxation of sparse PCA that regularizes the number of sparse components via the $\ell_2$ norms, while requiring that a given component have the specified level of sparseness.

General Structure: More generally, this theme of using a combination of norms and additional regularization on the factors can be used to model additional forms of structure on the factors. For example, one can take $\theta(u, v) = \|u\|_u\|v\|_v + \gamma\,\hat\theta(u, v)$ or $\theta(u, v) = \tfrac{1}{2}(\|u\|_u^2 + \|v\|_v^2) + \gamma\,\hat\theta(u, v)$ with a function $\hat\theta$ that promotes the desired structure in $U$ and $V$, provided that the resulting $\theta$ satisfies the necessary properties in the definition of a rank-1 regularizer. Additional example problems can be found in [12, 13].

Symmetric Factorizations: Assuming that $Y$ is a square matrix, it is also possible to learn symmetric formulations within this framework, as the indicator function $\delta_{u=v}(u, v)$ that requires $u$ and $v$ to be equal is also positively homogeneous. As a result, one can use regularization such as $\theta(u, v) = \|u\|_2\|v\|_2 + \delta_{u=v}(u, v)$ to learn low-rank symmetric factorizations of the form $X = UU^T$, and add additional regularization to encourage additional structure. For example, $\theta(u, v) = \|u\|_2\|v\|_2 + \delta_{u=v}(u, v) + \delta_+(u) + \gamma\|u\|_1\|v\|_1$ learns symmetric factorizations where the factors are required to be non-negative and encouraged to be sparse.

## 4 Theoretical Analysis

In this section we provide a theoretical analysis of the link between convex formulations (6), which offer guarantees of global optimality, and factorized formulations (5), which offer additional flexibility in modeling the data structure and recovery of features that can be used in subsequent analysis. Using the matrix factorization regularizer introduced in §3.1, we consider the convex optimization problem

$$\min_{X, Q}\ \left\{F(X, Q) \equiv \ell(Y, X, Q) + \lambda\,\Omega_\theta(X)\right\}. \tag{18}$$

Here the term $Q$ can be used to model additional variables that will not be factorized. For example, in robust PCA (RPCA) [20] the term $Q$ accounts for sparse outlying entries, and a formulation in which the data is corrupted by both large corruptions and Gaussian noise is given by:

$$\min_{X, Q}\ \left\{F_{\mathrm{RPCA}}(X, Q) \equiv \tfrac{1}{2}\|Y - X - Q\|_F^2 + \gamma\|Q\|_1 + \lambda\|X\|_*\right\}.$$

In addition to the convex formulation (18), we will also consider the closely related non-convex factorized formulation

$$\min_{U, V, Q}\ \left\{f(U, V, Q) \equiv \ell(Y, UV^T, Q) + \lambda\sum_{i=1}^{r}\theta(U_i, V_i)\right\}. \tag{19}$$

Note that in (19) we consider problems where $r$ is held fixed. If we additionally optimize over the factorization size, $r$, in (19), then the problem becomes equivalent to (18), as we shall see. We will assume throughout that $\ell(Y, X, Q)$ is jointly convex w.r.t. $(X, Q)$ and once differentiable w.r.t. $X$. Also, we will use the notation $\partial f(x)$ to denote the set of subgradients of a convex function $f$ at $x$.

### 4.1 Conditions under which Local Minima Are Global

Given the non-convex optimization problem (19), note from the definition of $\Omega_\theta$ in (14) that for all $(U, V)$ such that $X = UV^T$ we must have $\Omega_\theta(X) \le \sum_{i=1}^{r}\theta(U_i, V_i)$. This yields a global lower bound between the convex and non-convex objective functions, i.e., for all $(U, V, Q)$ such that $X = UV^T$,

$$F(X, Q) = \ell(Y, X, Q) + \lambda\,\Omega_\theta(X) \le \ell(Y, UV^T, Q) + \lambda\sum_{i=1}^{r}\theta(U_i, V_i) = f(U, V, Q). \tag{20}$$

From this, if $(\hat X, \hat Q)$ denotes an optimal solution to the convex problem (18), then any factorization $\hat U\hat V^T = \hat X$ such that $\sum_{i=1}^{r}\theta(\hat U_i, \hat V_i) = \Omega_\theta(\hat X)$ yields an optimal solution $(\hat U, \hat V, \hat Q)$ to the non-convex problem (19). These properties lead to the following result.

###### Theorem 1

Given a function $\ell(Y, X, Q)$ that is jointly convex in $(X, Q)$ and once differentiable w.r.t. $X$; a rank-1 regularizer $\theta$ that satisfies the conditions in Definition 1; and a constant $\lambda > 0$; local minima $(\tilde U, \tilde V, \tilde Q)$ of $f$ in (19) are globally optimal if $(\tilde U_{i_0}, \tilde V_{i_0}) = (0, 0)$ for some column $i_0$. Moreover, $(\tilde U\tilde V^T, \tilde Q)$ is a global minimum of $F$ in (18), and $(\tilde U, \tilde V)$ is an optimal factorization of $\tilde U\tilde V^T$.

Proof. Since $F$ provides a global lower bound for $f$, the result follows from the fact that local minima of $f$ that satisfy the conditions of the theorem also satisfy the conditions for global optimality of $F$. More specifically, because $F$ is a convex function, we have that $(\hat X, \hat Q)$ is a global minimum of $F$ iff

$$-\tfrac{1}{\lambda}\nabla_X\ell(Y, \hat X, \hat Q) \in \partial\Omega_\theta(\hat X)\ \ \text{and}\ \ 0 \in \partial_Q\,\ell(Y, \hat X, \hat Q). \tag{21}$$

Since $(\tilde U, \tilde V, \tilde Q)$ is a local minimum of $f$, it is necessary that $0 \in \partial_Q\,\ell(Y, \tilde U\tilde V^T, \tilde Q)$. Moreover, from the characterization of the subgradient of $\Omega_\theta$ given in Proposition 2, we also have that $-\tfrac{1}{\lambda}\nabla_X\ell(Y, \tilde U\tilde V^T, \tilde Q) \in \partial\Omega_\theta(\tilde U\tilde V^T)$ will be true if the following conditions are satisfied:

$$u^T\left(-\tfrac{1}{\lambda}\nabla_X\ell(Y, \tilde U\tilde V^T, \tilde Q)\right)v \le \theta(u, v)\ \ \forall (u, v), \tag{22}$$
$$\sum_{i=1}^{r}\tilde U_i^T\left(-\tfrac{1}{\lambda}\nabla_X\ell(Y, \tilde U\tilde V^T, \tilde Q)\right)\tilde V_i = \sum_{i=1}^{r}\theta(\tilde U_i, \tilde V_i). \tag{23}$$

To show (22), recall that the local minimum is such that one column pair of $(\tilde U, \tilde V)$ is 0. Assume without loss of generality that the final column pair $(\tilde U_r, \tilde V_r)$ is 0, and let $U_\epsilon$ and $V_\epsilon$ denote $\tilde U$ and $\tilde V$ with their final columns replaced by $\epsilon^{1/2}u$ and $\epsilon^{1/2}v$, respectively, for some $\epsilon > 0$ and an arbitrary pair $(u, v)$. Then, due to the fact that $(\tilde U, \tilde V, \tilde Q)$ is a local minimum, for all $(u, v)$ there exists $\delta > 0$ such that for all $\epsilon \in (0, \delta)$ we have

$$\ell(Y, U_\epsilon V_\epsilon^T, \tilde Q) + \lambda\sum_{i=1}^{r}\theta(\tilde U_i, \tilde V_i) + \lambda\,\theta(\epsilon^{1/2}u, \epsilon^{1/2}v) = \tag{24}$$
$$\ell(Y, \tilde U\tilde V^T + \epsilon uv^T, \tilde Q) + \lambda\sum_{i=1}^{r}\theta(\tilde U_i, \tilde V_i) + \epsilon\lambda\,\theta(u, v) \ge \tag{25}$$
$$\ell(Y, \tilde U\tilde V^T, \tilde Q) + \lambda\sum_{i=1}^{r}\theta(\tilde U_i, \tilde V_i), \tag{26}$$

where the equivalence between (24) and (25) follows from the positive homogeneity of $\theta$. Rearranging terms, we have

$$-\frac{1}{\lambda\epsilon}\left[\ell(Y, \tilde U\tilde V^T + \epsilon uv^T, \tilde Q) - \ell(Y, \tilde U\tilde V^T, \tilde Q)\right] \le \theta(u, v). \tag{27}$$

Since $\ell$ is differentiable w.r.t. $X$, after taking the limit as $\epsilon \searrow 0$ we obtain $u^T\left(-\tfrac{1}{\lambda}\nabla_X\ell(Y, \tilde U\tilde V^T, \tilde Q)\right)v \le \theta(u, v)$ for any vector pair, showing (22).

To show (23), let $U_{1\pm\epsilon} = (1\pm\epsilon)^{1/2}\tilde U$ and $V_{1\pm\epsilon} = (1\pm\epsilon)^{1/2}\tilde V$ for some $\epsilon \in (0, 1)$. Since $(\tilde U, \tilde V, \tilde Q)$ is a local minimum, there exists $\delta \in (0, 1)$ such that for all $\epsilon \in (0, \delta)$ we have

$$\ell(Y, U_{1\pm\epsilon}V_{1\pm\epsilon}^T, \tilde Q) + \lambda\sum_{i=1}^{r}\theta\!\left((1\pm\epsilon)^{1/2}\tilde U_i, (1\pm\epsilon)^{1/2}\tilde V_i\right) = \ell(Y, (1\pm\epsilon)\tilde U\tilde V^T, \tilde Q) + \lambda(1\pm\epsilon)\sum_{i=1}^{r}\theta(\tilde U_i, \tilde V_i) \ge \ell(Y, \tilde U\tilde V^T, \tilde Q) + \lambda\sum_{i=1}^{r}\theta(\tilde U_i, \tilde V_i). \tag{28}$$

Rearranging terms gives

$$-\frac{1}{\lambda\epsilon}\left[\ell(Y, (1\pm\epsilon)\tilde U\tilde V^T, \tilde Q) - \ell(Y, \tilde U\tilde V^T, \tilde Q)\right] \le \pm\sum_{i=1}^{r}\theta(\tilde U_i, \tilde V_i),$$

and taking the limit as $\epsilon \searrow 0$ gives

$$\sum_{i=1}^{r}\theta(\tilde U_i, \tilde V_i) \le \left\langle -\tfrac{1}{\lambda}\nabla_X\ell(Y, \tilde U\tilde V^T, \tilde Q),\ \tilde U\tilde V^T\right\rangle \le \sum_{i=1}^{r}\theta(\tilde U_i, \tilde V_i),$$

showing (23). The last two statements of the result follow from the discussion before the theorem, together with part 8 of Proposition 2.

Note that the above proof provides sufficient conditions to guarantee the global optimality of local minima with specific properties, but in addition it also proves the following sufficient conditions for global optimality of any point.

###### Corollary 1

Given a function $\ell(Y, X, Q)$ that is jointly convex in $(X, Q)$ and once differentiable w.r.t. $X$; a rank-1 regularizer $\theta$ that satisfies the conditions in Definition 1; and a constant $\lambda > 0$; a point $(\tilde U, \tilde V, \tilde Q)$ is a global minimum of $f$ in (19) if it satisfies the following conditions:

1. $0 \in \partial_Q\,\ell(Y, \tilde U\tilde V^T, \tilde Q)$.

2. $\left\langle -\tfrac{1}{\lambda}\nabla_X\ell(Y, \tilde U\tilde V^T, \tilde Q),\ \tilde U\tilde V^T\right\rangle = \sum_{i=1}^{r}\theta(\tilde U_i, \tilde V_i)$.

3. $u^T\left(-\tfrac{1}{\lambda}\nabla_X\ell(Y, \tilde U\tilde V^T, \tilde Q)\right)v \le \theta(u, v)$ for all $(u, v)$.

Condition 1 is easy to verify, as one can hold $(\tilde U, \tilde V)$ constant and solve a convex optimization problem for $Q$. Likewise, condition 2 is simple to test (note that the condition is equivalent to (23) due to the bound in (22)), and if a pair $(\tilde U, \tilde V)$ exists which does not satisfy the equality, then one can decrease the objective function by scaling $(\tilde U, \tilde V)$ by a non-negative constant. Further, for many problems, it is possible to show that points that satisfy first-order optimality will satisfy conditions 1 and 2, such as in the following result.

###### Proposition 3

Given a function $\ell(Y, X, Q)$ that is jointly convex in $(X, Q)$ and once differentiable w.r.t. $X$; a constant $\lambda > 0$; and two gauge functions $\sigma_u, \sigma_v$; then for $\theta(u, v) = \sigma_u(u)\sigma_v(v)$ or $\theta(u, v) = \tfrac{1}{2}\left(\sigma_u(u)^2 + \sigma_v(v)^2\right)$, any first-order optimal point $(\tilde U, \tilde V, \tilde Q)$ of $f$ in (19) satisfies conditions 1–2 of Corollary 1.

Proof. See Appendix C.2.

As many optimization algorithms can guarantee convergence to first-order optimal points, from the above result and discussion it is clear that the primary challenge in verifying if a given point is globally optimal is to test if condition 3 of Corollary 1 holds true. This is known as the polar problem and is discussed in detail next.

### 4.2 Ensuring Local Minimality via the Polar Problem

Note that, because the optimization problem in (19) is non-convex, first-order optimality is not sufficient to guarantee a local minimum. Thus, to apply the results from Section 4.1 in practice one needs to verify that condition 3 from Corollary 1 is satisfied. This problem is known as the polar problem and generalizes the concept of a dual norm. In particular, the proof of Proposition 2 shows that the polar function of a given matrix factorization regularizer $\Omega_\theta$ can be computed as

$$\Omega_\theta^\circ(Z) = \sup_{u, v}\ u^T Z v\ \ \text{s.t.}\ \ \theta(u, v) \le 1. \tag{29}$$

Therefore, condition 3 of Corollary 1 is equivalent to $\Omega_\theta^\circ\!\left(-\tfrac{1}{\lambda}\nabla_X\ell(Y, \tilde U\tilde V^T, \tilde Q)\right) \le 1$. Note that the difficulty of solving the polar problem depends heavily on the particular choice of the $\theta$ function. For example, for $\theta(u, v) = \|u\|_1\|v\|_1$ the polar problem reduces to simply finding the largest entry of $Z$ in absolute value, while for $\theta(u, v) = \|u\|_\infty\|v\|_\infty$ solving the polar problem is known to be NP-hard [21].
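For the nuclear-norm case, $\theta(u, v) = \|u\|_2\|v\|_2$, the polar in (29) is the spectral norm, so the certificate can be checked cheaply. The sketch below (an illustrative check with an assumed squared-loss setup and no $Q$ term) verifies conditions 2 and 3 at the closed-form singular value thresholding solution:

```python
import numpy as np

# Certificate check for theta(u, v) = ||u||_2 ||v||_2, whose polar is the
# spectral norm. With the squared loss, X = SVT(Y, lam) solves (4), so the
# optimality conditions must hold there.
def svt(A, tau):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(6)
Y = rng.standard_normal((25, 20))
lam = 3.0
X = svt(Y, lam)
Z = (Y - X) / lam   # equals -(1/lam) * grad_X of 0.5*||Y - X||_F^2 at X

print(np.linalg.norm(Z, 2) <= 1 + 1e-10)                    # condition 3 holds
print(np.isclose(np.sum(Z * X), np.linalg.norm(X, 'nuc')))  # condition 2 holds
```

If either check failed, the bound of Proposition 4 below would quantify the remaining suboptimality.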

While for general $\theta$ functions it is not necessarily known how to efficiently solve the polar problem, given a point that satisfies conditions 1 and 2 of Corollary 1, the value of the polar at that point, together with how closely the polar problem can be approximated, provides a bound on how far the point is from being globally optimal. The bound is based on the following result.

###### Proposition 4

Given a function $\ell(Y, X, Q)$ that is lower semi-continuous, jointly convex in $(X, Q)$, and once differentiable w.r.t. $X$; a rank-1 regularizer $\theta$ that satisfies the conditions in Definition 1; and a constant $\lambda > 0$; for any point $(\tilde U, \tilde V, \tilde Q)$ that satisfies conditions 1 and 2 of Corollary 1, we have the following bound:

$$f(\tilde U, \tilde V, \tilde Q) - F(\hat X, \hat Q) \le \lambda\,\Omega_\theta(\hat X)\left[\Omega_\theta^\circ\!\left(-\tfrac{1}{\lambda}\nabla_X\ell(Y, \tilde U\tilde V^T, \tilde Q)\right) - 1\right] - \frac{m_X}{2}\|\tilde U\tilde V^T - \hat X\|_F^2 - \frac{m_Q}{2}\|\tilde Q - \hat Q\|_F^2, \tag{30}$$

where