# Sublinear-Time Quadratic Minimization via Spectral Decomposition of Matrices

We design a sublinear-time approximation algorithm for quadratic function minimization problems with a better error bound than the previous algorithm by Hayashi and Yoshida (NIPS'16). Our approximation algorithm can be modified to handle the case where the minimization is done over a sphere. The analysis of our algorithms is obtained by combining results from graph limit theory, along with a novel spectral decomposition of matrices. Specifically, we prove that a matrix A can be decomposed into a structured part and a pseudorandom part, where the structured part is a block matrix with a polylogarithmic number of blocks, such that in each block all the entries are the same, and the pseudorandom part has a small spectral norm, achieving better error bound than the existing decomposition theorem of Frieze and Kannan (FOCS'96). As an additional application of the decomposition theorem, we give a sublinear-time approximation algorithm for computing the top singular values of a matrix.


## 1 Introduction

Quadratic function minimization/maximization is a versatile tool used in machine learning, statistics, and data mining, and can represent many fundamental problems such as linear regression, $k$-means clustering, principal component analysis (PCA), support vector machines, kernel machines, and more (see [Mur12]). In general, quadratic function minimization/maximization is NP-hard. When the problem is convex (for minimization) or concave (for maximization), we can solve it by solving a system of linear equations, which requires $O(n^3)$ time, where $n$ is the number of variables. There are faster approximation methods based on stochastic gradient descent [Bot04] and on the multiplicative update algorithm [CHW12]. However, these methods still require time at least linear in $n$, which is prohibitive when we need to handle a huge number of variables.

Quadratic function minimization over a sphere is also an important problem. This minimization problem is often called the trust region subproblem, since it must be solved in each step of a trust region algorithm. Trust region algorithms are among the most important tools for solving nonlinear programming problems, as they are robust and can be applied to ill-conditioned problems. In addition, trust region subproblems are useful in many other problems, such as constrained eigenvalue problems [GGvM89], least-squares problems [ZCS10], combinatorial optimization problems [Bus06], and many more. While the problem is non-convex, it has been shown that it exhibits strong duality properties and can be solved in polynomial time (see [BTT96, YZ03]). In particular, it was shown to be equivalent to certain semidefinite programming problems that can be solved in polynomial time ([NN94, Ali95]). As in the unconstrained case, there are approximation algorithms based on gradient descent [Nes83] and on reducing the problem to a sequence of eigenvalue computations [HK16]. However, as in the unconstrained case, these methods require running time that is linear in the number of non-zero elements of the matrix (which might be as large as $n^2$).
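To make the trust region subproblem concrete, the following is a minimal sketch (in Python with NumPy) of the standard eigendecomposition-plus-bisection approach from the strong-duality characterization mentioned above. It is our own illustration, not an algorithm from this paper: it minimizes $v^\top Mv+\langle g,v\rangle$ over $\|v\|_2\le r$ and ignores the degenerate "hard case".

```python
import numpy as np

def trust_region_min(M, g, r):
    """Minimize v^T M v + g^T v over ||v||_2 <= r, for symmetric M.

    Optimality conditions: v = -(2M + 2*lam*I)^{-1} g for some lam >= 0 with
    M + lam*I PSD and lam * (r - ||v||_2) = 0.  We bisect on lam.  The "hard
    case" (g orthogonal to the bottom eigenspace of M) is not handled here.
    """
    w, Q = np.linalg.eigh(M)            # M = Q diag(w) Q^T, w ascending
    c = Q.T @ g
    if w[0] > 0:                        # M positive definite: try the interior optimum
        v = Q @ (-c / (2 * w))
        if np.linalg.norm(v) <= r:
            return v
    # Otherwise the optimum lies on the boundary ||v||_2 = r.
    lo = max(0.0, -w[0]) + 1e-12        # smallest lam making M + lam*I PSD
    hi = lo + 1.0
    norm_v = lambda lam: np.linalg.norm(c / (2 * (w + lam)))
    while norm_v(hi) > r:               # grow hi until ||v(hi)|| <= r
        hi *= 2
    for _ in range(200):                # ||v(lam)|| is decreasing in lam
        mid = 0.5 * (lo + hi)
        if norm_v(mid) > r:
            lo = mid
        else:
            hi = mid
    return Q @ (-c / (2 * (w + hi)))
```

Note that the whole procedure costs one dense eigendecomposition, i.e., time polynomial in $n$; the point of this paper is to avoid even reading the whole matrix.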

### 1.1 Our Contributions

In this work, we provide sublinear-time approximation algorithms for minimizing quadratic functions, assuming random access to the entries of the input matrix and the input vectors.

First, we consider unconstrained minimization. Specifically, for a matrix $A\in\mathbb{R}^{n\times n}$ and vectors $d,b\in\mathbb{R}^n$, we consider the following quadratic function minimization problem:

$$\min_{v\in\mathbb{R}^n}\psi_{n,A,d,b}(v),\quad\text{where}\quad \psi_{n,A,d,b}(v)=\langle v,Av\rangle+n\langle v,\operatorname{diag}(d)v\rangle+n\langle b,v\rangle. \tag{1}$$

Here $\operatorname{diag}(d)$ is the diagonal matrix whose diagonal entries are specified by $d$, and $\langle\cdot,\cdot\rangle$ denotes the standard inner product.

###### Theorem 1.1.

Fix $\epsilon>0$, and let $v^*$ and $z^*$ be an optimal solution and the optimal value, respectively, of Problem (1). Let $S\subseteq[n]$ be a random set of indices, where each index is taken into $S$ independently with probability $k/n$ for

$$k=\max\left\{O\!\left(\frac{\log^2 n}{\epsilon^2}\right),\ \left(\frac{1}{\epsilon}\right)^{O(1/\epsilon^2)}\right\}.$$

Then, the following holds with constant probability: Let $\tilde v^*$ and $\tilde z^*$ be an optimal solution and the optimal value, respectively, of the problem

$$\min_{v\in\mathbb{R}^{|S|}}\psi_{|S|,A|_S,d|_S,b|_S}(v),$$

where $\cdot|_S$ is the operator that extracts the submatrix (or subvector) specified by the index set $S$. Then,

$$\left|\frac{1}{|S|^2}\tilde z^*-\frac{1}{n^2}z^*\right|\le\epsilon L\max\left\{\frac{\|\tilde v^*\|_2^2}{|S|},\frac{\|v^*\|_2^2}{n}\right\},$$

where $L=\max\{\|A\|_{\max},\|d\|_\infty,\|b\|_\infty\}$.

Recently, Hayashi and Yoshida [HY16] proposed a constant-time sampling method for this problem with an additive error of order $\epsilon Ln^2$ (in the unnormalized objective), where $v^*$ and $\tilde v^*$ are the optimal solutions to the original and the sampled problem, respectively. Although their algorithm runs in constant time, the guarantee is not meaningful when the optimal value is $o(n^2)$, because the additive error is always of order $n^2$. Theorem 1.1 shows that we can improve the additive error to $\epsilon L\max\{n^2\|\tilde v^*\|_2^2/|S|,\,n\|v^*\|_2^2\}$, as long as the number of samples is polylogarithmic (or more). We note that we always have $\|v^*\|_2^2\le n\|v^*\|_\infty^2$, and the difference is significant when $v^*$ and $\tilde v^*$ are sparse. For example, if $v^*$ and $\tilde v^*$ have only $O(1)$ non-zero elements of constant magnitude, the new error bound improves on a bound of order $n^2$ by roughly a factor of $n$. Our new bound provides a trade-off between the additive error and the time complexity, which was unclear from the argument of Hayashi and Yoshida [HY16].
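As a sanity check of the sampling scheme behind Theorem 1.1, the following sketch compares the normalized optimal values of the original and the sampled problems. It is our own illustration, not the paper's implementation: the strongly convex random instance, the choice $k=120$, and the closed-form solve via the gradient equation are all assumptions made for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def psi(v, A, d, b):
    # psi_{n,A,d,b}(v) = <v, A v> + n <v, diag(d) v> + n <b, v>, with n = dim(v)
    n = len(v)
    return v @ A @ v + n * (d * v) @ v + n * (b @ v)

def minimize_psi(A, d, b):
    # Set the gradient (A + A^T) v + 2 n diag(d) v + n b to zero.
    n = A.shape[0]
    H = A + A.T + 2 * n * np.diag(d)        # Hessian; positive definite here
    v = np.linalg.solve(H, -n * b)
    return v, psi(v, A, d, b)

n, k = 400, 120
A = rng.uniform(-1, 1, (n, n)); A = (A + A.T) / 2
d = rng.uniform(1.0, 2.0, n)                # dominant diagonal => strongly convex
b = rng.uniform(-1, 1, n)

v_star, z_star = minimize_psi(A, d, b)

# Keep each index independently with probability k/n and solve the subproblem.
S = np.flatnonzero(rng.random(n) < k / n)
v_tld, z_tld = minimize_psi(A[np.ix_(S, S)], d[S], b[S])

err = abs(z_tld / len(S)**2 - z_star / n**2)
```

On instances like this one, the two normalized optima agree to within a few percent, even though the subproblem only reads a $|S|\times|S|$ submatrix.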

Moreover, we consider minimization over a sphere. Specifically, given a matrix $A\in\mathbb{R}^{n\times n}$, vectors $d,b\in\mathbb{R}^n$, and $r>0$, we consider the following quadratic function minimization problem over a sphere of radius $r$:

$$\min_{v:\|v\|_2\le r}\psi_{n,A,d,b}(v). \tag{2}$$

We give the first sublinear-time approximation algorithm for this problem.

###### Theorem 1.2.

Let $v^*$ and $z^*$ be an optimal solution and the optimal value, respectively, of Problem (2). Fix $\epsilon>0$, and let $S\subseteq[n]$ be a random set of indices, where each index is taken into $S$ independently with probability $k/n$ for

$$k=\max\left\{O\!\left(\frac{\log^2 n}{\epsilon^2}\right),\ \left(\frac{1}{\epsilon}\right)^{O(1/\epsilon^2)}\right\}.$$

Then, the following holds with constant probability: Let $\tilde v^*$ and $\tilde z^*$ be an optimal solution and the optimal value, respectively, of the problem

$$\min_{\|v\|_2\le\sqrt{|S|/n}\,r}\psi_{|S|,A|_S,d|_S,b|_S}(v).$$

Then,

$$\left|\frac{1}{|S|^2}\tilde z^*-\frac{1}{n^2}z^*\right|\le\frac{\epsilon Lr^2}{n},$$

where $L=\max\{\|A\|_{\max},\|d\|_\infty,\|b\|_\infty\}$.

We can design a constant-time algorithm for (2) by using the result of [HY16], but the resulting error bound will be $\epsilon Lr^2$, which is $n$ times worse than the bound in Theorem 1.2.

The proofs of Theorems 1.1 and 1.2 rely on a novel decomposition theorem of matrices, which will be discussed in Section 1.3.

As another application of this decomposition theorem, we show that for any (small) $\epsilon>0$ and any $t$, we can approximate the $t$-th largest singular value of a matrix $A\in\mathbb{R}^{n\times m}$ (denoted $\sigma_t(A)$) to within an additive error of $L\sqrt{\epsilon tnm}$ in time

$$\max\left\{O\!\left(\frac{\max\{\log^2 n,\log^2 m\}}{\epsilon^2}\right),\ \left(\frac{1}{\epsilon}\right)^{O(1/\epsilon^2)}\right\}.$$

Our algorithms are very simple to implement, and do not require any structure in the input matrix. However, similar results (with better running time) can be obtained by applying known sampling techniques from [FKV04]. Formally, we prove the following.

###### Theorem 1.3.

Given a matrix $A\in\mathbb{R}^{n\times m}$ and $\epsilon>0$, let

$$k=\max\left\{O\!\left(\frac{\max\{\log^2 n,\log^2 m\}}{\epsilon^2}\right),\ \left(\frac{1}{\epsilon}\right)^{O(1/\epsilon^2)}\right\}.$$

Then, for every $t$, there is an algorithm that runs in time polynomial in $k$, and outputs a value $z$ such that, with constant probability,

$$|\sigma_t(A)-z|\le L\sqrt{\epsilon tnm}.$$

We note that since $\sigma_t(A)\le L\sqrt{nm/t}$ (see Fact 2.1), the relative error the algorithm achieves is at least $t\sqrt{\epsilon}$. Therefore, to get a meaningful approximation, one must have $t\sqrt{\epsilon}<1$. So, if we wish to approximate $\sigma_t(A)$, then we must have $\epsilon<1/t^2$.
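The flavor of the singular value algorithm can be reproduced in a few lines: sample a random row set $S$ and column set $T$, compute the $t$-th singular value of the sampled submatrix, and rescale. This sketch is ours, not the paper's stated procedure; in particular, the rescaling factor $\sqrt{nm/(|S||T|)}$ and the planted low-rank test instance are assumptions made for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 600, 500, 200

# A bounded-entry matrix with a few large singular values: a rank-2 part
# plus small noise (this instance is our own illustration).
x1, y1 = rng.uniform(-1, 1, n), rng.uniform(-1, 1, m)
x2, y2 = rng.uniform(-1, 1, n), rng.uniform(-1, 1, m)
A = 0.5 * np.outer(x1, y1) + 0.3 * np.outer(x2, y2) \
    + rng.uniform(-0.05, 0.05, (n, m))

# Keep each row index w.p. k/n and each column index w.p. k/m.
S = np.flatnonzero(rng.random(n) < k / n)
T = np.flatnonzero(rng.random(m) < k / m)
sub = A[np.ix_(S, T)]

# Rescale the sampled singular value back to the original dimensions.
t = 1
z = np.sqrt((n * m) / (len(S) * len(T))) * np.linalg.svd(sub, compute_uv=False)[t - 1]
sigma_t = np.linalg.svd(A, compute_uv=False)[t - 1]
```

Here the sampled estimate `z` lands within a few percent of `sigma_t`, consistent with an additive error that scales like $\sqrt{nm}$ rather than with the exact spectrum.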

Finally, we present numerical experiments that confirm the empirical accuracy and runtime of our singular value algorithm (see Section 5.2.3).

### 1.2 Related work

In the machine learning context, Clarkson et al. [CHW12] considered several machine learning optimization problems and gave sublinear-time approximation algorithms for them. In particular, they considered approximate minimization of a quadratic function over the unit simplex $\Delta_n$. Namely, given a positive semidefinite matrix $A$ and $\epsilon>0$, they showed that it is possible to obtain an approximate solution to $\min_{v\in\Delta_n}\langle v,Av\rangle$ (up to an additive error of $\epsilon$) in time sublinear in the input size $O(n^2)$. In contrast, our algorithms run in polylogarithmic time and are much more efficient. Hayashi and Yoshida [HY17] proposed a constant-time approximation algorithm for Tucker decomposition of tensors, which can be seen as minimizing low-degree polynomials.

In addition to the work of Hayashi and Yoshida [HY16] mentioned above, another relevant line of work is constant-time approximation algorithms for the max cut problem on dense graphs [FK96, MS08]. Let $L_G$ be the Laplacian matrix of a graph $G$ on $n$ vertices. Then, the max cut problem can be seen as maximizing $\frac14\langle v,L_Gv\rangle$ subject to $v\in\{-1,1\}^n$, and these methods approximate the optimal value to within an additive error of $\epsilon n^2$. Our method for approximating the largest singular values can be seen as an extension of these methods to a continuous setting.
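The max cut reformulation can be checked directly on a small graph: for $v\in\{-1,1\}^n$ we have $\langle v,L_Gv\rangle=\sum_{(i,j)\in E}(v_i-v_j)^2=4\cdot(\#\text{cut edges})$. A brute-force sketch (the example graph is ours):

```python
import itertools
import numpy as np

# A small graph: a 4-cycle plus the chord (0, 2).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
L_G = np.zeros((n, n))
for i, j in edges:
    L_G[i, i] += 1; L_G[j, j] += 1      # degrees on the diagonal
    L_G[i, j] -= 1; L_G[j, i] -= 1      # minus the adjacency matrix

# Max cut as quadratic maximization over the hypercube {-1, 1}^n.
best = max(np.array(v) @ L_G @ np.array(v) / 4
           for v in itertools.product([-1, 1], repeat=n))
# best == 4.0: the bipartition {0, 2} vs {1, 3} cuts all four cycle edges.
```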

### 1.3 Techniques

The main ingredient in our proof is a novel spectral decomposition theorem for matrices, which may be of independent interest. The theorem states that we can decompose a matrix $A$ into a structured matrix $A^{\mathrm{str}}$ and a pseudorandom matrix $A^{\mathrm{psd}}$. Here, $A^{\mathrm{str}}$ is structured in the sense that it is a block matrix with a polylogarithmic number of blocks such that the entries in each block are equal. Also, $A^{\mathrm{psd}}$ is pseudorandom in the sense that it has a small spectral norm. Formally, we prove the following. For a matrix $A$, we define its max norm as $\|A\|_{\max}=\max_{i,j}|A_{ij}|$.

###### Theorem 1.4.

For any matrix $A\in\mathbb{R}^{n\times m}$ and $\gamma>0$, there exists a decomposition $A=A^{\mathrm{str}}+A^{\mathrm{psd}}$ with the following properties, for $N=\sqrt{nm}$ and $L=\|A\|_{\max}$:

1. $A^{\mathrm{str}}$ is structured in the sense that it is a block matrix with $(1/\gamma)^{O(1/\gamma^2)}$ blocks, such that the entries in each block are equal.

2. $\|A^{\mathrm{psd}}\|_2\le O(\gamma NL)$.

3. $\|A^{\mathrm{str}}\|_{\max}\le O(L/\gamma^{11})$.

Our decomposition theorem is a strengthening of the matrix decomposition result of Frieze and Kannan [FK99, FK96]. In particular, they showed that any matrix $A$ can be decomposed as $A=B^{(1)}+\cdots+B^{(s)}+W$ for $s=O(1/\gamma^2)$, where the matrices $B^{(i)}$ are block matrices and $\|W\|_C\le\gamma nmL$. Here, $\|\cdot\|_C$ is the cut norm, which is defined as

$$\|W\|_C=\max_{S\subseteq\{1,\dots,n\}}\ \max_{T\subseteq\{1,\dots,m\}}\left|\sum_{i\in S}\sum_{j\in T}W_{ij}\right|.$$

By using a result of Nikiforov [Nik09] relating the spectral norm to the cut norm, together with the bound $\|W\|_C\le\gamma nmL$ implied by the Frieze–Kannan result, one only gets a bound of the form $\|W\|_2=O(\sqrt{\gamma}\,NL)$, which is too loose, and thus insufficient for our applications.

Given our decomposition theorem, we can conclude the following. When approximating (1) and (2), we can disregard the pseudorandom part $A^{\mathrm{psd}}$. This will not affect our approximation by much, since $A^{\mathrm{psd}}$ has a small spectral norm. In addition, as $A^{\mathrm{str}}$ consists of a polylogarithmic number of blocks such that the entries in each block are equal, we can hit all the blocks by sampling a polylogarithmic number of indices. Hence, we can expect the sampled problem to be a good approximation to the original one. To formally define the distance between the original and the sampled problems, and to show that it is small, we exploit graph limit theory, initiated by Lovász and Szegedy [LS06] (see the book [Lov12]).
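The "hit all the blocks" step is essentially a coupon-collector argument: if the index set is split into $B$ parts of comparable size, then $O(B\log B)$ uniform samples touch every part with high probability. A tiny standalone simulation (the equal-size partition and the constant 10 are our own choices for the demo):

```python
import math
import random

random.seed(0)
n, B = 100000, 16                       # n indices split into B equal blocks
k = int(10 * B * math.log(B))           # ~ B log B samples (coupon collector)
sample = random.sample(range(n), k)
blocks_hit = {i * B // n for i in sample}   # which of the B blocks were touched
```

With these parameters the probability of missing some block is about $B(1-1/B)^k$, which is astronomically small.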

## 2 Preliminaries

For an integer $n$, we let $[n]=\{1,\dots,n\}$. Given a set of indices $S\subseteq[n]$ and a vector $v\in\mathbb{R}^n$, we let $v|_S$ be the restriction of $v$ to $S$; that is, $(v|_S)_i=v_i$ for every $i\in S$. Similarly, for a matrix $A\in\mathbb{R}^{n\times m}$ and sets $S\subseteq[n]$ and $T\subseteq[m]$, we denote the restriction of $A$ to $S\times T$ by $A|_{S\times T}$; that is, $(A|_{S\times T})_{ij}=A_{ij}$ for every $i\in S$ and $j\in T$. When $S=T$, we often use $A|_S$ as a shorthand for $A|_{S\times S}$. We use the notation $\widetilde O(\cdot)$ as a shorthand for $O(\cdot)$ with polylogarithmic factors suppressed.

Given a matrix $A\in\mathbb{R}^{n\times m}$, we define the Frobenius norm of $A$ as $\|A\|_F=\sqrt{\sum_{i,j}A_{ij}^2}$ and the max norm of $A$ as $\|A\|_{\max}=\max_{i,j}|A_{ij}|$. For a matrix $A$, we let $\sigma_t(A)$ denote the $t$-th largest singular value of $A$. It is well known that the largest singular value can be evaluated as

$$\sigma_1(A)=\max_{v\in\mathbb{R}^m:\|v\|_2\le1}\|Av\|_2.$$

In addition, we state the following fact regarding the singular values.

###### Fact 2.1.

Let $A\in\mathbb{R}^{n\times m}$ with $\|A\|_{\max}\le L$, and consider the singular values of $A$: $\sigma_1(A)\ge\sigma_2(A)\ge\cdots$. Then, for every $t$, $\sigma_t(A)\le L\sqrt{nm/t}$.
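Fact 2.1 holds because $t\,\sigma_t(A)^2\le\sum_{\ell\le t}\sigma_\ell(A)^2\le\|A\|_F^2\le nmL^2$. A quick numerical check (the random instance is our own):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 60, 40
A = rng.uniform(-1, 1, (n, m))
L = np.abs(A).max()
sigma = np.linalg.svd(A, compute_uv=False)   # singular values, descending

# Fact 2.1: sigma_t(A) <= L * sqrt(n*m / t) for every t.
ok = all(sigma[t - 1] <= L * np.sqrt(n * m / t) + 1e-9
         for t in range(1, len(sigma) + 1))
```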

## 3 Spectral Decomposition Theorem

In this section we will prove the following decomposition theorem.

###### Theorem 3.1 (Spectral decomposition, restatement of Theorem 1.4).

For any matrix $A\in\mathbb{R}^{n\times m}$ and $\gamma>0$, there exists a decomposition $A=A^{\mathrm{str}}+A^{\mathrm{psd}}$ with the following properties, for $N=\sqrt{nm}$ and $L=\|A\|_{\max}$:

1. $A^{\mathrm{str}}$ is structured in the sense that it is a block matrix with $(1/\gamma)^{O(1/\gamma^2)}$ blocks, such that the entries in each block are equal.

2. $\|A^{\mathrm{psd}}\|_2\le O(\gamma NL)$.

3. $\|A^{\mathrm{str}}\|_{\max}\le O(L/\gamma^{11})$.

The above theorem will serve as a central tool in the analysis of our algorithms. The fact that $A^{\mathrm{str}}$ is a block matrix with few blocks, such that the entries in each block are equal, implies that by using a polylogarithmic number of samples, we can query (with high probability) an entry from each of the blocks. In addition, the fact that $A^{\mathrm{psd}}$ has a small spectral norm allows us to disregard it, while paying only a small cost in the error of our approximation.

In order to prove the theorem, we introduce the following definition, two lemmas and a claim.

###### Definition 3.2.

We say that a partition $\mathcal{P}'$ is a refinement of a partition $\mathcal{P}$ if $\mathcal{P}'$ is obtained from $\mathcal{P}$ by splitting some of its sets into one or more parts.

###### Lemma 3.3.

Given a matrix $A\in\mathbb{R}^{n\times m}$ and $\gamma>0$, there exists a block matrix $A^{\mathrm{str}}$ with $(1/\gamma)^{O(1/\gamma^2)}$ blocks such that the entries in each block are equal and $\|(A-A^{\mathrm{str}})x\|_2^2\le40\gamma^2N^2L^2\|x\|_2^2$ for every $x\in\mathbb{R}^m$, where $N=\sqrt{nm}$ and $L=\|A\|_{\max}$.

In order to prove Lemma 3.3, we will need to prove the following.

###### Lemma 3.4.

Given a matrix $A\in\mathbb{R}^{n\times m}$ with $\|A\|_{\max}\le L$ and $\gamma>0$, let $A'=\sum_{\ell:\sigma_\ell\ge\gamma NL}\sigma_\ell u^\ell(v^\ell)^\top$, where $u^\ell$ and $v^\ell$ are the left and right singular vectors corresponding to $\sigma_\ell$. Then, $\|A'\|_{\max}\le L/\gamma^3$.

Proof:   Assume that $A$ has singular values $\sigma_1\ge\cdots\ge\sigma_{\min(n,m)}$ such that $\sigma_s\ge\gamma NL>\sigma_{s+1}$. For any $\ell\in[s]$, let $M^\ell=\sigma_\ell u^\ell(v^\ell)^\top$, and let $A'$ denote $M^1+\cdots+M^s$. Then, we can write $A$ as

$$A=\underbrace{\sigma_1u^1(v^1)^\top+\cdots+\sigma_su^s(v^s)^\top}_{A'}+\sum_{\ell:\sigma_\ell<\gamma NL}\sigma_\ell u^\ell(v^\ell)^\top=M^1+\cdots+M^s+B.$$

Consider any $i\in[n]$, $j\in[m]$, and let $\beta^\ell_{ij}=|M^\ell_{ij}|=\sigma_\ell|u^\ell_i||v^\ell_j|$.

Let $c^A_j$ denote the $j$-th column of the matrix $A$. So, $\|c^A_j\|_2\le\sqrt nL$. For any $\ell\in[s]$, $u^\ell$ is perpendicular to $u^{\ell'}$ for all $\ell'$ such that $\ell'\ne\ell$, and thus,

$$\|c^A_j\|_2\ge\|c^{M^\ell}_j\|_2=\|\sigma_\ell u^\ell v^\ell_j\|_2=\sigma_\ell|v^\ell_j|.$$

Let $r^A_i$ denote the $i$-th row of the matrix $A$. Then similarly, we have $\|r^A_i\|_2\ge\sigma_\ell|u^\ell_i|$, and it follows that $|u^\ell_i|\le\sqrt mL/\sigma_\ell$ and $|v^\ell_j|\le\sqrt nL/\sigma_\ell$.

On the other hand, since $\sigma_\ell\ge\gamma NL$, we have

$$\beta^\ell_{ij}\le\frac{\sqrt{nm}L^2}{\sigma_\ell}\le\frac{\sqrt{nm}L^2}{\gamma NL}=\frac{L}{\gamma}.$$

By the fact that $s\le1/\gamma^2$ (which holds because $\sum_\ell\sigma_\ell^2\le\|A\|_F^2\le N^2L^2$), we get that $|A'_{ij}|\le\sum_{\ell\in[s]}\beta^\ell_{ij}\le L/\gamma^3$, which concludes the proof. $\square$

With this lemma at hand, we are ready to prove Lemma 3.3.

Proof of Lemma 3.3:   Recall that $A$ can be written as

$$A=\sum_\ell\sigma_\ell u^\ell(v^\ell)^\top,$$

where $\sigma_1\ge\sigma_2\ge\cdots$ are the singular values of $A$, and $u^\ell$ and $v^\ell$ are the corresponding left and right singular vectors. If we let $A'$ be such that

$$A'=\sum_{\ell:\sigma_\ell\ge\gamma NL}\sigma_\ell u^\ell(v^\ell)^\top,$$

then we have that $\|(A-A')x\|_2\le\gamma NL\|x\|_2$ for any $x\in\mathbb{R}^m$.

Next we show the existence of $A^{\mathrm{str}}$, which is a block matrix (with $(1/\gamma)^{O(1/\gamma^2)}$ blocks) such that $A^{\mathrm{str}}$ has the same value on every block. We construct $A^{\mathrm{str}}$ as follows.

Let $J=1/\gamma^2$, let $\epsilon>0$ be a parameter to be determined later, and set $\delta_n=\epsilon/\sqrt{Jn}$ and $T=J/\epsilon^{3/2}$ (assumed to be an integer for simplicity). Partition the interval $[0,1]$ into the buckets

$$B^n_t\stackrel{\mathrm{def}}{=}\left[(t-1)\delta_n,\,t\delta_n\right)\ \ \forall t\in[T]\qquad\text{and}\qquad B^n_{\mathrm{Large}}\stackrel{\mathrm{def}}{=}\left[\sqrt{J/(\epsilon n)},\,1\right].$$

For every $\ell$ such that $\sigma_\ell\ge\gamma NL$, we define $\hat u^\ell$ as follows:

$$\hat u^\ell_i=\begin{cases}0,&\text{if }|u^\ell_i|\in B^n_{\mathrm{Large}},\\ \operatorname{sign}(u^\ell_i)\,\delta_n(t-1),&\text{if }|u^\ell_i|\in B^n_t\text{ for some }t\in[T].\end{cases}$$

We then define a partition $\mathcal{P}^\ell$ of the indices in $[n]$ so that $i$ and $i'$ belong to the same part if and only if $\hat u^\ell_i=\hat u^\ell_{i'}$. We eliminate empty sets from $\mathcal{P}^\ell$ if they exist. Note that by the definition of the buckets we have $|\mathcal{P}^\ell|\le2T+1$. Next, let $\mathcal{P}_R$ be a partition of $[n]$ that refines all the partitions $\mathcal{P}^\ell$ for which $\ell$ is such that $\sigma_\ell\ge\gamma NL$.

Similarly, define $\hat v^\ell$ from $v^\ell$ by setting $\delta_m=\epsilon/\sqrt{Jm}$ and defining the buckets $B^m_t$ and $B^m_{\mathrm{Large}}$ analogously for each $t\in[T]$, and let $\mathcal{Q}^\ell$ be the induced partition of $[m]$. Let $\mathcal{P}_C$ be a partition of $[m]$ that refines all the partitions $\mathcal{Q}^\ell$ for which $\ell$ is such that $\sigma_\ell\ge\gamma NL$.

Define

$$A^{\mathrm{str}}\stackrel{\mathrm{def}}{=}\sum_{\ell:\sigma_\ell\ge\gamma NL}\sigma_\ell\hat u^\ell(\hat v^\ell)^\top\qquad\text{and}\qquad A^{\mathrm{psd}}\stackrel{\mathrm{def}}{=}A-A^{\mathrm{str}}.$$

By Fact 2.1 we have that $\sigma_\ell\le NL/\sqrt\ell$ for all $\ell$, so there are at most $1/\gamma^2$ indices $\ell$ with $\sigma_\ell\ge\gamma NL$. Moreover, since each $u^\ell$ is a unit vector, we can have at most $\epsilon n/J$ indices $i$ such that $|u^\ell_i|\in B^n_{\mathrm{Large}}$. Thus, the partition $\mathcal{P}_R$ satisfies $|\mathcal{P}_R|\le(2T+1)^{1/\gamma^2}$. Similarly, we have $|\mathcal{P}_C|\le(2T+1)^{1/\gamma^2}$. Therefore, the resulting matrix $A^{\mathrm{str}}$ is a block matrix with $(1/\gamma)^{O(1/\gamma^2)}$ many blocks, such that all the entries in each block are the same. We refer to $\mathcal{P}_R$ and $\mathcal{P}_C$ as the row partition of $A^{\mathrm{str}}$ and the column partition of $A^{\mathrm{str}}$, respectively.

Next, we have that

$$\|A^{\mathrm{psd}}x\|_2^2=\|(A-A^{\mathrm{str}})x\|_2^2\le2\|(A-A')x\|_2^2+2\|(A'-A^{\mathrm{str}})x\|_2^2\le2\gamma^2N^2L^2\|x\|_2^2+2\|(A'-A^{\mathrm{str}})x\|_2^2.$$

Consider the “error” term $\|(A'-A^{\mathrm{str}})x\|_2^2$. We have

$$\begin{aligned}\|(A'-A^{\mathrm{str}})x\|_2^2\le\|x\|_2^2\sum_{(i,j)\in[n]\times[m]}(A'_{ij}-A^{\mathrm{str}}_{ij})^2=\|x\|_2^2\Bigg(&\sum_{(i,j)\in\overline\Sigma_R\times\overline\Sigma_C}(A'_{ij}-A^{\mathrm{str}}_{ij})^2\\&+\sum_{(i,j)\in(\Sigma_R\times\overline\Sigma_C)\cup(\overline\Sigma_R\times\Sigma_C)}(A'_{ij}-A^{\mathrm{str}}_{ij})^2+\sum_{(i,j)\in\Sigma_R\times\Sigma_C}(A'_{ij}-A^{\mathrm{str}}_{ij})^2\Bigg),\end{aligned}$$

where $\Sigma_R=\{i\in[n]:|u^\ell_i|\in B^n_{\mathrm{Large}}\text{ for some }\ell\text{ with }\sigma_\ell\ge\gamma NL\}$ and $\Sigma_C=\{j\in[m]:|v^\ell_j|\in B^m_{\mathrm{Large}}\text{ for some }\ell\text{ with }\sigma_\ell\ge\gamma NL\}$, and $\overline\Sigma_R,\overline\Sigma_C$ denote their complements.

We analyze each of these terms separately. First, note that

$$(A'_{ij}-A^{\mathrm{str}}_{ij})^2=\Bigg(\sum_{\ell:\sigma_\ell\ge\gamma NL}\sigma_\ell(u^\ell_iv^\ell_j-\hat u^\ell_i\hat v^\ell_j)\Bigg)^2.$$

So,

$$\begin{aligned}\sum_{(i,j)\in\overline\Sigma_R\times\overline\Sigma_C}(A'_{ij}-A^{\mathrm{str}}_{ij})^2&=\sum_{(i,j)\in\overline\Sigma_R\times\overline\Sigma_C}\Bigg(\sum_{\ell:\sigma_\ell\ge\gamma NL}\sigma_\ell(u^\ell_iv^\ell_j-\hat u^\ell_i\hat v^\ell_j)\Bigg)^2\\&\overset{(*)}{\le}\sum_{(i,j)\in\overline\Sigma_R\times\overline\Sigma_C}\Bigg(\frac{2\sqrt\epsilon}{N}\sum_{\ell:\sigma_\ell\ge\gamma NL}\sigma_\ell\Bigg)^2\overset{(**)}{\le}\sum_{(i,j)\in\overline\Sigma_R\times\overline\Sigma_C}\left(\frac{2\sqrt\epsilon}{N}\cdot\frac{2NL}{\gamma}\right)^2\le\frac{16\epsilon N^2L^2}{\gamma^2}.\end{aligned}$$

Here, $(*)$ follows from the fact that for two indices $i\in\overline\Sigma_R$ and $j\in\overline\Sigma_C$ we have $|u^\ell_iv^\ell_j-\hat u^\ell_i\hat v^\ell_j|\le2\sqrt\epsilon/N$. $(**)$ follows from the fact that there can be at most $1/\gamma^2$ indices $\ell$ such that $\sigma_\ell\ge\gamma NL$, and therefore

$$\sum_{\ell:\sigma_\ell\ge\gamma NL}\sigma_\ell\le\sum_{\ell=1}^{1/\gamma^2}\sigma_\ell\le\sum_{\ell=1}^{1/\gamma^2}\frac{NL}{\sqrt\ell}\le\frac{2NL}{\gamma}.\tag{3}$$

Next, we have that

$$\begin{aligned}\sum_{(i,j)\in(\Sigma_R\times\overline\Sigma_C)\cup(\overline\Sigma_R\times\Sigma_C)}(A'_{ij}-A^{\mathrm{str}}_{ij})^2&=\sum_{(i,j)\in(\Sigma_R\times\overline\Sigma_C)\cup(\overline\Sigma_R\times\Sigma_C)}\Bigg(\sum_{\ell:\sigma_\ell\ge\gamma NL}\sigma_\ell(u^\ell_iv^\ell_j-\hat u^\ell_i\hat v^\ell_j)\Bigg)^2\\&\overset{(***)}{=}\sum_{(i,j)\in(\Sigma_R\times\overline\Sigma_C)\cup(\overline\Sigma_R\times\Sigma_C)}(A'_{ij})^2\le\sum_{(i,j)\in(\Sigma_R\times\overline\Sigma_C)\cup(\overline\Sigma_R\times\Sigma_C)}\left(\frac{L}{\gamma^3}\right)^2\le\frac{2\epsilon N^2L^2}{\gamma^6}.\end{aligned}$$

Here, $(***)$ follows from the fact that when one of the indices is in $\Sigma_R$ or $\Sigma_C$, we set the corresponding entry in the rounded vector to $0$. In addition, the last inequality follows from $|\Sigma_R|\le\epsilon n$ and $|\Sigma_C|\le\epsilon m$, together with the fact that when we remove the lower singular part of the matrix, we can only increase the max norm by a factor of at most $1/\gamma^3$ (see Lemma 3.4). Finally,

$$\sum_{(i,j)\in\Sigma_R\times\Sigma_C}(A'_{ij}-A^{\mathrm{str}}_{ij})^2=\sum_{(i,j)\in\Sigma_R\times\Sigma_C}\Bigg(\sum_{\ell:\sigma_\ell\ge\gamma NL}\sigma_\ell(u^\ell_iv^\ell_j-\hat u^\ell_i\hat v^\ell_j)\Bigg)^2=\sum_{(i,j)\in\Sigma_R\times\Sigma_C}(A'_{ij})^2\le\frac{\epsilon^2N^2L^2}{\gamma^6}.$$

Combining all the three terms and setting $\epsilon=\gamma^8$ gives

$$\|(A'-A^{\mathrm{str}})x\|_2^2\le\|x\|_2^2\left(\frac{16\epsilon}{\gamma^2}+\frac{2\epsilon}{\gamma^6}+\frac{\epsilon^2}{\gamma^6}\right)N^2L^2\le19\gamma^2\|x\|_2^2N^2L^2.$$

Therefore, we get that

$$\|(A-A^{\mathrm{str}})x\|_2^2\le2\|(A-A')x\|_2^2+2\|(A'-A^{\mathrm{str}})x\|_2^2\le2\left(\gamma^2N^2L^2\|x\|_2^2+19\gamma^2\|x\|_2^2N^2L^2\right)\le40\gamma^2\|x\|_2^2N^2L^2,$$

and the lemma follows. $\square$
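The construction in this proof can be prototyped directly. The sketch below is our own reading of the proof (the parameter choices $J=1/\gamma^2$, $\epsilon=\gamma^8$, bucket width $\epsilon/\sqrt{J\,\dim}$, and the sign convention in the rounding are assumptions), and it checks the spectral-norm bound of Lemma 3.3 and the max-norm bound of Claim 3.5 on a planted instance.

```python
import numpy as np

def block_approx(A, gamma):
    """Round the significant singular directions as in the proof of Lemma 3.3.

    Parameter choices J = 1/gamma^2, eps = gamma^8, and bucket width
    eps / sqrt(J * dim) follow our reading of the proof.
    """
    n, m = A.shape
    L, N = np.abs(A).max(), np.sqrt(n * m)
    eps, J = gamma**8, 1 / gamma**2

    def round_vec(u, dim):
        delta = eps / np.sqrt(J * dim)
        hat = np.sign(u) * delta * np.floor(np.abs(u) / delta)  # bucket rounding
        hat[np.abs(u) >= np.sqrt(J / (eps * dim))] = 0.0        # "large" entries -> 0
        return hat

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    A_str = np.zeros((n, m))
    for l in np.flatnonzero(s >= gamma * N * L):                # significant part only
        A_str += s[l] * np.outer(round_vec(U[:, l], n), round_vec(Vt[l], m))
    return A_str

rng = np.random.default_rng(0)
n, m, gamma = 80, 60, 0.5
A = 0.8 * np.outer(np.sign(rng.standard_normal(n)),
                   np.sign(rng.standard_normal(m))) \
    + rng.uniform(-0.1, 0.1, (n, m))       # planted block structure plus noise
L, N = np.abs(A).max(), np.sqrt(n * m)

A_str = block_approx(A, gamma)
spec = np.linalg.norm(A - A_str, 2)        # spectral norm of the pseudorandom part
```

On this instance, `A_str` captures the planted rank-one block and the remainder has small spectral norm, well within the $\sqrt{40}\,\gamma NL$ bound.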

We are left with bounding the max norm of $A^{\mathrm{str}}$.

###### Claim 3.5.

Given a matrix $A\in\mathbb{R}^{n\times m}$ and $\gamma>0$, let $A^{\mathrm{str}}$ be the block approximation matrix defined above. Then, $\|A^{\mathrm{str}}\|_{\max}\le2L/\gamma^{11}$.

Proof:  By the definition of $A^{\mathrm{str}}$ we have that $A^{\mathrm{str}}_{ij}=\sum_{\ell:\sigma_\ell\ge\gamma NL}\sigma_\ell\hat u^\ell_i\hat v^\ell_j$. By the definition of the rounding process, we have that for every $i\in[n]$, $|\hat u^\ell_i|\le\sqrt{J/(\epsilon n)}=1/(\gamma^5\sqrt n)$ (recall that $J=1/\gamma^2$ and $\epsilon=\gamma^8$). Similarly, for every $j\in[m]$ we have that $|\hat v^\ell_j|\le1/(\gamma^5\sqrt m)$. Therefore,

$$|A^{\mathrm{str}}_{ij}|=\Bigg|\sum_{\ell:\sigma_\ell\ge\gamma NL}\sigma_\ell\hat u^\ell_i\hat v^\ell_j\Bigg|\le\frac{1}{\gamma^{10}N}\sum_{\ell:\sigma_\ell\ge\gamma NL}\sigma_\ell\le\frac{2L}{\gamma^{11}},$$

where the last inequality uses (3). $\square$

Proof of Theorem 3.1:   The proof follows directly from Lemma 3.3 and Claim 3.5.

## 4 Dikernels and Sampling Lemmas

In this section we will formalize the idea that