Approximating Sparse Quadratic Programs

Given a matrix A ∈ ℝ^n× n, we consider the problem of maximizing x^TAx subject to the constraint x ∈ {-1,1}^n. This problem, called MaxQP by Charikar and Wirth [FOCS'04], generalizes MaxCut and has natural applications in data clustering and in the study of disordered magnetic phases of matter. Charikar and Wirth showed that the problem admits an Ω(1/log n) approximation via semidefinite programming, and Alon, Makarychev, Makarychev, and Naor [STOC'05] showed that the same approach yields an Ω(1) approximation when A corresponds to a graph of bounded chromatic number. Both these results rely on solving the semidefinite relaxation of MaxQP, whose currently best running time is Õ(n^1.5·min{N,n^1.5}), where N is the number of nonzero entries in A and Õ ignores polylogarithmic factors. In what follows, we abandon the semidefinite approach and design purely combinatorial approximation algorithms for special cases of MaxQP where A is sparse (i.e., has O(n) nonzero entries). Our algorithms are superior to the semidefinite approach in terms of running time, yet are still competitive in terms of their approximation guarantees. More specifically, we show that:

- UnitMaxQP, where A ∈ {-1,0,1}^n× n, admits a (1/3d)-approximation in O(n^1.5) time, when the corresponding graph has no isolated vertices and at most dn edges.
- MaxQP admits an Ω(1/log a_max)-approximation in O(n^1.5 log a_max) time, where a_max is the maximum absolute value in A, when the corresponding graph is d-degenerate.
- MaxQP admits a (1-ε)-approximation in O(n) time when the corresponding graph has bounded local treewidth.
- UnitMaxQP admits a (1-ε)-approximation in O(n^2) time when the corresponding graph is H-minor-free.

1 Introduction

In this paper we are interested in the following (integer) quadratic problem, which was coined MaxQP by Charikar and Wirth [14]. Given a symmetric matrix A ∈ ℝ^n× n with zero-valued diagonal entries, i.e. a_ii = 0 for all i ∈ {1,…,n}, we want to maximize

Σ_{i,j} a_ij · x_i · x_j  over all x ∈ {-1,1}^n.   (1)

Observe that the requirement that all diagonal values of A are zero is there to avoid the term Σ_i a_ii x_i^2 = Σ_i a_ii, which is constant in (1). Furthermore, a non-symmetric matrix A can be replaced with an equivalent symmetric matrix A′ by setting a′_ij = a′_ji = (a_ij + a_ji)/2, and so the requirement that A is symmetric is just for convenience's sake.
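To make the formulation concrete, the following short Python sketch (our own illustration, not code from the paper; the function names are hypothetical) symmetrizes an arbitrary matrix as described above and evaluates objective (1) for a given assignment.

```python
import numpy as np

def symmetrize(A):
    """Replace A with the equivalent symmetric matrix A' with zero diagonal."""
    A = np.asarray(A, dtype=float)
    S = (A + A.T) / 2.0          # a'_ij = a'_ji = (a_ij + a_ji) / 2
    np.fill_diagonal(S, 0.0)     # diagonal terms are constant in (1), drop them
    return S

def maxqp_value(A, x):
    """Objective (1): sum over i,j of a_ij * x_i * x_j for x in {-1,+1}^n."""
    x = np.asarray(x, dtype=float)
    return float(x @ A @ x)

# tiny example: a single negative edge {0,1} prefers opposite signs
A = symmetrize([[0, -1], [-1, 0]])
print(maxqp_value(A, [+1, -1]))  # 2.0
print(maxqp_value(A, [+1, +1]))  # -2.0
```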

Our interest in MaxQP lies in the fact that it is a generic example of integer quadratic programming which naturally appears in different contexts. Below we review three examples:

  • Graph cuts: Readers familiar with the standard quadratic program formulation of MaxCut [23] will notice the similarity to (1). Indeed, given a graph G with vertex set {1,…,n} and edge weights w_ij ≥ 0, setting a_ij = a_ji = -w_ij for every edge {i,j} yields a MaxQP instance whose optimum value determines the weight of a maximum cut of G (see the worked calculation after this list). Thus, MaxQP with only negative entries can be used to solve MaxCut exactly, implying that even this special case is NP-hard. Furthermore, this special case translates to the closely related MaxCut Gain problem [14, 29].

  • Correlation clustering: In correlation clustering [6, 13, 15, 35], we are provided with pairwise judgments of the similarity of data items. In the simplest version of the problem there are three possible inputs for each pair: similar (i.e. positive), dissimilar (i.e. negative), or no judgment. In a given clustering of the items, a pair of items is said to be in agreement (disagreement) if it is a positive (negative) pair within one cluster or a negative (positive) pair across two distinct clusters. In MaxCorr, the goal is to maximize the correlation of the clustering; that is, the absolute difference between the number of pairs in agreement and the number of pairs in disagreement, across all clusters. Note that when only two clusters are allowed, this directly corresponds to Unit MaxQP, the variant of MaxQP where a_ij ∈ {-1,0,1} for each entry of A.

  • Ising spin glass model: Spin glass models are used in physics to study disordered magnetic phases of matter. Such systems are notoriously hard to solve, and various techniques to approximate the free energy were developed. In the Ising spin-glass model [9, 36], each node in the graph represents a single spin which can either point up (+1) or down (-1), and neighboring spins may have either positive or negative coupling energy between them. The energy of this system (when there is no external field) is given by its Hamiltonian H(x) = -Σ_{ij} J_ij x_i x_j, where x_i ∈ {-1,+1} is the spin at site i and J_ij is the coupling between neighboring sites i and j. A famous problem in the physics of spin glasses is the characterization of the ground state — the state that minimizes the energy of the system. With a_ij = J_ij, this problem is precisely MaxQP.
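To spell out the MaxCut correspondence from the first bullet: with a_ij = a_ji = -w_ij on the edges of G and W denoting the total edge weight, a short calculation gives x^TAx = 4·cut(x) - 2W for every x, so maximizing (1) is equivalent to maximizing the cut. The sketch below (our own illustration, not code from the paper) verifies this identity by brute force on a small random instance.

```python
import itertools
import random

def brute_force(n, value):
    """Maximize `value` over all assignments x in {-1,+1}^n."""
    best = float("-inf")
    for signs in itertools.product((-1, 1), repeat=n):
        best = max(best, value(signs))
    return best

n = 6
edges = {(i, j): random.randint(1, 5)
         for i in range(n) for j in range(i + 1, n) if random.random() < 0.5}
W = sum(edges.values())

def cut_value(x):       # total weight of edges crossing the cut induced by x
    return sum(w for (i, j), w in edges.items() if x[i] != x[j])

def qp_value(x):        # x^T A x with a_ij = a_ji = -w_ij
    return sum(-2 * w * x[i] * x[j] for (i, j), w in edges.items())

max_cut = brute_force(n, cut_value)
max_qp = brute_force(n, qp_value)
assert max_qp == 4 * max_cut - 2 * W   # the claimed correspondence
print(max_cut, max_qp)
```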

It is convenient to view MaxQP in graph-theoretic terms. Let G = G(A) be the graph associated with A, where V(G) = {1,…,n} and E(G) = {ij : a_ij ≠ 0}. The first algorithmic results for MaxQP are due to Bieche et al. [12] and Barahona et al. [11], who studied the problem in the context of the Ising spin glass model. They showed that when G is restricted to be planar, the problem is polynomial-time solvable via a reduction to maximum weight matching. At the same time, Barahona proved that the problem is NP-hard for three-dimensional grids [9] and for apex graphs (graphs with a vertex whose removal leaves the graph planar) [10].

1.1 Approximation Algorithms

As MaxQP is NP-hard even for restricted instances, our focus is naturally on polynomial-time approximation algorithms. We note that the fact that the entries of A are allowed to be both positive and negative makes MaxQP quite unique in this context. First of all, there is an immediate equivalence between the maximization version of MaxQP and its minimization version, as maximizing x^TAx is the same as minimizing x^T(-A)x. Furthermore, solutions might have negative values. This poses an extra challenge, since a solution with a non-positive value is not an α-approximate solution for any factor α in case the optimum is positive (which it always is whenever A has a nonzero entry; see [14] and Lemma 6). In particular, a uniformly at random chosen solution has expected value zero, and unlike for MaxCut, such a solution is unlikely to be useful as any kind of approximation.

Alon and Naor [2] were the first to show that these difficulties can be overcome by carefully rounding a semidefinite relaxation of MaxQP. In particular, they studied the problem when G is restricted to be bipartite, and showed that using a rounding technique that relies on the famous Grothendieck inequality, one can obtain a constant approximation factor guarantee for the bipartite case. Later, together with Makarychev and Makarychev [1], they showed that the integrality gap of the semidefinite relaxation is O(log χ(G)) and Ω(log ω(G)), where χ(G) and ω(G) are the chromatic and clique numbers of G respectively. In particular, this gap is constant for several interesting graph classes such as d-degenerate graphs and H-minor-free graphs, and it generalizes the previous result in [2] as χ(G) = 2 when G is bipartite.

Theorem 1 ([1, 2]).

MaxQP restricted to graphs of chromatic number χ can be approximated within a factor of Ω(1/log χ) in polynomial time.

Regarding the general version of the problem, where G can be an arbitrary graph, an O(log n) integrality gap for the semidefinite relaxation was first shown by Nesterov [33]. However, his proof was non-constructive. Charikar and Wirth [14] made his proof constructive, and provided a rounding procedure for the relaxation that guarantees Ω(1/log n)-approximate solutions regardless of the structure of G.

Theorem 2 ([14, 33]).

MaxQP can be approximated within a factor of Ω(1/log n) in polynomial time.

As for the time complexity of the algorithms in Theorems 1 and 2 above, Arora, Hazan, and Kale [4] provided improved running times for several semidefinite programs, including the relaxation of MaxQP. They showed that this relaxation can be solved (to within any constant factor) in Õ(n^1.5·min{N, n^1.5}) time, where N is the number of nonzero entries in A and Õ ignores polylogarithmic factors. Thus, for general matrices this running time is Õ(n^3), and for matrices with O(n) nonzero entries it is Õ(n^2.5).

There has also been work on approximation lower bounds for MaxQP. Alon and Naor [2] showed that MaxQP restricted to bipartite graphs cannot be approximated within some universal constant factor unless P=NP, while Charikar and Wirth [14] showed that, assuming P ≠ NP, the problem admits no constant-factor approximation beyond a certain threshold when G is an arbitrary graph. Both these results follow somewhat directly from the lower bound for MaxCut [27]. In contrast, Arora et al. [3] showed a much stronger lower bound by proving that there exists a constant c > 0 such that MaxQP cannot be approximated within a factor of 1/log^c n, albeit under the weaker assumption that NP ⊄ DTime(n^polylog(n)).

1.2 Our results

In this paper we focus on sparse graphs, i.e. graphs where the number of edges is O(n). This corresponds to matrices A having O(n) nonzero entries. Note that MaxQP remains APX-hard in this case as well (see Theorem 8 in Appendix B). Nevertheless, we show that one can abandon the semidefinite approach in favor of simpler “purely combinatorial” algorithms, while still maintaining comparable performance. In particular, our algorithms are faster than those obtained from the semidefinite approach, whose fastest known implementation requires Õ(n^2.5) time on sparse instances [4]. Furthermore, most of them are quite easy to implement.

Our first result concerns Unit MaxQP, the special case of MaxQP where a_ij ∈ {-1,0,1} for each entry of A (recall the MaxCorr problem above). Here we obtain a 1/(3d)-approximation algorithm, where dn bounds the number of edges, for general sparse graphs that do not have any particular structure.

Theorem 3.

Let d ≥ 1. Then Unit MaxQP restricted to graphs with no isolated vertices and at most dn edges can be approximated within a factor of 1/(3d) in O(n^1.5) time.

Note that there are several interesting graph classes included in the theorem above but excluded by Theorem 1. For instance, consider a graph consisting of a clique on O(√n) vertices together with a perfect matching on the remaining vertices. The result of Alon et al. [1] implies that the semidefinite relaxation has an integrality gap of Ω(log n) on such a graph, while the algorithm in Theorem 3 provides an Ω(1)-approximation.

Our next result extends Theorem 3 to the weighted case, but at a cost to the approximation factor guarantee. Furthermore, it applies to a less general graph class, namely the class of d-degenerate graphs. Recall that a graph is d-degenerate if each of its subgraphs has a vertex of degree at most d. Let a_max denote the maximum absolute value in A.

Theorem 4.

Let d ≥ 1. Then MaxQP restricted to d-degenerate graphs can be approximated within a factor of Ω(1/(d·log a_max)) in O(n^1.5·log a_max) time.

Note that Theorem 1 provides an Ω(1/log d)-approximation for d-degenerate graphs, yet the algorithm in Theorem 4 is faster by a factor of roughly n (ignoring polylogarithmic factors).

We next consider sparse graph classes with additional structure. We show that one can obtain a (1-ε)-approximation algorithm, for any fixed ε > 0, if the given graph has bounded local treewidth [20] (see the precise definition in Section 4.1). Well-known graph classes that have bounded local treewidth are planar and bounded genus graphs, as well as bounded degree graphs.

Theorem 5.

Let ε > 0 be fixed. Then MaxQP restricted to graphs of bounded local treewidth can be approximated within a factor of 1-ε in O(n) time.

This theorem is particularly relevant for the problem of finding the ground state of Ising spin-glass models on finite graphs, since typically these graphs are embeddable on a fixed surface and/or have bounded degree. As an example, the popular Edwards-Anderson model [19] is concerned with three-dimensional cubic lattices, and Theorem 5 above gives a (1-ε)-approximation in linear time for such graphs (see also [22, 26, 37] for other examples).

Recall that G is H-minor-free if one cannot obtain in G an isomorphic copy of H by a series of edge contractions, edge deletions, and vertex deletions. For H-minor-free graphs we present an algorithm with a (1-ε) factor guarantee, for any fixed ε > 0, but only for the unit weight case.

Theorem 6.

For any fixed ε > 0 and any graph H there is an O(n^2)-time (1-ε)-approximation algorithm for Unit MaxQP restricted to H-minor-free graphs.

Finally, we note that our results have direct consequences for the MaxCorr problem: Charikar and Wirth [14] proved that an α-approximation algorithm for MaxQP yields an Ω(α)-approximation algorithm for MaxCorr. Combining this with Theorems 3, 5 and 6 directly gives us the following:

Corollary 1.

MaxCorr can be approximated within a factor of

  • Ω(1/d) on graphs with no isolated vertices and at most dn edges, in O(n^1.5) time;

  • Ω(1) on graphs of bounded local treewidth, in O(n) time;

  • Ω(1) on H-minor-free graphs, in O(n^2) time.

2 Preliminaries

Throughout the paper we use G = G(A) to denote the graph associated with our input matrix A; that is, V(G) = {1,…,n} and E(G) = {ij : a_ij ≠ 0}. Thus, |V(G)| = n, and we let m = |E(G)|. We slightly abuse notation by allowing a solution x to denote either a vector in {-1,1}^n indexed by V(G) or a function x : V(G) → {-1,1}. For a solution x, we let val(x) = Σ_{ij ∈ E(G)} a_ij x_i x_j, where the sum ranges over the edges of G; note that the objective in (1) equals 2·val(x), and so maximizing val(x) is equivalent to maximizing (1). We let opt(A) denote the maximum of val(x) over all solutions x. We use ‖A‖_1 to denote the total absolute weight of the edges of G, i.e. ‖A‖_1 = Σ_{ij ∈ E(G)} |a_ij|. Note that opt(A) ≤ ‖A‖_1.

We use standard graph-theoretic terminology when dealing with the graph G, as in e.g. [17]. In particular, for a subset of vertices S ⊆ V(G), we let G[S] denote the subgraph of G induced by S; i.e., the subgraph with vertex set S and edge set {ij ∈ E(G) : i, j ∈ S}. We let G - S denote G[V(G) ∖ S], and for a subset of edges F ⊆ E(G) we let G[F] denote the graph (V(F), F) without isolated vertices. For a pair of disjoint subsets S, T ⊆ V(G), we let E(S, T) denote the set of edges with one endpoint in S and the other in T. Finally, we use N(v) to denote the neighborhood of a vertex v of G.

2.1 Useful observations

Note that for a uniformly at random chosen solution x, the value a_ij x_i x_j is zero in expectation for any edge ij ∈ E(G). This implies that opt(A) ≥ 0. Moreover, a solution x with val(x) ≥ 0 can be computed in linear time:

Lemma 1.

One can compute in O(n + m) time a solution x for which val(x) ≥ 0.

Proof.

For each vertex i, define the subset of edges E_i = {ij ∈ E(G) : j < i}. Consider an arbitrary initial solution x, and let c_i denote the contribution of the edges in E_i to val(x). We compute a solution by scanning the vertices from 1 to n. For a given vertex i, we check whether Σ_{ij ∈ E_i} a_ij x_j ≥ 0. If so, we set x_i = +1, and otherwise we set x_i = -1. Note that c_i must now be non-negative. As the value of c_j does not change for any j < i, when we finish our scan we have c_i ≥ 0 for each i. Thus, val(x) = Σ_i c_i ≥ 0. ∎
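A minimal Python sketch of this scan (our own illustration, with hypothetical names; the graph is assumed to be given as a dictionary of weighted edges):

```python
def nonnegative_solution(n, edges):
    """Greedy scan from the proof of Lemma 1.

    `edges` maps an unordered pair (i, j) to the weight a_ij.
    Returns x in {-1, +1}^n whose objective value val(x) is non-negative.
    """
    # incident[i] lists the already-decided neighbors j < i of vertex i
    incident = [[] for _ in range(n)]
    for (i, j), a in edges.items():
        lo, hi = min(i, j), max(i, j)
        incident[hi].append((lo, a))

    x = [1] * n
    for i in range(n):
        # contribution of the edges E_i = {ij : j < i} if we set x_i = +1
        c = sum(a * x[j] for j, a in incident[i])
        x[i] = 1 if c >= 0 else -1   # choose the sign making c_i >= 0
    return x

def value(x, edges):
    """Objective val(x), summing each edge once."""
    return sum(a * x[i] * x[j] for (i, j), a in edges.items())

edges = {(0, 1): -3, (1, 2): 2, (0, 2): -1}
x = nonnegative_solution(3, edges)
assert value(x, edges) >= 0
```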

Lemma 2.

Let S, T ⊆ V(G) be two disjoint subsets of vertices, and let y and z be two solutions for G[S] and G[T] of value α and β respectively. Then at least one of the solutions (y, z) and (y, -z) has value at least α + β for G[S ∪ T].

Proof.

Suppose (y, z) has value less than α + β. This means that the total contribution of the edges in E(S, T) is negative in this solution. Observe that in (y, -z) each edge of E(S, T) with negative contribution under (y, z) now has positive contribution, and vice versa. The lemma thus follows. ∎

Combining Lemma 1 and Lemma 2 above, we get an important property of opt(A), namely that it is monotone with respect to induced subgraphs.

Lemma 3.

Let H be an induced subgraph of G. Then given a solution y for H, one can compute in linear time a solution x for G with val(x) ≥ val(y).

Proof.

Let S be the vertices of G which are not present in H. According to Lemma 1 we can compute a solution z for G[S] with value at least zero in linear time. According to Lemma 2, either (y, z) or (y, -z) has value at least val(y). Thus, taking x to be the solution with higher value out of (y, z) and (y, -z) proves the lemma. ∎
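The extension step of Lemmas 2 and 3 is equally short in code. The sketch below (again our own illustration under the same assumptions) reuses nonnegative_solution and value from above and simply tries both signs for the newly assigned part:

```python
def extend_solution(n, edges, partial):
    """Lemma 3: extend a solution on an induced subgraph to all of G.

    `partial` maps some vertices to -1/+1; the remaining vertices S are
    assigned via Lemma 1, and Lemma 2 picks the better of the two signs.
    """
    S = [v for v in range(n) if v not in partial]
    index = {v: t for t, v in enumerate(S)}
    inner = {(index[i], index[j]): a for (i, j), a in edges.items()
             if i in index and j in index}
    z = nonnegative_solution(len(S), inner)

    candidates = []
    for sign in (+1, -1):
        x = dict(partial)
        x.update({v: sign * z[index[v]] for v in S})
        candidates.append([x[v] for v in range(n)])
    return max(candidates, key=lambda x: value(x, edges))
```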

3 A Lower Bound

In this section we present approximation algorithms for MaxQP using a lower bound we develop for the value of the optimum solution. In particular, we provide complete proofs for Theorem 3 and Theorem 4.

Beginning with the unit weight case, i.e. the case when a_ij ∈ {-1,0,1}, we obtain a lower bound analogous to the classical MaxCut bound of Edwards [18], although our approach follows the later proof of Erdős et al. [21]. This will directly imply Theorem 3. We then show how to extend our lower bound to general weights in case G is triangle-free, i.e. the case where G contains no three pairwise adjacent vertices. In the last subsection we show how to remove the triangle-freeness restriction in case G is d-degenerate, providing a proof for Theorem 4.

3.1 Unit weights

A set of vertices S ⊆ V(G) is a star in G if |S| ≥ 2 and G[S] is connected and has at most one vertex of degree greater than 1 (called the center of S). We say a star is uniform if the edges of G[S] are either all positive or all negative. A star packing of G is a family 𝒮 = {S_1,…,S_k} of pairwise disjoint subsets of vertices such that each S_i is a uniform star in G. We let V(𝒮) = S_1 ∪ ⋯ ∪ S_k, and E(𝒮) = E(G[S_1]) ∪ ⋯ ∪ E(G[S_k]). We refer to k as the size of 𝒮, and to the value |E(𝒮)| (the total number of edges in the stars of 𝒮), as the magnitude of 𝒮.

Star packings will be useful throughout the section for showing lower bounds on opt(A). The direct connection between these two concepts is given in the lemma below.

Lemma 4.

Given a star packing 𝒮 of magnitude ν, one can compute in linear time a solution x with val(x) ≥ ν.

Proof.

By Lemma 3 it suffices to compute a solution y for G[V(𝒮)] with val(y) ≥ ν. Let 𝒮 = {S_1,…,S_k}. We construct such a solution by induction on k. For k = 1, we assign the vertices of S_1 the same value in case all edges of G[S_1] are positive, and we assign the leaves and the center vertex of S_1 opposing values in case all edges of G[S_1] are negative. Thus, val(y) = |E(G[S_1])| = ν. Suppose then that k > 1, and let T = S_1 ∪ ⋯ ∪ S_{k-1}. By induction, we have a solution y′ for G[T] with val(y′) ≥ ν - |E(G[S_k])|. Let z be a solution for G[S_k] with val(z) = |E(G[S_k])|, as in the case of k = 1. Then, by Lemma 2, either (y′, z) or (y′, -z) has value at least ν, and we are done. ∎
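A sketch of this construction (our own code, assuming unit weights and vertex-disjoint uniform stars given as hypothetical (center, leaves, sign) triples); it reuses the helpers defined earlier:

```python
def solution_from_star_packing(n, edges, stars):
    """Lemma 4: turn a uniform star packing into a solution of value >= its magnitude.

    `stars` is a list of (center, leaves, sign) with sign = +1 for all-positive
    stars and -1 for all-negative stars; the stars are vertex-disjoint.
    """
    partial = {}
    for center, leaves, sign in stars:
        # internal assignment: every star edge contributes +1
        star = {center: +1}
        star.update({v: sign for v in leaves})
        if not partial:
            partial = star
            continue
        # Lemma 2: attach the new star with whichever sign does not lose value
        best = None
        for s in (+1, -1):
            cand = dict(partial)
            cand.update({v: s * star[v] for v in star})
            val = sum(a * cand[i] * cand[j] for (i, j), a in edges.items()
                      if i in cand and j in cand)
            if best is None or val > best[0]:
                best = (val, cand)
        partial = best[1]
    # Lemma 3: extend to the rest of the graph
    return extend_solution(n, edges, partial)
```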

We construct a particular star packing 𝒮 for G. We begin by letting 𝒮 consist of the edges of an arbitrary maximum matching M of G, each forming a uniform star on two vertices. Observe that since M is a maximum matching, the set V(G) ∖ V(𝒮) is independent in G, and there is no star packing in G of greater size (picking one edge from each star of a packing yields a matching with as many edges as the packing has stars). Both of these invariants will be maintained throughout our construction. We next greedily add edges to 𝒮 by exhaustively applying the following rule as long as possible:

Rule 1: If S_i ∪ {v} is a uniform star for some S_i ∈ 𝒮 and some v ∈ V(G) ∖ V(𝒮), then add v to S_i.

Once Rule 1 cannot be applied, every edge between a vertex of a positive star and V(G) ∖ V(𝒮) is negative, and vice versa. We then exhaustively apply Rule 2:

Rule 2: If there is a center v of a star S_i ∈ 𝒮 which has more than |S_i| - 1 neighbors in V(G) ∖ V(𝒮), then replace S_i with {v} ∪ (N(v) ∖ V(𝒮)) in 𝒮.

It is clear that 𝒮 remains a star packing after we finish applying both rules above. We next provide a lower bound on the magnitude of 𝒮. For each star S_i ∈ 𝒮, let F(S_i) denote the set of neighbors of S_i in V(G) ∖ V(𝒮); that is, F(S_i) = N(S_i) ∖ V(𝒮). Then we have:

Lemma 5.

|F(S_i)| ≤ |S_i| - 1 for each S_i ∈ 𝒮.

Proof.

Suppose first that |S_i| ≥ 3, and let v be the center of S_i. First observe that any edge between a leaf u of S_i and a vertex of V(G) ∖ V(𝒮) can be used to create a new star on two vertices (while S_i ∖ {u} remains a uniform star), contradicting the fact that 𝒮 is of maximum size; hence every vertex of F(S_i) is a neighbor of the center v. Assume that all edges of G[S_i] are positive (the negative case is symmetric). Then, since Rule 1 cannot be applied, all edges between v and V(G) ∖ V(𝒮) are negative. Furthermore, since Rule 2 cannot be applied, there are no more than |S_i| - 1 of these edges. Thus, |F(S_i)| ≤ |S_i| - 1 in this case.

Suppose then that S_i = {u, v} consists of a single edge uv, and that uv is positive (again, the case where uv is negative is symmetric). Since Rule 1 cannot be applied, all edges between u or v and V(G) ∖ V(𝒮) are negative. Moreover, since Rule 2 cannot be applied, neither u nor v can be adjacent to more than one vertex in V(G) ∖ V(𝒮). Finally, if u is adjacent to x ∈ V(G) ∖ V(𝒮) and v is adjacent to y ∈ V(G) ∖ V(𝒮) with x ≠ y, then we can replace S_i with the two stars {u, x} and {v, y}, contradicting the fact that 𝒮 is of maximum size. Thus, |F(S_i)| ≤ 1 = |S_i| - 1 in this case as well. ∎

Lemma 6.

Let 𝒮 be the star packing constructed above. Then the magnitude of 𝒮 is at least n/3.

Proof.

We present a mapping from V(G) to the edges of E(𝒮) which maps at most three vertices to a single edge, proving that n ≤ 3|E(𝒮)|. The lemma will then follow immediately from the fact that |E(𝒮)| is the magnitude of 𝒮.

For a vertex v belonging to some star S_i of 𝒮, we map v to an edge of E(G[S_i]) incident to v. Thus, exactly one edge of each star will have two vertices in its preimage, while the remaining edges have only one. After mapping all vertices in the stars of 𝒮, we proceed to map the remaining vertices in V(G) ∖ V(𝒮) as follows. We go through one star S_i at a time, and map the not-yet-mapped vertices in V(G) ∖ V(𝒮) that are connected to vertices in S_i. There are at most |S_i| - 1 such vertices according to Lemma 5, so we can map each such vertex to a unique edge in E(G[S_i]). This increases the size of the preimage of each edge in E(G[S_i]) to at most three. After going over all stars we have mapped all vertices of V(G), as G has no isolated vertices and V(G) ∖ V(𝒮) is independent, and so we obtain a mapping from V(G) to the edges of E(𝒮) with the promised property. ∎

Proof of Theorem 3.

Due to Lemmata 4 and 6, the star packing 𝒮 yields a solution x with val(x) ≥ n/3. Since opt(A) ≤ ‖A‖_1 = m ≤ dn, this solution is 1/(3d)-approximate. The running time for computing x is dominated by the computation of the maximum matching for the initial star packing, taking O(m√n) = O(n^1.5) time for m = O(n) [32]; exhaustive application of Rules 1 and 2 and the computation of the solution from the star packing both take linear time. ∎
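The initial star packing is just a maximum matching, so in practice it can be obtained with an off-the-shelf routine. The sketch below is our own illustration using networkx, whose general matching implementation does not necessarily attain the O(m√n) bound of Micali and Vazirani cited above.

```python
import networkx as nx

def initial_star_packing(edges):
    """Start the star packing of Section 3.1 from a maximum-cardinality matching."""
    G = nx.Graph(list(edges))          # unweighted copy, used only for the matching
    matching = nx.max_weight_matching(G, maxcardinality=True)
    # each matched edge is a uniform star on two vertices; record its sign
    def sign(u, v):
        return 1 if edges.get((u, v), edges.get((v, u))) > 0 else -1
    return [(frozenset((u, v)), sign(u, v)) for u, v in matching]
```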

3.2 Triangle-free with general weights

As an intermediate step towards the proof of Theorem 4, we extend the lower bound of the previous subsection to arbitrary weights in case G is triangle-free. For weighted graphs, we let the magnitude of a star packing 𝒮 be the total absolute weight of the edges in E(𝒮), i.e. Σ_{e ∈ E(𝒮)} |a_e|.

Let a_min and a_max denote the minimum and maximum absolute values of the nonzero entries of A, respectively. Let us first consider the case where the ratio between these two values is at most 2, i.e. a_max ≤ 2·a_min. Observe that in this case the lower bound given in Lemma 6 can easily be extended to opt(A) ≥ ‖A‖_1/(6d). This is because the star packing of Section 3.1 has magnitude at least a_min·n/3 ≥ a_max·n/6, while ‖A‖_1 ≤ a_max·m ≤ a_max·dn.

Lemma 7.

opt(A) ≥ ‖A‖_1/(6d) in case a_max ≤ 2·a_min and G has no isolated vertices and at most dn edges.

Next, consider the general weight case. We partition the edges of G into subsets E_1,…,E_k, where E_i contains all edges e with 2^{i-1} ≤ |a_e| < 2^i, so that within each subset the maximum and minimum absolute values differ by a factor of less than 2. For each i, let A_i denote the submatrix of A corresponding to E_i, and let G_i = G[E_i]. Then, as shown in the previous subsection, we can compute in polynomial time a star packing 𝒮_i of G_i with magnitude at least ‖A_i‖_1/(6d), where d is the degeneracy of G. By the pigeonhole principle this gives us:

Lemma 8.

The star packing 𝒮_i has magnitude at least ‖A‖_1/(6dk) for some i ∈ {1,…,k}.

Now the crucial observation here is that, as G is triangle-free, each 𝒮_i is also a star packing in G. Indeed, if S is a star of 𝒮_i on at least three vertices, then there can be no edges in G between the degree-1 vertices of S, as such an edge would close a triangle with the center of S. Thus, by Lemma 4, each 𝒮_i corresponds to a solution x^i with val(x^i) at least the magnitude of 𝒮_i. Combining this with Lemma 8 above proves that the solution of maximum value among x^1,…,x^k is an Ω(1/(dk))-approximate solution.
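A small sketch of the weight-class partition (our own illustration; grouping edges by powers of two is one way to obtain classes with ratio at most 2, and is an assumption here rather than the paper's exact grouping):

```python
import math
from collections import defaultdict

def weight_classes(edges):
    """Group edges so that, within a class, absolute weights differ by a factor < 2."""
    classes = defaultdict(dict)
    for e, a in edges.items():
        classes[math.floor(math.log2(abs(a)))][e] = a
    return list(classes.values())

# by the pigeonhole principle, some class carries at least a 1/k fraction of the
# total absolute weight, where k is the number of (nonempty) classes
```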

3.3 Triangle deletion

Towards proving Theorem 4 we show how to obtain a triangle-free subgraph of G, the total edge weight of which is a constant fraction of ‖A‖_1. For this we utilize the local-ratio technique [8] commonly used in approximation algorithms [7].

Recall that if G is a d-degenerate graph, then there exists an ordering v_1,…,v_n of the vertices of G such that each v_i has at most d neighbors among v_1,…,v_{i-1} (and this ordering can be computed in linear time; a sketch of such a computation is given below). To simplify notation, we assume the natural ordering 1,…,n on the vertices of G satisfies this property. Furthermore, we let G_i denote the subgraph of G induced by the first i vertices, for each i, and we use E_i to denote the edge set of G_i.
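A degeneracy ordering can be computed by repeatedly removing a vertex of minimum degree. The sketch below is our own illustration; it uses the convention that every vertex has at most d earlier neighbors (one of the two standard conventions, and an assumption here), and it runs in quadratic time rather than the linear time cited above.

```python
def degeneracy_ordering(adj):
    """Order the vertices so that each has at most d earlier neighbors,
    where d is the degeneracy of the graph."""
    degree = {v: len(adj[v]) for v in adj}
    removed = set()
    removal_order = []
    for _ in range(len(adj)):
        v = min((u for u in adj if u not in removed), key=degree.get)
        removal_order.append(v)
        removed.add(v)
        for u in adj[v]:
            if u not in removed:
                degree[u] -= 1
    # in removal order every vertex has at most d later neighbors;
    # reversing gives at most d earlier neighbors
    return removal_order[::-1]
```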

We present an algorithm which we call the triangle deletion algorithm. The algorithm recursively computes a triangle transversal set D, that is, an edge set D ⊆ E(G) such that G - D is triangle-free. We use w(e) to denote |a_e| for each edge e ∈ E(G), and we let w(F) = Σ_{e ∈ F} w(e) for any subset of edges F ⊆ E(G). The algorithm starts with the original weight function w.

TriangleDeletion:

  1. Let E_0 be the set of all edges e with w(e) = 0.

  2. Find the smallest i such that G_i - E_0 contains a triangle.

    • if no such i exists, then return E_0.

  3. Let T be such a triangle, and let ε be the minimum weight w(e) of any edge e of T.

  4. Set w′(e) = w(e) - ε if e is an edge of T, and otherwise w′(e) = w(e).

  5. Let D = TriangleDeletion(w′).

  6. If D contains more than one edge of T, then remove edges of T from D until exactly one remains.

  7. Return D.

Lemma 9.

The triangle deletion algorithm returns a set of edges D ⊆ E(G) such that:

  1. G - D contains no triangle.

  2. w(D) ≤ ‖A‖_1/3.

Proof.

First observe that the algorithm is guaranteed to terminate, as in each recursive step at least one edge gets its weight decreased to zero. We prove the two properties in the lemma via induction on the recursive steps of the algorithm. The base case is the case in which no graph G_i - E_0 contains a triangle (see step 2). In this case D = E_0, so clearly G - D is triangle-free. Furthermore, w(e) = 0 for every e ∈ E_0, and so w(D) = 0, and the second condition is satisfied as well.

Consider a recursive call in which the algorithm does not terminate at step 2, i.e. in which there exists a smallest i for which G_i - E_0 contains a triangle T. Let D′ be the set computed in step 5 of the algorithm. Then by induction we know that G - D′ contains no triangle. Furthermore, by the definition of i, any triangle in G - E_0 containing the vertex v_i must be completely included in G_i. Thus, if D′ contains more than one edge of T, removing edges of T from D′ in step 6 does not add a triangle to G - D. It follows that the set D returned at step 7 satisfies the first condition of the lemma.

To see that it also satisfies the second condition, let ‖w′‖_1 = Σ_{e ∈ E(G)} w′(e), where w′ is the weight function constructed at step 4 of the algorithm. By induction we have

w′(D) ≤ ‖w′‖_1/3

after step 5 of the algorithm, and this also holds at step 7 since we do not add edges to D at step 6. Now, observe that by construction of w′, we have ‖w′‖_1 = ‖w‖_1 - 3ε. Furthermore, at step 7 the set D contains at most one edge of T, and so w(D) ≤ w′(D) + ε. Together this implies that

w(D) ≤ ‖w′‖_1/3 + ε.

Thus, from the two inequalities above we get

w(D) ≤ (‖w‖_1 - 3ε)/3 + ε = ‖w‖_1/3,

and so D satisfies the second condition of the lemma as well. ∎

Proof of Theorem 4.

First observe that as G is d-degenerate we have m ≤ dn. Further, we may assume that G has no isolated vertices since deleting them does not affect the degeneracy. Our algorithm obtains a triangle-free subgraph G′ of G using the triangle deletion algorithm above. Letting A′ denote the matrix corresponding to G′, we have ‖A′‖_1 ≥ (2/3)·‖A‖_1 by Lemma 9. Next, our algorithm uses Lemma 8 to obtain a star packing of magnitude

at least ‖A′‖_1/(6dk) ≥ ‖A‖_1/(9dk), where k = O(log a_max) is the number of weight classes.

Finally, using Lemma 4, the algorithm computes a solution x with val(x) ≥ ‖A‖_1/(9dk). As opt(A) ≤ ‖A‖_1, this solution has an approximation ratio of Ω(1/(d·log a_max)).

As for the time complexity of our algorithm, observe that the triangle deletion algorithm runs in O(d^2·n) time, since a d-degenerate graph contains O(d^2·n) triangles and each recursive call eliminates at least one of them. The next step of the algorithm requires computing O(log a_max) star packings, each taking O(n^1.5) time to compute. Altogether this gives us a running time of O(n^1.5·log a_max). ∎

4 Treewidth Partitions

In this section we present approximation algorithms for sparse MaxQP instances that have some additional structure; namely, we prove Theorems 5 and 6. Our algorithms all revolve around the Baker technique for planar graphs [5] and its generalizations [16, 20, 24], all using what we refer to here as a treewidth partition — a partition of the vertices of G into V_1,…,V_k such that G[V_i] has bounded treewidth for any subset V_i in the partition. As treewidth plays a central role here, we begin by formally defining this notion.

A tree decomposition of G is a pair (𝒳, T) where 𝒳 is a family of vertex subsets of G, called bags, and T is a tree with 𝒳 as its node set. The decomposition is required to satisfy (i) the bags containing v induce a connected subtree of T for each vertex v ∈ V(G), and (ii) for each edge uv ∈ E(G) there is a bag X ∈ 𝒳 that contains both u and v. The width of a tree decomposition (𝒳, T) is max_{X ∈ 𝒳} |X| - 1, and the treewidth of G is the smallest width amongst all its tree decompositions. The proof of the following lemma is deferred to Appendix A.

Lemma 10.

MaxQP restricted to graphs of treewidth at most t can be solved in 2^O(t)·n time.

4.1 Locally bounded treewidth

For a vertex v ∈ V(G) and a fixed positive integer r, let B_r(v) denote the ball of radius r around v; that is, B_r(v) = {u ∈ V(G) : dist(u,v) ≤ r}, where dist(u,v) is the number of edges in a shortest path from u to v in G. The graph G is said to have locally bounded treewidth [20, 24] if there exists some function f such that the treewidth of G[B_r(v)] is at most f(r) for any v ∈ V(G) and r ≥ 1. As mentioned in Section 1.2, notable examples of graphs of locally bounded treewidth are planar graphs, bounded genus graphs, and bounded degree graphs.

Our starting point is a layer decomposition L_0, L_1, …, L_ℓ of G, where L_0 = {v_0} for some arbitrary vertex v_0 ∈ V(G), and L_i consists of all vertices at distance exactly i from v_0, for each i ≥ 1. This is the standard starting point of all Baker-type algorithms, and it can be computed via breadth-first search from v_0 in linear time. Note that L_0,…,L_ℓ is a partition of V(G), and that for each i, each vertex in L_i has neighbors only in L_{i-1} ∪ L_i ∪ L_{i+1} (here and elsewhere in this section we set L_i = ∅ when necessary).

Given ε > 0, we let k be the smallest integer such that 1/k ≤ ε. For each j ∈ {0,…,k-1}, let V_j denote the union of all vertices in layers with index equal to j modulo k; that is, V_j = ⋃_{i ≡ j (mod k)} L_i. We define two subgraphs of G: the graph G_j^1 is the graph induced by V(G) ∖ V_j, and the graph G_j^2 is the graph induced by the set of edges incident to V_j. Note that there is some overlap between the vertices of G_j^1 and G_j^2, but each edge of G appears in exactly one of these subgraphs. Also note that since G has bounded local treewidth, both G_j^1 and G_j^2 are bounded treewidth graphs [24].

Our algorithm computes k different solutions x^0,…,x^{k-1} for G, and selects the best one (i.e. the one which maximizes (1)) as its solution. For each j, we first compute an optimal solution for G_j^1 in linear time using the algorithm given in Lemma 10. We then extend this solution to a solution x^j for G as is done in Lemma 3. In this way we obtain in linear time solutions with val(x^j) ≥ opt(G_j^1) for each j. In Lemma 11 we argue that the solution of maximum objective value is (1-ε)-approximate with respect to the optimum of G; the proof of Theorem 5 will then follow as a direct corollary.
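A minimal sketch of the layering step (our own illustration; it only computes the BFS layers and the vertex classes V_j, using a plain dictionary adjacency list rather than the paper's notation):

```python
from collections import deque

def bfs_layers(adj, root):
    """Return the layers L_0, L_1, ... of a breadth-first search from `root`."""
    dist = {root: 0}
    layers = [[root]]
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                if dist[v] == len(layers):
                    layers.append([])
                layers[dist[v]].append(v)
                queue.append(v)
    return layers

def layer_classes(layers, k):
    """V_j = union of the layers whose index is congruent to j modulo k."""
    classes = [[] for _ in range(k)]
    for i, layer in enumerate(layers):
        classes[i % k].extend(layer)
    return classes

# for each j, the bounded-treewidth graph G_j^1 (essentially G with the class
# V_j removed) is solved exactly, and the best of the k solutions is returned
```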

Lemma 11.

There is a solution x^j with val(x^j) ≥ (1 - ε)·opt(A).

Proof.

Let x* denote an optimal solution for G. Then, as the edge set of G is partitioned into the edges of G_j^1 and G_j^2, we have

opt(A) ≤ opt(G_j^1) + opt(G_j^2)

for each j. Next observe that any two subgraphs G_j^2 and G_{j′}^2 with j ≠ j′ do not have vertices in common, nor are there any edges between these two subgraphs in G. It follows that for any J ⊆ {0,…,k-1}, the graph formed by the subgraphs G_j^2 with j ∈ J is an induced subgraph of G, and so Σ_{j ∈ J} opt(G_j^2) ≤ opt(A) by Lemma 3. Thus, we have

Σ_{j=0}^{k-1} opt(G_j^2) ≤ opt(A).

Combining the two inequalities above we get

Σ_{j=0}^{k-1} opt(G_j^1) ≥ k·opt(A) - opt(A) = (k - 1)·opt(A).

It follows that the best solution out of x^0,…,x^{k-1} has value at least (1 - 1/k)·opt(A), which is at least (1 - ε)·opt(A), since 1/k ≤ ε. ∎

4.2 H-minor-free instances

Let G be an H-minor-free graph for some fixed graph H; that is, H cannot be formed from G by deleting edges and vertices and by contracting edges. To obtain the (1-ε)-approximation for Unit MaxQP on H-minor-free graphs (Theorem 6) we make use of the lower bound obtained in Section 3.1, our algorithm for MaxQP restricted to bounded-treewidth graphs, and the following theorem by Demaine et al. [16]:

Theorem 7 ([16]).

For a fixed graph H, there is a constant c_H such that, for any integer k ≥ 1 and for every H-minor-free graph G, the vertices of G can be partitioned into k + 1 sets such that the graph obtained by taking the union of any k of these sets has treewidth at most c_H·k. Furthermore, such a partition can be found in polynomial time.

Note that this theorem gives a partition similar to the one used in the previous subsection, albeit slightly weaker. In particular, there is no restriction on the edges connecting vertices in different subsets of the partition, as was the case in the previous subsection. It is for this reason that arbitrary weights are difficult to handle, and we need to resort to the lower bound of Lemma 6. Fortunately, for the unweighted case, we can use the fact that there exists some constant c depending only on H such that G has at most c·n edges (see e.g. [17]). In particular, it can be shown that c = O(h√(log h)) [31], where h is the number of vertices of H. Combining this fact with Lemma 6 we get:

Lemma 12.

opt(A) ≥ ‖A‖_1/(3c).

Our algorithm proceeds as follows. Fix k to be the smallest integer such that 6c/(k+1) ≤ ε, and let V_1,…,V_{k+1} denote the partition of V(G) computed by the algorithm from Theorem 7. For each i, let E_i denote the set of edges E(V_i, V(G) ∖ V_i), and let G_i = G[V_i]. Furthermore, let G_i′ = G - V_i. As both G_i and G_i′ have bounded treewidth, we can compute an optimal solution for each of these subgraphs (and therefore also for their disjoint union) using the algorithm in Lemma 10. Using Lemma 2, we can extend the optimal solutions for G_i and G_i′ to a solution x^i for G with value

val(x^i) ≥ opt(G_i) + opt(G_i′).

On the other hand, the optimal solution of G cannot do better than

opt(A) ≤ opt(G_i) + opt(G_i′) + |E_i|.

Combining the two inequalities above, we can bound the sum of the objective values obtained by all our solutions by

Σ_{i=1}^{k+1} val(x^i) ≥ (k+1)·opt(A) - Σ_{i=1}^{k+1} |E_i| ≥ (k+1)·opt(A) - 2m ≥ (k+1)·opt(A) - 6c·opt(A),

where the last inequality follows from Lemma 12. Thus at least one of these solutions has value at least (1 - 6c/(k+1))·opt(A), which is at least (1 - ε)·opt(A) by our selection of the parameter k.

To analyze the time complexity of our algorithm, observe that computing each solution x^i requires linear time according to Lemma 10 and Lemma 2. Thus, the time complexity of the algorithm is dominated by the time required to compute the partition promised by Theorem 7. Demaine et al. [16] showed that this partition can be computed in linear time given the graph decomposition promised by Robertson and Seymour's graph minor theory [34]. In turn, Grohe et al. [25] presented an O(n^2) time algorithm for computing this decomposition, improving upon earlier constructions [16, 28]. Thus, the total running time of our algorithm can be bounded by O(n^2). This completes the proof of Theorem 6.

5 Conclusion

We presented efficient combinatorial approximation algorithms for sparse instances of MaxQP without resorting to the semidefinite relaxation used by Alon and Naor [2] and Charikar and Wirth [14]. From a theoretical perspective, we still leave open whether there is a combinatorial algorithm matching the guarantee of Theorem 1 on d-degenerate MaxQP instances in polynomial time. Further, is it possible to approximate sparse Unit MaxQP instances up to a constant factor in linear time? Finally, the simplicity of our algorithms compels the study of their usability in practice, especially for characterizing ground states of spin glass models.

References

  • [1] Noga Alon, Konstantin Makarychev, Yury Makarychev, and Assaf Naor. Quadratic forms on graphs. In

    Proceedings of the 37th Annual ACM Symposium on Theory Of Computing (STOC)

    , pages 486–493, 2005.
  • [2] Noga Alon and Assaf Naor. Approximating the cut-norm via Grothendieck’s inequality. SIAM Journal on Computing, 35(4):787–803, 2006.
  • [3] Sanjeev Arora, Eli Berger, Elad Hazan, Guy Kindler, and Muli Safra. On non-approximability for quadratic programs. In Proceedings of the 46th Annual IEEE Symposium on Foundations Of Computer Science (FOCS), pages 206–215, 2005.
  • [4] Sanjeev Arora, Elad Hazan, and Satyen Kale. Fast algorithms for approximate semidefinite programming using the multiplicative weights update method. In 46th Annual IEEE symposium on Foundations Of Computer Science (FOCS), pages 339–348, 2005.
  • [5] Brenda S. Baker. Approximation algorithms for NP-complete problems on planar graphs. Journal of the ACM, 41(1):153–180, 1994.
  • [6] Nikhil Bansal, Avrim Blum, and Shuchi Chawla. Correlation clustering. Machine Learning, 56(1-3):89–113, 2004.
  • [7] Reuven Bar-Yehuda, Keren Bendel, Ari Freund, and Dror Rawitz. Local ratio: a unified framework for approximation algorithms. ACM Computing Surveys, 36(4):422–463, 2004.
  • [8] Reuven Bar-Yehuda and Shimon Even. A local-ratio theorem for approximating the weighted vertex cover problem. Annals of Discrete Mathematics, 25:27–45, 1985.
  • [9] Francisco Barahona. On the computational complexity of Ising spin glass models. Journal of Physics A: Mathematical, Nuclear and General, 15:3241–3253, 1982.
  • [10] Francisco Barahona. The max-cut problem on graphs not contractible to K_5. Operations Research Letters, 2(3):107–111, 1983.
  • [11] Francisco Barahona, Roger Maynard, Rammal Rammal, and Jean-Pierre Uhry. Morphology of ground states of two-dimensional frustration model. Journal of Physics A: Mathematical, Nuclear and General, 15:673–699, 1982.
  • [12] Isabelle Bieche, Roger Maynard, Rammal Rammal, and Jean-Pierre Uhry. On the ground states of the frustration model of a spin glass by a matching method of graph theory. Journal of Physics A: Mathematical, Nuclear, and General, 13:2553–2576, 1980.
  • [13] Moses Charikar, Venkatesan Guruswami, and Anthony Wirth. Clustering with qualitative information. Journal of Computer and System Sciences, 71(3):360–383, 2005.
  • [14] Moses Charikar and Anthony Wirth. Maximizing quadratic programs: extending Grothendieck’s inequality. In Proceedings of the 45th annual IEEE symposium on Foundations Of Computer Science (FOCS), pages 54–60, 2004.
  • [15] Erik D. Demaine, Dotan Emanuel, Amos Fiat, and Nicole Immorlica. Correlation clustering in general weighted graphs. Theoretical Computer Science, 361(2-3):172–187, 2006.
  • [16] Erik D. Demaine, Mohammad Taghi Hajiaghayi, and Ken-ichi Kawarabayashi. Algorithmic graph minor theory: Decomposition, approximation, and coloring. In Proceedings of the 46th Annual IEEE symposium on Foundations Of Computer Science (FOCS), pages 637–646, 2005.
  • [17] Reinhard Diestel. Graph Theory, volume 173 of Graduate Texts in Mathematics. Springer, 5th edition, 2016.
  • [18] Carol S. Edwards. Some extremal properties of bipartite subgraphs. Canadian Journal of Mathematics, 25(3):475–485, 1973.
  • [19] Sam F. Edwards and Philip W Anderson. Theory of spin glasses. Journal of Physics F: Metal Physics, 5(5):965–974, 1975.
  • [20] David Eppstein. Subgraph isomorphism in planar graphs and related problems. Journal of Graph Algorithms & Applications, 3(3):1–27, 1999.
  • [21] Paul Erdős, András Gyárfás, and Yoshiharu Kohayakawa. The size of the largest bipartite subgraphs. Discrete Math, 177(1-3):267–271, 1997.
  • [22] Daniel S. Fisher and David A. Huse. Ordered phase of short-range ising spin-glasses. Physical Review Letters, 56:1601–1604, 1986.
  • [23] Michel X. Goemans and David P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the ACM, 42(6):1115–1145, 1995.
  • [24] Martin Grohe. Local tree-width, excluded minors, and approximation algorithms. Combinatorica, 23(4):613–632, 2003.
  • [25] Martin Grohe, Ken-ichi Kawarabayashi, and Bruce A. Reed. A simple algorithm for the graph minor decomposition—Logic meets structural graph theory. In Proceedings of the 24th annual ACM-SIAM Symposium On Discrete Algorithms (SODA), pages 414–431, 2013.
  • [26] Martin Hasenbusch, Andrea Pelissetto, and Ettore Vicari. Critical behavior of three-dimensional Ising spin glass models. Physical Review B, 78:214205, 2008.
  • [27] Johan Håstad. Some optimal inapproximability results. Journal of the ACM, 48(4):798–859, 2001.
  • [28] Ken-ichi Kawarabayashi and Paul Wollan. A simpler algorithm and shorter proof for the graph minor decomposition. In Proceedings of the 43rd ACM Symposium on Theory Of Computing (STOC), pages 451–458, 2011.
  • [29] Subhash Khot and Ryan O’Donnell. SDP gaps and UGC-hardness for max-cut-gain. Theory of Computing, 5(1):83–117, 2009.
  • [30] Ton Kloks. Treewidth, Computations and Approximations, volume 842 of Lecture Notes in Computer Science. Springer, 1994.
  • [31] Alexandr V. Kostochka. Lower bound of the Hadwiger number of graphs by their average degree. Combinatorica, 4(4):307–316, 1984.
  • [32] Silvio Micali and Vijay V. Vazirani. An algorithm for finding maximum matching in general graphs. In Proceedings of the 21st Annual Symposium on Foundations Of Computer Science (FOCS), pages 17–27, 1980.
  • [33] Yurii Nesterov. Global quadratic optimization via conic relaxation. CORE Discussion Papers 1998060, Université catholique de Louvain, Center for Operations Research and Econometrics (CORE), 1998.
  • [34] Neil Robertson and Paul D. Seymour. Graph minors. XVI. Excluding a non-planar graph. Journal of Combinatorial Theory, Series B, 89(1):43–76, 2003.
  • [35] Chaitanya Swamy. Correlation clustering: maximizing agreements via semidefinite programming. In Proceedings of the 15th Annual ACM-SIAM Symposium On Discrete Algorithms (SODA), pages 526–527, 2004.
  • [36] Michel Talagrand. Spin Glasses: A Challenge for Mathematicians, volume 46 of A Series of Modern Surveys in Mathematics. Springer, 1st edition, 2003.
  • [37] Jian-Sheng Wang, Walter Selke, Vl. S. Dotsenko, and B. Andreichenko. The critical behaviour of the two-dimensional dilute Ising magnet. Physica A: Statistical Mechanics and its Applications, 164(2):221–239, 1990.

Appendix A An Exact Algorithm for Bounded Treewidth Graphs

We next prove Lemma 10 by presenting an algorithm for MaxQP restricted to graphs of treewidth at most t running in 2^O(t)·n time. For this we require the concept of nice tree decompositions [30].

A tree decomposition (𝒳, T) is rooted if there is a designated bag R ∈ 𝒳 that is the root of T. A rooted tree decomposition is nice if each bag X ∈ 𝒳 is either (i) a leaf node (X contains exactly one vertex and has no children in T), (ii) an introduce node (X has one child Y in T with Y ⊂ X and |X ∖ Y| = 1), (iii) a forget node (X has one child Y in T with X ⊂ Y and |Y ∖ X| = 1), or (iv) a join node (X has two children Y_1, Y_2 in T with X = Y_1 = Y_2). Given a tree decomposition, one can compute a corresponding nice tree decomposition with the same width in linear time [30].

Our algorithm employs the standard dynamic programming technique on nice tree decompositions.

Proof of Lemma 10.

Let (𝒳, T) be a nice tree decomposition of G of width t with root bag R. For a node X ∈ 𝒳, let T_X be the subtree of T rooted at X. Furthermore, let G_X be the subgraph of G induced by the vertices appearing in the bags of T_X (while G[X] is the subgraph of G induced only by the vertices in X). We describe a table in which we have an entry M[X, σ] for each bag X ∈ 𝒳 and for each solution σ : X → {-1,1}. The entry M[X, σ] contains the value of an optimum solution for G_X, where the values of the vertices in X are fixed by the solution σ.

If X is a leaf node, then G_X contains no edges and so M[X, σ] = 0. If X is an introduce node, then let v be the introduced vertex, where Y is the child of X in T, and let σ′ be the solution σ restricted to the vertices of Y. Then M[X, σ] additionally contains the value of all edges incident to