Fine-grained reductions from approximate counting to decision

07/14/2017 ∙ by Holger Dell, et al. ∙ University of Oxford ∙ Universität des Saarlandes

The main problems in fine-grained complexity are CNF-SAT, the Orthogonal Vectors problem, 3SUM, and the Negative-Weight Triangle problem (which is closely related to All-Pairs Shortest Path). In this paper, we consider the approximate counting version of each problem; thus instead of simply deciding whether a witness exists, we attempt to (multiplicatively) approximate the number of witnesses. In each case, we provide a fine-grained reduction from the approximate counting form to the usual decision form. For example, if there is an O(c^n)-time algorithm that solves k-SAT for all k, then we prove there is an O((c+o(1))^n)-time algorithm to approximately count the satisfying assignments. Similarly, we get that the exponential time hypothesis (ETH) is equivalent to an approximate counting version. This mirrors a result of Sipser (STOC 1983) and Stockmeyer (SICOMP 1985), who proved such a result in the classical polynomial-time setting, and a similar result due to Müller (IWPEC 2006) in the FPT setting. Our algorithm for polynomial-time problems applies in a general setting in which we approximately count edges of a bipartite graph to which we have limited access. In particular, this means it can be applied to problem variants in which significant improvements over the conjectured running time bounds are already known. For example, the Orthogonal Vectors problem over GF(m)^d for constant m can be solved in time n·poly(d) by a result of Williams and Yu (SODA 2014); our result implies that we can approximately count the number of orthogonal pairs with essentially the same running time. Moreover, our overhead is only polylogarithmic, so it can be applied to subpolynomial improvements such as the n^3/2^{Θ(√(log n))}-time algorithm for the Negative-Weight Triangle problem due to Williams (STOC 2014).


1 Introduction

1.1 Approximate counting and decision in coarse-grained settings

It is clearly at least as hard to count objects as it is to decide their existence, and very often it is harder. For example, Valiant [21] defined the class #P as the natural counting variant of NP, and Toda [19] proved that P^#P contains the entire polynomial hierarchy. The decision counterparts of many #P-complete problems are in P; for example, counting perfect matchings is #P-complete, while detecting one is in P.

However, the situation changes substantially if we consider approximate counting rather than exact counting. For all real ε with 0 < ε < 1, we say that ŷ is an ε-approximation to y if |ŷ − y| ≤ εy holds. Clearly, computing an ε-approximation to the number of witnesses is at least as hard as deciding whether a witness exists, but surprisingly, in many settings it is no harder. Indeed, Sipser [16] and Stockmeyer [17] proved implicitly that every problem in #P has a polynomial-time randomised ε-approximation algorithm using an NP-oracle; the result was later proved explicitly by Valiant and Vazirani [22]. This is a foundational result in the wider complexity theory of polynomial approximate counting initiated by Dyer, Goldberg, Greenhill and Jerrum [5].

Another example arises in parameterised complexity. Here, the usual goal is to determine whether an instance of size n with parameter k can be solved in “FPT time” f(k)·poly(n) for some computable function f. Hardness results are normally presented using the W-hierarchy (see for example [6]). Müller [14] has proved that for any problem in #W[1], there is a randomised ε-approximation algorithm using a W[1]-oracle which runs on size-n, parameter-k instances in time g(k)·poly(n, 1/ε) for some computable g. He also proves analogous results for the rest of the #W-hierarchy.

We consider the subexponential-time setting. The popular (randomised) exponential time hypothesis (ETH) introduced by Impagliazzo and Paturi [9] asserts that there exists δ > 0 such that no randomised algorithm can solve an n-variable instance of 3-SAT in time O(2^{δn}).

(To streamline our discussion, we ignore the detail that some papers only allow for deterministic algorithms. Throughout the paper, we require randomised algorithms to have success probability at least 2/3 unless otherwise specified.)

We prove that ETH is equivalent to a seemingly weaker approximate counting version.

Theorem 1.

ETH is true if and only if there exist ε ∈ (0,1) and δ > 0 such that no randomised ε-approximation algorithm can run on n-variable instances of #3-SAT in time O(2^{δn}).

Note that the usual argument of Valiant and Vazirani [22] does not apply in this setting without modification, as it adds XOR clauses of linear width to the instance. Our proof takes a similar approach, but makes use of a sparse hashing technique due to Calabro, Impagliazzo, Kabanets and Paturi [2]. (We give more detail in Section 1.4.)

1.2 Fine-grained results in the exponential setting

In fine-grained complexity, we are concerned not only with the classification of algorithms into broad categories such as polynomial, FPT, or subexponential, but with their more precise running times. A more fine-grained analogue of ETH is known as the strong exponential time hypothesis (SETH, see Impagliazzo, Paturi and Zane [10]), which asserts that for all δ > 0, there exists k such that no randomised algorithm can solve n-variable instances of k-SAT in time O(2^{(1−δ)n}).

An analogue of Theorem 1 for SETH is implicit in Thurley [18], who provides a randomised approximate counting algorithm that makes use of a decision oracle: SETH is true if and only if for all δ > 0, there exist k and ε ∈ (0,1) such that no randomised ε-approximation algorithm can run on n-variable instances of #k-SAT in time O(2^{(1−δ)n}). However, this result is not ideal from a fine-grained perspective, as it does not guarantee that solving k-SAT and approximating #k-SAT have similar time complexities in the limit as k → ∞. Indeed, given an algorithm for k-SAT with running time O(2^{αn}), Thurley’s approximation algorithm has worst-case running time roughly O(2^{(1+α)n/2}); that is, the exponential savings over exhaustive search goes down from 1−α for decision to (1−α)/2 for approximate counting using Thurley’s algorithm. Traxler [20] proved that if we allow super-constant clause width instead of considering k-SAT for fixed k, then the same savings can be achieved for approximate counting and decision. We strengthen Traxler’s result so that it applies in the setting of SETH.

Theorem 2.

Let c > 1. Suppose that for all k, there is a randomised algorithm which runs on n-variable instances of k-SAT in time O(c^n). Then for all ε > 0 and all k, there is a randomised ε-approximation algorithm which runs on n-variable instances of #k-SAT in time O((c + o(1))^n).

In particular, if SETH is false, then Theorem 2 yields an efficient algorithm for approximating #k-SAT for sufficiently large k. Note that there is no particular reason to believe that an efficient decision algorithm would yield an efficient counting algorithm directly. Indeed, when k = 3, the most efficient known algorithms run in time O(1.31^n) for decision (due to Hertli [7]), in time O(1.54^n) for ε-approximate counting (due to Thurley [18]), and in time O(1.65^n) for exact counting (due to Kutzkov [12]).

It remains an open and interesting question whether a result analogous to Theorem 2 holds for fixed k, that is, whether deciding k-SAT and approximating #k-SAT have the same time complexity up to a subexponential factor. For (large) fixed k, the best-known decision, ε-approximate counting, and exact counting algorithms (due to Paturi, Pudlák, Saks, and Zane [15], Thurley [18], and Impagliazzo, Matthews, and Paturi [8], respectively) all have running time of the form 2^{(1−c/k)n} for a constant c > 0, but with progressively worse constants in the exponent. Our reduction has an exponential overhead of its own (arising from the hashing step), so we do not get improved approximate counting algorithms for fixed k.

1.3 Fine-grained results in the polynomial-time setting

Alongside SAT, perhaps the most important problems in fine-grained complexity are 3SUM, Orthogonal Vectors (OV), and All-Pairs Shortest Paths (APSP). All three problems admit well-studied notions of hardness, in the sense that many problems reduce to them or are equivalent to them under fine-grained reductions, and they are not known to reduce to one another. See Williams [26] for a recent survey. We prove analogues of Theorem 2 for 3SUM and OV. While it is not clear what a “canonical” counting version of APSP should be, we nevertheless prove an analogue of Theorem 2 for the Negative-Weight Triangle problem (NWT) that is equivalent to APSP under subcubic reductions. Our results, together with previous decision algorithms, immediately imply three new approximate counting algorithms. (However, we believe that two of these, Theorems 4 and 8, may also be derived more directly by modifying their known decision algorithms.)

3SUM asks, given three integer lists A, B, and C of total length n, whether there exists a tuple (a, b, c) ∈ A × B × C with a + b + c = 0. (Frequently the input is taken to be a single list rather than three; it is well-known that the two versions are equivalent.) It is easy to see that 3SUM can be solved in Õ(n²) operations by sorting C and iterating over all pairs in A × B, and it is conjectured that for all δ > 0, 3SUM admits no O(n^{2−δ})-time randomised algorithm. We obtain an analogue of Theorem 2 for the natural counting version of 3SUM, in which we approximate the number of tuples (a, b, c) ∈ A × B × C with a + b + c = 0. (See Section 2 for details on our model of computation.)
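For concreteness, the following is a minimal sketch of the quadratic-time baseline just described, extended to exact counting. It is ours rather than the paper's; the list names A, B, C and the helper count_3sum are hypothetical.

```python
from bisect import bisect_left, bisect_right

def count_3sum(A, B, C):
    """Exactly count tuples (a, b, c) in A x B x C with a + b + c = 0.

    Sorts C once, then for each pair (a, b) counts the copies of -(a + b)
    in C by binary search: O(n^2 log n) operations in total.
    """
    C_sorted = sorted(C)
    total = 0
    for a in A:
        for b in B:
            target = -(a + b)
            lo = bisect_left(C_sorted, target)
            hi = bisect_right(C_sorted, target)
            total += hi - lo  # number of c in C with a + b + c = 0
    return total

# Example: exactly one witness, namely (1, 2, -3).
print(count_3sum([1, 5], [2, 7], [-3, 4]))  # -> 1
```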

Theorem 3.

If 3SUM with n integers from {−M, …, M} has a randomised T-time algorithm, then for every fixed ε > 0 there is an Õ((T + n) · polylog(M))-time randomised ε-approximation algorithm for #3SUM.

Note that M is normally assumed to be at most poly(n), in which case our algorithm has only polylogarithmic overhead over decision. Thus, independently of whether or not the 3SUM conjecture is true, we can conclude that 3SUM and, say, 0.01-approximating #3SUM have essentially the same time complexity.

Chan and Lewenstein [3] have proved that the 3SUM conjecture fails when the problem is restricted to instances in which elements of one list are somewhat clustered. This is an interesting special case with several applications, including monotone multi-dimensional 3SUM with linearly-bounded coordinates — see the introduction of [3] for an overview. By Chan and Lewenstein’s algorithm combined with an analogue of Theorem 3, we obtain the following result.

Theorem 4.

For all δ > 0 and every fixed ε > 0, there is a randomised ε-approximation algorithm with running time n^{2−Ω(δ)} for instances of #3SUM with n integers from a polynomially bounded range such that at least one of A, B, or C may be covered by n^{1−δ} intervals of length n.

We next consider OV, which asks, given two lists A and B of zero-one vectors in {0,1}^d with total length n, whether there exists an orthogonal pair (a, b) ∈ A × B. It is easy to see that OV can be solved in O(n²d) operations by iterating over all pairs, and it is conjectured that for all δ > 0, when d = ω(log n), OV admits no O(n^{2−δ})-time randomised algorithm. This conjecture is implied by SETH [23], and Abboud, Williams and Yu [1] proved that it fails when d = O(log n). We obtain an analogue of Theorem 2 for the natural counting version of OV, in which we approximate the number of orthogonal pairs.
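As a point of reference, here is a minimal sketch of the O(n²d) brute-force approach, written directly for the counting version; the names A, B and count_ov are ours.

```python
def count_ov(A, B):
    """Count orthogonal pairs (a, b) in A x B of zero-one vectors.

    Checks every pair directly, using O(d) time per orthogonality test,
    for O(n^2 * d) operations in total.
    """
    count = 0
    for a in A:
        for b in B:
            if all(x * y == 0 for x, y in zip(a, b)):
                count += 1
    return count

# Example in d = 3 dimensions: only ((1,0,0), (0,1,0)) is orthogonal.
print(count_ov([(1, 0, 0), (1, 1, 0)], [(0, 1, 0)]))  # -> 1
```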

Theorem 5.

If OV with n vectors in d dimensions has a randomised T-time algorithm, then for every fixed ε > 0 there is a randomised Õ(T + nd)-time ε-approximation algorithm for #OV.

Note that it is impossible to decide OV in time o(nd), since that is the time needed to read the input; so when d is polylogarithmic in n (as is the usual assumption), our algorithm has only polylogarithmic overhead over decision. Thus our result is able to turn the n^{2−1/O(log c)}-time algorithm of [1] (for dimension d = c log n) into an approximate counting algorithm, but Chan and Williams [4] already gave a deterministic exact counting algorithm of similar complexity.

A version of OV in which the real zero-one vectors are replaced by arbitrary vectors over finite fields or rings is also studied, and there are efficient randomised algorithms due to Williams and Yu [25]. Their algorithms do not immediately generalise to counting, but by an analogue of Theorem 5, we nevertheless obtain efficient approximate counting algorithms.

Theorem 6.

Let m be a constant prime power. Then for every fixed ε > 0, there is a randomised ε-approximation algorithm for n-vector instances of #OV over GF(m)^d (resp. over the ring Z_m^d) whose running time matches, up to polylogarithmic factors, that of the corresponding decision algorithm of Williams and Yu [25]; in particular, over GF(m)^d the running time is n·poly(d).

Note that the dependence on d may be close to best possible: Williams and Yu [25] show that, under SETH, their decision algorithms for OV over GF(m)^d (resp. Z_m^d) cannot be substantially improved for all but finitely many values of m.

Finally, we study the Negative-Weight Triangle problem (NWT) of deciding whether an edge-weighted tripartite graph contains a triangle of negative total weight, which Williams and Williams [27] have shown is equivalent to APSP under fine-grained (subcubic) reductions. It is easy to see that NWT can be solved in O(n³) operations by checking every possible triangle, and it is conjectured that for all δ > 0, NWT admits no O(n^{3−δ})-time randomised algorithm. We obtain an analogue of Theorem 2 for the natural counting version of NWT, in which we approximate the number of negative-weight triangles.
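A minimal sketch of the cubic-time baseline for the counting version follows; the representation (three vertex classes plus a weight dictionary) and the name count_nwt are our own illustrative choices.

```python
def count_nwt(I, J, K, w):
    """Count negative-weight triangles in a tripartite graph.

    I, J, K are the three vertex classes and w[(u, v)] is the weight of the
    edge {u, v} (absent pairs are simply omitted from w). Checking every
    triple takes O(n^3) operations.
    """
    count = 0
    for i in I:
        for j in J:
            for k in K:
                if (i, j) in w and (j, k) in w and (i, k) in w:
                    if w[(i, j)] + w[(j, k)] + w[(i, k)] < 0:
                        count += 1
    return count

# Example: a single triangle of total weight 2 - 4 + 1 = -1.
w = {("i1", "j1"): 2, ("j1", "k1"): -4, ("i1", "k1"): 1}
print(count_nwt(["i1"], ["j1"], ["k1"], w))  # -> 1
```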

Theorem 7.

If NWT for n-vertex graphs with weights from {−W, …, W} has a randomised T-time algorithm, then for every fixed ε > 0 there is a randomised Õ(T)·polylog(W)-time ε-approximation algorithm for #NWT.

Note that it is impossible to decide NWT in time o(n²), since the input may contain Ω(n²) edges; so when the weights are polynomially bounded, our algorithm has only polylogarithmic overhead over decision. Note also that [27] provides a subcubic reduction from listing negative-weight triangles to NWT, although it has polynomial overhead and so does not imply our result. Together with an algorithm of Williams [24], Theorem 7 implies the following.

Theorem 8.

For every fixed ε > 0, there is a randomised ε-approximation algorithm which runs on n-vertex instances of #NWT with polynomially bounded weights in time n³/2^{Ω(√(log n))}.

1.4 Techniques

We first discuss Theorems 1 and 2, which we prove in Section 3. In the polynomial setting, the standard reduction from approximating #k-SAT to deciding k-SAT is due to Valiant and Vazirani [22], and runs as follows. If a k-CNF formula F has at most s solutions for some s = poly(n), then using a standard self-reducibility argument, one can count the number of solutions exactly with O(sn) calls to a k-SAT-oracle. Otherwise, for any m ∈ N, one may form a new formula F′ by conjoining F with m uniformly random XOR clauses. It is relatively easy to see that as long as the number of satisfying assignments of F is substantially greater than 2^m, then the number of satisfying assignments of F′ is concentrated around |SAT(F)|/2^m. Thus by choosing m appropriately, one can count the solutions of F′ exactly, then multiply the result by 2^m to obtain an estimate for |SAT(F)|.
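The following toy sketch illustrates this estimator in the polynomial-time setting; so that it runs as written, the k-SAT oracle is replaced by exhaustive enumeration, and no attempt is made to convert the XORs back into CNF. The helper names are ours, not the paper's.

```python
import itertools, random

def count_xor_filtered(clauses, n, xors):
    """Count assignments satisfying a CNF (list of literal tuples) and a list
    of XOR constraints (each a (variable_set, parity) pair), by enumeration."""
    count = 0
    for bits in itertools.product([0, 1], repeat=n):
        if all(any(bits[abs(l) - 1] == (1 if l > 0 else 0) for l in cl) for cl in clauses) \
           and all(sum(bits[v] for v in vs) % 2 == parity for vs, parity in xors):
            count += 1
    return count

def vv_estimate(clauses, n, m, trials=50):
    """Estimate #SAT by conjoining m uniformly random XORs, counting the
    survivors, and multiplying by 2^m (averaged over independent trials)."""
    total = 0
    for _ in range(trials):
        xors = [({v for v in range(n) if random.random() < 0.5}, random.randrange(2))
                for _ in range(m)]
        total += count_xor_filtered(clauses, n, xors)
    return (2 ** m) * total / trials

# (x1 or x2) and (not x1 or x3) over 4 variables has exactly 8 models;
# the estimate should be close to 8 on average.
phi = [(1, 2), (-1, 3)]
print(count_xor_filtered(phi, 4, []), round(vv_estimate(phi, 4, 2), 1))
```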

Unfortunately, this argument requires modification in the exponential setting. If F has n variables, then each uniformly random XOR has length about n/2 and therefore cannot be expressed as a width-k CNF without introducing new variables. It follows that (for example) conjoining Θ(n) random XORs yields a formula with Θ(n²) variables. This blowup is acceptable in a polynomial setting, but not an exponential one: for example, given an O(c^n)-time algorithm for k-SAT, it would yield a useless O(c^{Θ(n²)})-time randomised approximate counting algorithm for #k-SAT. We can afford to add only constant-length XORs, which do not in general result in concentration in the number of solutions.

We therefore make use of a hashing scheme developed by Calabro, Impagliazzo, Kabanets, and Paturi [2] for a related problem, that of reducing k-SAT to Unique-k-SAT. For each XOR, they choose an s-sized subset of the variables uniformly at random, where s is a large constant, and then choose the variables of the XOR binomially at random within that subset. This still does not yield concentration in the number of solutions of the hashed formula, but it turns out that the variance is sufficiently low that we can remedy this by summing over many independently chosen hashes.

Our results in Section 1.3 follow from a more general theorem, in which we consider the problem of approximately counting edges in an arbitrary bipartite graph to which we have only limited access. In particular, we only allow adjacency queries and independence queries: An adjacency query checks whether an edge exists between two given vertices of the graph, and an independence query checks whether a given set of vertices is an independent set in the graph. The standard approach (as used by Thurley [18]) would be to use random adjacency queries to handle instances with many edges, and independence queries and self-reducibility to handle instances with few edges. This approach requires polynomially many independence queries, which is too many to establish the tight relationship between approximate counting and decision required for the results of Section 1.3. In contrast, our main algorithm (Theorem 10) approximates the number of edges in such a graph in quasi-linear time and makes only poly-logarithmically many independence queries.

Using this algorithm, we obtain the results for polynomial-time problems in a straightforward way. For example, in the proof of Theorem 5 for OV, the vertices of the graph are the input vectors and the edges correspond to orthogonal pairs. An adjacency query corresponds to an orthogonality check, which can be done in time O(d) in d dimensions, and an independence query corresponds to a call to the decision oracle for OV on a sub-instance, which takes time T by assumption. Since only poly-logarithmically many independence queries occur, Theorem 5 follows.
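A minimal sketch of this oracle view of OV, under the assumption that a decision procedure for the sub-instance is available (brute force stands in for it here so the sketch runs); the class and method names are ours.

```python
class OVOracleGraph:
    """Bipartite graph on vector lists A and B whose edges are orthogonal pairs,
    exposed only through adjacency and independence queries."""

    def __init__(self, A, B):
        self.A, self.B = A, B

    def adjacency(self, i, j):
        # Verify a single potential witness: O(d) time.
        return all(x * y == 0 for x, y in zip(self.A[i], self.B[j]))

    def independence(self, S_A, S_B):
        # One call to the OV decision procedure on the sub-instance induced
        # by the index sets S_A and S_B; brute force stands in for it here.
        return not any(self.adjacency(i, j) for i in S_A for j in S_B)

g = OVOracleGraph([(1, 0), (1, 1)], [(0, 1), (1, 0)])
print(g.adjacency(0, 0), g.independence({1}, {0, 1}), g.independence({0}, {0, 1}))
# -> True True False
```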

Our algorithm for Theorem 10 works roughly as follows. Let G be the bipartite graph whose edges we are trying to count, and let X and Y be the vertex classes of G. Using binary search together with our independence oracle, we can quickly find non-isolated vertices of G. If G contains few such vertices, then by the standard self-reducibility argument used in Theorems 1 and 2, we can quickly determine the number of edges exactly, so suppose G contains many such vertices. If every vertex is contained in only a small proportion of the edges, then we can approximately halve the number of edges by passing to a uniformly random subset of one of the vertex classes, and proceed similarly to Valiant and Vazirani. However, in general this will not be the case, and the number of edges in the resulting graph will not be concentrated. We must therefore detect and remove problematic vertices as we go. The procedure we use for this is quite technical, and forms the bulk of the proof, so we defer further explanation to Section 4.
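To illustrate the first step, here is a sketch of how binary search over one vertex class, driven only by independence queries, isolates a single non-isolated vertex. The function name and the oracle interface are ours.

```python
def find_non_isolated(X, Y, independent):
    """Return some vertex of X with a neighbour in Y, or None if X u Y spans
    no edges. Uses O(log |X|) independence queries via binary search."""
    if independent(set(X), set(Y)):
        return None  # no edges at all
    candidates = list(X)
    while len(candidates) > 1:
        half = candidates[: len(candidates) // 2]
        # If the first half together with Y is edge-free, every edge meets the
        # second half, so we may discard the first half (and vice versa).
        if independent(set(half), set(Y)):
            candidates = candidates[len(candidates) // 2 :]
        else:
            candidates = half
    return candidates[0]

# Toy instance: the only edge is {2, "b"}.
edges = {(2, "b")}
indep = lambda S, T: not any((x, y) in edges for x in S for y in T)
print(find_non_isolated([1, 2, 3, 4], ["a", "b"], indep))  # -> 2
```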

2 Preliminaries

2.1 Notation

We write N for the set of all positive integers. For a positive integer n, we use [n] to denote the set {1, …, n}. We use log to denote the base-2 logarithm, and ln to denote the base-e logarithm.

We consider graphs G to be undirected, and write e(G) for the number of edges of G. For all v ∈ V(G), we use Γ(v) to denote the neighbourhood of v. For convenience, we shall generally present bipartite graphs G as a triple (X, Y, E) in which {X, Y} is a partition of V(G) and E ⊆ X × Y.

When stating quantitative bounds on running times of algorithms, we assume the standard word-RAM machine model with logarithmic-sized words. We assume that lists and functions in the problem input are presented in the natural way, that is, as an array using at least one word per entry. In general, we shall not be overly concerned with logarithmic factors in running times. We shall write f(n) = Õ(g(n)) when f(n) = O(g(n)·log^c g(n)) for some constant c, as n → ∞. Similarly, we write f(n) = Ω̃(g(n)) when f(n) = Ω(g(n)/log^c g(n)) for some constant c, as n → ∞.

We require our problem inputs to be given as finite binary strings, and write {0,1}* for the set of all such strings. A randomised approximation scheme for a function f: {0,1}* → N is a randomised algorithm that takes as input an instance x ∈ {0,1}* and a rational error tolerance ε > 0, and outputs a rational number N (a random variable depending on the “coin tosses” made by the algorithm) such that, for every instance x, Pr(|N − f(x)| ≤ ε·f(x)) ≥ 2/3. All of our approximate counting algorithms will be randomised approximation schemes.
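The 2/3 success probability is not essential: just as the proofs below drive down the decision oracle's error by repetition and a majority vote, a randomised approximation scheme can be amplified by taking the median of independent runs. A minimal sketch (the function names and the repetition constant are our own illustrative choices):

```python
import math, random, statistics

def amplify(scheme, instance, eps, failure_prob):
    """Boost a randomised approximation scheme from success probability 2/3 to
    1 - failure_prob by returning the median of O(log(1/failure_prob)) runs."""
    runs = 12 * math.ceil(math.log(1.0 / failure_prob)) + 1
    return statistics.median(scheme(instance, eps) for _ in range(runs))

# Toy scheme: returns the true value 100 with probability 2/3, garbage otherwise.
noisy = lambda inst, eps: 100 if random.random() < 2 / 3 else random.choice([0, 10**6])
print(amplify(noisy, None, 0.1, 0.01))  # -> 100 with probability >= 0.99
```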

2.2 Probability theory

We will require some well-known results from probability theory, which we collate here for reference. First, we state Chebyshev’s inequality.

Lemma 1.

Let X be a real-valued random variable with mean μ, and let a > 0. Then Pr(|X − μ| ≥ a) ≤ Var(X)/a².

We also use the following concentration result due to McDiarmid [13].

Lemma 2.

Suppose f is a real function of independent random variables X_1, …, X_m, and let μ = E[f(X_1, …, X_m)]. Suppose there exist c_1, …, c_m ≥ 0 such that for all i ∈ [m] and all pairs x, x′ differing only in the i-th coordinate, |f(x) − f(x′)| ≤ c_i. Then for all t > 0, Pr(|f(X_1, …, X_m) − μ| ≥ t) ≤ 2·exp(−2t² / Σ_{i=1}^{m} c_i²).

Finally, we use the following Chernoff bounds, proved in (for example) Corollaries 2.3-2.4 and Remark 2.11 of Janson, Łuczak and Rucinski [11].

Lemma 3.

Suppose X is a binomial or hypergeometric random variable with mean μ. Then:

  (i) for all ε with 0 < ε ≤ 3/2, we have Pr(|X − μ| ≥ εμ) ≤ 2e^{−ε²μ/3};

  (ii) for all x ≥ 7μ, we have Pr(X ≥ x) ≤ e^{−x}. ∎

3 From decision to approximate counting CNF-SAT

In this section we prove our results for the satisfiability of CNF formulae, formally defined as follows.

Problem: k-SAT.
Input: A k-CNF formula F.
Task: Decide if F is satisfiable.

Problem: #k-SAT.
Input: A k-CNF formula F.
Task: Compute the number of satisfying assignments of F.

We also define a technical intermediate problem. For all s ∈ N, we say that a matrix A is s-sparse if every row of A contains at most s non-zero entries. In the following definition, k and s are constants.

Problem: (k, s)-SparseSAT.
Input: An n-variable Boolean formula of the form Φ = F ∧ (Az = b). Here F is a k-CNF formula, A is an s-sparse m × n matrix over F_2 for some m ∈ N, and b ∈ F_2^m.
Task: Decide if Φ is satisfiable.

We define the growth rate of (k, s)-SparseSAT as the infimum over all c ≥ 1 such that (k, s)-SparseSAT has a randomised algorithm that runs in time O(c^n) and outputs the correct answer with probability at least 2/3. Our main reduction is encapsulated in the following theorem.

Theorem 9.

Let k, s ∈ N with s ≥ k ≥ 3, let c be the growth rate of (k, s)-SparseSAT, and suppose c < 2. Then there is a randomised approximation scheme for #k-SAT which, when given an n-variable formula and approximation error parameter ε, runs in time (c + o(1))^n · poly(1/ε).

Before we prove this theorem, let us derive Theorems 1 and 2 as immediate corollaries.

Theorem 1 (restated).

ETH is true if and only if there exist ε ∈ (0,1) and δ > 0 such that no randomised ε-approximation algorithm can run on n-variable instances of #3-SAT in time O(2^{δn}).

Proof.

First note that we may use any randomised approximation scheme for #3-SAT to decide 3-SAT with success probability at least 2/3 by taking ε = 1/2 and outputting ‘yes’ if and only if the result is non-zero. Thus the backward implication of Theorem 1 is immediate. Conversely, suppose ETH is false. A well-known result of Impagliazzo, Paturi and Zane [10, Lemma 10] then implies that for all constant k and all δ > 0, there is a randomised algorithm which can decide k-SAT in time O(2^{δn}) with success probability at least 2/3. Hence for all constant s, by the natural reduction from (3, s)-SparseSAT to max(3, s)-SAT, we obtain that the growth rate of (3, s)-SparseSAT is 1. The result therefore follows by Theorem 9. ∎

Theorem 2 (restated).

Let c > 1. Suppose that for all k, there is a randomised algorithm which runs on n-variable instances of k-SAT in time O(c^n). Then for all ε > 0 and all k, there is a randomised ε-approximation algorithm which runs on n-variable instances of #k-SAT in time O((c + o(1))^n).

Proof.

Suppose that c is as specified in the theorem statement. Then for all constant k and s, by the natural reduction from (k, s)-SparseSAT to max(k, s)-SAT, the growth rate of (k, s)-SparseSAT is at most c. Thus the result again follows by Theorem 9. ∎

3.1 Proof of Theorem 9

Given access to an oracle that decides satisfiability queries, we can compute the exact number of solutions of a formula with few solutions using a standard self-reducibility argument given below (see also [18, Lemma 3.2]).


Algorithm Sparse(F, M). Given an instance F of (k, s)-SparseSAT on n variables, a threshold M ∈ N, and access to an oracle for (k, s)-SparseSAT, this algorithm computes the number of satisfying assignments of F if this number is at most M; otherwise it outputs FAIL.

(S1)

(Query the oracle) If F is unsatisfiable, return 0.

(S2)

(No variables left) If F contains no variables, return 1.

(S3)

(Branch and recurse) Let F_0 and F_1 be the formulae obtained from F by setting the first free variable in F to 0 and 1, respectively. If Sparse(F_0, M) + Sparse(F_1, M) is at most M, then return this sum; otherwise abort the entire computation and return FAIL.
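A compact sketch of Sparse as just described, with the (k, s)-SparseSAT oracle abstracted as a callable reporting satisfiability. The exception-based FAIL, the helper names, and the assumption that instances expose num_free_vars are ours.

```python
class Fail(Exception):
    """Raised when the instance has more than M satisfying assignments."""

def sparse_count(phi, M, is_satisfiable, set_first_free_var):
    """Count the satisfying assignments of phi exactly, provided there are at
    most M of them; otherwise raise Fail. One oracle call per recursion node."""
    def rec(psi):
        if not is_satisfiable(psi):      # (S1) query the oracle
            return 0
        if psi.num_free_vars == 0:       # (S2) nothing left to set
            return 1
        total = rec(set_first_free_var(psi, 0)) + rec(set_first_free_var(psi, 1))
        if total > M:                    # (S3) too many solutions: give up
            raise Fail()
        return total
    return rec(phi)
```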

Lemma 4.

Sparse(F, M) is correct and runs in time at most O(Mn·|F|) with at most O(Mn) calls to the oracle. Moreover, each oracle query is a formula with at most n variables.

Proof.

Consider the recursion tree of Sparse on inputs F and M. At each vertex, the algorithm takes time at most O(|F|) to compute F_0 and F_1, and it issues a single oracle call. For convenience, we call the leaves of the tree at which Sparse returns 0 (in (S1)) or 1 (in (S2)) the 0-leaves and 1-leaves, respectively.

Let L be the number of 1-leaves. Each non-leaf is on the path from some 1-leaf to the root, as otherwise it would be a 0-leaf. There are at most L such paths, each containing at most n non-leaves, so there are at most Ln non-leaf vertices in total. Finally, every 0-leaf has a sibling which is not a 0-leaf, or its parent would be a 0-leaf, so there are at most Ln + L 0-leaves in total. Overall, the tree has O(Ln) vertices. An easy induction using (S3) implies that L ≤ M, so the running time and oracle access bounds are satisfied. Correctness likewise follows by a straightforward induction. ∎

When our input formula has too many solutions to apply Sparse efficiently, we first reduce the number of solutions by hashing. In particular, we use the same hash functions as Calabro et al. [2]; they are based on random sparse matrices over F_2 and are formally defined as follows:

Definition 5.

Let n, m, s ∈ N with s ≤ n. An (n, m, s)-hash is a random m × n matrix A over F_2 defined as follows. For each row i ∈ [m], let S_i be a uniformly random size-s subset of [n]. Then for all i ∈ [m] and all j ∈ S_i, we choose the values A_{i,j} independently and uniformly at random, and set all other entries of A to zero.
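A small sketch of sampling such a hash and checking the induced affine constraint, following Definition 5; the function names are ours.

```python
import random

def sample_hash(n, m, s):
    """Sample an (n, m, s)-hash: an m x n matrix over GF(2) in which each row
    is supported on a uniformly random s-subset of the columns, with uniform
    bits on that support and zeroes elsewhere."""
    A = []
    for _ in range(m):
        support = random.sample(range(n), s)
        row = [0] * n
        for j in support:
            row[j] = random.randrange(2)
        A.append(row)
    return A

def satisfies_hash(A, b, x):
    """Check the affine constraint Ax = b over GF(2) for an assignment x."""
    return all(sum(a * xi for a, xi in zip(row, x)) % 2 == bi
               for row, bi in zip(A, b))

A = sample_hash(n=6, m=2, s=3)
b = [random.randrange(2) for _ in range(2)]
print(satisfies_hash(A, b, [1, 0, 1, 1, 0, 0]))
```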

For intuition, suppose F is an n-variable k-CNF formula and S is the set of satisfying assignments of F, and that |S| is substantially larger than 2^m. It is easy to see that for all m and a uniformly random b ∈ F_2^m, if A is an (n, m, s)-hash, then the number N of satisfying assignments of F ∧ (Az = b) has expected value |S|/2^m. (See Lemma 6.) If N were concentrated around its expectation, then by choosing an appropriate value of m, we could reduce the number of solutions to at most the threshold used by Sparse, apply Sparse to count them exactly, then multiply the result by 2^m to obtain an approximation to |S|. This is the usual approach pioneered by Valiant and Vazirani [22].
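The expectation computation is short enough to record here (a sketch of the first claim of Lemma 6, in our notation): for any fixed x ∈ S, the constraint Ax = b holds with probability exactly 2^{−m}, because b is uniform on F_2^m and independent of A.

```latex
\mathbb{E}[N]
  = \sum_{x \in S} \Pr[Ax = b]
  = \sum_{x \in S} \sum_{y \in \mathbb{F}_2^m} \Pr[Ax = y]\,\Pr[b = y]
  = \sum_{x \in S} 2^{-m} \sum_{y \in \mathbb{F}_2^m} \Pr[Ax = y]
  = \frac{|S|}{2^m}.
```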

In the exponential setting, however, we can only afford to take s to be a constant, which means that N is not in general concentrated around its expectation. In [2], only very limited concentration was needed, but we require strong concentration. To achieve this, rather than counting satisfying assignments of a single formula F ∧ (Az = b), we will sum over many such formulae. We first bound the variance of an individual (n, m, s)-hash when s and the number of solutions are suitably large. Our analysis here is similar to that of Calabro et al. [2], although they are concerned with lower-bounding the probability that at least one solution remains after hashing and do not give bounds on variance.

Lemma 6.

Let and let . Suppose and . Let and suppose . Let be an -hash, and let be uniformly random and independent of . Let . Then and .

Proof.

For each , let be the indicator variable of the event that . Exposing implies that for all , and hence

We now bound the second moment. We have

(1)

It will be convenient to partition the terms of this sum according to Hamming distance, which we denote by . Write for the binary entropy function , write for its left inverse, and let . Then it follows immediately from (1) that

(2)

Denote the projection of any vector onto by . For any and any we have

Since whenever , it follows that

Since

has an equal number of odd- and even-sized subsets, on exposing 

it follows that

(3)

In particular, this implies . Since a ball of Hamming radius in contains at most points, it follows that

(4)

Now suppose . Since by definition,

Hence by (3), we have . It follows that

(5)

Combining (2), (4) and (5), we obtain

Since , we have . It follows that , and so . Since and , the result follows. ∎

We now state the algorithm we will use to prove Theorem 9, then use the lemmas above to prove correctness. In the following definition, is a rational constant with .


Algorithm ApproxCount(F, ε) (#k-SAT). Given an n-variable instance F of #k-SAT, a rational number ε with 0 < ε < 1, and access to an oracle for (k, s)-SparseSAT for some sufficiently large constant s, this algorithm computes a rational number N̂ such that with probability at least 2/3, |N̂ − #SAT(F)| ≤ ε·#SAT(F).

(A1)

(Brute-force on constant-size instances) If n is at most a suitable constant, solve the problem by brute force and return the result.

(A2)

(If there are few satisfying assignments, count them exactly) Apply Sparse to F and a suitable threshold M. Return the result if it is not equal to FAIL.

(A3)

(Try larger and larger equation systems) For each m = 1, 2, …, n:

  (a) (Set maximum number of solutions to find explicitly) Let M be the maximum number of solutions we are prepared to count explicitly at this value of m.

  (b) For each j ∈ {1, …, t}, where t is a suitable number of independent repetitions:

    • (Prepare query) Independently sample an (n, m, s)-hash A_j and a uniformly random vector b_j ∈ F_2^m. Let F_j := F ∧ (A_j z = b_j).

    • (Ask oracle using subroutine) Let N_j be the output of Sparse(F_j, M).

    • (Bad hash or too small) If N_j = FAIL, go to the next m in the outer for-loop.

    • Otherwise, record N_j and continue with the next j.

  (c) (Return our estimate) Return (2^m / t) · Σ_{j=1}^{t} N_j.

(A4)

(We failed, return garbage) Return an arbitrary value, say 0.
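The following sketch shows the intended control flow of the main loop (A3)-(A4). The thresholds M and t (and the sparsity s) are left as parameters, since calibrating them is exactly what the analysis is about; sparse_count(instance, M) is assumed to behave like the Sparse sketch above, returning the exact count or raising Fail, and conjoin and random_bits are hypothetical helpers.

```python
class Fail(Exception):
    """Signals the FAIL outcome of Sparse, as in the earlier sketch."""

def approx_count(phi, n, M, t, s, conjoin, sparse_count, sample_hash, random_bits):
    """Control flow of ApproxCount: for growing m, sum the exact counts of t
    independently hashed instances and rescale by 2^m / t."""
    for m in range(1, n + 1):                               # (A3) larger and larger systems
        total, failed = 0, False
        for _ in range(t):                                  # (A3b) inner repetitions
            A, b = sample_hash(n, m, s), random_bits(m)     # prepare query
            try:
                total += sparse_count(conjoin(phi, A, b), M)
            except Fail:                                    # bad hash or m too small
                failed = True
                break
        if not failed:
            return (2 ** m / t) * total                     # (A3c) rescaled estimate
    return 0                                                # (A4) give up
```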

Lemma 7.

is correct for all and runs in time at most . Moreover, the oracle is only called on instances with at most variables.

Proof.

Let be an instance of #-SAT on variables, and let . The running time of the algorithm is dominated by (A2) and (A3b). Clearly (A2) takes time at most by Lemma 4. In the inner for-loop, the number  controls the maximum running time we are willing to spend. In particular, again by Lemma 4, the running time for one iteration of the inner for-loop is if and otherwise it is bounded by but the remaining iterations of the inner loop are then skipped. It is easy to see that holds at any point of the inner loop, and hence the overall running time is as required. Likewise, the oracle access requirements are satisfied, so it remains to prove the correctness of .

If terminates at (A1) or (A2), then correctness is immediate. Suppose not, so that holds, and the set of solutions of satisfies . Let , and note that and . The formulas  are oblivious to the execution of the algorithm, so for the analysis we may view them as being sampled in advance. Let be the set of solutions to . For each  with , let be the following event:

Thus implies . By Lemma 6 applied to , , , and , for all and we have and . Since the ’s are independent, it follows by Lemma 1 that

Thus a union bound implies that, with probability at least , the event  occurs for all  with simultaneously. Suppose now that this happens. Then in particular, we have . But then, if reaches iteration , none of the calls to Sparse fail in this iteration and we have for all . Thus returns some estimate  in (A3c) while . Moreover, since occurs, this estimate satisfies as required. Thus behaves correctly with probability at least , and the result follows. ∎

Theorem 9 (restated).

Let k, s ∈ N with s ≥ k ≥ 3, let c be the growth rate of (k, s)-SparseSAT, and suppose c < 2. Then there is a randomised approximation scheme for #k-SAT which, when given an n-variable formula and approximation error parameter ε, runs in time (c + o(1))^n · poly(1/ε).

Proof.

If ε < 2^{−n}, then we solve the #k-SAT instance exactly by brute force in time O(2^n·poly(n)) ≤ poly(n, 1/ε), so suppose ε ≥ 2^{−n}. By the definition of the growth rate, there exists a randomised algorithm for (k, s)-SparseSAT with failure probability at most 1/3 and running time at most (c + o(1))^n. By Lemma 3(i), for any constant C > 0, by applying this algorithm O(n) times and outputting the most common result, we may reduce the failure probability to at most 2^{−Cn}. We apply ApproxCount to F and ε, using this procedure in place of the (k, s)-SparseSAT-oracle. If we take C sufficiently large, then by Lemma 7 and a union bound, the overall failure probability is at most 1/3, and the running time is as required. ∎

4 General fine-grained result

We first define the setting of our result.

Definition 8.

Let G = (X, Y, E) be a bipartite graph. We define the independence oracle of G to be the function which, given a set S ⊆ X ∪ Y, returns 1 if and only if S is an independent set in G. We define the adjacency oracle of G to be the function which, given vertices u ∈ X and v ∈ Y, returns 1 if and only if {u, v} ∈ E.

We think of edges of G as corresponding to witnesses of a decision problem. For example, in OV, they will correspond to pairs of orthogonal vectors. Thus calling the adjacency oracle will correspond to verifying a potential witness, and calling the independence oracle will correspond to solving the decision problem on a sub-instance. Our main result will be the following.

Theorem 10.

There is a randomised algorithm with the following properties:

  (i) the algorithm is given as input two disjoint sets X and Y and a rational number ε > 0;

  (ii) for some bipartite graph G with vertex classes X and Y, the algorithm has access to the independence and adjacency oracles of G;

  (iii) the algorithm returns a rational number N̂ such that |N̂ − e(G)| ≤ ε·e(G) holds with probability at least 2/3;

  (iv) the algorithm runs in time at most Õ(|X| + |Y|)·poly(1/ε), counting each oracle call as a single step;

  (v) the algorithm makes at most poly(log(|X| + |Y|), 1/ε) calls to the independence oracle.

Throughout the rest of the section, we take G = (X, Y, E) to be the bipartite graph of the theorem statement and write n = |X| + |Y|. Moreover, for all Y′ ⊆ Y, we write G[X ∪ Y′] for the subgraph of G induced by X ∪ Y′, and e(Y′) for its number of edges.

We briefly compare the performance of the algorithm of Theorem 10 with that of the standard approach of sampling to deal with dense instances combined with brute-force counting to deal with sparse instances (as used in Thurley [18]). Suppose ε is constant, that we can evaluate the independence oracle in time T for some T ≥ n, that we can evaluate the adjacency oracle in time O(1), and that the input graph contains m edges for some m ≥ 1. Then sampling requires Ω(n²/m) time, and brute-force enumeration of the sort used in Sparse (p. 3.1) requires Ω(mT) time. The worst case arises when m ≈ n/√T, in which case this approach requires Ω(n√T) time. However, the algorithm of Theorem 10 requires only Õ(n + T) time in all cases. Thus it has only polylogarithmic overhead over deciding whether the graph contains any edges at all.

Similarly to Section 3, we shall obtain our approximation by repeatedly (approximately) halving the number of edges in the graph until few remain, then counting the remaining edges exactly. For the halving step, rather than hashing, if our current graph is induced by X ∪ Y′ for some Y′ ⊆ Y, then we shall simply delete half the vertices in Y′ chosen uniformly at random. However, if any single vertex in X is incident to a large proportion of the remaining edges, then the edge count of the resulting graph will not be well-concentrated around its expectation and so this approach will fail. We now prove that this is essentially the only obstacle.
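A sketch of one halving step and of the balancedness check that motivates Definition 9 below; the edge-counting helpers are brute force purely so the sketch is executable, whereas the real algorithm only has oracle access, and the names are ours.

```python
import random

def halve(Y_prime):
    """Keep each vertex of Y' independently with probability 1/2."""
    return {y for y in Y_prime if random.random() < 0.5}

def degrees_into(edges, Y_prime):
    """Degree of each left-hand vertex into Y' (brute force, for illustration)."""
    deg = {}
    for x, y in edges:
        if y in Y_prime:
            deg[x] = deg.get(x, 0) + 1
    return deg

def is_balanced(edges, Y_prime, zeta):
    """Check that no single left-hand vertex carries more than a zeta-fraction
    of the edges into Y'; only then is the halving step well-concentrated."""
    deg = degrees_into(edges, Y_prime)
    total = sum(deg.values())
    return total > 0 and max(deg.values()) <= zeta * total

edges = [(0, "a"), (0, "b"), (1, "b"), (2, "c"), (3, "d")]
Y_prime = {"a", "b", "c", "d"}
print(is_balanced(edges, Y_prime, 0.5), len(halve(Y_prime)))
```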

Definition 9.

Given ζ > 0, we say a non-empty set Y′ ⊆ Y is ζ-balanced if every vertex in X has degree at most ζ·e(Y′) in G[X ∪ Y′].

Lemma 10.

Let , suppose is -balanced, and suppose . Let be a random subset formed by including each vertex of independently with probability . Then with probability at least , we have and