1. Introduction
Algorithms for direct access make it possible to access the answers to a query on a database essentially as if they were materialized and stored in an array: given an index $i$, the algorithm returns the $i$-th answer (or an out-of-bounds error) in very little time. To make this possible, the algorithm first runs a preprocessing phase on the database that then allows answering arbitrary access queries efficiently. As the number of answers to a database query may be orders of magnitude larger than the size of the database, the goal is to avoid materializing all the answers during preprocessing and to only simulate the array.
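As a toy illustration of simulating the array (our own sketch, not one of the cited algorithms), consider direct access for a simple Cartesian product query: preprocessing only sorts the relations, yet each answer can be returned in constant time as if all answers were stored sorted in an array.

```python
# Hypothetical toy sketch (ours): direct access for the Cartesian product
# query Q(x, y) :- R(x), S(y) under the lexicographic order (x, y).
# Preprocessing only sorts the two relations; the |R| * |S| answers are
# never materialized, yet access(i) returns the i-th answer as if they
# were stored in a sorted array.

def preprocess(R, S):
    return sorted(R), sorted(S)

def access(Rs, Ss, i):
    if not 0 <= i < len(Rs) * len(Ss):
        raise IndexError("out of bounds")
    q, r = divmod(i, len(Ss))          # which block of answers, offset inside it
    return (Rs[q], Ss[r])

Rs, Ss = preprocess({2, 1}, {30, 10, 20})
assert [access(Rs, Ss, i) for i in range(6)] == [
    (1, 10), (1, 20), (1, 30), (2, 10), (2, 20), (2, 30)]
```

For richer queries the index arithmetic is of course far more involved, but the interface is exactly this one.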
The direct access task (previously also called $i$-th answer and random access) was introduced by Bagan et al. (Bagan et al., 2008). They also observed that this task can be used for uniform sampling (by first counting the answers and drawing a random index) and for enumerating all query answers (by consecutively accessing all indices). They devised an algorithm with linear preprocessing time and constant time per access call on average for a large class of queries (first-order queries) on databases of bounded degree. Direct access algorithms were also considered for queries in monadic second-order logic on databases of bounded treewidth (Bagan, 2009). To handle general databases and still expect extremely efficient algorithms, another approach is to restrict the class of queries instead. Follow-up work gave linear-preprocessing and logarithmic-access algorithms for subclasses of conjunctive queries over general databases (Brault-Baron, 2013; Carmeli et al., 2020; Keppeler, 2020). There, it was also explained how direct access can be used to sample without repetitions (Carmeli et al., 2020).
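The reductions just mentioned can be sketched as follows (our own illustration; the direct-access oracle is backed by a plain list purely for demonstration, whereas in the actual algorithms no such list is ever materialized):

```python
import random

# Sketch (ours, not from the cited works): any direct-access interface,
# i.e., a count() and an access(i) oracle, immediately yields uniform
# sampling and enumeration of the query answers.

answers = [("a", 1), ("a", 2), ("b", 1)]          # stand-in query result
count = lambda: len(answers)
access = lambda i: answers[i]

def sample():
    """Uniform sample: draw a random index, then one access call."""
    return access(random.randrange(count()))

def enumerate_all():
    """Enumeration: consecutive access calls; the delay is one access call."""
    return [access(i) for i in range(count())]

assert sample() in answers
assert enumerate_all() == answers
```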
Though all of the algorithms mentioned above simulate storing the answers in a lexicographic order (whether they state it or not), one shortcoming they have in common is that the specific lexicographic order cannot be chosen by the user; rather, it depends on the query structure. Allowing the user to specify the order is desirable because direct access can then be used for additional tasks that are sensitive to the order, such as median finding and boxplot computation. Progress in this direction was recently made by Carmeli et al. (Carmeli et al., 2021), who identified which combinations of a conjunctive query and a lexicographic order can be accessed with linear preprocessing time and logarithmic access time. They identified an easy-to-check substructure of the query, called a disruptive trio, whose (non-)existence distinguishes the tractable cases (w.r.t. the time guarantees we mentioned) from the intractable ones. In particular, for acyclic join queries, they suggested an algorithm that works for any lexicographic order that has no disruptive trio with respect to the query. If the join query is also self-join free, they proved conditional lower bounds stating that for all other variable orders, direct access with polylogarithmic access time requires superlinear preprocessing time. These hardness results assume the hardness of Boolean matrix multiplication. Given the known hardness of self-join free cyclic joins, if we also assume the hardness of hyperclique detection in hypergraphs, this gives a dichotomy for all self-join free join queries and lexicographic orders: they are tractable if and only if the query is acyclic with no disruptive trio with respect to the order.
What happens if the query and order we want to compute fall on the intractable side of the dichotomy? This question was left open by previous work, and we aim to understand how much preprocessing is needed to achieve polylogarithmic access time for each combination of query and order. To this end, we introduce disruption-free decompositions of a query with respect to a variable order. These can be seen as hypertree decompositions of the query, induced by the desired variable order, that resolve incompatibilities between the order and the query. Practically, these decompositions specify which relations should be joined during preprocessing in order to obtain an equivalent acyclic join query with no disruptive trios. We can then run the known direct access algorithm from (Carmeli et al., 2021), which has linear preprocessing time, on the result of this preprocessing to get an algorithm for the query and order at hand with logarithmic access time. The cost of our preprocessing phase is therefore dominated by the time it takes to join the selected relations. We define the incompatibility number of a query and order and show that the preprocessing time of our solution is polynomial, where the exponent is this number. Intuitively, the incompatibility number is $1$ when the query is acyclic and the order is compatible, and it grows as the incompatibility between them grows.
Next, we ask whether our solution can be improved. Though we can easily show that no other decomposition can be better than the specific decomposition we propose, a better algorithm might still be achievable using an alternative technique. Thus, we set out to prove conditional lower bounds. Such lower bounds show hardness independently of a specific algorithmic technique; instead, they assume that some known problem cannot be solved significantly faster than by the state-of-the-art algorithms. We show that the incompatibility number corresponds, in a sense, to the number of leaves of the largest star query that can be embedded in the given query. We then prove lower bounds for queries that allow embedding stars through a reduction from online Set-Disjointness. In this problem, we are given during preprocessing several collections of subsets of a universe, and we then need to answer queries that specify one subset from each collection and ask whether these subsets are disjoint. On a technical level, in case the query is acyclic, the link between these hardness results and the existence of disruptive trios is that the latter correspond exactly to the possibility of embedding a star with two leaves.
Using known hardness results for Set-Disjointness, our reduction shows that the acyclic hard cases in the known dichotomy need at least quadratic preprocessing time, unless both the 3SUM Conjecture and the APSP Conjecture fail. These are both central, well-established conjectures in fine-grained complexity theory; see, e.g., the recent survey (Williams, 2018). To obtain tighter lower bounds when the incompatibility number is larger, we show the hardness of Set-Disjointness through a reduction from Zero-$k$-Clique. The Zero-$k$-Clique Conjecture postulates that deciding whether a given edge-weighted $n$-node graph contains a $k$-clique of total edge weight $0$ has no algorithm running in time $O(n^{k-\varepsilon})$ for any $\varepsilon > 0$. For $k = 3$ this conjecture is implied by the 3SUM Conjecture and the APSP Conjecture, so it is very believable. For $k > 3$ the Zero-$k$-Clique Conjecture is a natural generalization of the case $k = 3$, and it was recently used in several contexts (Lincoln et al., 2018; Abboud et al., 2018; Bringmann et al., 2020; Abboud et al., 2014; Backurs et al., 2016; Backurs and Tzamos, 2017). Assuming the Zero-Clique Conjecture, we prove that the preprocessing time of our decomposition-based algorithm is (near-)optimal.
To conclude, our main result is as follows: a join query and an ordering of its variables with incompatibility number $\kappa$ admit a lexicographic direct access algorithm with preprocessing time $O(|D|^{\kappa})$ (up to polylogarithmic factors) and logarithmic access time. Moreover, if the query is self-join free and the Zero-Clique Conjecture holds for all $k$, there is no lexicographic direct access algorithm for this query and order with preprocessing time $O(|D|^{\kappa - \varepsilon})$ and polylogarithmic access time for any $\varepsilon > 0$.
As we developed our lower bound results, we noticed that our techniques can also be used in the context of constant-delay enumeration of query answers. We show that, assuming the Zero-Clique Conjecture, the preprocessing of any constant-delay enumeration algorithm for the $k$-variable Loomis-Whitney join requires roughly at least $|D|^{1 + 1/(k-1)}$ time, which tightly matches the trivial algorithm in which the answers are materialized during preprocessing using a worst-case optimal join algorithm (Ngo et al., 2012, 2018; Veldhuizen, 2014). From the lower bound for Loomis-Whitney joins, we then infer the hardness of other cyclic joins using a construction by Brault-Baron (Brault-Baron, 2013). Specifically, we conclude that the self-join free join queries that allow constant-delay enumeration after linear preprocessing are exactly the acyclic ones, unless the Zero-Clique Conjecture fails for some $k$. This dichotomy had previously been established based on the hardness of hyperclique detection in hypergraphs (Brault-Baron, 2013), see also (Berkholz et al., 2020), and we give more evidence for it here by showing that it also holds under a different complexity assumption.
2. Preliminaries
2.1. Databases and Queries
A join query is an expression of the form $Q(\vec{x}) \,{:}{-}\, R_1(\vec{x}_1), \ldots, R_\ell(\vec{x}_\ell)$, where each $R_i$ is a relation symbol, $\vec{x}_1, \ldots, \vec{x}_\ell$ are tuples of variables, and $\vec{x}$ is a tuple of all the variables occurring in $\vec{x}_1, \ldots, \vec{x}_\ell$. Each $R_i(\vec{x}_i)$ is called an atom of $Q$. A query is called self-join free if no relation symbol appears in two different atoms. If the same relation symbol appears in two different atoms $R(\vec{x}_i)$ and $R(\vec{x}_j)$, then $\vec{x}_i$ and $\vec{x}_j$ must have the same arity. A conjunctive query is defined like a join query, with the exception that $\vec{x}$ may contain any subset of the query variables. We say that the query variables that do not appear in $\vec{x}$ are projected.
The input to a query is a database $D$, which assigns to every relation symbol $R$ in the query a relation $R^D$: a finite set of tuples of constants, each tuple having the same arity as $R$. The size $|D|$ of $D$ is defined as the total number of tuples in all of its relations. An answer to a query $Q$ over a database $D$ is determined by a mapping $h$ from the variables of $Q$ to the constants of $D$ such that $h(\vec{x}_i) \in R_i^D$ for every atom $R_i(\vec{x}_i)$ in $Q$. The answer is then $h(\vec{x})$. The set of all answers to $Q$ over $D$ is denoted $Q(D)$.
A lexicographic order for a join query $Q$ is specified by a permutation $L$ of the query variables. We assume that databases come with an order on their constants. Then, $L$ defines an order on $Q(D)$: the order between two answers is the order between their assignments to the first variable in $L$ on which their assignments differ.
We consider three types of tasks, where the problem is defined by a query and the input is a database. Testing is the task where the user specifies a tuple of constants, and we need to determine whether this tuple is an answer to the query over the database. Enumeration is the task of listing all query answers. We measure the efficiency of an enumeration algorithm with two parameters: the time before the first answer, which we call preprocessing, and the time between successive answers, which we call delay. Finally, direct access in lexicographic order is a task defined by a query and an order where, after a preprocessing phase, the user can specify an index $i$ and expects the $i$-th answer in that order, or an out-of-bounds error if there are fewer than $i$ answers. We call the time it takes to provide an answer given an index the access time.
2.2. Hypergraphs
A hypergraph $H = (V, E)$ consists of a finite set $V$ of vertices and a set $E$ of edges, i.e., subsets of $V$. Given a set $S \subseteq V$, the hypergraph induced by $S$ is $H[S] = (S, E')$ with $E' = \{e \cap S \mid e \in E, \ e \cap S \ne \emptyset\}$. A superhypergraph of $H$ is a hypergraph $H' = (V, E')$ such that $E \subseteq E'$. By $N_H(v)$ we denote the set of neighbors of a vertex $v$ in $H$, i.e., $N_H(v) = \{u \ne v \mid u, v \in e \text{ for some } e \in E\}$. For $S \subseteq V$, we define $N_H(S) = \bigcup_{v \in S} N_H(v) \setminus S$. In these notations we sometimes leave out the subscript $H$ when it is clear from the context.
A hypergraph is called acyclic if we can eliminate all of its vertices by applying the following two rules iteratively:

if there is an edge $e$ that is completely contained in another edge $e'$, delete $e$ from $E$, and

if there is a vertex $v$ that is contained in a single edge $e$, delete $v$ from $H$, i.e., delete $v$ from $V$ and from $e$.
An order in which the vertices can be eliminated by the above procedure is called an elimination order for $H$. Note that it might be possible to apply the above rules to several vertices or edges at the same moment. In that case, the order in which they are applied does not change the final result of the process.
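The elimination procedure above (known as the GYO reduction) can be sketched as follows (our own code; a hypergraph is acyclic if and only if the procedure empties it):

```python
# Sketch (ours) of the elimination procedure: repeatedly delete covered
# edges and vertices contained in a single edge; the hypergraph is
# acyclic iff nothing non-empty remains.

def is_acyclic(edges):
    edges = [set(e) for e in edges]
    changed = True
    while changed:
        changed = False
        # Rule 1: delete an edge that is contained in another edge.
        for e in edges:
            if any(e is not f and e <= f for f in edges):
                edges.remove(e)
                changed = True
                break
        # Rule 2: delete every vertex contained in a single edge.
        for v in (set().union(*edges) if edges else set()):
            holders = [e for e in edges if v in e]
            if len(holders) == 1:
                holders[0].discard(v)
                changed = True
    return all(not e for e in edges)

# The path R(a, b), S(b, c) is acyclic; the triangle is not.
assert is_acyclic([{"a", "b"}, {"b", "c"}])
assert not is_acyclic([{"a", "b"}, {"b", "c"}, {"a", "c"}])
```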
A fractional edge cover of $H$ is a mapping $\gamma \colon E \to [0, \infty)$ such that for every $v \in V$ we have $\sum_{e \in E \,:\, v \in e} \gamma(e) \ge 1$. The weight of $\gamma$ is defined as $\sum_{e \in E} \gamma(e)$. The fractional edge cover number $\rho^*(H)$ is the minimum weight, where the minimum is taken over all fractional edge covers of $H$. We remark that $\rho^*(H)$ and an optimal fractional edge cover of $H$ can be computed efficiently by linear programming. For any set $S$ of vertices, we denote by $\rho^*(S)$ the fractional edge cover number of the induced hypergraph $H[S]$.

Every join query $Q$ has an underlying hypergraph $H(Q)$, where the vertices of $H(Q)$ correspond to the variables of $Q$ and the edges of $H(Q)$ correspond to the variable scopes of the atoms of $Q$. We use $Q$ and $H(Q)$ interchangeably in our notation.
2.3. Known Algorithms
Here, we state some results that we will use in the remainder of this paper. All running time bounds are in the word-RAM model with $O(\log n)$-bit words and unit-cost operations.
We use the following notions from (Carmeli et al., 2021). Given a join query $Q$ and a lexicographic order $L$, a disruptive trio consists of three variables $x_1, x_2, x_3$ such that $x_3$ appears after $x_1$ and $x_2$ in $L$, the variables $x_1$ and $x_2$ do not appear together in an atom of $Q$, but $x_3$ shares (different) atoms with both $x_1$ and $x_2$.
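The definition can be checked mechanically; a brute-force sketch (ours, with hypothetical atoms and orders) follows:

```python
from itertools import combinations

# Brute-force sketch (ours) of the disruptive-trio test: x3 comes after
# x1 and x2 in the order, x1 and x2 share no atom, yet x3 shares an atom
# with each of them (necessarily different atoms, since x1 and x2 share none).

def has_disruptive_trio(atoms, order):
    pos = {v: i for i, v in enumerate(order)}
    def share(u, v):
        return any(u in a and v in a for a in atoms)
    for x1, x2 in combinations(order, 2):
        for x3 in order:
            if (pos[x3] > max(pos[x1], pos[x2])
                    and not share(x1, x2) and share(x1, x3) and share(x2, x3)):
                return True
    return False

# Path query R(x, z), S(z, y): putting z last creates a disruptive trio,
# while asking for z between x and y does not.
atoms = [{"x", "z"}, {"z", "y"}]
assert has_disruptive_trio(atoms, ["x", "y", "z"])
assert not has_disruptive_trio(atoms, ["x", "z", "y"])
```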
Theorem 1 ((Carmeli et al., 2021)).
If a join query $Q$ and a lexicographic order $L$ have no disruptive trio, then lexicographic direct access for $Q$ and $L$ can be solved with linear preprocessing time and logarithmic access time.
A celebrated result by Atserias, Grohe and Marx (Atserias et al., 2013) shows that a join query with fractional edge cover number $\rho^*$ can, on any database $D$, have at most $|D|^{\rho^*}$ answers, and for every query this bound is tight for some databases. The upper bound is made algorithmic by so-called worst-case optimal join algorithms.
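For a concrete standard instance (not taken from this text): the triangle query $Q(x,y,z) \,{:}{-}\, R(x,y), S(y,z), T(x,z)$ has fractional edge cover number $3/2$, witnessed by assigning weight $1/2$ to each atom, so the bound specializes to

```latex
% AGM bound instantiated for the triangle query with cover weights 1/2 each:
|Q(D)| \;\le\; |R^D|^{1/2} \cdot |S^D|^{1/2} \cdot |T^D|^{1/2} \;\le\; |D|^{3/2}.
```

The bound is attained, for example, when $R^D$, $S^D$, and $T^D$ each consist of all pairs over a domain of size $\sqrt{n}$, giving $n^{3/2}$ answers on relations of size $n$.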
2.4. Fine-Grained Complexity
Fine-grained complexity theory aims to find the exact exponent of the best possible algorithm for any problem; see, e.g., (Williams, 2018) for a recent survey. Since unconditional lower bounds of this form are currently far out of reach, fine-grained complexity provides conditional lower bounds that hold when assuming a conjecture about some central, well-studied problem.
Some important reductions and algorithms in fine-grained complexity are randomized, i.e., algorithms are allowed to make random choices during their run and may return wrong results with a certain probability; see, e.g., (Mitzenmacher and Upfal, 2017) for an introduction. Throughout the paper, when we write “randomized algorithm” we always mean a randomized algorithm with success probability at least $2/3$. It is well known that the success probability can be boosted to any $1 - \delta$ by repeating the algorithm $O(\log(1/\delta))$ times and returning the majority result. In particular, we can assume the success probability to be at least $1 - 1/\mathrm{poly}(n)$, at the cost of only a logarithmic factor. Our reductions typically worsen the success probability by some amount, but using boosting it can be improved back to $2/3$; we do not make this explicit in our proofs. We stress that randomization is only used in our hardness reductions, while all our algorithmic results are deterministic.
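The majority-vote boosting described above can be sketched as follows (our own code; the "flaky" base algorithm is a deterministic stand-in for a randomized algorithm with success probability $2/3$, erring on every third invocation):

```python
from collections import Counter

# Sketch (ours) of boosting by repetition and majority vote.

def boosted(algorithm, x, repetitions):
    votes = Counter(algorithm(x) for _ in range(repetitions))
    return votes.most_common(1)[0][0]

# Deterministic stand-in for a 2/3-correct randomized decision algorithm:
# it answers "is x even?" but gives the wrong answer on every third call.
calls = 0
def flaky_is_even(x):
    global calls
    calls += 1
    wrong = (calls % 3 == 0)
    return (x % 2 == 0) != wrong      # XOR: flip the answer when wrong

# In any 5 consecutive calls at most 2 are wrong, so majority of 5 is correct.
assert boosted(flaky_is_even, 4, 5)
assert not boosted(flaky_is_even, 7, 5)
```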
We will base our lower bounds on the following problem, which is defined for every fixed constant $k \ge 3$.
Definition 3 ().
In the Zero-$k$-Clique problem, given an $n$-node graph with integer edge weights bounded by $n^c$ in absolute value for some constant $c$, the task is to decide whether there are $k$ vertices $v_1, \ldots, v_k$ such that they form a clique (i.e., $\{v_i, v_j\}$ is an edge for all $i \ne j$) and their total edge weight is $0$ (i.e., $\sum_{1 \le i < j \le k} w(\{v_i, v_j\}) = 0$). In this case we say that $v_1, \ldots, v_k$ form a zero-clique.
We remark that by hashing techniques we can assume the constant $c$ to be small.
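For concreteness, here is a naive brute-force sketch (ours) of the case $k = 3$, running in $O(n^3)$ time; the conjecture below asserts that no $O(n^{3-\varepsilon})$-time algorithm exists:

```python
from itertools import combinations

# Brute-force sketch (ours) of Zero-3-Clique: check every triple of
# vertices for a triangle of total edge weight 0.

def has_zero_triangle(n, weights):
    """weights: dict mapping frozenset({u, v}) -> weight; absent = no edge."""
    for a, b, c in combinations(range(n), 3):
        sides = [frozenset(p) for p in ((a, b), (b, c), (a, c))]
        if all(s in weights for s in sides) and sum(weights[s] for s in sides) == 0:
            return True
    return False

w = {frozenset({0, 1}): 5, frozenset({1, 2}): -2, frozenset({0, 2}): -3,
     frozenset({2, 3}): 1}
assert has_zero_triangle(4, w)                       # triangle 0-1-2 sums to 0
assert not has_zero_triangle(3, {frozenset({0, 1}): 1})
```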
The following conjectures have been used in several places, see e.g. (Lincoln et al., 2018; Abboud et al., 2018; Bringmann et al., 2020; Abboud et al., 2014; Backurs et al., 2016; Backurs and Tzamos, 2017). The first conjecture is for a fixed $k$; the second postulates hardness for all $k$.
Conjecture 0 (Zero-$k$-Clique Conjecture).
For no constant $\varepsilon > 0$ does Zero-$k$-Clique have a randomized algorithm running in time $O(n^{k - \varepsilon})$.
Conjecture 0 (Zero-Clique Conjecture).
For every $k \ge 3$, the Zero-$k$-Clique Conjecture is true.
It is known that the Zero-$3$-Clique Conjecture, also called the Zero-Triangle Conjecture, is implied by two other famous conjectures: the 3SUM Conjecture (Williams and Williams, 2013) and the APSP Conjecture (Williams and Williams, 2018). Since we do not use these conjectures directly in this paper, we do not formulate them here and refer the interested reader to the survey (Williams, 2018).
We remark that instead of Zero-$k$-Clique, some references work with the Exact-Weight-$k$-Clique problem, where we are additionally given a target weight $t$ and want to find a clique of weight $t$. Both problems are known to have the same time complexity up to constant factors, see e.g. (Abboud et al., 2018).
A related problem is Min-$k$-Clique, where we are looking for the $k$-clique of minimum weight. The Min-$k$-Clique Conjecture postulates that this problem also cannot be solved in time $O(n^{k - \varepsilon})$ for any $\varepsilon > 0$. It is known that the Min-$k$-Clique Conjecture implies the Zero-$k$-Clique Conjecture, see e.g. (Abboud et al., 2018).
3. Direct-Access Algorithm
In this section, we give an algorithm that, for every join query and desired lexicographic order, provides direct access to the query result on an input database. In particular, we propose to add new atoms to the query such that the resulting query has no disruptive trios with respect to the order. Then, any direct-access algorithm that assumes acyclicity and the absence of disruptive trios can be applied, provided we can compute a database for the new query that yields the same answers. We show that the new query is essentially a generalized hypertree decomposition of optimal fractional hypertree width out of all decompositions with the required properties. This shows that the solution suggested here is the best we can achieve using a decomposition; it does not mean, however, that we cannot do better using a different method, a question studied in the later sections of this paper.
3.1. Disruption-Free Decompositions
We describe a process that iteratively eliminates disruptive trios in a query by adding new atoms.
Definition 1 (Disruption-Free Decomposition).
Let $Q$ be a join query and $L = (x_1, \ldots, x_n)$ an ordering of its variables. Let $H_{n+1}$ be the hypergraph of $Q$, and for $i = n, \ldots, 1$ construct the hypergraph $H_i$ from $H_{i+1}$ by adding an edge $e_i := \{x_i\} \cup \{x_j \in N_{H_{i+1}}(x_i) \mid j < i\}$. The disruption-free decomposition of $Q$ and $L$ is then defined to be $H_1$.
Example 2 ().
Consider the query with the order of its variables. Its graph is shown in Figure 1. In the first step of the construction of Definition 1, we add the edge . Similarly, in the second step, we add . For the third step, note that due to the edges we have added before. Out of these neighbors, only and come before in the order. So we add the edge . Finally, for , we add the edge and for the singleton edge .
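The iterative construction can be sketched in code as follows (our own sketch; the path query and order used for illustration are hypothetical, not the example from the text):

```python
# Sketch (ours) of the iterative construction: process the variables from
# last to first; for each variable x, add an edge consisting of x and its
# current neighbors that precede x in the order. Later steps see the edges
# added by earlier steps.

def disruption_free_decomposition(edges, order):
    edges = [frozenset(e) for e in edges]
    pos = {v: i for i, v in enumerate(order)}
    for x in reversed(order):
        nbrs = set().union(*(e for e in edges if x in e)) - {x}
        edges.append(frozenset({x} | {y for y in nbrs if pos[y] < pos[x]}))
    return set(edges)

# Hypothetical path query R(a, b), S(b, c) with order (a, c, b): the last
# variable b has preceding neighbors {a, c}, so the edge {a, b, c} is
# added, which resolves the disruptive trio (a, c, b).
decomp = disruption_free_decomposition([{"a", "b"}, {"b", "c"}], ["a", "c", "b"])
assert frozenset({"a", "b", "c"}) in decomp
```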
Proposition 3 ().
The disruption-free decomposition of a query $Q$ and an order $L$ is an acyclic superhypergraph of the hypergraph of $Q$ without any disruptive trios with respect to $L$.
Proof.
Denote the hypergraph constructed by the procedure of Definition 1 by $H$. To show that $H$ is acyclic, we claim that the reverse of $L$ is an elimination order. Consider a vertex $x_i$ that we would like to eliminate. By induction, all vertices that come after $x_i$ in the order have already been eliminated at this point. Note that all remaining edges containing $x_i$ are contained in the edge $e_i$ that was added for $x_i$ in the construction of Definition 1. Also, $e_i$ contains at least $x_i$, so it cannot have been eliminated before $x_i$. Thus, all edges containing $x_i$ except $e_i$ can be eliminated. Afterwards, we can eliminate $x_i$. It follows that, as claimed, all vertices can be eliminated and $H$ is acyclic. If $H$ had a disruptive trio whose last variable in $L$ is $x_i$, then the other two variables would already be neighbors of $x_i$ when handling $x_i$ in the procedure of Definition 1, so they would have become neighbors through $e_i$ and the disruptive trio would have been resolved. So, $H$ has no disruptive trio. ∎
It will be useful in the remainder of this section to have a non-iterative characterization of disruption-free decompositions. For every $i$, let $V_i$ be the set of vertices in the connected component of $x_i$ in $H[\{x_i, \ldots, x_n\}]$, where $H$ is the hypergraph of $Q$. In particular, we have that $x_i \in V_i$.
Lemma 4 ().
The edges introduced in Definition 1 are $e_i = \{x_i\} \cup \{x_j \in N(V_i) \mid j < i\}$ for $i = 1, \ldots, n$.
Example 5 ().
Proof of Lemma 4.
Denote . We prove that by induction on decreasing . The claim holds by definition for since and since is the hypergraph of .
For the induction step, we first show . Let . If , we know that by definition. Otherwise, . If , we have that because . Otherwise, is a neighbor of in from Definition 1 because they were both neighbors of some in with . In this case, and we also conclude that .
It is left to show the opposite direction. Let . If , by definition. Otherwise, with . Then, when removing from , it splits into connected components, one of them containing a neighbor of . Consider the smallest variable in such a connected component. Then, this connected component is , and we have that and . By induction, and both appear in . Thus is a neighbor of when creating , and so . ∎
3.2. The Algorithm
The idea of our direct access algorithm is to use the disruption-free decomposition. More precisely, we define a new query $Q'$ from $Q$ such that the hypergraph of $Q'$ is the disruption-free decomposition of $Q$. Then, given an input database $D$ for $Q$, we compute a new database $D'$ for $Q'$ such that $Q(D) = Q'(D')$. Since $Q'$ has no disruptive trio, we can then use the algorithm from (Carmeli et al., 2021) on $Q'$ and $D'$ to allow direct access. A key component in making this approach efficient is computing $D'$ quickly. To measure its complexity, we introduce the following notion.
Definition 6 (Incompatibility Number).
Let $Q$ be a join query with hypergraph $H$ and let $L$ be an ordering of its variables. Let $e_1, \ldots, e_n$ be the edges added in Definition 1. We call $\max_{i} \rho^*(e_i)$ the incompatibility number of $Q$ and $L$.
Note that we assume that queries have at least one atom, so the incompatibility number of any query and any order is at least $1$. The incompatibility number can also be seen as the fractional width of the disruption-free decomposition, and we can show that this decomposition has the minimum fractional width out of all decompositions with the properties we need; see Section 4 for details. We now show that the incompatibility number lets us state an upper bound for the computation of a database with the properties claimed above.
Theorem 7 ().
Given a join query $Q$ and an ordering $L$ of its variables with incompatibility number $\kappa$, lexicographic direct access with respect to $L$ can be achieved with $O(|D|^{\kappa})$ preprocessing time and logarithmic access time.
Proof.
We construct $Q'$ by adding, for every edge $e_i$ introduced in Definition 1, an atom $A_i$ whose variables are those in $e_i$. We compute a relation $R_i$ for $A_i$ as follows: let $\gamma$ be a fractional edge cover of $e_i$ of weight at most $\kappa$. Let $e'_1, \ldots, e'_m$ be the edges of $Q$ that have positive weight in $\gamma$ and let $A'_1, \ldots, A'_m$ be the corresponding atoms. We set $R_i := \pi_{e_i}(Q_i(D))$ where $Q_i$ is the join of $A'_1, \ldots, A'_m$. Then clearly, for every tuple in $Q(D)$, the tuple we get by projecting to $e_i$ lies in $R_i$. As a consequence, we can construct $D'$ by adding all relations $R_i$ to $D$ to get $Q(D) = Q'(D')$. Moreover, the hypergraph of $Q'$ is the disruption-free decomposition of the hypergraph of $Q$, so we can apply the algorithm from (Carmeli et al., 2021) for direct access.
It remains to show that all relations $R_i$ can be constructed in time $O(|D|^{\kappa})$. To this end, consider the join query $Q_i$ from before. Its variables are covered by $\gamma$ with weight at most $\kappa$, so we can use a worst-case optimal join algorithm from Theorem 2 to compute $Q_i(D)$ in time $O(|D|^{\kappa})$. ∎
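The preprocessing step of joining the covering atoms and projecting onto the variables of the new atom can be sketched as follows (our own code; a naive nested-loop join stands in for the worst-case optimal join algorithm, and the relations are hypothetical):

```python
from itertools import product

# Sketch (ours): compute the relation for a new atom with variables
# `target` by joining the covering atoms and projecting onto `target`.
# A worst-case optimal join would replace this naive nested loop.

def join_and_project(atoms, target):
    """atoms: list of (variable tuple, set of rows); returns the projection."""
    result = set()
    for combo in product(*(rel for _, rel in atoms)):
        assignment = {}
        consistent = True
        for (variables, _), row in zip(atoms, combo):
            for var, val in zip(variables, row):
                if assignment.setdefault(var, val) != val:
                    consistent = False        # conflicting values for var
        if consistent:
            result.add(tuple(assignment[v] for v in target))
    return result

R = (("x", "z"), {(1, 7), (2, 7), (2, 8)})
S = (("z", "y"), {(7, 4), (8, 5)})
assert join_and_project([R, S], ("x", "y", "z")) == {(1, 4, 7), (2, 4, 7), (2, 5, 8)}
```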
4. Disruption-Free Decompositions and Fractional Hypertree Width
In this section, we relate disruption-free decompositions to fractional hypertree decompositions and the incompatibility number to the fractional width.
Let $H = (V, E)$ be a hypergraph. A hypertree decomposition of $H$ is defined to be an acyclic hypergraph $H' = (V, B)$ such that for every edge $e \in E$ there is a $b \in B$ with $e \subseteq b$.^1 The sets $b \in B$ are called the bags of the decomposition. The fractional width of $H'$ is defined as $\max_{b \in B} \rho^*(b)$. The fractional hypertree width of $H$ is defined to be the minimal fractional width of any hypertree decomposition of $H$. (^1 Hypertree decompositions appear in previous work under several names, such as generalized hypertree decompositions and fractional hypertree decompositions, depending on the way the bags get covered. Usually, the definitions are more involved, as they also contain a tree structure with the bags as nodes and an edge cover of the bags. This is a simplified definition containing only what we need here.)
Note that, with this definition, the disruption-free decomposition of a query $Q$ and an order $L$ is a hypertree decomposition of $Q$ whose fractional width is the incompatibility number of $Q$ and $L$.
Observation 1 ().
Let $Q$ be a join query and let $L$ be an order of its variables. Then the incompatibility number of $Q$ and $L$ is at least the fractional hypertree width of $Q$.
Of course there are other hypertree decompositions of that we could have used for a direct access algorithm. The only property that we need is that has no disruptive trio for . If this is the case, then inspection of the proof of Theorem 7 shows that we get an alternative algorithm whose preprocessing depends on the fractional width of . We will see next that this approach cannot yield better running times than Theorem 7. To this end, we prove that any alternative decomposition contains the disruptionfree decomposition in a way.
Lemma 2 ().
Let $Q$ be a join query, let $L$ be an ordering of its variables, and let $H'$ be a hypertree decomposition of $Q$ with no disruptive trio with respect to $L$. Then, for every edge $e$ of the disruption-free decomposition of $Q$ and $L$, there exists a bag $b$ of $H'$ such that $e \subseteq b$.
Proof.
Let $H$ be the hypergraph of $Q$. Assume, by way of contradiction, that there is an edge of the disruption-free decomposition of $Q$ and $L$ that is not contained in any bag of $H'$. Since $H'$ is a hypertree decomposition of $Q$, for every edge of $H$ the decomposition contains, by definition, a bag containing it. Thus, a non-covered edge must be one of the added edges $e_i = \{x_i\} \cup P_i$, where $P_i$ consists of the preceding neighbors of $x_i$ at the moment of creation of the edge; here, by 'preceding' we mean the neighbors that come before $x_i$ in $L$. Consider the non-covered edge $e_i$ with the largest $i$. We show next that every pair of vertices in $e_i$ appears in a common bag of $H'$. It is known that acyclic hypergraphs are conformal (Brault-Baron, 2016); that is, any set of pairwise neighbors must be contained in an edge. Thus, $H'$ has a bag containing $e_i$, which is a contradiction.
Since $e_i$ is the non-covered edge with the largest $i$, we know that $H'$ contains, for every edge that was present at the moment of creation of $e_i$, a bag containing it. Thus, for every variable $y$ in $P_i$, there is a bag of $H'$ that contains both $y$ and $x_i$. Now consider two variables $y, y' \in P_i$. If $y$ and $y'$ are not in a common bag of $H'$, then $y, y', x_i$ form a disruptive trio of $H'$, which is a contradiction to the conditions of the lemma. So $y$ and $y'$ must appear in a common bag. It follows that the vertices of $e_i$ are all pairwise neighbors in $H'$, as we wanted to show. ∎
Since the fractional edge cover number is monotone with respect to set inclusion, we directly get that the disruptionfree decomposition is optimal in the following sense.
Corollary 3 ().
Let $Q$ be a join query and let $L$ be an order of its variables. The disruption-free decomposition of $Q$ and $L$ has the minimal fractional width over all hypertree decompositions of $Q$ that have no disruptive trio with respect to $L$.
Note that, in general, finding an optimal fractional hypertree decomposition is known to be hard (Gottlob et al., 2021). However, in our case, decompositions are only useful if they eliminate disruptive trios. If we restrict the decompositions in that way, it is much easier to find a good decomposition than in the general case: since the optimal decomposition is induced by the order, we get it in polynomial time using the procedure of Definition 1.
5. Set-Disjointness-Based Hardness for Direct Access
In this section, we show lower bounds for lexicographically ranked direct access for all self-join free queries and all variable orders. We first relate direct access for general queries to the special case of star queries, which we introduce in Section 5.1. Then we show that lower bounds for direct access to star queries follow from lower bounds for Set-Disjointness, see Section 5.3. Later, in Section 6, we prove a lower bound for Set-Disjointness based on the Zero-Clique Conjecture.
5.1. From Star Queries to Direct Access
A crucial role in our reduction is played by the star query with $k$ leaves, $Q_\star^k(x_1, \ldots, x_k, z) \,{:}{-}\, R_1(x_1, z), \ldots, R_k(x_k, z)$, as well as its variant in which the variable $z$ is projected away, $Q_\star^{k,\mathrm{proj}}(x_1, \ldots, x_k) \,{:}{-}\, R_1(x_1, z), \ldots, R_k(x_k, z)$. We will always denote their input database by $D$. We say that a variable order is bad for $Q_\star^k$ if $z$ is the last variable in it.
Lemma 1 ().
Let $Q$ be a self-join free join query and let $L$ be an ordering of its variables. Let $\kappa$ be the incompatibility number of $Q$ and $L$ and assume $\kappa > 1$. If there is an $\varepsilon > 0$ such that for all databases $D$ there is a direct access algorithm for $Q$ and $L$ with preprocessing time $O(|D|^{\kappa - \varepsilon})$ and polylogarithmic access time, then there are $k \ge 2$ and $\varepsilon' > 0$ such that for all databases there is a direct access algorithm for $Q_\star^k$ with respect to a bad ordering with preprocessing time $O(|D|^{k - \varepsilon'})$ and polylogarithmic access time.
Proof.
We will use the concept of fractional independent sets. A fractional independent set in a hypergraph $H = (V, E)$ is a mapping $\mu \colon V \to [0, 1]$ such that for every $e \in E$ we have $\sum_{v \in e} \mu(v) \le 1$. The weight of $\mu$ is defined to be $\sum_{v \in V} \mu(v)$. The fractional independent set number $\alpha^*(H)$ is defined as the maximum weight, where the maximum is taken over all fractional independent sets of $H$. We also use the notation $\alpha^*(S) := \alpha^*(H[S])$ for vertex sets $S$. Using linear programming duality, we have $\alpha^*(S) = \rho^*(S)$ for every hypergraph and every vertex set $S$. We use the same notation on join queries, meaning that we apply it to the underlying hypergraph.
We use the notation and as in Lemma 4. Let be such that . Then, by what we said above, we know that there is a fractional independent set such that . Since is the solution of a linear program with integer coefficients and weights, all values are rational numbers. Let be the least common multiple of the denominators of . Now define a new weight function by .
Let . We will show how to simulate random access to by embedding it into . To this end, every takes roles, that is we assign many of the variables of to . We do this in such a way that for all pairs , whenever a variable comes before in , then for every role of and every role of we have that . Moreover, for every we have that there is a unique variable whose role is . Note that , so this distribution of roles is possible. Now add as an additional role the role for every . Note that when doing so, the variable can have both some roles and the role .
We now construct a database for , given an input database for . For every variable , denote by the active domain in . We fix all variables of that have no role to a constant . To simplify the reasoning, we will in the remainder ignore these variables, so only the variables in remain. For each variable we define the domain as follows: let be the roles of given in the order defined by . Then the domain of is . We order lexicographically. Intuitively, takes the values of all its roles.
To define the relations of , consider two cases: if none of the variables in an atom of plays the role , the relation is simply . Otherwise, let be the roles played by variables in . Then we compute the subjoin where . Note that there is an injection that consists of “packing” the values for the different variables into the variables of according to the roles in the following sense: for every tuple , maps each variable with role set to the tuple we get from by deleting the coordinates not in . The relation is then simply .
Note that the construction gives a bijection between and : in all tuples in , all variables that have the role must take the same value on the corresponding coordinate by construction, because is connected. Moreover, for every atom of , there is an atom of containing (not necessarily different) variables and such that has the role and has the role . By construction it then follows that on the corresponding components the values and take values that are consistent with the relation . This directly gives the desired bijection. Also, given an answer to , we can easily compute the corresponding answer to by “unpacking”, i.e., inverting . Hence, a direct access algorithm for gives a direct access algorithm for . Moreover, by construction, the correct order is maintained by .
It remains to analyze the running time of this algorithm. We first show that the constructed instance can be computed reasonably efficiently.
Claim 1 ().
All relations in can be computed in time .
Proof.
We show first that for every atom its variables in total have at most roles different from . To see this, let be the variables of . Then the overall number of nonroles is
where the inequality comes from the fact that is a fractional independent set.
Now for atoms in which no variable has the role , the claim follows directly. For the other atoms, the computation time is essentially that of computing the join , which can be computed in time , since it involves at most atoms. ∎
It follows that the size of is at most . If for some and all there is a direct access algorithm for and with preprocessing time and access time , then by running this algorithm on the constructed instance we get direct access to with preprocessing time:
where we used and thus for sufficiently small . For any desired , by setting the access time of this algorithm is . This finishes the proof. ∎
The situation of Lemma 1 simplifies for acyclic queries.
Corollary 2 ().
Let be an acyclic self-join-free join query and an order of with incompatibility number . Then is an integer, and if has a direct access algorithm with preprocessing time and access time , then so does .
Proof.
If is acyclic and is its hypergraph, then contains a (nonfractional) independent set of size for every . This is because in acyclic hypergraphs the independent set number and the edge cover number coincide (Durand and Mengel, 2014). Hence, the fractional independent set in the proof of Lemma 1 takes w.l.o.g. only values in . Thus, and the proof of Lemma 1 simplifies to give the claimed result. ∎
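The fact that the independent set number and the edge cover number coincide on acyclic hypergraphs can be checked by brute force on a small example (the path-shaped hypergraph below is an illustrative choice):

```python
from itertools import combinations

# Brute-force check, on one small acyclic hypergraph, of the fact used
# above: the independent set number equals the edge cover number.
# Vertices = variables, hyperedges = atoms of a path-shaped join query.
edges = [{"x1", "x2"}, {"x2", "x3"}, {"x3", "x4"}]
vertices = set().union(*edges)

def independent_set_number(vertices, edges):
    """Size of a largest vertex set no two of whose members share an edge."""
    vs = sorted(vertices)
    for k in range(len(vs), 0, -1):
        for cand in combinations(vs, k):
            if all(len(e & set(cand)) <= 1 for e in edges):
                return k
    return 0

def edge_cover_number(vertices, edges):
    """Size of a smallest set of edges whose union is all vertices."""
    for k in range(1, len(edges) + 1):
        for cand in combinations(range(len(edges)), k):
            if set().union(*(edges[i] for i in cand)) == vertices:
                return k
    return None

# {x1, x3} is a maximum independent set; {x1,x2} and {x3,x4} a minimum cover.
assert independent_set_number(vertices, edges) == edge_cover_number(vertices, edges) == 2
```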
5.2. From Star Queries to Projected Stars
Proposition 3 ().
Let be a bad order on . If there is an algorithm that solves the direct access problem for and with preprocessing time and access time , then there is an algorithm for the testing problem for with preprocessing time and query time .
Proof.
If an answer is in , then answers of the form (for any ) form a contiguous sequence in , since the position of has the lowest priority in . Thus, a simple binary search using the direct access algorithm allows us to test whether such a tuple exists, using a logarithmic number of direct access calls. ∎
5.3. From SetDisjointness to Projected Stars
We observe that the testing problem for the projected star query is equivalent to the following problem:
Definition 4 ().
In the Set-Disjointness problem, we are given an instance consisting of a universe and families of subsets of . We denote the sets in family by . The task is to preprocess into a data structure that can answer queries of the following form: Given indices , decide whether .
We denote the number of sets by and the input size by . We call the universe size.
Lemma 5 ().
If there is a testing algorithm for with preprocessing time and access time , then Set-Disjointness has an algorithm with preprocessing time and query time .
Proof.
For each family , consider the relation
Let be the resulting database, and note that . Then for every query of the Set-Disjointness problem, we have if and only if . ∎
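The construction in this proof can be sketched as follows (a minimal sketch; family contents and variable names are illustrative assumptions):

```python
# Sketch of the reduction: for each family of sets, build a binary
# relation R_i = {(j, x) : x in the j-th set of family i}. A tuple of
# indices is an answer of the projected star query (with the shared
# "center" variable projected away) iff the selected sets intersect,
# i.e. iff the Set-Disjointness query answers "no".
families = [
    [{1, 2, 3}, {4, 5}],   # family 1: two sets
    [{2, 6}, {4, 7}],      # family 2: two sets
]

relations = [{(j, x) for j, s in enumerate(fam) for x in s}
             for fam in families]

def disjoint(indices):
    """Set-Disjointness query via the star-query semantics:
    is there a center value x with (j_i, x) in every R_i?"""
    witnesses = set.intersection(
        *({x for (j, x) in rel if j == ji}
          for rel, ji in zip(relations, indices)))
    return len(witnesses) == 0

assert disjoint((0, 0)) is False  # {1,2,3} and {2,6} share 2
assert disjoint((1, 0)) is True   # {4,5} and {2,6} are disjoint
```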
6. Hardness of Set Disjointness
In this section, we will show lower bounds for Set-Disjointness, as defined in Definition 4. In combination with the reductions from the previous section, this will give us lower bounds for direct access in Section 7. The main result of this section is the following.
Theorem 1 ().
If there is and such that for all there is an algorithm for Set-Disjointness that, given an instance , answers queries in time after preprocessing time , then the Zero-Clique Conjecture is false.
Case k = 2
Kopelowitz, Pettie, and Porat (Kopelowitz et al., 2016) established hardness of 2-Set-Disjointness under the 3SUM Conjecture, and Vassilevska Williams and Xu (Williams and Xu, 2020) showed that the same hardness holds under the Zero-3-Clique Conjecture (and thus also under the APSP Conjecture).
Theorem 2 (Corollary 3.12 in (Williams and Xu, 2020)).
Assuming the Zero-3-Clique Conjecture (or the 3SUM or APSP Conjecture), for any and any , on 2-Set-Disjointness instances with sets, each of size , and universe size , the total time of preprocessing and queries cannot be .
This implies Theorem 1 for k = 2.
Corollary 3 ().
If there is such that for all there is an algorithm for 2-Set-Disjointness that, given an instance , answers queries in time after preprocessing time , then the Zero-3-Clique Conjecture is false (and thus also the 3SUM and APSP Conjectures are false).
Proof.
Assume for the sake of contradiction that we can solve 2-Set-Disjointness with preprocessing time and query time , for . Set , and consider an instance consisting of sets, each of size , so the input size is . Over queries, the total query time is . The preprocessing time on is . Hence, the total time for the preprocessing and queries on is for . This contradicts Theorem 2 (for parameters ). ∎
In particular, by combining our previous reductions, for with a bad order there is no algorithm with polylogarithmic access time and preprocessing , assuming any of the three conjectures mentioned above.
Case k ≥ 3
In the following, we show how to rule out preprocessing time for Set-Disjointness for any . To this end, we make use of the Zero-Clique Conjecture. Our chain of reductions closely follows the proof of the k = 2 case by Vassilevska Williams and Xu (Williams and Xu, 2020), but also uses additional ideas required for the generalization to larger k.
6.1. Set-Intersection
We start by proving a lower bound for the following problem of listing many elements in a set intersection.
Definition 4 ().
In the Set-Intersection problem, we are given an instance consisting of a universe and families of subsets of . We denote the sets in family by . The task is to preprocess into a data structure that can answer queries of the following form: Given indices and a number , compute the set if it has size at most ; if it has size more than , then compute any elements of this set.
We denote the number of sets by and the input size by . We call the universe size.
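A naive baseline can make the query semantics concrete (a sketch only; the family contents are illustrative, and no sublinear preprocessing is attempted):

```python
# Naive baseline illustrating the Set-Intersection query semantics:
# given one set index per family and a budget t, report the whole
# intersection if it has at most t elements, and otherwise any t of
# its elements. Family contents are illustrative.
families = [
    [{1, 2, 3, 4}, {9}],
    [{2, 3, 4, 5}, {9}],
    [{3, 4, 5, 6}, {8}],
]

def query(indices, t):
    """Compute the intersection of the selected sets, truncated to t elements."""
    inter = set.intersection(*(families[i][j] for i, j in enumerate(indices)))
    return inter if len(inter) <= t else set(sorted(inter)[:t])

assert query((0, 0, 0), 5) == {3, 4}     # full intersection fits the budget
assert len(query((0, 0, 0), 1)) == 1     # truncated to any 1 element
assert query((1, 1, 1), 5) == set()      # disjoint selection
```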
The main result of this section shows that Set-Intersection is hard in the following sense. Later we will use this result to show hardness for Set-Disjointness.
Theorem 5 ().
Assuming the Zero-Clique Conjecture, for every constant there exists a constant such that no randomized algorithm solves Set-Intersection on universe size and in preprocessing time and query time .
We prove Theorem 5 in the rest of this section.
Preparations.
To start our reductions, first note that finding a zero-clique in a general graph is equivalent to finding a zero-clique in a complete k-partite graph. This can be shown using a reduction that duplicates the nodes and the edges times, and replaces non-existing edges by edges of very large weight.
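This reduction can be sketched as follows (a minimal sketch; the function name, parameterization, and the concrete large weight are illustrative assumptions):

```python
# Sketch of the reduction: from an arbitrary weighted graph, build a
# complete k-partite graph on k copies of the vertex set. Pairs that
# were non-edges (or two copies of the same vertex) get a huge weight
# `big`, so they can never participate in a zero-weight k-clique.
def to_complete_k_partite(n, k, weights, big):
    """weights: dict {(u, v): w} with u < v for existing edges.
    Returns weights between class-i vertex u and class-j vertex v, i < j."""
    new = {}
    for i in range(k):
        for j in range(i + 1, k):
            for u in range(n):
                for v in range(n):
                    key = (min(u, v), max(u, v))
                    w = weights.get(key, big) if u != v else big
                    new[((i, u), (j, v))] = w
    return new

# Triangle {0, 1, 2} with weights summing to zero; (0, 3) is a non-edge.
w = {(0, 1): 1, (1, 2): 2, (0, 2): -3, (1, 3): 0, (2, 3): 0}
g = to_complete_k_partite(4, 3, w, big=10**9)
# The zero-weight triangle survives across the three color classes...
assert g[((0, 0), (1, 1))] + g[((1, 1), (2, 2))] + g[((0, 0), (2, 2))] == 0
# ...while the non-edge (0, 3) is blocked by the huge weight.
assert g[((0, 0), (1, 3))] == 10**9
```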
Observation 6 ((Abboud et al., 2018)).
If the Zero-Clique Conjecture is true, then it is also true restricted to complete k-partite graphs.
Due to this observation, we can assume that we are given a complete k-partite graph with vertices and with color classes and a weight function on the edges. We denote by the weight of the edge from to , and more generally by the total weight of the clique .
For our construction it will be convenient to assume that the edge weights lie in a finite field. Recall that in the Zero-Clique problem we can assume that all weights are integers between and , for some constant that may depend on . We first compute a prime number p between and by a standard algorithm: pick a random number in that interval, check whether it is a prime, and repeat if it is not. By the prime number theorem, a random number in that interval is a prime with probability . It follows that in expected time we find a prime p in that interval.
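The prime-finding subroutine can be sketched as follows (a sketch of the stated approach; the deterministic Miller-Rabin base set is a standard choice valid far beyond 64-bit integers, not something fixed by the paper):

```python
import random

def is_prime(n):
    """Deterministic Miller-Rabin: this fixed base set is known to be
    correct for all n below roughly 3.3 * 10**24."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime_in(lo, hi):
    """Pick uniform random numbers in [lo, hi] until one is prime; by the
    prime number theorem this needs O(log hi) tries in expectation."""
    while True:
        cand = random.randint(lo, hi)
        if is_prime(cand):
            return cand

p = random_prime_in(10**6, 2 * 10**6)
assert is_prime(p) and 10**6 <= p <= 2 * 10**6
```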
Having found the large prime p, we consider all edge weights as given in the finite field F_p. Note that p is bigger than the sum of weights of any clique in the graph, so the cliques of weight zero are the same over the integers and over F_p. Thus, in the remainder we assume all arithmetic to be done over F_p.
We now want to spread the edge weights evenly over F_p. To this end, we define a new weight function by choosing the following values independently and uniformly at random from F_p:
- one value ,
- for all and all a value .
We then define a new weight function by setting for any and any and :
(1) 
Note that in every clique the sum contains every term