Independent Range Sampling, Revisited Again

03/19/2019 ∙ by Peyman Afshani, et al.

We revisit the range sampling problem: the input is a set of points where each point is associated with a real-valued weight. The goal is to store them in a structure such that, given a query range and an integer k, we can extract k independent random samples from the points inside the query range, where the probability of sampling a point is proportional to its weight. This line of work was initiated in 2014 by Hu, Qiao, and Tao and was later followed up by Afshani and Wei. The first work mostly studied the unweighted but dynamic version of the problem in one dimension, whereas the second considered the static weighted problem in one dimension as well as the unweighted problem in 3D for halfspace queries. We offer three main results and some interesting insights that were missed by the previous work: we show that it is possible to build efficient data structures for range sampling queries if we allow the query time to hold in expectation (the first result), or to obtain efficient worst-case query bounds by allowing the sampling probability to be approximately proportional to the weight (the second result). The third result is a conditional lower bound showing that, essentially, one of the previous two concessions is needed. For instance, for 3D range sampling queries, the first two results give efficient data structures with near-linear space and polylogarithmic query time, whereas the lower bound shows that with near-linear space the worst-case query time must be close to n^2/3, ignoring polylogarithmic factors. To the best of our knowledge, this is the first such major gap between the expected and worst-case query time of a range searching problem.


1 Introduction

In range searching, the goal is to store a set of points in a data structure such that, given a query range, we can answer certain questions about the subset of points inside the query range. The difficulty of a range searching problem thus depends primarily on the shape of the query as well as the types of questions that the data structure is able to answer. These range searching questions have been studied extensively and we refer the reader to the survey by Agarwal and Erickson [3] for a deeper review of range searching problems.

Let us for a moment fix a particular query shape. For example, assume we are given a set to preprocess and that at query time we will be given a query halfplane. The simplest type of question is an emptiness query, where we simply want to report whether the query range is empty or not. Within the classical literature of range searching, the most general (and thus the most difficult) variant is semigroup range searching, where each point is assigned a weight from a semigroup and at query time the goal is to return the sum of the weights of the points in the query range. The restriction of the weights to a semigroup is to disallow subtraction. As a result, semigroup range searching data structures can answer a diverse set of questions. Other classical variants of range searching lie between the emptiness and the semigroup variants. In our example, emptiness queries can be trivially solved with space and query time, whereas the semigroup variant can only be solved with query time using space. Finally, the third important variant, range reporting, where the goal is to output all the points in the query range, is often close to the emptiness variant in terms of difficulty. E.g., halfplane queries can be answered in time, where is the size of the output, using space.

Sampling queries.

Let be a large set of points that we would like to preprocess for range searching queries. Consider a query range . Classical range searching solutions can answer simple questions such as producing the list of points inside , or counting them. However, we immediately hit a barrier if we are interested in more complex questions: e.g., what if we want to know what a “typical” point in looks like? Or what if we are curious about the distribution of the data in ? In general, doing more complex data analysis requires extracting the list of all the points inside , but this can be an expensive operation. For questions of this type, as well as many other similar questions, it is very useful to be able to extract a (relatively) small random sample from the subset of points inside . In fact, range sampling queries were considered more than two decades ago within the context of database systems [9, 4, 7, 13, 12].

Indeed, the entire field of sample complexity, which provides the basis for statistics and machine learning, shows how any statistical quantity can be estimated from an i.i.d. sample of the data. Range sampling allows this machinery to be applied directly to the data in a query range. However, many of these classical solutions fail to hold in the worst case as they use R-trees or quadtrees, whose performance depends on the distribution of the coordinates of the input points (which can be highly unfavorable). Some do not even guarantee that the samples extracted in the future will be independent of the samples extracted in the past. For example, sometimes asking the same query twice would return the same samples.

The previous results.

Given a set of weights, one can sample a weight with probability proportional to its value using the well-known “alias method” of A. J. Walker [16]. This method uses linear space and can sample a weight in worst-case constant time.
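Walker's alias method is concrete enough to sketch. The following Python is a standard textbook implementation (the Vose variant), not code from the paper: after linear-time preprocessing, each draw costs one random bucket choice plus one biased coin flip.

```python
import random

def build_alias_table(weights):
    """Preprocess positive weights into an alias table (Walker's method).
    O(n) time and space; sampling is then O(1) worst case."""
    n = len(weights)
    total = sum(weights)
    # Scale so an "average" bucket holds probability exactly 1.
    prob = [w * n / total for w in weights]
    alias = [0] * n
    small = [i for i, p in enumerate(prob) if p < 1.0]
    large = [i for i, p in enumerate(prob) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        alias[s] = l                 # top up deficient bucket s from item l
        prob[l] -= 1.0 - prob[s]
        (small if prob[l] < 1.0 else large).append(l)
    return prob, alias

def alias_sample(prob, alias):
    """Draw one index with probability proportional to its weight."""
    i = random.randrange(len(prob))
    return i if random.random() < prob[i] else alias[i]
```

Each query uses exactly one uniform bucket and one coin flip, which is what gives the worst-case constant sampling time cited above.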

The rigorous study of independent range sampling was initiated by Hu et al. [10]. They emphasized the requirement that any random sample extracted from the data structure should be independent of all the other samples. They studied the one-dimensional version of the problem for an unweighted point set and presented a linear-sized data structure that could extract random samples in time and could be updated in time as well. A few years later, Afshani and Wei [2] revisited the problem and solved the one-dimensional version for weighted points: they presented a linear-size data structure that could answer queries in time, where the additive term refers to the running time of a predecessor query (often , but sometimes faster; e.g., if the input is indexed by an array, then the predecessor query can be answered trivially in time). They also studied 3D halfspace queries, but for unweighted points. Their main result was an optimal data structure of linear size that could answer queries in time.

Our results.

We provide general results for the independent range sampling problem on weighted input (wIRS), and specialized results for 3D halfspace queries. We show a strong link between wIRS and the range max problem: namely, we show that the range max problem is at least as hard as wIRS, and we also provide a general formulation for solving wIRS using range max. For halfspace queries in 3D, our framework gives a structure that uses space and has query time. We improve the space complexity to when the ratio of the weights is . This solution uses rejection sampling, so it only provides an expected query time bound. To compensate, we provide another solution that has worst-case query time, but allows the points to be sampled within a factor of their desired probability, and may ignore points that would be sampled with probability less than for . This structure requires space, and has a worst-case query time of .

Finally, we show a conditional lower bound when we enforce worst-case query time and exact sampling probabilities, in what we call the separated algebraic decision tree (SAD) model. This model allows any decision tree structure that compares random bits to algebraic functions of a set of input weights. In this model, we show wIRS is as hard as the range sum problem, which is conjectured to be hard. In particular, if the best known solution to the range sum problem for halfspaces in 3D is optimal, then the wIRS problem would require query time if it uses near-linear space. This provides the first such separation between expected and worst-case query time for a range searching problem that we are aware of.

2 A Randomized Data Structure

In this section, we show that if we allow the query bound to hold in expectation, then the range sampling problem can be solved under some general circumstances. Intuitively, we show that we need two ingredients: one, a data structure for range maximum queries, and two, a data structure that can sample from a weighted set of points under the assumption that the weights are within a constant factor of each other. Furthermore, with an easy observation, we can also show that the range sampling problem is at least as hard as the range maximum problem. We consider the input as a general set system . We assume the input is a set of data elements (e.g., points) and we consider queries to be elements of a set of ranges, where each is a subset of . The set is often given implicitly; for example, if is a set of points in , could be the set of all where is a halfspace in 3D. Note that our model of computation is the real RAM model.

[The Range Maximum Problem] Let be a data set, s.t., each element is assigned a real-valued weight . The goal is to store in a structure, s.t., given a range , we can find the element in with the maximum weight.

Given a weighted set , for any subset , we denote by the sum . First we observe that range sampling is at least as hard as the range maximum problem. [] Assume we can solve range sampling queries on input and for any weight assignment using space and query time where the query only returns one sample. Then, given the set and a weight function , we can store in a data structure of size such that given a query , we can find the data element in with the maximum weight in time, with high probability.

Proof.

Let be the list of input elements in sorted by their weight function . We then assign the element a new weight , for a large enough constant , and store them in the data structure for the range sampling problem. This takes space. Given a query for the range maximum problem, we sample one element from the range . We have and the total weight of all the other points in can be at most , i.e., we find the element with the maximum weight with high probability. ∎
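The reduction in the proof above can be illustrated concretely: if the element of weight-rank i is reassigned the weight c^i for a large constant c, then the maximum dominates the total weight, so a single proportional sample returns it with high probability. A minimal Python sketch under that assumption (the constant c = 100 and the linear-scan sampler are illustrative stand-ins, not the paper's data structure):

```python
import random

def reweight_for_max(weights, c=100.0):
    """Replace the i-th smallest weight by c**i. A single proportional
    sample then returns the maximum-weight element with probability
    about 1 - 1/c."""
    order = sorted(range(len(weights)), key=lambda i: weights[i])
    new_w = [0.0] * len(weights)
    for rank, i in enumerate(order):
        new_w[i] = c ** rank
    return new_w

def proportional_sample(weights):
    """One weighted draw by inversion (stand-in for a sampling query)."""
    x = random.random() * sum(weights)
    for i, w in enumerate(weights):
        x -= w
        if x <= 0:
            return i
    return len(weights) - 1
```

With c = 100 the maximum carries all but roughly a 1/100 fraction of the total weight, matching the "with high probability" claim in the proof.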

Next, we show that weighted range sampling can be obtained from a combination of a range maximum data structure and a particular form of weighted range sampling data structure for almost uniform weights.

Let be an input to the range sampling problem. Assume, we have a structure for the range maximum queries that uses space and with query time of . Furthermore, assume for any subset we can build a structure that uses space and given a query , it does the following: it can return for a subset with the property that , and and furthermore, the structure can extract random samples from in time.

Then, we can answer range sampling queries using space and with expected query time of .

Proof.

Let . We store in a data structure for the range maximum queries. We partition into subsets in the following way. We place the element with the largest weight in and then we add to any element whose weight is at least and then recurse on the remaining elements. Observe that for all we have . Thus, the weight function is almost uniform on each . We store in a data structure .
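The greedy partition into almost-uniform weight classes described above can be sketched in a few lines. The factor 2 below stands in for the constant lost in extraction and is an assumption; any constant factor preserves the "almost uniform" property the proof needs.

```python
def weight_classes(items, factor=2.0):
    """Greedily partition (weight, id) pairs into classes: each class is
    headed by the largest remaining weight, and absorbs every element
    whose weight is at least head/factor. Within a class, the max/min
    weight ratio is thus at most `factor`."""
    classes = []
    for w, x in sorted(items, reverse=True):
        if classes and w >= classes[-1][0][0] / factor:
            classes[-1].append((w, x))
        else:
            classes.append([(w, x)])
    return classes
```

Because class heads shrink geometrically, there are only logarithmically many classes in the spread of the weights, which is what the space and query analysis later relies on.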

Next, we build prefix-sum information over the total weight of the (disjoint) union of all subsets with , that is, . Consider a query .

Step 1: Use the Range Maximum Structure.

Issue a single range-max query on for the query range . Let be the answer to the range-max query, and assume .

Step 2: top-level alias structure.

Having found , we identify the smallest index such that the maximum weight for and the minimum weight for satisfy . As the weights in the sets decrease geometrically, we have . Then, for each with , we use the data structure to identify the set such that contains the set . This returns the values , and the total running time of this step is .

We build a top-level alias structure on values: all the values , , as well as the value . Let . This can be done in time as .

Step 3: Extracting samples.

To generate random samples from , we first sample a value using the top-level alias structure. This can result in two different cases:

Case 1.

It returns the value . Let . In this case, we sample an element from the set by building an alias structure on in time. The probability of sampling an element is set exactly to . Note that these probabilities do not add up to one, which means the sampling might fail and we might not return any element. If this happens, we go back to the top-level alias structure and try again. Notice that contains at least one element from , and that for any , we have , which implies . Thus, this case can happen with probability at most , meaning that even if we spend time to answer the query, the expected query time is .

Case 2.

It returns a value , for . We place into a list for now. At some later point (to be described) we will extract a sample from . If happens to be inside , then we return ; otherwise, the sampling fails and we go back to the top-level structure.

We iterate until queries have been pooled. Then, we issue them in batches to data structure , . Ignoring the failure events in case (2), processing a batch of queries will take time. Notice that each iteration of the above procedure will succeed with a constant probability: case (1) is very unlikely (happens with probability less than ) and for case (2) observe that we have and thus each query will succeed with constant probability. As a result, in expectation we only issue a constant number of batches of size to extract random samples.

It remains to show that we sample each element with the correct probability. The probability of reaching case (1) is equal to . Thus, the probability of sampling an element is equal to . The same holds in case (2): the probability of sampling an element is . Thus, conditioned on the event that the sampling succeeds, each element is sampled with the correct probability. ∎

3 3D Halfspace Sampling

3.1 Preliminaries

In this section, we consider random sampling queries for 3D halfspaces. But we first need to review some preliminaries. [] Let be a set of points in . Let be a partition of into subsets. We can store in a data structure of size such that given a query halfspace , we can find the smallest index such that in time.

Proof.

We consider the dual problem, where is the set of hyperplanes dual to , and the goal is to store them in a data structure such that given a query point , we can find the smallest index such that there exists a halfspace of that passes below .

Let and . We compute the lower envelope of , and store its projection in a point location data structure and then recurse on and . The depth of the recursion is and each level of the recursion consumes space and thus the total storage is .

Given a query , we have two cases: if a halfspace of passes below , then the answer is obviously in so we recurse there; otherwise, no halfspace of passes below and thus, we can recurse on . This decision can be easily made using the point location data structure. The total query time is . ∎

The following folklore result is a special case of the above lemma. Let be a set of points in where each point is assigned a real-valued weight . We can store in a data structure of size such that given a query halfspace , we can find the point with maximum weight in time.

We also need the following preliminaries. Given a set of hyperplanes in , the level of a point is the number of hyperplanes that pass below . The -level of (resp. -level of ) is the closure of the subset of containing points with level at most (resp. exactly ). An approximate -level of is a surface composed of triangles (possibly infinite triangles) that lies above the -level of but below the -level of for a fixed constant . For a point , we define the conflict list of with respect to as the subset of hyperplanes in that pass below , and we denote this by . Similarly, for a triangle with vertices , and , we define . One of the main tools that we will use is the existence of small approximate levels. This follows from the existence of shallow cuttings together with some geometric observations. [] For any set of hyperplanes in , and any parameter , there exists an approximate -level which is a convex surface consisting of triangles.

Furthermore, we can construct a hierarchy of approximate -levels , for and , together with the list for every triangle in time and space. Given a query point , we can find an index in time such that there exists a triangle that lies above such that .

Proof.

Shallow cuttings were first introduced by Matoušek [15] and later Chan [5] observed that one can work with the triangulation of the convex hull of Matoušek's construction. As a result, shallow cuttings can be represented as convex surfaces formed by triangles. Ramos [14] offered a randomized construction algorithm that could build a hierarchy of shallow cuttings in time, possibly together with the conflict lists (i.e., for every triangle). This was recently made deterministic [6].

Regarding the query part, it is known that we can store in a data structure of size such that given a query point , we can find a constant-factor approximation such that in worst-case time [1]. We can then set and note that this method also finds . ∎

We will also use the following results.

[The Partition Theorem][11] Given a set of points in 3D and an integer , there exists a partition of into subsets , each of size , where each is enclosed by a tetrahedron , s.t., any hyperplane crosses tetrahedra.

[] Let be a tree of size where each leaf stores a real-valued non-negative weight . We can build a data structure s.t., given an internal node at query time, we can independently sample a leaf in the subtree of with probability proportional to its weight. The data structure uses space and it can answer queries in worst-case time.

Proof.

Let be the array of the leaves obtained using the DFS ordering of the tree. Observe that for any internal node , the leaves in the subtree of correspond to a contiguous interval of . Thus, the problem reduces to one-dimensional range sampling queries. Afshani and Wei [2] showed that these queries can be solved in time plus the time it takes to answer predecessor queries. In our problem, since there are only different possible queries (one for each internal node), we can simply store a pointer from each internal node to the location of its predecessor in the array . ∎
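The key fact in the proof above, that a DFS ordering maps every subtree to a contiguous interval of leaves, is easy to demonstrate. A small illustrative sketch (the nested `(name, children)` tuple encoding of the tree is ours, not the paper's):

```python
def leaf_intervals(tree):
    """Map every node of a tree, given as (name, children) tuples, to the
    contiguous interval of leaf positions below it in DFS order."""
    order, interval = [], {}

    def dfs(node):
        name, children = node
        if not children:                       # a leaf
            interval[name] = (len(order), len(order))
            order.append(name)
        else:
            start = len(order)
            for child in children:
                dfs(child)
            interval[name] = (start, len(order) - 1)

    dfs(tree)
    return order, interval
```

Storing the interval endpoints at each internal node is exactly the "pointer to the location of its predecessor" trick: the predecessor search of the 1D structure becomes a precomputed lookup.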

Afshani and Wei [2] considered unweighted sampling for 3D halfspace queries. We next use the following technical result of theirs. Let be a set of hyperplanes in 3D. Let where is a large enough constant. We can build a tree with leaves where each hyperplane is stored in one leaf such that the following holds: given a point with level where , we can find internal nodes in such that , where is the set of hyperplanes stored in the subtree of . The above lemma is not stated explicitly by Afshani and Wei; however, it is the “Global Structure” described in [2] under the same notation.

3.2 A Solution with Expected Query Time

We now observe that we can use Lemma 2 to give a data structure for weighted halfspace range sampling queries in 3D. We first note that using Corollary 3.1 and by building a hierarchy of shallow cuttings, i.e., building approximate -levels for , , we can get a data structure with space that can answer queries in query time. Furthermore, as Lemma 2 shows, our problem is at least as hard as the halfspace range maximum problem which currently has no better solution than space and expected query time. Thus, it seems we cannot do better unless we can do better for range maximum queries, a problem that seems very difficult.

However, the reduction given by Lemma 2 is not completely satisfying since we need to create a set of weights that are exponentially distributed. As a result, it does not capture a more “natural” setting where the ratio between the largest and the smallest weight is bounded by a parameter that is polynomial in . Our improved solution is the following, which shows that in this setting we can in fact reduce the space to linear. [] Let be a set of weighted points in , where the smallest weight is 1 and the largest weight is . We can store in a data structure of size such that given a query halfspace and an integer , we can extract independent random samples from the points inside in time.

Proof summary.

Let be the set of hyperplanes dual to . We partition into sets where the weights of the elements in are larger than those of and for two hyperplanes we have . Clearly, we have . Given a query point , we can find the smallest index such that a hyperplane of passes below in time using Lemma 3.1. Also, by building a prefix sum , we can essentially focus on sampling from , similar to Lemma 2. With a slight abuse of notation, let us rename as . Let . We build a hierarchy of approximate levels by Lemma 3.1.

We use Lemma 3.1. Consider a triangle with vertices , and where the level of each vertex is at most . By the lemma, if , then we can find a representation of , , and , each using internal nodes. Since , we can also represent using internal nodes of . At we store an alias data structure on the weights of the subtrees of that represent . This requires space. Observe that at the query time, we can proportionally sample one of these subtrees and then using Lemma 3.1 we can sample an element from the subtree. Thus, we can extract one random sample from in worst-case time using only space. The total size of the lists for all and all is , meaning, this in total will consume space.

By Lemma 3.1, we can identify an index s.t., lies below a triangle with . It is clear that we can now sample from using tree . This satisfies all the conditions in Lemma 2 as we can sample from in time, after an initial search time.

If , we can repeat a classical idea that was also used by Afshani and Wei, that is, to build an approximate -level and, for each triangle , repeat the above solution just one more time. We will then be able to handle the indices such that . However, if , then we can simply find all the hyperplanes passing below in time and build an alias data structure on them. ∎

3.3 Worst-Case Time with Approximate Weights

All of the previous data structures sample items exactly proportionally to their weights, but they rely on rejection sampling. Hence, their query time is expected, not worst case. With small probability these structures may repeatedly sample items which are not in , and then need to reset and try again, with no bound on the worst-case time. To achieve worst-case query time, we need some modifications. We allow items to be sampled “almost” proportionally to their weights, i.e., we introduce a notion of approximation. As we shall see in the next section, without some kind of approximation, our task is very likely impossible.

Problem definition.

We consider an input set of points where has weight and we would like to store in a data structure. At the query time, we are given a halfspace and a value and we would like to extract random samples from . Let , and set two parameters . Ideally, we would like to sample each with probability . Instead we sample with probability , if then

(1)

and if then we must have . That is, we sample all items within a factor of their desired probability, except for items with very small weight, which could be ignored and not sampled. The smaller items are such that the sum of their desired probabilities is at most .

Overview of modifications.

We start in the same framework as Section 3.2 and Lemma 2, i.e., we partition into subsets by weight. Then, we need to make the following modifications: (1) we can now ignore small enough weights; (2) we can no longer use rejection sampling to probe into each set ; rather, we need to collect a bounded number (a function of ) of candidates from all , which is guaranteed to contain some point in the query ; (3) we can also no longer use rejection sampling within each to get candidates; instead, we build a stratified sample via the partition theorem.

Change (1) is trivial to implement. Remember that given query , we first identify indices and such that contains the largest weight in and the weights in are a factor smaller. We now require weights in to be a factor smaller, and let where . We can now ignore all the remaining sets: the sets with will have weights so small that even if there are points within, the sum of their weights will be at most times the largest weight. Since , this implicitly increases all of the other weights in each (for ) by at most a factor .

We next describe how we can build a data structure on each to generate candidate points. Once we have these points, we can build two alias structures on them (they will come with weights proportional to the probability they should be selected), and select points until we find one in . As a first pass, to generate samples, we can repeat this times, or bring samples from each . We will return to this subproblem to improve the dependence on and by short-circuiting a low-probability event and reloading these points dynamically.

3.3.1 Generating Candidate Points

Here we will focus on sampling our set of candidates. We will do this for every set , . However, to simplify the presentation, we will assume that the input is a set of points such that the weights of the points in are within factor 2 of each other. We will sample candidate points from (representing a subset ) s.t., the set of candidates intersects with the query halfspace . Each candidate will be sampled with a probability that is almost uniform, i.e., it fits within our framework captured by Eq. 1.

Let be the set of hyperplanes dual to points in . We maintain a hierarchical shallow cutting of approximate levels (Lemma 3.1) on . By Lemma 3.1, we get the following in the primal setting (on ), given a query halfspace : we can build subsets of where the subsets have in total points. Given a query halfspace , in time, we can find a subset so that and . We now augment this structure with the following information on each such subset , without increasing the space complexity. We maintain an -partition on . (That is, , each subset has size bounded by , and the boundary of any halfspace intersects at most cells .) For each we maintain an alias structure so that in time we can generate a random from the points within; it is given a weight . In time we can generate a weighted set ; this will be the candidate set.
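The candidate-generation step above amounts to stratified sampling over the partition cells: one uniformly random representative per cell, weighted by the cell's size. A minimal sketch under that reading (using cell size as the weight is justified here because, within a weight class, point weights differ by at most a constant factor; `random.choice` stands in for the per-cell alias structure):

```python
import random

def candidate_set(cells):
    """One candidate per partition cell: a uniformly random
    representative, weighted by the cell's size."""
    return [(random.choice(cell), len(cell)) for cell in cells]
```

Since every cell contributes exactly one candidate, the candidate set has bounded size regardless of how many points fall in the query range, which is what makes the later worst-case bound possible.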

The sum of all weights of points within is , and so we would like to approximately sample each with probability .

Let , consider a candidate set , and sample one point proportionally to the weights. For a point , the probability that it is selected satisfies

Proof.

Classify the cells in as inside if , as outside if , and as straddling otherwise. We can ignore the outside sets. There are inside sets, and straddling sets.

A point from an inside set ideally should be selected with probability . Indeed, it is the representative of with probability and is given weight proportional to in the alias structure. We now examine two cases: that all representative points in the straddling sets are in , and that none are. The probability that is selected will be between the probabilities of these two cases, and the desired probability that it is selected will also be between these two cases. Let and . The probability that is selected if it is the representative of is then in the range . The ratio of these probabilities is . Setting ensures that these probabilities are within a -factor of each other, and all points from inside sets are chosen with approximately the correct probability.

For a point in a straddling set , it should be selected with probability and is selected with probability between and . As with points from an inside set, these are within a -factor of each other if . And indeed, since , then , and the desired probability of sampling a straddling point is in that range. ∎

3.3.2 Constructing the Samples

To select random samples, the simplest way is to run this procedure times sequentially, generating candidate points; this is on top of the time to identify the proper subset from the shallow cutting, applied to all weight partitions .

We can do better by first generating candidate points per , enough for a single random point in . Now we place these candidates in two separate alias structures along with the candidate points from the other structures. There are candidate points placed in an inside alias structure, and points placed in a straddling alias structure. Now, to generate one point, we flip a coin proportionally to the total weights in the two structures. If we go to the inside structure, we always draw a point in and we are done. If we go to the straddling structure, we may or may not select a point in . If we do, we are done; if not, we flip another coin to decide which of the two structures to go to, and repeat.

It is easy to see that this samples points with the correct probability, but it does not yet have a worst-case time. We could use a dynamic aliasing structure [8] on the straddling set, so that we sample those without replacement, and update the coin weight. However, this adds a factor to each of the steps that might be needed. A better solution is to allow the coin to direct the sampler to the straddling sets at most once; if it goes there and fails, then it automatically redirects to the alias structure on the inside sets, which must succeed. This distorts the sampling probabilities, but not by too much, since in the rejection sampling scheme the probability of going to the straddling set even once is .
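The one-shot redirection scheme can be sketched as follows. The two pools stand in for the inside and straddling alias structures, and the linear-scan `draw` is a stand-in for constant-time alias sampling; the point is the control flow, which guarantees at most two draws in the worst case.

```python
import random

def draw(pool):
    """Linear-scan weighted draw over (point, weight) pairs
    (stand-in for O(1) alias sampling)."""
    x = random.random() * sum(w for _, w in pool)
    for p, w in pool:
        x -= w
        if x <= 0:
            return p
    return pool[-1][0]

def one_shot_sample(inside, straddling, in_range):
    """Flip a coin weighted by the pools' total weights. Visit the
    straddling pool at most once; on failure, fall back to the inside
    pool, which by construction always yields a point in the range."""
    w_in = sum(w for _, w in inside)
    w_st = sum(w for _, w in straddling)
    if random.random() * (w_in + w_st) < w_st:
        p = draw(straddling)
        if in_range(p):
            return p
    return draw(inside)      # deterministic fallback: always succeeds
```

Compared with pure rejection sampling, the only distorted outcomes are those where the coin would have visited the straddling pool twice or more, which is the low-probability event the text bounds.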

For any candidate point let be the probability it should be sampled, and the probability it is sampled with the one-shot deterministic scheme. Then if is an inside point and if is from a straddling set.

Proof.

The probability that the coin directs to the inside set is . Let be the probability of selecting a point inside , given that the coin has directed to the straddling set; we only need that .

For a candidate point in the inside set , the probability it is selected in the rejection sampling scheme is , and in the deterministic scheme is which is in the range , and these have a ratio .

Similarly, for a candidate point in the straddling set , the probability it is selected in the rejection sampling scheme is , and in the deterministic scheme is . Thus, is in the range . Adjusting the constant coefficients elsewhere in the algorithm completes the proof. ∎

Now, to generate the next independent point (of which we need ), we do not need to re-query with or rebuild the alias structures. In particular, each candidate point can have a pointer back to the alias structure within its partition cell, so that it can replace its representative candidate point. Moreover, since the points have weight proportional to their cell in the partition , these weights do not change on a replacement. More importantly, the points we never inspected to see if they belong to do not need to be replaced. This deterministic process inspects at most points, and these can be replaced in time. Hence extending to samples only increases the total runtime by an additive term.

Final bound.

We now have the ingredients for our final bound. In general, the argument follows that of Lemma 2 except for a few changes. First, we allow items with total probability less than to be ignored. This replaces a factor in the query time with a term. Second, we require time to identify the relevant subset in each of the shallow-cutting structures. Then we select candidate points from each of the weight partitions, in time. Then pulling independent samples, and refilling the candidates, takes time. This results in the following final bound:

Let S be a set of n weighted points, where the smallest weight is w_min and the largest weight is w_max, and fix an approximation parameter ε. We can store S in a data structure such that for any integer k, we can extract k independent random samples from the points in a query range, with probabilities ε-approximately proportional to their weights, in the stated worst-case time. In particular, if the ratio w_max/w_min is polynomially bounded in n, then the worst-case time matches that of the expected-time algorithm that samples by the exact weights.

4 Lower Bound for Worst-Case Time with Exact Weights

In this section, we focus on proving a (conditional) lower bound for a data structure that can extract one random sample in worst-case time Q(n) using S(n) space. Our main result is that under a reasonably restricted model, the data structure must essentially solve an equivalent range searching problem in the "group model". As a result, we get a conditional lower bound, since this latter problem is conjectured to be difficult. As an example, our conditional lower bound suggests that for halfspace range sampling queries, with near-linear space we can only expect to get close to n^2/3 query time. This is in contrast to the case when we allow expected query time or approximate weights: as already shown, we can then solve the same problem with near-linear space and polylogarithmic query time, which reveals a large polynomial gap between the expected and the worst-case variants of the problem. To our knowledge, this is the first time such a large gap appears in the range searching literature between the worst-case and expected query times.

The Model of Computation.

We assume the input is a list of n elements, where each element is associated with a real-valued weight. We use the decision tree model of computation. We assume the data structure has three components: a preprocessing algorithm that builds the data structure; the data structure itself, which is a set of stored real values; and finally a query algorithm that, given a query, returns an element sampled with the correct probability in worst-case time. We allow no approximation: the query algorithm should return each element with exactly the correct probability.

The main bottleneck.

The challenge in obtaining our lower bound was understanding where the main computational bottleneck lay and formulating a plan of attack to exploit it. This turned out to be the query algorithm. As a result, we place no restrictions on the storage or the preprocessing part of the algorithm, an idea that initially sounds very counterintuitive. After being given the input together with the weight assignment, the algorithm stores some values in its storage. Afterwards, we move to the query phase, and this is where we would like to put reasonable limits. We give the query algorithm the query q. We allow the query algorithm access to a set of real random numbers generated uniformly in [0, 1]. We restrict the query algorithm to be a "binary decision tree", but in a specialized format: each node v involves a comparison between some random number and a rational function P/Q, where P and Q are polynomials of the input weights. To be more precise, we assume the query algorithm can compute the polynomials P and Q (either using polynomials stored at the ancestors of v or using the values stored by the data structure). The query algorithm at node v computes the ratio P/Q and compares it to the random value. If v is an internal node, then v has two children and the algorithm proceeds to one child if the random value is smaller than the ratio, and to the other child otherwise. If v is a leaf node, then v returns a fixed element. Note that there is no restriction on the rational function P/Q; it could be of arbitrary size or complexity. We call this the separated algebraic decision tree (SAD) model since, at each node, we have "separated" the random numbers from the rational function that involves the weights of the input elements; a more general model would allow a rational function of both the weights and the random numbers. If we insist that the polynomials P and Q be linear (i.e., degree one), then we call the model the separated linear decision tree (SLD) model.
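To make the model concrete, here is a hypothetical toy example of our own (not from the paper): sampling one of two weighted elements with a single SLD node, which compares a uniform random number against the rational function w1/(w1 + w2), whose numerator and denominator are both degree one in the weights.

```python
import random

def sld_sample_two(w1, w2, rng=random):
    # One node of a separated linear decision tree: the random number r
    # sits on one side of the comparison, and a rational function of the
    # weights alone (here w1 / (w1 + w2)) sits on the other side.
    r = rng.random()
    return 0 if r < w1 / (w1 + w2) else 1
```

Note how the output distribution is exactly proportional to the weights, and how the node's rational function has the total weight w1 + w2 as its denominator, foreshadowing the divisibility argument below.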

Consider an algorithm for range sampling in the SAD model where the input is a list of n elements, together with a weight assignment for the elements. Assume that the query algorithm has a worst-case bound, i.e., the maximum depth of its decision tree is bounded. Then, for every query q, there exists a node at which one of the polynomials defining the rational function is divisible by the polynomial equal to the total weight of the elements in q.

Proof.

Consider the query decision tree. By our assumptions, it is a finite tree of bounded maximum depth. Each node involves a comparison between some random number and a rational function; to reduce clutter, we simply write the rational function at a node as P/Q. Let F be the set of all the rational functions stored at the nodes of the tree; note that each unique rational function appears only once in F. Consider the set of pairwise differences of functions in F. As each unique rational function appears only once in F, none of the rational functions in this set is identically zero. This in particular implies that we can find real values for the weights on which none of these rational functions is zero. Thus, as these functions are continuous at those values, we can find a neighborhood such that for every weight assignment in it, each rational function in the set keeps the same (nonzero) sign. In the rest of the proof, we only focus on weight assignments with this property.

Let C be the unit cube, the total probability space that corresponds to the random variables; every point in C corresponds to a possible value of the random variables. Now consider one rational function stored at a node v of the tree, assume v involves a comparison between the random number r_i and the ratio P/Q, and let v_l and v_r be the left and the right child of v. By our assumption, we follow the path to v_l if r_i is smaller than the ratio, and to v_r otherwise. Observe that this is equivalent to partitioning C into two regions by a hyperplane perpendicular to the i-th axis at the point P/Q. As a result, to every node v we can assign a rectangular subset of C that denotes its region: it includes all the points of C such that we would reach node v if our random variables were sampled to be that point.

The next observation is that the region of each node is defined by fixed rational functions. Consider the list of rational functions encountered on the path from a node v to the root, and among this list, those involved in comparisons with the random number r_i. Clearly, the lower boundary of the region of v along the i-th dimension is the maximum of the ratios that r_i must exceed, whereas its upper boundary is the minimum of the ratios that r_i must stay below. Now, observe that since we have assumed each pairwise difference has a fixed sign, these maxima and minima evaluate to fixed rational functions. Thus, the lower and upper boundaries of the region of v along the i-th dimension are fixed rational functions, each a ratio of polynomials stored in the tree (e.g., polynomials stored at ancestors of v).

The Lebesgue measure of the region of a node v is thus given by a fixed rational function of the weights, and this measure is the probability of reaching v. Consider a query q that contains t elements; w.l.o.g., let these be the first t elements, with weights w_1, ..., w_t. Let L be the set of all the leaf nodes that return the first element. For the first element to have been sampled with the correct probability, the sum over the leaves in L of the measures of their regions must equal w_1/(w_1 + ... + w_t). Observe that since the polynomial w_1 + ... + w_t is irreducible, it follows that at least one of the polynomials defining these measures has this polynomial as a factor. ∎

Conditional Lower Bound.

Intuitively, our theorem above suggests that if we can perform range sampling in finite worst-case time, then we should also be able to find the total weight of the points in the query range, since the total weight is encoded in some rational function. This latter problem is conjectured to be difficult, but obtaining provably good lower bounds remains elusive. This suggests that, short of a breakthrough, we can only hope for a conditional lower bound. This is reinforced by the observation that an efficient data structure for range sum queries leads to an efficient data structure for range sampling.

Observation 1.

Assume that for any set of n elements, each associated with a real-valued weight, we can build a data structure that uses S(n) space such that given a query range q, it can output the total weight of the elements in q in Q(n) time.

Then, we can extract k random samples from the elements of any query range by a data structure that uses O(S(n) log n) space and has a query time of O(k Q(n) log n).

Proof.

Partition S into two equal-sized sets S_1 and S_2 and build the data structure for range sum on each. Then recurse on S_1 and S_2. The total space complexity is O(S(n) log n). Given a query q, it suffices to show how to extract one sample. Using the range sum queries on S_1 and S_2, we learn the exact values of the total weights w(q ∩ S_1) and w(q ∩ S_2). Thus, we can recurse into S_1 with probability w(q ∩ S_1) / (w(q ∩ S_1) + w(q ∩ S_2)).
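A minimal sketch of this reduction, with a naive scan standing in for the assumed S(n)-space, Q(n)-time range-sum structure (all function names are ours):

```python
import random

def build_tree(points):
    """points: sorted list of (coord, weight). Balanced binary partition."""
    if len(points) == 1:
        return {"pts": points}
    mid = len(points) // 2
    return {"pts": points,
            "left": build_tree(points[:mid]),
            "right": build_tree(points[mid:])}

def range_sum(node, lo, hi):
    # Stand-in for the assumed range-sum structure: a naive O(n) scan.
    return sum(w for x, w in node["pts"] if lo <= x <= hi)

def range_sample(node, lo, hi, rng=random):
    """Draw one point of [lo, hi] with probability proportional to weight."""
    if "left" not in node:  # leaf: a single point
        x, w = node["pts"][0]
        return (x, w) if lo <= x <= hi else None
    wl = range_sum(node["left"], lo, hi)
    wr = range_sum(node["right"], lo, hi)
    if wl + wr == 0:
        return None  # query range is empty
    # Recurse into the left half with probability wl / (wl + wr).
    child = node["left"] if rng.random() < wl / (wl + wr) else node["right"]
    return range_sample(child, lo, hi, rng)
```

Each sample descends O(log n) levels and issues two range-sum queries per level, matching the bound claimed in the observation.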