Many different areas of theoretical computer science are concerned with quantifying the complexity of a permutation, ranging widely from circuit lower bounds to data compression to Kolmogorov complexity. Two such areas are adaptive sorting—that is, the design of sorting algorithms which perform faster on data which is closer to sorted by some measure—and the design of binary search trees with short query times.
An adaptive sorting algorithm typically pairs a measure of disorder for a list with an algorithm which is optimal for this measure. Here, optimal roughly means that sorting requires only the number of comparisons needed to distinguish the list from all other lists which are at least as presorted as it is [petersson1995adaptive]. An example of such a measure is Inv, the number of inverted pairs in an unsorted list, along with an algorithm which uses $O(n\log(\mathit{Inv}/n + 1) + n)$ comparisons for a list of length $n$ [cook1980sort]. An accompanying notion is that a measure of disorder may be superior to another measure—that is, its optimal algorithm never requires asymptotically more comparisons for any given permutation. Mannila first formalized these ideas in [mannila1985presorted]. During the 80s and 90s, many researchers devised new measures of disorder and searched for optimal algorithms for them [cook1980sort, castro1989sort, katajainen1989insertion, levcopoulos1990shuffled, levcopoulos1991splitsort, levcopoulos1993heapsort, moffat1990hist]; furthermore, there was also interest in work-optimal parallel versions of optimal sorting algorithms [Carlsson1991adaptive, levcopoulos1996inversions, chen1992improved]. Culminating this line of work, Petersson and Moffat in [petersson1995adaptive] give a complete hierarchy of all known measures of disorder, along with a new measure, Reg, which is superior to all known measures.
A concurrent line of work in adaptive sorting initiated by McIlroy concerns itself with the information-theoretic properties of a measure of disorder rather than its strict comparison with other measures [mcilroy1993sorting].
Binary Search Trees.
Most work on binary search trees (BSTs) and their associated cost model, the BST model, stems from the famous dynamic optimality conjecture of Sleator and Tarjan [sleator1985self], which states that there exists a binary search tree whose performance on any online sequence of searches is constant factor competitive with the best offline algorithm. In the same paper, they present the splay tree, a BST which they conjecture to be dynamically optimal.
The pursuit of dynamic optimality led to a string of work in both upper and lower bounds on the cost of a sequence of searches on a BST. Three important upper bounds in the literature are the dynamic finger bound [sleator1985self, cole2000dyn_pt1, cole2000dyn_pt2, chalermsook2018multi, bose2014lazy, Iacono2016weighted], the working set bound [sleator1985self], and the unified bound [badoiu2007unified, derryberry2009thesis], which respectively state that accessing an element is fast if its key is close to the key of the previous search, if it has been searched recently, and a combination of the two. There has also been significant work in lower bounding the cost of an access sequence in the BST model. Two such lower bounds, the interleave bound and the funnel bound, were introduced by Wilber in [wilber1989bounds]; a recent work by Lecomte and Weinstein [lecomte2020wilber] affirmatively settled the 30-year open question of whether the funnel bound was tighter than the interleave bound, proving a multiplicative separation in some cases. Another lower bound, the rectangle bound, was introduced by Demaine et al in [demaine2009bst].
The BST that comes closest to dynamic optimality is the tango tree of Demaine et al [demaine2007tango], which has a competitive ratio of $O(\log\log n)$ with respect to the best offline algorithm. Interestingly, Wilber’s interleave bound was vital in the analysis of the competitive ratio, since the authors showed that on any access sequence $X$, the tango tree takes time $O((\mathit{IB}(X) + n)\log\log n)$, where $\mathit{IB}(X)$ represents the interleave bound of the sequence.
Commonalities Between Sorting and BSTs.
Sorting and the BST model share commonalities beyond the obvious one that every BST can be used to sort. Notably, the dynamic finger bound, the working set bound, and the unified bound are virtually identical to measures which Petersson and Moffat place near the top of their hierarchy: Loc, Hist and Reg, respectively. The commonalities between open problems in both sorting and BSTs also extend to McIlroy’s perspective on sorting, based on whether a sorting algorithm has certain information-theoretic properties. For example, one of McIlroy’s desirable properties of a sorting algorithm was that it sorted a permutation and its inverse in the same number of comparisons. Similarly, in their pursuit of lower bounds in the BST model, Demaine et al noted in [demaine2009bst] that it was difficult to believe Wilber’s interleave bound was tight unless they could show it required the same number of accesses on a permutation and its inverse (the interleave bound was later shown not to be tight, as noted above, without settling the question of whether it performs optimally over inverse permutations).
Although BSTs can be used for sorting, and hence an efficient BST algorithm on an input sequence gives an efficient sort for that input, this limits sorting to insertion sort and, importantly, seems to say nothing about efficient parallel sorting. Furthermore, there seems to be little work that relates sorting costs back to the BST model. In this paper we are interested in better understanding the relationship between BSTs and sorting. We focus on sequences of unique keys since almost all the interesting results for sorting and BSTs pertain to the unique key setting. In this setting both BSTs and sorting are related to the question of the “complexity” of permutations. In particular, we are interested in the case of “low” complexity, where we can add $o(n\log n)$ points, and correspondingly use $o(n\log n)$ comparisons for sorting and $o(n\log n)$ cost in the BST model.
Arborally Satisfied Point Sets and Sorting.
One of our most important tools in connecting BSTs and sorting is the geometric interpretation of BSTs [demaine2009bst, DSW05], which we apply to the sorting problem. In this interpretation an access sequence of $n$ keys is represented as an $n \times n$ grid with time order (input order) on one axis (here the $x$-axis) and key order (output order) on the other axis. Points are added to the grid to account for all keys that must be visited when searching (or inserting) the keys one at a time from left to right. Demaine et al. [demaine2009bst], and Derryberry, Sleator, and Wang [DSW05] show that for any BST algorithm, the accesses plotted in the plane must satisfy the property that for every pair of points $p$ and $q$ (both original and added points) there is a monotonic path from $p$ to $q$ consisting of horizontal and vertical segments with a point at each corner. Demaine et al refer to such a set of points as being arborally satisfied, and show that any such set of size $m$ implies the sequence of keys can be searched (or inserted) in $O(m)$ cost in the BST model. The geometric interpretation is convenient for sorting since it does not directly enforce an order of insertion.
As some evidence of the utility of the geometric approach, consider two standard sorting algorithms, quicksort and mergesort, and how they can be used to create arborally satisfied sets of points. In quicksort, start by taking the pivot, and adding points across the whole row containing the pivot. Now recurse on the top and bottom halves. For each half, pick a pivot and add points in that pivot's row at every column that has a key in its part. Continue to the base case. This will add a point for each comparison in quicksort, and hence $O(n\log n)$ points in expectation if pivots are picked randomly. It is not hard to verify the points added in this way are arborally satisfied—clearly any point $p$ in the bottom half can reach any point $q$ in the top half in a monotone path by going up to the pivot row, across (left or right) to the column of $q$, and then up to $q$. Similarly, mergesort would add points across the middle column representing the final merge, and then for the left and right halves, add points along their middle columns for all points in those halves, corresponding to those merges. Recursing to the base case again gives an arborally satisfied set, and the points added correspond precisely to the comparisons made by mergesort. We can therefore interpret these two sorting algorithms as algorithms for adding points to arborally satisfy the input points.
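To make the mergesort construction concrete, the following Python sketch (illustrative only; the function name and representation are ours) adds, for each recursive merge over a time interval, a point in that interval's middle column at the key of every element in the interval:

```python
# Sketch: points a mergesort-style algorithm adds to the time x key grid.
# The grid has time (input position) on the x-axis and key on the y-axis;
# the input itself contributes the points (i, perm[i]).
def mergesort_points(perm):
    pts = {(i, k) for i, k in enumerate(perm)}   # original input points

    def rec(lo, hi):                             # merge over time interval [lo, hi)
        if hi - lo <= 1:
            return
        mid = (lo + hi) // 2
        for i in range(lo, hi):                  # the merge touches every key here,
            pts.add((mid, perm[i]))              # so add a point in the middle column
        rec(lo, mid)
        rec(mid, hi)

    rec(0, len(perm))
    return pts
```

Each level of the recursion adds at most $n$ points and there are $O(\log n)$ levels, matching the $O(n\log n)$ comparisons of mergesort.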
1.1 Our Results.
In this paper we present specific results relating sorting and BSTs using arborally satisfied sets. We introduce the log-interleave bound, a measure of the information-theoretic complexity of a permutation $\pi$, which is an upper bound on the number of bits needed to encode $\pi$. Similarly to the bounds discussed earlier, the log-interleave bound can be understood as both a measure of disorder and an upper bound in the BST model. In Section 3, we define the log-interleave bound and show that it is comparable to Reg, the most powerful known measure of disorder. We also note that the log-interleave bound has most of the information-theoretic properties that researchers look for in sorting algorithms.
Our main results on the log-interleave bound illustrate the connections between adaptive sorting and the BST model. In the statements of these results, we use the notation $\mathit{LIB}(\pi)$ to refer to the log-interleave bound of a permutation $\pi$. This will be defined formally in Section 3.
The first result is a proof that the log-interleave bound performs within a multiplicative $O(\log\log n)$ factor of the optimal offline BST algorithm on any permutation. Somewhat similarly to Demaine et al’s proof of the closeness to optimality of tango trees [demaine2007tango], our proof shows closeness to optimality by comparing the log-interleave bound with the interleave bound, a lower bound in the BST model.
Theorem 3.2 For any permutation $\pi$, $\mathit{LIB}(\pi) = O(\log\log n \cdot \mathit{IB}(\pi))$.
The second result is a concrete step towards unification of sorting and the BST model: we introduce an offline algorithm in the BST model which performs accesses within $O(\mathit{LIB}(\pi))$ for any access sequence $\pi$.
Theorem 4 There exists an offline algorithm in the BST model which searches for a sequence $\pi$ using $O(\mathit{LIB}(\pi))$ accesses.
The technique used to prove Theorem 4 comes from a paper by Demaine et al [demaine2009bst] on the geometric interpretation of binary search trees, which was discussed earlier in the section.
The final result is a parallel mergesort featuring a merge step which combines recent work on parallel split and join of BSTs [blelloch2016just] with a BST from [KaplanT96] and a novel analysis which shows that with this new merge step, the mergesort is optimal for the log-interleave bound.
Theorem LABEL:thm:_mergesort There exists a parallel mergesort which for any permutation $\pi$ performs $O(\mathit{LIB}(\pi))$ work with polylogarithmic span.
Model of Computation.
Our results for the parallel algorithms are given for the binary-fork-join model [BlellochF0020]. In this model a process can fork two child processes, which work in parallel, and when both complete, the parent process continues. Costs are measured in terms of the work (total number of instructions across all processes) and span (longest dependence path among processes). Any algorithm in the binary fork-join model with $W$ work and $S$ span can be implemented on a CRCW PRAM with $P$ processors in $O(W/P + S)$ time with high probability [ABP01, blumofe1999scheduling], so the results here are also valid on the PRAM, maintaining work efficiency.
A Note on Independent Work.
McIlroy’s paper on information-theoretic properties of sorting algorithms [mcilroy1993sorting] proposes a sequential sorting algorithm called mergesort with exponential search, which performs the same number of comparisons as our adaptive mergesort. However, since his algorithm uses a merge step which takes linear time, the overall cost of his algorithm is still $O(n\log n)$. McIlroy is also aware that this mergesort is at least a $\log\log n$ factor away from optimal, but the proof that we present in Section 3 is to our knowledge the first to show that the log-interleave bound is no more than $O(\log\log n)$-competitive.
1.2 Related Work
Adaptive Sorting in Practice.
Adaptive sorting algorithms are widely adopted in practice; surely the most widely adopted such algorithm is timsort, which is implemented in the built-in libraries of Python, Java, Swift, and Rust, among other languages [auger2018timsort]. The prevalence of the algorithm in libraries indicates that in practice input sequences often do have some order to them. Timsort is adaptive with respect to Runs—roughly, consecutive monotonic sequences in an unsorted list. Optimal algorithms for Inv—the number of inverted pairs in a list—are also commonly implemented in practice [elmasry2003adaptive, elmasry2008inversions]. Both Runs and Inv are theoretically weak measures, but algorithms which are optimal for stronger measures have not received much attention from the practical community.
Parallel BST Operations.
A third line of work which influences our algorithm design is that of parallel split, join, and merge algorithms for binary search trees. Our primary tools in this work are the work-optimal parallel split and join algorithms of Blelloch et al [blelloch2016just], which are themselves parallel versions of the join-based BST algorithms of Adams [adams1992implementing]. These are also related to Brown and Tarjan’s sequential, work-optimal algorithms for union, intersection, and difference on red-black trees [brown1979fast].
1.3 Open Problems.
This work takes steps towards a goal of unifying adaptive sorting and the BST model by transforming sorting algorithms into offline algorithms in the BST model. We have suggested a technique—offline arboral satisfaction algorithms—that can settle the question of whether a sorting algorithm can be transformed into a BST algorithm. A future goal would be to tightly characterize which sorting algorithms can be transformed into offline BST algorithms, or perhaps even online BST algorithms. Obviously, not all comparison-based sorting algorithms would fit such a claim; for example, an algorithm which simply guesses and checks for the correct permutation would not. However, it may be possible to arrive at such a tight characterization by restricting the model to avoid cases like this, perhaps by restricting operations to some kind of pointer machine.
There are also open problems about the log-interleave bound that are left unresolved. One question concerning information-theoretic properties is whether the log-interleave bound sorts a permutation and its inverse in the same number of comparisons. Perhaps the most obvious problem is creating an online BST algorithm which performs within the log-interleave bound on any query sequence, or proving that an existing BST, such as the splay tree or Demaine et al’s Greedy algorithm [demaine2009bst], satisfies the log-interleave bound.
We begin with some preliminaries. The preliminaries stated here are those which are strictly necessary to understand the results in the rest of the paper, but for those less familiar with the area, the extended background in Section LABEL:sec:_morebackground may be useful, especially in contextualizing the desirable properties of the log-interleave bound.
Throughout this paper, we will use the terms list, permutation, and access sequence interchangeably to refer to some ordering of the keys $1, \dots, n$. The term access sequence is used in the literature on BSTs to denote a sequence of queries to a BST; unless otherwise stated, we presume an access sequence does not contain repeated keys.
2.1 Adaptive Sorting Preliminaries.
First, we provide definitions for what it means for a sorting algorithm to be optimal with respect to a measure of disorder, and for a measure of disorder to be superior to another measure. The definitions given here are paraphrased from [petersson1995adaptive], which are in turn partially paraphrased from [mannila1985presorted].
The term measure of disorder is used loosely to describe any measure which quantifies the disorder of a list in some intuitive way; for any measure of disorder $M$, a higher value indicates more disorder. Intuitively, a measure of disorder should pair with a sorting algorithm which takes closer to $O(n\log n)$ comparisons when $M(\pi)$ is higher and closer to $O(n)$ comparisons when $M(\pi)$ is lower. This concept is formalized below.
Let $M$ be a measure of disorder. Then, for any permutation $\pi$ over $n$ elements, let $\mathrm{below}(M, \pi)$ denote all permutations $\sigma$ over $n$ elements such that $M(\sigma) \leq M(\pi)$.
Let $M$ be a measure of disorder and let $A$ be a comparison-based sorting algorithm. Algorithm $A$ is optimal with respect to $M$ if for any permutation $\pi$ over $n$ elements, $A$ performs $O(\max\{n, \log|\mathrm{below}(M, \pi)|\})$ comparisons. We refer to $T_A(\pi)$ as the number of comparisons performed by $A$ on a permutation $\pi$.
Let $M_1$ and $M_2$ be two measures of disorder, with optimal algorithms $A_1$ and $A_2$, respectively. $M_1$ is superior to $M_2$ if for all permutations $\pi$ over $n$ elements, $T_{A_1}(\pi) = O(T_{A_2}(\pi))$. Furthermore, $M_1$ is strictly superior to $M_2$ if $M_1$ is superior to $M_2$ and $M_2$ is not superior to $M_1$.
Note that the notions of superiority are measured in terms of comparisons rather than total runtime; in some cases, the total runtime of an algorithm may be greater due to the required data structures.
Measures of Disorder.
Next, we will discuss some measures of disorder which are most relevant to this paper.
Inv. One of the measures of disorder commonly used in practical applications, Inv is defined as the number of inverted pairs present in a list—that is, the number of pairs $(a, b)$ such that $a$ comes before $b$ in the list, but $b$ is smaller than $a$. An algorithm that is optimal for Inv sorts a permutation with $\mathit{Inv}$ inverted pairs in $O(n\log(\mathit{Inv}/n + 1) + n)$ comparisons.
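The measure itself is easy to compute; the sketch below (our own illustrative code, not an Inv-optimal sorting algorithm) counts inverted pairs in $O(n\log n)$ time by augmenting mergesort:

```python
# Count inverted pairs (a before b, but b < a) by merging sorted halves:
# whenever an element of the right half is emitted before remaining
# elements of the left half, it is inverted with all of them.
def count_inversions(a):
    if len(a) <= 1:
        return a[:], 0
    mid = len(a) // 2
    left, li = count_inversions(a[:mid])
    right, ri = count_inversions(a[mid:])
    merged, inv, i, j = [], li + ri, 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
            inv += len(left) - i          # right[j] inverts with all remaining left
    merged += left[i:] + right[j:]
    return merged, inv
```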
Runs. Runs is another measure of disorder used in practice. If a permutation $\pi$ consists of $k$ segments of length $n_1, \dots, n_k$, where each segment is a monotonically increasing or decreasing sequence, then optimality over Runs requires $\pi$ to be sorted using $O(n\log k + n)$ comparisons.
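Counting runs takes a single pass; the following sketch (our own code, assuming distinct keys) counts maximal monotonic segments:

```python
# Count maximal monotonic runs (increasing or decreasing) in a list of
# distinct keys. A new run starts whenever the direction of the current
# run is established and the next step breaks it.
def count_runs(a):
    if not a:
        return 0
    runs, direction = 1, 0          # direction: 0 unknown, +1 up, -1 down
    for i in range(1, len(a)):
        step = 1 if a[i] > a[i - 1] else -1
        if direction == 0:
            direction = step
        elif step != direction:
            runs += 1
            direction = 0           # next element starts a fresh run
    return runs
```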
Reg. Readers who are unfamiliar with Reg or the unified bound may wish to read over the definitions of Loc and Hist in Section LABEL:sec:_morebackground before proceeding. The final measure presented here is based on the idea that an element should be cheap to insert if it is close in time or keyspace to a recently inserted element. The corresponding measure is known as Reg, since it seeks optimality for the overall region of elements recently inserted in time and space. Let $\pi_i$ denote the element inserted in the $i$th place of a permutation $\pi$. Let $d_i(j)$ measure the number of elements between $\pi_i$ and $\pi_j$ in keyspace. This enables us to define a measure which takes advantage of closeness in keyspace or time; for insertion of an element $\pi_i$, let $\mathrm{reg}_i = \min_{j < i}\{(i - j) + d_i(j) + 2\}$ and $\mathit{Reg}(\pi) = \sum_{i=1}^{n} \log \mathrm{reg}_i$.
There is a non-BST data structure proposed by Badoiu et al in [badoiu2007unified] which sorts a list $\pi$ in $O(\mathit{Reg}(\pi) + n)$ comparisons. Furthermore, Derryberry’s cache tree [derryberry2009thesis] is a BST which, when used in an insertion sort, uses $O(\mathit{Reg}(\pi) + n)$ comparisons.
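As a sanity check on the definition, Reg can be evaluated by brute force on small inputs. The sketch below is our own illustrative code and assumes the simplified form $\mathrm{reg}_i = \min_{j<i}\{(i-j) + |\pi_i - \pi_j|\} + 2$, using raw key distance in place of the number of in-between elements:

```python
import math

# Brute-force evaluation of a simplified Reg: for each inserted element,
# find the previously inserted element minimizing time distance plus
# keyspace distance, and charge the log of that quantity.
def reg_cost(perm):
    total = 0.0
    for i in range(1, len(perm)):
        r = min((i - j) + abs(perm[i] - perm[j]) for j in range(i)) + 2
        total += math.log2(r)
    return total
```

On a sorted list every element is adjacent in both time and keyspace to its predecessor, so each term is $\log_2 4 = 2$.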
Our work is also influenced by McIlroy’s perspective on the desirable properties of an adaptive sorting algorithm. He proposes seven desirable properties, such as that a sorting algorithm should sort a permutation and its inverse permutation in the same number of comparisons. These properties are discussed in detail in Section LABEL:sec:_morebackground.
2.2 Binary Search Tree Preliminaries.
We begin with a more explicit definition of the BST model. The BST model is a cost model for binary search trees with rotations. Typically when discussing algorithms and bounds in the BST model, it is assumed that the BST already contains the keys which will be queried (assume there are $n$ such keys), and the subject of interest is the amount of time it takes to access a sequence of keys, which may be repeated. In the BST model, accessing an element incurs unit cost for every node searched along the path to it, plus unit cost per rotation. An algorithm for querying keys in a BST may be offline, meaning it can use knowledge of the whole access sequence, or online.
Wilber’s interleave bound.
Next, we define Wilber’s interleave bound, a lower bound on the cost of accessing any sequence in the BST model. Given an access sequence $\pi$ consisting of the keys $1, \dots, n$, fix a static tree $T$ (meaning it will not be rotated) with the keys in $\pi$ at the bottom in the order they appear in $\pi$, as seen in Figure 3. Calculate the interleave bound of $\pi$ as follows: query the keys in $\pi$ in sorted order. For each vertex $v \in T$, label each element of the sequence with R or L, depending on whether accessing it in $T$ goes through the right or the left subtree of $v$, respectively (if it is in neither subtree, give it no label). The interleave bound of $v$, denoted $\mathit{IB}(v)$, is the number of switches between L and R in the labels of the access sequence. The interleave bound of the entire access sequence is calculated as follows: $$\mathit{IB}(\pi) = \sum_{v \in T} \mathit{IB}(v).$$
See Figure 3 for an example calculation.
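The definition can be evaluated mechanically on small inputs; the following Python sketch (our own naming and representation) fixes a balanced static tree over positions and counts label switches at each vertex:

```python
# Sketch: compute Wilber's interleave bound of a permutation. A vertex of
# the balanced static tree covers a contiguous interval of positions
# (leaves hold keys in the order they appear in the sequence); keys are
# queried in sorted order and labeled L or R by which subtree they fall in.
def interleave_bound(perm):
    n = len(perm)
    pos = {k: i for i, k in enumerate(perm)}   # time (position) of each key
    order = sorted(perm)                       # keys queried in sorted order

    def ib(lo, hi):                            # vertex covering positions [lo, hi)
        if hi - lo <= 1:
            return 0
        mid = (lo + hi) // 2
        labels = ['L' if pos[k] < mid else 'R'
                  for k in order if lo <= pos[k] < hi]
        switches = sum(1 for a, b in zip(labels, labels[1:]) if a != b)
        return switches + ib(lo, mid) + ib(mid, hi)

    return ib(0, n)
```

On the bit-reversal permutation the labels alternate at every vertex, which is exactly the behavior behind Wilber's lower bound for that sequence.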
Arborally Satisfied Sets.
In [demaine2009bst], Demaine et al formalize a connection between binary search trees and points in the plane satisfying a certain property. One of Wilber’s lower bounds in the BST model [wilber1989bounds] is formulated using the following geometric description of a search sequence: plot the search sequence as points in a plane where one axis represents key values and the other axis represents the ordering of the search sequence (that is, time). In the context of sorting, these axes can also be referred to as input order and output order. In this work, we use the horizontal axis for time and the vertical axis for keyspace.
In addition to plotting the search sequence on the plane, one can also plot the key values of the nodes which a BST algorithm accesses (for search or rotations) while searching for a node. When searching for an element which is inserted at time $t$, the values of the nodes in the search path are plotted on the vertical line at $t$. Demaine et al proved that such a plot satisfies the following property:
Given a set of points $P$ in the plane, $P$ is arborally satisfied if for every two points $p, q \in P$ that are not on the same vertical or horizontal line, the rectangle defined by $p$ and $q$ contains at least one point (other than $p$ and $q$) on an edge adjacent to $p$ and at least one point on an edge adjacent to $q$.
See Figure 4 for an example of an arborally satisfied set consisting of a search sequence and further accesses which arborally satisfy the set.
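Whether a small point set is arborally satisfied can be checked by brute force. The sketch below (our own code) uses the simpler rectangle criterion—every rectangle spanned by two unaligned points must contain a third point of the set:

```python
# Brute-force check of arboral satisfaction: every axis-aligned rectangle
# spanned by two points not sharing a row or column must contain a third
# point of the set (on its boundary or interior).
def arborally_satisfied(points):
    pts = set(points)
    for p in pts:
        for q in pts:
            if p[0] != q[0] and p[1] != q[1]:
                xlo, xhi = min(p[0], q[0]), max(p[0], q[0])
                ylo, yhi = min(p[1], q[1]), max(p[1], q[1])
                if not any(r != p and r != q and
                           xlo <= r[0] <= xhi and ylo <= r[1] <= yhi
                           for r in pts):
                    return False
    return True
```

For example, the two-point set $\{(0,0), (1,1)\}$ is not satisfied, but adding the corner $(0,1)$ satisfies it.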
Instead of using a BST to arborally satisfy an access sequence, one can also use an algorithm that directly places points in the plane to produce an arborally satisfied set.
Given a set of points in the plane corresponding to an access sequence $\pi$, an offline arboral satisfaction algorithm adds points to the plane to make an arborally satisfied set. An online arboral satisfaction algorithm also adds points in the plane to form an arborally satisfied set, but it cannot use knowledge of future accesses.
Demaine et al show that any BST algorithm can be interpreted as an arboral satisfaction algorithm, but they also show a more surprising converse: any arboral satisfaction algorithm can be transformed into a BST algorithm. Specifically, they show that an offline arboral satisfaction algorithm which requires $m$ added points to arborally satisfy a search sequence $\pi$ can be transformed into an offline BST algorithm which requires $O(m)$ accesses to search for the elements of $\pi$. They also showed the analogous statement for online algorithms.
3 The Log-Interleave Bound
In this section, we define the log-interleave bound and prove some of its desirable properties. Then, we go on to show that the log-interleave bound is within a multiplicative $O(\log\log n)$ factor of a known lower bound in the BST model.
Defining the log-interleave bound.
The tools for calculating the log-interleave bound are similar to those for the interleave bound. Given an access sequence $\pi$, fix a static tree $T$, and for each vertex $v \in T$, label the elements of the sorted sequence according to whether they access the left or right subtree of $v$. Then let $\ell_v$ be the labeled sequence. Let $r(\ell_v)$ refer to the decomposition of $\ell_v$ into the smallest possible number of runs of consecutive Rs or Ls, where an element of $r(\ell_v)$ refers to a run and has a size corresponding to the number of elements in that run. Then the log-interleave bound of $v$ is calculated as follows, $$\mathit{LIB}(v) = \sum_{b \in r(\ell_v)} \log(|b| + 1),$$
and the log-interleave bound of $\pi$ is calculated similarly to the interleave bound: $$\mathit{LIB}(\pi) = \sum_{v \in T} \mathit{LIB}(v).$$
See Figure 3 for an example calculation of the log-interleave bound, and to see how it differs from the interleave bound.
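The difference from the interleave bound is visible in code as well: instead of counting switches, each run contributes the log of its length. The sketch below (our own naming; logs taken base 2) mirrors the computation above:

```python
import math

# Sketch: compute the log-interleave bound over a balanced static tree of
# positions. Each vertex decomposes its labeled sequence into maximal runs
# of L or R and contributes log2(run length + 1) per run.
def log_interleave_bound(perm):
    n = len(perm)
    pos = {k: i for i, k in enumerate(perm)}
    order = sorted(perm)

    def lib(lo, hi):
        if hi - lo <= 1:
            return 0.0
        mid = (lo + hi) // 2
        labels = ['L' if pos[k] < mid else 'R'
                  for k in order if lo <= pos[k] < hi]
        runs = []                          # maximal runs as [label, size]
        for lab in labels:
            if runs and runs[-1][0] == lab:
                runs[-1][1] += 1
            else:
                runs.append([lab, 1])
        cost = sum(math.log2(size + 1) for _, size in runs)
        return cost + lib(lo, mid) + lib(mid, hi)

    return lib(0, n)
```

On the bit-reversal permutation every run has length one, so each run costs $\log_2 2 = 1$ and the bound coincides with the total number of runs.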
3.1 Properties of the log-interleave bound
We first show that the log-interleave bound upper-bounds the number of bits needed to encode a permutation, meaning it is also an upper bound on the information content of permutations.
A permutation $\pi$ can be encoded in $O(\mathit{LIB}(\pi))$ bits. The proof is by induction on the subtrees. Assume inductively that the permutation defined by a subtree can be encoded in $c$ times its log-interleave cost, in bits, for some constant $c$. Clearly this is true for the leaves, which contain a single element. For an internal node, we need to encode the permutation defined by each of its two children, and how the two permutations are interleaved. The costs of encoding the two children are accounted for in the bits for each child subtree inductively. To encode the interleaving we can use a code such as the gamma code to encode the length of each run. Such a code requires $O(\log \ell)$ bits to encode a run of length $\ell$, and hence can be charged to the log-interleave bound for that node. We might also want to encode the overall size $n$, which can be done easily in $O(\log n)$ bits and is subsumed by the other terms by choosing $c$ large enough.
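For reference, the Elias gamma code mentioned above encodes a positive integer $\ell$ in $2\lfloor\log_2 \ell\rfloor + 1$ bits: the binary representation of $\ell$ preceded by one zero for each bit after the first. A minimal sketch:

```python
# Elias gamma code: prefix the binary form of ell (which starts with 1)
# with len-1 zeros, so a decoder knows how many bits to read.
def gamma_code(ell):
    assert ell >= 1
    b = bin(ell)[2:]
    return '0' * (len(b) - 1) + b
```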
Comparison with Reg.
Here, we show by two examples that the log-interleave bound is neither superior to Reg, nor is Reg superior to it. Recall that since Reg is almost identical to the unified bound (with the exception that the unified bound allows repeated keys), this result also holds for the unified bound. Furthermore, note that the unified bound is sometimes a superconstant multiplicative factor away from optimal, while in Theorem 3.2, we show that the log-interleave bound is never more than $O(\log\log n)$ away from optimal.
Reg is not superior to the log-interleave bound.
Consider the permutation (sequence) obtained by breaking a sorted list into $\sqrt{n}$ equally sized segments and then interleaving them in the natural way—that is, $\pi = (1,\ \sqrt{n}+1,\ 2\sqrt{n}+1,\ \dots,\ 2,\ \sqrt{n}+2,\ \dots)$.
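This permutation is easy to generate; the sketch below (our own code) takes $s$ segments of size $s$, so $n = s^2$, and emits the $r$th element of every segment in round $r$ (0-indexed keys):

```python
# Interleave s sorted segments of size s: round r outputs element r of
# each segment, i.e. the keys r, s + r, 2s + r, ...
def interleaved_perm(s):
    return [seg * s + r for r in range(s) for seg in range(s)]
```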
The log-interleave bound of this permutation is calculated as follows: letting the root of the tree be depth 0, note that for every balanced tree node of depth $d \geq \frac{1}{2}\log n$, the input is sorted, so there will simply be two runs, one on the left and one on the right, each of size $n/2^{d+1}$. Now consider the levels above depth $\frac{1}{2}\log n$. The number of nodes at each level $d$ is $2^d$. Furthermore, the number of runs within each node is $2\sqrt{n}$, and the size of each run is $\sqrt{n}/2^{d+1}$ (i.e., the number of elements covered by the node, $n/2^d$, divided by the number of runs, $2\sqrt{n}$), as illustrated in Figure 3. This gives a total log-interleave bound cost for the permutation across levels of $$\sum_{d=0}^{\frac{1}{2}\log n - 1} 2^d \cdot 2\sqrt{n}\,\log\!\left(\frac{\sqrt{n}}{2^{d+1}} + 1\right) \;+\; \sum_{d=\frac{1}{2}\log n}^{\log n - 1} 2^d \cdot 2\log\!\left(\frac{n}{2^{d+1}} + 1\right).$$
Both sums are dominated by their last terms, each of which is $\Theta(n)$, so the total cost is $O(n)$.
To show that $\mathit{Reg}(\pi) = \Omega(n\log n)$, consider calculating $\mathrm{reg}_i$ for any element $\pi_i$ with $i > \sqrt{n}$. Consider inserting element $\pi_i$ and searching for the previously inserted element $\pi_j$ which minimizes $\mathrm{reg}_i$. The closest inserted element to $\pi_i$ in keyspace is $\pi_{i-\sqrt{n}}$; it has keyspace distance $1$ and time distance $\sqrt{n}$. Choosing an element further back in time will only increase both distances, so only elements inserted after $\pi_{i-\sqrt{n}}$ are possible minimizers. Now consider the candidates inserted after $\pi_{i-\sqrt{n}}$: consecutive insertions within a round are $\sqrt{n}$ apart in keyspace, so each such candidate is at least $\sqrt{n}$ away from $\pi_i$ in keyspace, and decreasing the time distance below $\sqrt{n}$ cannot bring $\mathrm{reg}_i$ below $\sqrt{n}$. Thus we have shown that after the first $\sqrt{n}$ elements are inserted, each remaining element has a cost of $\Omega(\log n)$ to insert, so a Reg-optimal algorithm would perform $\Omega(n\log n)$ comparisons.
For the next lemma, we will need to define a particularly useful permutation. The bit-reversal permutation on $n$ keys is generated by beginning with the sorted list $0, 1, \dots, n-1$, writing each key in binary, then reversing the bits of each key. For example, the bit-reversal permutation on 8 keys is $(0, 4, 2, 6, 1, 5, 3, 7)$. Wilber showed in [wilber1989bounds] that any BST algorithm takes $\Omega(n\log n)$ time to query the bit-reversal permutation.
The bit-reversal permutation can also be constructed using the following algorithm: start with a sorted list of keys $0, \dots, n-1$. Let $m = 1$, and place all keys $k$ such that $\lfloor k/m \rfloor$ is odd in the second half, and all other keys in the first half. Recursively repeat this routine on each half, multiplying $m$ by two each time. This construction illustrates that if we fix a static tree $T$ with the keys of the bit-reversal permutation at the bottom, at each vertex $v$ of $T$, querying the keys in sorted order will switch between the left and right subtrees of $v$ on each query.
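The direct bit-reversal description translates to one line; the sketch below (our own code, for $n$ a power of two) reverses the binary representation of each key:

```python
# Bit-reversal permutation: reverse the bits of each key 0..n-1,
# where n is a power of two.
def bit_reversal_perm(n):
    bits = n.bit_length() - 1
    return [int(format(k, '0%db' % bits)[::-1], 2) for k in range(n)]
```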
The log-interleave bound is not superior to Reg.
Consider the permutation $\pi$ obtained by splitting the sorted list into $n/\log n$ segments of size $\log n$, and then permuting those segments according to the bit-reversal permutation.
Due to the properties of the bit-reversal sequence, the interleave bound of $\pi$ will be the same as for a list with $n/\log n$ elements permuted according to the bit-reversal sequence—that is, $\Theta\!\left(\frac{n}{\log n}\log\frac{n}{\log n}\right) = \Theta(n)$. In $\pi$, every block is of size $\log n$, so to calculate the log-interleave bound, we multiply by $\Theta(\log\log n)$ on all but the bottom $\log\log n$ levels. Thus, the log-interleave bound of $\pi$ is $\Theta(n\log\log n)$.
Since all but $n/\log n$ elements are next to their adjacent element, $\mathrm{reg}_i = O(1)$ for all but these elements. In the worst case, $\mathrm{reg}_i \leq n$ at each of these elements. Then, letting $T(\pi)$ be the cost of a Reg-optimal algorithm on $\pi$, $$T(\pi) = O\!\left(n + \frac{n}{\log n}\cdot\log n\right) = O(n).$$
Other properties of the log-interleave bound.
As noted in Section 1, McIlroy invented an algorithm which he calls mergesort with exponential search, which performs the same number of comparisons (but in $\Theta(n\log n)$ total time) as an algorithm optimal for the log-interleave bound. He shows that this mergesort is optimal over reversal, weak composition, weak segmentation, riffle shuffles, and runs, fulfilling all but one of his criteria. Strong segmentation fails due to the example described in the previous paragraph. Refer to Section LABEL:sec:_morebackground for a longer discussion of these properties.
Relation to Practical Measures of Disorder.
As noted in Section 1, the two measures of disorder that are most widely used in practical adaptive sorting are Runs and Inv. As mentioned in the previous paragraph, McIlroy’s work shows that the log-interleave bound is optimal over Runs; here, we show that the log-interleave bound shares the same superiority relation with Inv that it does with Reg: it is neither superior to Inv, nor is Inv superior to it.
First, note that for an algorithm to be optimal for Inv, it must sort a permutation in $O(n\log(\mathit{Inv}/n + 1) + n)$ comparisons [petersson1995adaptive].
The measure Inv is not superior to the log-interleave bound.
This follows from the fact that Reg is superior to Inv (Lemma 3.1), and Reg is not superior to the log-interleave bound.
The log-interleave bound is not superior to Inv.
Consider the permutation $\pi$ constructed by breaking the sorted list into sorted segments of size $\log n$ and then taking the last element of each segment and permuting these elements among each other according to the bit-reversal sequence, as illustrated in Figure 3.
First, we calculate the cost of sorting $\pi$ under an algorithm optimal for Inv. A rough upper bound suffices: note that each element that is in its sorted position incurs only unit cost. Upper-bounding each of the $n/\log n$ out-of-order elements with cost $O(\log n)$ gives an upper bound of $O\!\left(n + \frac{n}{\log n}\cdot\log n\right) = O(n)$ for sorting the sequence.
Now, to calculate the log-interleave bound, refer to Figure 3 for an illustration of one merge step; similarly to the proof of Lemma 3.1, due to the properties of the bit-reversal sequence, every level of the tree above which the data is not perfectly sorted will have cost $\Omega\!\left(\frac{n}{\log n}\log\log n\right)$, since each of the $n/\log n$ permuted elements breaks a run of length $\Omega(\log n)$. Thus the overall cost under the log-interleave bound is $\Omega(n\log\log n)$.
3.2 Comparing the Log-Interleave Bound with Wilber’s Interleave Bound.
A natural question one might ask about a BST algorithm or an adaptive sorting algorithm is: how far, in the worst case, is the cost of this algorithm from any known lower bound? Or, in other words, how close is this algorithm to optimal? In this section, we will settle this question for the log-interleave bound in the BST model. Lemma 3.1 contains an example showing that on one particular sequence, the log-interleave bound is a $\log\log n$ multiplicative factor slower than another measure of disorder. Here, we will show that this separation from optimality is tight, culminating in the following theorem:
For any permutation $\pi$, $\mathrm{LIB}(\pi) \le O(\log\log n) \cdot \mathrm{IB}(\pi)$.
As mentioned earlier, $\mathrm{IB}(\pi)$ refers to Wilber’s interleave bound.
The proof will proceed by first characterizing the situation in which the log-interleave bound and the interleave bound differ the most from each other, and then proving that this difference is no more than $O(\log\log n)$, multiplicatively.
For a permutation $\pi$, let $v$ be a vertex of the corresponding static tree, and consider the sequence of L and R labels at $v$. Then the log-interleave bound at $v$ will differ from the interleave bound at $v$ by the greatest amount when each “run” of L or R in the labeled sequence is the same size.
Let $s_i$ be the size of the $i$th run of L or R in the labeled sequence; then the log-interleave bound at $v$ is maximized when all the $s_i$ are equal, due to the concavity of the logarithm.
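The concavity step can be made explicit. Writing $s_1, \dots, s_k$ for the run sizes and $m$ for their total (notation introduced here only for illustration), Jensen's inequality for the concave function $\log$ gives

```latex
\sum_{i=1}^{k} \log(s_i + 1)
  \;\le\; k \log\!\left(\frac{1}{k}\sum_{i=1}^{k} (s_i + 1)\right)
  \;=\; k \log\!\left(\frac{m}{k} + 1\right),
```

with equality exactly when every $s_i = m/k$, i.e., when all runs have the same size.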
For a permutation $\pi$, let $v$ be a vertex of the corresponding static tree. Furthermore, let the number of leaves below vertex $v$ be $\ell$. Then for any $k$ and some constant $c$,
Before the proof, note that the expression on the right-hand side captures the log-interleave bound of an access sequence whose runs of L and R are all the same size. The left-hand side is an expression which we will show is bounded by $O(\log\log n)$ times the interleave bound.
Begin by assuming that $k$ is at the lower end of its range. Then the expression we are proving reads as follows:
Now, examine the cases where the two added terms on the left may each be smaller than the term on the right. If the first term is smaller:
This shows that the inequality holds when the first term is the smaller one. When the second term in the sum dominates, the expression reads
which is self-evidently true.
So far, we have shown that our inequality holds when $k$ is at its smallest, so we must examine the expression for other values of $k$. To do this, examine the derivative of the right-hand expression with respect to $k$; here, the $+1$s in the logarithms are omitted, since they do not affect whether the derivative increases or decreases:
As $k$ increases from the lower end of its range to the upper end, the derivative begins positive, then decreases to zero, then continues decreasing. Hence, the two smallest points of the expression are at the endpoints of the range. The lower endpoint is already covered above, so it remains to check the upper endpoint:
which is clearly true.
Now, these two lemmas are put together to prove Theorem 3.2.
4 Offline BST Algorithm
In this section, we present an offline BST algorithm that uses $O(\mathrm{LIB}(\pi))$ accesses for any permutation $\pi$. This algorithm makes use of a transformation devised by Demaine et al. [demaine2009bst] from point sets in the plane satisfying a certain property to offline algorithms in the BST model. The transformation completes the proof of the following theorem:
There exists an offline algorithm in the BST model which searches for a sequence $\pi$ using $O(\mathrm{LIB}(\pi))$ accesses.
The offline arboral satisfaction algorithm can be found in Algorithm LABEL:algo:_arboralmergesort. We named this algorithm arboral mergesort, since it works by recursively dividing the input in half, arborally satisfying each half, and then combining the two halves by satisfying any unsatisfied rectangles between points in one half and points in the other. The merge routine, found in Algorithm LABEL:algo:_arboralmerge and named arboral merge, merges two arborally satisfied sets by using the keys of each set to split the other into “blocks” of keys that remained consecutive after the merge; then, it combines these blocks into one arborally satisfied set.
The arboral merge works as follows: suppose the two input sets contain $n_1$ and $n_2$ keys, respectively. For a block in one input set whose leftmost element was inserted at time $t$, the algorithm adds accesses of the first and last element (in keyspace) of the block at the earliest time in the merged set—that is, on its first vertical line—and at the vertical line corresponding to time $t$. It then adds any additional accesses needed on those lines to make the set arborally satisfied. Furthermore, it adds accesses of the first and last element at the first and last time steps of the merged set.
A rough intuition behind this algorithm is that the points added down the middle of the merged set serve to join the two sets together, while the points added at its ends serve to “rebalance” the set so that future merges retain the desired query times.
An example of this algorithm can be found in Figure 5.
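Arboral satisfaction itself is easy to state operationally. The brute-force checker below is an illustrative sketch of our own (not one of the algorithms above), assuming points are given as (time, key) pairs:

```python
def is_arborally_satisfied(points):
    """Brute-force check of arboral satisfaction for a set of
    (time, key) points: every pair of points sharing neither
    coordinate must span an axis-aligned rectangle containing a
    third point of the set (in its interior or on its boundary)."""
    pts = set(points)
    for (x1, y1) in pts:
        for (x2, y2) in pts:
            if x1 >= x2 or y1 == y2:
                continue  # one orientation per pair; aligned pairs are exempt
            lo, hi = min(y1, y2), max(y1, y2)
            if not any((x, y) not in ((x1, y1), (x2, y2))
                       and x1 <= x <= x2 and lo <= y <= hi
                       for (x, y) in pts):
                return False
    return True
```

This runs in cubic time, which suffices for checking small examples such as those in the figures; two lone points on a diagonal are never satisfied, while adding either missing corner of their rectangle satisfies them.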
Now we build up to the proof of Theorem 4. First, we show that the arboral mergesort returns an arborally satisfied set. Then, as a warmup, we bound the number of additional accesses required for any split of a set that has been satisfied using the arboral mergesort to remain arborally satisfied. Next, we show via example that, as initially written, the algorithm does NOT have all the properties needed to prove that it only requires $O(\mathrm{LIB}(\pi))$ accesses for any permutation $\pi$. Finally, we give a construction which amends the arboral mergesort so that it has the desired property.
First, we show correctness of Algorithm LABEL:algo:_arboralmergesort.
Algorithm LABEL:algo:_arboralmergesort is correct; that is, it outputs an arborally satisfied set containing the access sequence $\pi$.
It is sufficient to show that Algorithm LABEL:algo:_arboralmerge always returns an arborally satisfied set if its inputs are themselves arborally satisfied. Assume, then, that the two inputs are arborally satisfied. Without loss of generality, we will show that one of the two inputs remains arborally satisfied in the combined grid after the merge step.
We will consider an arbitrary “block,” i.e., a horizontal section of the grid that is arborally satisfied by Algorithm LABEL:algo:_satisfy during the merge step. In either of the two cases below, the block itself is arborally satisfied by the definition of Algorithm LABEL:algo:_satisfy; it remains to consider any other unsatisfied rectangles that may be created by the additional accesses made by Algorithm LABEL:algo:_satisfy.
Case 1: the block contains points from only one of the two input sets. In this case, there will be points placed on the left and right (horizontal) boundaries of the block, corresponding to the largest and smallest elements in the block; since the blocks above and below will also have points on their left and right boundaries placed at their largest and smallest elements, any rectangle between a point outside the block and a point inside it must contain one of the aforementioned boundary points of the blocks above and below. Thus no unsatisfied rectangles are created in the merged set.
Case 2: the block contains points from only the other input set. Since that set is arborally satisfied, we need only consider unsatisfied rectangles created by the additional accesses we added. However, similarly to Case 1, due to the fact that each block has accesses at its boundary, all rectangles are arborally satisfied.