1 Introduction
Binary search trees (BSTs) and heaps are the canonical comparison-based implementations of the well-known dictionary and priority queue data types.
In a balanced BST all standard dictionary operations (insert, delete, search) take O(log n) time, where n is the size of the dictionary. Early research has mostly focused on structures that are kept (approximately) balanced throughout their usage. (AVL trees, red-black trees, and randomized treaps are important examples; see e.g., [11, § 6.2.2].) These data structures rebalance themselves when necessary, guided by auxiliary data stored in every node.
By contrast, Splay trees (Sleator, Tarjan, 1983 [17]) achieve O(log n) amortized time per operation without any explicit balancing strategy and with no bookkeeping whatsoever. Instead, Splay trees readjust the search path after every access, in a way that depends only on the shape of the search path, ignoring the global structure of the tree. Besides the O(log n) amortized time, Splay trees are known to satisfy stronger, adaptive properties (see [9, 3] for surveys). They are, in fact, conjectured to be optimal on every sequence of operations (up to a constant factor); this is the famous "dynamic optimality conjecture" [17]. Splay trees and data structures of a similar flavor (i.e., local restructuring, adaptivity, no auxiliary data) are called "self-adjusting".
The efficiency of Splay trees is intriguing and counterintuitive. They rearrange the search path by a sequence of double rotations ("zig-zig" and "zig-zag"), bringing the accessed item to the root. It is not hard to see that this transformation results in "approximate depth-halving" for the nodes on the search path; the connection between this depth-halving and the overall efficiency of Splay trees is, however, far from obvious.
An arguably more natural approach to BST readjustment would be to turn the search path, after every search, into a balanced tree. (The restriction to touch only the search path is natural, as the cost of doing so is proportional to the search cost; note that a BST can be changed into any other BST with a linear number of rotations [16].) This strategy combines the idea of self-adjusting trees with the more familiar idea of balancedness. Indeed, this algorithm was proposed early on by Sleator (see e.g., [19, 1]). We refer to BSTs maintained in this way as path-balanced BSTs (see Figure 1).
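To make the heuristic concrete, the following minimal Python sketch shows one naive way to perform the rebalancing step: collect the search path, rebuild it into a balanced BST, and reattach the hanging subtrees in the unique way dictated by key order. The node layout and helper names are our own, and the sketch ignores pointer-efficiency considerations.

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def search_path(root, key):
    """Nodes visited while searching for `key` (assumed present)."""
    path, node = [], root
    while node.key != key:
        path.append(node)
        node = node.left if key < node.key else node.right
    path.append(node)
    return path

def build_balanced(nodes):
    """Balanced BST over key-sorted path nodes (children already cleared)."""
    if not nodes:
        return None
    mid = len(nodes) // 2
    root = nodes[mid]
    root.left = build_balanced(nodes[:mid])
    root.right = build_balanced(nodes[mid + 1:])
    return root

def attach(root, sub):
    """Hang subtree `sub` at the unique empty slot given by key order."""
    node = root
    while True:
        nxt = node.left if sub.key < node.key else node.right
        if nxt is None:
            if sub.key < node.key:
                node.left = sub
            else:
                node.right = sub
            return
        node = nxt

def path_balance(root, key):
    """Access `key`, then rebuild the search path into a balanced BST."""
    path = search_path(root, key)
    on_path = {id(n) for n in path}
    hanging = []                      # off-path subtrees, to be reattached
    for n in path:
        hanging += [c for c in (n.left, n.right)
                    if c is not None and id(c) not in on_path]
        n.left = n.right = None
    new_root = build_balanced(sorted(path, key=lambda n: n.key))
    for sub in hanging:
        attach(new_root, sub)
    return new_root
```

The reattachment loop relies on the fact that each hanging subtree's key range lies strictly between two consecutive keys of the search path, so descending from the new root by the subtree's root key reaches the unique empty slot where it fits.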
Path-balanced BSTs turn out to be surprisingly difficult to analyse. In 1995, Balasubramanian and Raman [1] showed an upper bound of O(log n · loglog n / logloglog n) on the amortized cost of operations in path-balanced BSTs. This bound has not been improved since. Thus, path-balanced BSTs are not known to match the O(log n) amortized cost (let alone the stronger adaptive properties) of Splay trees. This is surprising, because broad classes of BSTs are known to match several guarantees of Splay trees [19, 2]; path-balanced BSTs, however, fall outside these classes. (Intuitively, path-balance differs from Splay, and is harder to analyse, because it may increase the depth of a node by a superconstant additive term, whereas Splay increases the depth of any node by at most a constant. In a precise sense, path-balance is not a local transformation; see [2].) Without evidence to the contrary, one may even conjecture path-balanced BSTs to achieve dynamic optimality; yet our current upper bounds do not even match those of a static balanced tree. This points to a large gap in our understanding of a natural heuristic in the fundamental BST model.
In this paper we show that the amortized time of an access in a path-balanced BST is O(log n · 2^{O(log* n)}), where log*(·) denotes the slowly growing iterated logarithm. (We only focus on successful search operations, i.e., accesses. The results can be extended to other operations at the cost of technicalities. For simplicity, we assume that the keys in the tree are unique.) The result, probably not tight, comes close to the information-theoretic lower bound of Ω(log n). Closing the gap remains a challenging open problem.

Priority queues support the operations insert and delete-min, and possibly meld, decrease-key, and others. Pairing heaps, a popular priority queue implementation, were proposed in the 1980s by Fredman, Sedgewick, Sleator, and Tarjan [5] as a simpler, self-adjusting alternative to Fibonacci heaps [6]. Pairing heaps maintain a multiary tree whose nodes (each with an associated key) are in heap order. Similarly to Splay trees, pairing heaps perform only key comparisons and simple local transformations on the underlying tree, with no auxiliary data stored. Fredman et al. showed that in the standard pairing heap all priority queue operations take O(log n) amortized time. They also proposed a number of variants, including the particularly natural multipass pairing heap. In multipass pairing heaps, the crucial delete-min operation is implemented as follows. After the root of the heap (i.e., the minimum) is deleted, repeated pairing rounds are performed on the new top-level roots, reducing their number until a single root remains. In each pairing round, neighboring pairs of nodes are linked. Linking two nodes makes the one with the larger key the leftmost child of the other (Figure 2).
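The pairing-round mechanics just described can be summarized in a short Python sketch (the class and helper names are our own; delete-min returns the new root after repeated pairing rounds):

```python
class HeapNode:
    def __init__(self, key):
        self.key = key
        self.children = []            # ordered left to right

def link(a, b):
    """The node with the smaller key gains the other as its leftmost child."""
    if b.key < a.key:
        a, b = b, a
    a.children.insert(0, b)
    return a

def delete_min(root):
    """Multipass delete-min: repeated pairing rounds over the top-level
    roots until a single root remains (None if the heap becomes empty)."""
    roots = root.children
    while len(roots) > 1:
        nxt = [link(roots[i], roots[i + 1])
               for i in range(0, len(roots) - 1, 2)]
        if len(roots) % 2:            # odd count: rightmost root unaffected
            nxt.append(roots[-1])
        roots = nxt
    return roots[0] if roots else None
```

Repeatedly calling `delete_min` on the returned root extracts the remaining keys in sorted order, which is a convenient sanity check of the heap order maintained by `link`.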
Pairing heaps perform well in practice [18, 14, 12]. However, Fredman [4] showed that all of their standard variants (including the multipass variant described above) fall short of matching the theoretical guarantees of Fibonacci heaps: in particular, assuming O(log n) cost for delete-min, the amortized cost of decrease-key may be Ω(loglog n), in contrast to the O(1) guarantee for Fibonacci heaps. The exact complexity of the standard pairing heap on sequences of intermixed delete-min, insert, and decrease-key operations remains an intriguing open problem, with significant progress through the years (see e.g., [8, 15]). For the multipass variant, however, even the basic question of whether deleting the minimum takes amortized O(log n) time remains open, the best upper bound to date being the O(log n · loglog n / logloglog n) originally shown by Fredman et al. As in the case of path-balanced BSTs, we thus have a basic combinatorial transformation on trees whose complexity is not well understood.
In this paper we show that in multipass pairing heaps delete-min takes amortized time O(log n · 2^{O(log* n)}), the first improvement since the original paper of Fredman et al. (To keep the presentation simpler, we only focus on delete-min operations, omitting the extension of the result to other operations.) The improvement is, from a practical perspective, not significant. Nonetheless, it reduces the gap to the theoretical optimum from O(loglog n / logloglog n) to a quantity smaller than the k-times iterated logarithm of n, for any fixed k.
The reader may notice that the old bounds for multipass pairing heaps and path-balanced BSTs are the same. The two data structures are, indeed, quite similar: if one views multipass pairing heaps as binary trees (see e.g., [10, § 2.3.2]), the multipass readjustment is equivalent to balancing the right spine of a binary tree. (We note that the previous analysis of path-balanced BSTs [1] did not use this correspondence. By connecting the two data structures, we also simplify, to some extent, the proof of [1].) The multipass analysis, however, does not immediately transfer to path-balanced BSTs; the fact that the BST search path may be arbitrary (not necessarily right-leaning) complicates the argument for path-balanced BSTs.
Our analysis of multipass pairing heaps (§ 2) is based on a new, fine-grained scaling of the sum-of-logs potential function used by Sleator and Tarjan in the analysis of Splay trees, and by Fredman et al. in the analysis of pairing heaps. At a high level, we argue that certain link operations are information-theoretically efficient, and that such links happen sufficiently often. The subsequent, rather intricate analysis notwithstanding, we believe that the ideas of the proof may find further applications in the analysis of data structures.
2 Multipass pairing heaps
A pairing heap is a multiary heap, storing a key in each node, with the usual (min-)heap condition: the key of a node is smaller than the keys of its children. Priority queue operations are implemented using the unit-cost linking step. Given nodes x and y, link(x, y) "hangs" the node with the larger key as the leftmost child of the other. The operations insert, meld, and decrease-key can be implemented in a straightforward way using a single link (we refer to [5] for details). The only nontrivial operation is delete-min. Here, after deleting the root, we are left with a number of top-level nodes, which we combine into a single tree via a sequence of links. In multipass pairing heaps we achieve this by performing repeated pairing rounds, until a single top-level node remains (i.e., the new root of the heap). A single pairing round is as follows. Let x_1, …, x_j be the top-level nodes, ordered left to right, before the round. For all i ≤ ⌊j/2⌋ we perform link(x_{2i−1}, x_{2i}). Observe that if j is odd, then the rightmost node x_j is unaffected in the current round. The number of rounds is ⌈log d⌉, where d is the number of children of the (deleted) root. (The log function is base 2 everywhere; the base-e logarithm is written as ln.) (See Figure 2.)

We now analyse delete-min operations implemented by multipass pairing heaps. Let d, the number of children of the deleted root, be the real cost of the operation (observe that the number of links is exactly d − 1). Let n be the size of the heap before the operation. We use the binary tree view of multiary heaps, where the leftmost-child and next-sibling pointers are interpreted as left-child and right-child pointers. A single link operation is shown in Figure 3. Let a, b, c denote the sizes of subtrees A, B, and C, respectively.
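The binary tree view used throughout is the standard leftmost-child/next-sibling encoding; a small illustrative sketch (class and function names are our own):

```python
class MNode:                          # multiary node: key + ordered children
    def __init__(self, key, children=None):
        self.key, self.children = key, children or []

class BNode:                          # binary node
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def binary_view(node, siblings=()):
    """Encode a multiary tree as a binary tree: the leftmost child becomes
    the left child, the next sibling (to the right) becomes the right child."""
    if node is None:
        return None
    left = (binary_view(node.children[0], tuple(node.children[1:]))
            if node.children else None)
    right = binary_view(siblings[0], siblings[1:]) if siblings else None
    return BNode(node.key, left, right)
```

Under this encoding the children of the deleted root form the right spine of the binary view, which is why a pairing round corresponds to rotations along a right-leaning path.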
We define a potential function that refines the Sleator-Tarjan "sum-of-logs" potential [17]. Let Φ = Σ_x φ(x), summing over all nodes x of the heap, where the per-node potential φ(x) depends only on s(x) and s(p(x)); here s(x) denotes the size of the subtree rooted at x, and p(x) is the parent of x. (Using log(s(x)/s(p(x))) instead would essentially recover the original "sum-of-logs" potential. Such an "edge-based" potential function was used earlier, e.g., in [7, 13].) Note that both subtrees and parents are meant in the binary tree view.
For convenience, we introduce auxiliary functions f and g, in terms of which φ can be written compactly. Both f and g are positive, monotone increasing, and concave on their respective domains.
By simple arithmetic, the increase in potential due to a single link (as in Figure 3) is:
(1) 
For a suitably large constant c, we consider the scaled sizes of the subtrees A, B, and C. We distinguish different kinds of links, depending on the ordering of these three quantities (breaking ties arbitrarily). We first look at the cases when the quantity corresponding to A, respectively C, is the largest (called type-(1), respectively type-(2), links), and show that the possible increase in potential due to such links is small. In particular, for type-(1) links the increase is dominated by a single logarithmic term, and for type-(2) links the positive and negative contributions essentially cancel out. The proofs use standard (although somewhat delicate) analysis; we defer most of the calculations to Appendix B.
[B.1] A type-(1) link increases the potential by at most a logarithmic term plus an additive constant independent of a, b, c, d, and n.
[B.2]
A type-(2) link increases the potential by at most a small additive term.
The case when the quantity corresponding to B is the largest of the three (called a type-(3) link) is the most favorable. Here, the potential of x before the linking is (roughly) the logarithm of a very large quantity divided by b; after the linking, the potential becomes (essentially) the logarithm of the ratio between b and a much smaller quantity, resulting in a significant saving in potential. We use this saving to "pay" for the operations. First we make the following, easier claim.
[B.3] A type-(3) link cannot increase the potential.
It remains to balance the decrease in potential due to type-(3) links against the increase in potential due to all other links. First, we show that almost all links are of type (3).
There are at most O(log n) type-(1) and type-(2) links within a pairing round.
Proof.
Let a_i, b_i, c_i denote the subtree sizes corresponding to the ith link from left to right; see Figure 3 (right). Consider the indices corresponding to type-(1) and type-(2) links, and observe that the relevant subtree sizes are nondecreasing from one link to the next. If the ith link is of type (1) or type (2), then the corresponding subtree size at least doubles between consecutive such links, since in each of these cases a_i or c_i dominates b_i. Since all subtree sizes are at most n, the claim follows. ∎
All type-(1) and type-(2) links within a single pairing round increase the potential by at most .
Proof.
Look at a single round of pairing. Let a_i, b_i, c_i be as in the proof of Lemma 2. If the ith link is of type (1), then by Lemma 3 the increase in potential is dominated by a single logarithmic term.

Otherwise, if the ith link is of type (2), then by Lemma 3 the increase in potential is bounded by a similar term, for a suitable constant.

For each such link, consider the dominant term of the corresponding bound (according to its type). The sum of these terms telescopes, and the additive constant terms appear at most O(log n) times.

The total increase in potential is therefore bounded by a sum of concave terms. By concavity, such a sum is maximized when all of its arguments are equal. We thus obtain a bound on the total increase in potential in the pairing round.
The last proof yields, in fact, the following stronger claim.
All type-(1) and type-(2) links within the last pairing rounds increase the potential by at most .
Proof.
Observe that the ith-from-last pairing round contains at most 2^i links. Thus, as in Lemma 2, we obtain:

The second inequality uses the bound on the number of links per round. Summing this expression over all levels gives the claimed bound, using the fact that the resulting series converges to a constant. ∎
Now we estimate more carefully the decrease in potential due to type-(3) links. Let x and y be nodes as denoted in Figure 3. We want to express the change in potential in terms of the subtree sizes before the link operation. Among type-(3) links we distinguish two subtypes, type (3A) and type (3B), according to which of the two remaining quantities is larger. We have the following two (symmetric) observations:
[B.4] A type-(3A) link decreases the potential by at least
It follows that for some constant , if , then .
Proof.
Let and . Then, recalling Equation (1):
We have , and . Note, .
Collecting constant terms, we have:
As , we further simplify: .
It is now sufficient to show:
(We defer the detailed calculations to Appendix B.4.) ∎
[B.5] A type-(3B) link decreases the potential by at least
It follows that for some constant , if , then .
There exists a constant such that all type-(3A) links and all type-(3B) links satisfying the corresponding size conditions decrease the potential by at least the stated amount. We now define the category of a node with respect to its subtree size. Intuitively, nodes of the same category are those that, when linked, release the most potential. Using the notation of iterated function composition, let

The category of a node is based on these values. Note that the number of categories is log* n + O(1), where the O(1) term depends on c (using the star notation). [Category] Let x be a node. For each admissible i, we let the category of x be i if:

If the condition holds for no such i, we say that x is of the highest category.
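Since the categories are delimited by iterated-logarithm scales, it may help to recall the iterated logarithm log* explicitly; the following small helper is purely illustrative and not part of the proof:

```python
import math

def log_star(x):
    """Iterated logarithm: the number of times log2 must be applied
    to x before the result drops to at most 1."""
    n = 0
    while x > 1:
        x = math.log2(x)
        n += 1
    return n
```

The function grows extremely slowly: it is at most 5 for every input below 2^65536, which is why a 2^{O(log* n)} factor is tiny compared with, say, loglog n.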
The following crucial observations connect categories and savings in potential.
Consider a type-(3) link. If the two linked nodes are of the same category, then the link decreases the potential by at least the amount stated above.
Proof.
In each pairing round there are at most O(log n) nodes of the highest category.

Proof.

Let x be a node of the highest category; its defining inequality bounds its subtree size from below. By the same doubling argument as in Lemma 2, this can happen at most O(log n) times in each round. ∎
Let z denote the "winner" of linking x and y (neither of the highest category), i.e., z is the one with the smaller key. Then the category of z is strictly larger than the smaller of the categories of x and y.
As seen in Figure 2, a delete-min operation transforms the "spine" of the heap (in the binary view) into a balanced tree. Each level of this tree corresponds to a pairing round; specifically, level i consists of the nodes at distance i from the leaves, containing the losers of the ith pairing round. The following lemma captures the potential reduction that yields the main result.
Consider a subtree of this tree of suitable depth, whose leaves correspond to consecutive link operations. If the subtree contains only type-(3) links and no links involving nodes of the highest category, then the total decrease in potential caused by its links is at least a fixed positive amount.
Proof.
Assume towards a contradiction that there is no link between two nodes of the same category in the subtree. By Lemma 2, in each round the minimal category present increases by at least 1, leaving us with two nodes of maximal category in the last round, a contradiction. By Lemma 2, a link between nodes of equal category decreases the potential by at least the claimed amount. ∎
The amortized time of delete-min in multipass pairing heaps is O(log n · 2^{O(log* n)}).
Proof.
Let the real cost of the operation be d (the number of link operations is d − 1). Note that there are at most ⌈log d⌉ pairing rounds.

Thus, if d is small, then there are few pairing rounds. Using Lemma 2, the first pairing rounds increase the potential by a bounded amount, and as shown in Lemma 2, so do the last levels. The total potential increase is at most the sum of these two bounds.
To analyse the case of large d, we use the potential decrease due to type-(3) links. We first look at the first pairing rounds.

By Lemma 2, the links in every complete subtree of suitable depth that contains only type-(3) links and no nodes of the highest category decrease the potential by at least a fixed amount.

In the first levels we can find many disjoint subtrees of this depth. Within these levels only few links are of type (1) or type (2), or involve nodes of the highest category (Lemmas 2 and 2). Thus, most of the subtrees satisfy the conditions of Lemma 2 and decrease the potential accordingly. Also, the total increase in potential caused by type-(1) and type-(2) links is bounded (Lemma 2). Therefore, the first levels yield a net decrease in potential.

Note that by applying the same argument to the next levels, we again obtain a decrease in potential proportional to the number of links in those levels. Thus, levels that contain sufficiently many links can only decrease the potential.

We repeat this argument until we reach a level containing few links. Now, applying the same argument as in the first case, we get that the total increase in potential over the last levels (starting from this level) is bounded.
Summarizing, the total amortized time (in both cases) is at most the real cost plus the net change in potential.
Scaling the potential by a suitable factor, we obtain the claimed amortized bound. ∎
3 Path-balanced binary search trees
Consider the operation of accessing a node in a BST with n nodes (we refer interchangeably to a node and its key). Let the search path be the path from the root to the accessed node. The path-balance method rearranges the search path into a complete balanced BST (with all levels complete, except possibly the lowest). Subtrees hanging off the search path are reattached in the unique way given by the key order (Figure 1). There are multiple ways to implement this transformation such that the number of pointer moves and pointer changes is linear in the length of the search path. For instance, we may first rotate the search path into a monotone path, then apply a multipass transformation (described next) to this monotone path.
Multipass transformation.
A multipass transformation of a monotone path (whose deepest node might not be a leaf) converts the path into a balanced tree (in which the last level may be incomplete) by a sequence of pairing rounds. In each pairing round we rotate every other edge in a prefix of the path (i.e., a subpath consisting of the shallowest nodes). Each rotation pushes one node off the path. The pairing rounds are defined as follows. We assume that the path consists of right-child pointers; in the case that it consists of left-child pointers, everything is symmetric.

Let ℓ denote the length of the path (i.e., its number of nodes). In the first round we do just enough rotations so that the length of the remaining path is one less than a power of 2; specifically, we perform ℓ − (2^k − 1) rotations, where 2^k − 1 is the largest such value not exceeding ℓ. In each subsequent round we rotate every other edge of the remaining path, halving its length. We maintain the invariant that after a given number of rounds, all the nodes that were pushed off the path (excluding those pushed off in the first round) are arranged in balanced binary trees, hanging as children of the nodes of the remaining path.
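For concreteness, here is a Python sketch of the multipass transformation acting on a right-leaning path of binary nodes. The bookkeeping (trim the length to one less than a power of two, then halve round by round) follows the description above; the class and helper names are our own. As noted, the deepest node of the path may end up as the root rather than a leaf.

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def rotate_up(parent):
    """Rotate the edge (parent, parent.right): the deeper node rises and
    the parent is pushed off the path as its left child."""
    child = parent.right
    parent.right = child.left
    child.left = parent
    return child

def pairing_round(root, rotations):
    """Rotate every other edge among the shallowest edges of the path."""
    dummy = Node(None, right=root)
    prev = dummy
    for _ in range(rotations):
        if prev.right is None or prev.right.right is None:
            break
        prev.right = rotate_up(prev.right)
        prev = prev.right
    return dummy.right

def multipass_transform(root):
    """Balance a monotone (right-leaning) path by pairing rounds; the
    first round trims the path length to one less than a power of two."""
    length, node = 0, root
    while node is not None:
        length += 1
        node = node.right
    k = (length + 1).bit_length() - 1       # largest k with 2^k - 1 <= length
    root = pairing_round(root, length - (2 ** k - 1))
    length = 2 ** k - 1
    while length > 1:
        r = length // 2
        root = pairing_round(root, r)
        length -= r
    return root
```

Each rotation preserves the in-order sequence, so the transformation keeps the BST property while reducing the depth of the path to logarithmic.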
The proof of the following theorem is analogous to the proof of Theorem 2 (one can verify that all steps of the proof still hold for the slightly modified pairing rounds of the multipass transformation, with rotations taking the role of links).

For every monotone path, the change in potential caused by applying a multipass transformation to it is bounded as in Theorem 2, in terms of the length of the path and the size of the subtree of its root.
Warm-up: a simplified path-balance.
We first look at an easier-to-analyse variant of path-balance where, instead of a complete balanced tree, we build an almost balanced tree out of the search path, as follows: we first make the accessed item the root, then turn the parts of the search path containing items smaller (resp. larger) than the accessed item into balanced subtrees rooted at the left (resp. right) child of the new root. The depth of this tree is at most one larger than the depth of a complete balanced tree built from the search path.

For the purpose of the analysis, we view the simplified path-balance transformation as a two-step process (Figure 5). The actual implementation may be different, but the analysis applies as long as the transformation takes time proportional to the length of the search path.
Step 1. Rotate the accessed element all the way to the root. (Observe that after this step, the search path is split into two monotone paths: the part to the left of the root consists only of right-child pointers, and the part to the right consists only of left-child pointers.)

Step 2. Apply a multipass transformation to each of the two monotone paths.
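Step 1 admits a standard top-down implementation that produces the same tree as rotating the accessed element up one rotation at a time. A sketch (names ours) that also makes the two resulting monotone paths visible as the spines of the two subtrees of the new root:

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def rotate_to_root(root, key):
    """Move the node with `key` (assumed present) to the root; equivalent
    to repeated single rotations along the search path."""
    small_dummy = Node(None)          # collects path nodes smaller than key
    large_dummy = Node(None)          # collects path nodes larger than key
    smaller, larger, node = small_dummy, large_dummy, root
    while node.key != key:
        if key < node.key:
            larger.left = node        # node joins the "larger" spine
            larger = node
            node = node.left
        else:
            smaller.right = node      # node joins the "smaller" spine
            smaller = node
            node = node.right
    smaller.right = node.left
    larger.left = node.right
    node.left = small_dummy.right     # smaller path: right-child pointers
    node.right = large_dummy.left     # larger path: left-child pointers
    return node
```

The smaller path nodes end up chained by right-child pointers and the larger ones by left-child pointers, exactly the configuration required by Step 2.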
We show a bound on the amortized time of an access using simplified path-balance. We use the same potential function as in § 2, and we assume the two-step implementation described above. We first state an easy observation.

Consider a path rooted at some node. The total potential of the nodes of the path is bounded in terms of the length of the path and the size of the subtree of its root.
Proof.
Consider the subtree sizes of the nodes on the path, listed from the deepest node up to the root. Then the claim follows

by concavity, since the corresponding terms telescope. ∎
We proceed with the analysis. We argue that rotating the accessed element to the root (Step 1) increases the potential only slightly. To see this, observe first that the potential of the nodes hanging off the search path, excluding the children of the accessed element, can only decrease: their subtrees remain the same, whereas the subtrees of their parents (nodes on the search path) can only lose elements (see Figure 5). The two children of the accessed element may increase the potential by a bounded amount.
Analysis of path-balance.
The original path-balance heuristic (where we insist on building a complete balanced tree) is trickier to analyse. Here, instead of moving the accessed item to the root, we move the median item of the search path to the root. "Median" is meant with respect to the ordering of keys; the median is, in general, not the node of median depth on the search path. It is instructive to first reprove the earlier bound by reusing parts of the Fredman et al. proof for multipass; we do this in Appendix C. In the remainder of this section we prove the new, stronger result.
The amortized time of a search using path-balance is O(log n · 2^{O(log* n)}).
For the purpose of the analysis, we view the path-balance transformation as a sequence of recursive calls on search paths in subtrees of the tree. The total real cost is proportional to the length of the original search path. We fix a threshold and distinguish between recursive calls on paths shorter than the threshold ("short paths") and recursive calls on longer paths ("long paths").

A long path is processed as follows. We rotate the median of the nodes on the path to the root, splitting the path into two paths of equal lengths. One of these paths contains the path to the accessed element; the other path, which is monotone, contains either the elements smaller than the median or the elements larger than the median (depending on whether the accessed element is in the right or the left subtree of the median). In the sequel we assume, without loss of generality, that the monotone part contains the elements larger than the median. We perform a multipass transformation on the monotone part, and make a recursive call on the other part (i.e., it becomes the search path of the next recursive call); see Figure 6.

A short path is transformed into a balanced binary tree in two phases, as follows. In the first phase, we rotate up the median of the path until it becomes the root of the subtree rooted at the shallowest node of the path. This decomposes the path into a monotone path and a general path, one starting at the left child of the median and the other at the right child. We repeat this recursively with the median of the general path, and so on, until the remaining general path has constant length. After this transformation, the medians form a path, each having the next median as one child and a monotone path as the other child. The lengths of these monotone paths decrease exponentially, by a factor of about 2. In the second phase we apply a multipass transformation to each of the monotone paths, obtaining a complete balanced tree; see Figure 7.
Before we analyse each case, we argue that Theorem 3 also holds with a modified potential (defined below). As we only use the new potential from now on, there is no risk of confusion. The modification consists in increasing the exponent of the logarithmic term in the denominator, and changing the additive constant inside the logarithm to make sure the function is still increasing everywhere.

Formally, the modified potential is defined analogously to the one in § 2, where, as earlier, s(x) is the size of the subtree rooted at x, and p(x) is the parent of x. For convenience, we again define auxiliary functions analogous to f and g.

We show in Appendix E that the entire analysis of § 2 extends to this new potential; therefore, Theorem 3 also holds for the modified potential function.
Now, the analysis of transforming long paths is straightforward. For short paths, we need two new observations.
The total increase in potential caused by performing a multipass transformation on a path, in terms of its length and the size of the subtree of its root, is at most

The proof is identical to that of Lemma 2. As before, the sum can be bounded via concavity, but here we keep the relevant quantity explicit inside another sum, where the exponent in the denominator will be crucial. The next observation can be shown in a way similar to Lemma 3.

[Appendix D] Given a search path, the total increase in potential due to recursively rotating all medians of the path to the root is bounded in terms of the length of the path.
We are ready to prove Theorem 3. We split the proof into three cases, according to the length of the search path.
Short paths. Recall that in the first phase we repeatedly rotate up the medians, decomposing the path into monotone paths of exponentially decreasing lengths. By Lemma 3, the total increase in potential due to this transformation is bounded.

In the second phase, we do a multipass transformation on each of these monotone paths. By Lemma 3, a multipass transformation on a monotone path increases the potential by an amount depending on its length, for some fixed constant. Thus, the multipass transformations increase the potential by at most

The first equality holds since each term appears in the above sum a bounded number of times. Thus, the total increase in potential is, in this case, bounded as claimed.
Longish paths. In this case we perform recursive calls and a final call on a short search path.

The final call increases the potential by at most the bound obtained in the previous case. The recursive calls consist of rotating the current median up to the root and applying the multipass transformation to a monotone path. As before, rotating the median up increases the potential by a bounded amount. Also, each multipass transformation is performed on a path of bounded length, so by Theorem 3 the increase in potential is bounded accordingly. Therefore, the recursive calls increase the potential by at most the claimed amount, which also bounds the total increase.
Long paths. We look at the potential change due to the first recursive call. Again, rotating the median to the root increases the potential by a bounded amount. The path splits into two parts, of which one is monotone. By Theorem 3, the multipass transformation on the monotone part decreases the potential.

By the same argument, the potential decreases during all of the subsequent recursive calls on long paths.

We continue until we reach a recursive call on a short path, which, by the previous case, increases the potential by a bounded amount. Thus, we obtain that in this case the potential decreases overall.

Combining the three cases, after scaling the potential by a suitable factor, we conclude that the amortized time of the access is as claimed.
Appendix A Additional figures
Appendix B Additional proofs for § 2
We start with a multipart technical lemma, to be used in the proofs of other claims.


For every , it holds that

For every , it holds that

For every it holds that

For every , it holds that

has only one maximum point at in , and two global minima.

Fix . If , then:
Proof.
Part (i):
By the mean value theorem (Lagrange), the increment of the function equals its derivative at an intermediate point times the step size. Due to concavity, the claimed inequality follows.
Part (ii):
It is enough to show , which holds since
where the inequality holds since is concave.
Part (iii):
Taking the derivative:
Part (iv):
It is equivalent to prove , i.e., that is monotone decreasing. This holds since ( is concave).
Part (v):
Because of symmetry around , it suffices to prove that has only one minimum in . This minimum is at . A plot of this function is shown in Figure 8. We omit the tedious analytical derivation.
Part (vi):
For we verified the inequality by computer (the maximal value is , at ). Thus, assume . Denote . Using Lemma B(i), we get: