# Improved bounds for multipass pairing heaps and path-balanced binary search trees

We revisit multipass pairing heaps and path-balanced binary search trees (BSTs), two classical algorithms for data structure maintenance. The pairing heap is a simple and efficient "self-adjusting" heap, introduced in 1986 by Fredman, Sedgewick, Sleator, and Tarjan. In the multipass variant (one of the original pairing heap variants described by Fredman et al.) the minimum item is extracted via repeated pairing rounds in which neighboring siblings are linked. Path-balanced BSTs, proposed by Sleator (Subramanian, 1996), are a natural alternative to Splay trees (Sleator and Tarjan, 1983). In a path-balanced BST, whenever an item is accessed, the search path leading to that item is re-arranged into a balanced tree. Despite their simplicity, both algorithms turned out to be difficult to analyse. Fredman et al. showed that operations in multipass pairing heaps take amortized O(log n · log log n / log log log n) time. For searching in path-balanced BSTs, Balasubramanian and Raman showed in 1995 the same amortized time bound of O(log n · log log n / log log log n), using a different argument. In this paper we show an explicit connection between the two algorithms and improve the two bounds to O(log n · 2^{log* n} · log* n), respectively O(log n · 2^{log* n} · (log* n)²), where log*(·) denotes the very slowly growing iterated logarithm function. These are the first improvements in more than three, resp. two decades, approaching in both cases the information-theoretic lower bound of Ω(log n).



## 1 Introduction

Binary search trees (BSTs) and heaps are the canonical comparison-based implementations of the well-known dictionary and priority queue data types.

In a balanced BST all standard dictionary operations (insert, delete, search) take O(log n) time, where n is the size of the dictionary. Early research has mostly focused on structures that are kept (approximately) balanced throughout their usage. (AVL-, red-black-trees, and randomized treaps are important examples, see e.g., [11, § 6.2.2].) These data structures re-balance themselves when necessary, guided by auxiliary data stored in every node.

By contrast, Splay trees (Sleator, Tarjan, 1983 [17]) achieve O(log n) amortized time per operation without any explicit balancing strategy and with no bookkeeping whatsoever. Instead, Splay trees re-adjust the search path after every access, in a way that depends only on the shape of the search path, ignoring the global structure of the tree. Besides the O(log n) amortized time, Splay trees are known to satisfy stronger, adaptive properties (see [9, 3] for surveys). They are, in fact, conjectured to be optimal on every sequence of operations (up to a constant factor); this is the famous "dynamic optimality conjecture" [17]. Splay trees and data structures of a similar flavor (i.e., local restructuring, adaptivity, no auxiliary data) are called "self-adjusting".

The efficiency of Splay trees is intriguing and counter-intuitive. They re-arrange the search path by a sequence of double rotations (“zig-zig” and “zig-zag”), bringing the accessed item to the root. It is not hard to see that this transformation results in “approximate depth-halving” for the nodes on the search path; the connection between this depth-halving and the overall efficiency of Splay trees is, however, far from obvious.

An arguably more natural approach for BST re-adjustment would be to turn the search path, after every search, into a balanced tree. (The restriction to touch only the search path is natural, as the cost of doing so is proportional to the search cost; recall that a BST can be changed into any other BST with a linear number of rotations [16].) This strategy combines the idea of self-adjusting trees with the more familiar idea of balancedness. Indeed, this algorithm was proposed early on by Sleator (see e.g., [19, 1]). We refer to BSTs maintained in this way as path-balanced BSTs (see Figure 1).
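
As a concrete illustration, the rebuild step can be sketched in a few lines (a minimal Python sketch, assuming a plain pointer-based BST; `Node`, `search_path`, and `path_balance` are our own illustrative names, and the sketch builds a median-rooted balanced tree rather than a level-complete one):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def search_path(root, key):
    """Return the list of nodes visited while searching for `key`."""
    path, cur = [], root
    while cur is not None:
        path.append(cur)
        if key == cur.key:
            break
        cur = cur.left if key < cur.key else cur.right
    return path

def path_balance(root, key):
    """Access `key`, then rebuild the search path into a balanced BST.

    Between two key-consecutive path nodes there is at most one subtree
    hanging off the path; it is re-attached in the unique position that
    the key order dictates.  Returns the new root.
    """
    path = search_path(root, key)
    on_path = {id(v) for v in path}
    nodes = sorted(path, key=lambda v: v.key)
    gaps = [None] * (len(nodes) + 1)       # off-path subtree per key gap
    for i, v in enumerate(nodes):
        if v.left is not None and id(v.left) not in on_path:
            gaps[i] = v.left               # keys between nodes[i-1] and v
        if v.right is not None and id(v.right) not in on_path:
            gaps[i + 1] = v.right          # keys between v and nodes[i+1]

    def build(lo, hi):                     # balanced tree over nodes[lo:hi]
        if lo == hi:
            return gaps[lo]                # re-attach the subtree of this gap
        mid = (lo + hi) // 2
        v = nodes[mid]
        v.left, v.right = build(lo, mid), build(mid + 1, hi)
        return v

    return build(0, len(nodes))
```

Rebuilding touches only the nodes on the search path, so the work is linear in the search cost, as required.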

Path-balanced BSTs turn out to be surprisingly difficult to analyse. In 1995, Balasubramanian and Raman [1] showed an upper bound of O(log n · log log n / log log log n) on the amortized cost of operations in path-balanced BSTs. This bound has not been improved since. Thus, path-balanced BSTs are not known to match the O(log n) amortized cost (let alone the stronger adaptive properties) of Splay trees. This is surprising, because broad classes of BSTs are known to match several guarantees of Splay trees [19, 2]; path-balanced BSTs, however, fall outside these classes. (Intuitively, path-balance is different from, and more difficult to analyse than, Splay, because it may increase the depth of a node by an additive Θ(log n) term, whereas Splay increases the depth of a node by at most a constant. In a precise sense, path-balance is not a local transformation; see [2].) Without evidence to the contrary, one may even conjecture path-balanced BSTs to achieve dynamic optimality; yet our current upper bounds do not even match those of a static balanced tree. This points to a large gap in our understanding of a natural heuristic in the fundamental BST model.

In this paper we show that the amortized time of an access in a path-balanced BST is O(log n · 2^{log* n} · (log* n)²). (We only focus on successful search operations, i.e., accesses; the results can be extended to other operations at the cost of technicalities. For simplicity, we assume that the keys in the tree are unique.) The result, probably not tight, comes close to the information-theoretic lower bound of Ω(log n). Closing the gap remains a challenging open problem.

Priority queues support the operations insert, delete-min, and possibly meld, decrease-key and others. Pairing heaps, a popular priority queue implementation, were proposed in the 1980s by Fredman, Sedgewick, Sleator, and Tarjan [5] as a simpler, self-adjusting alternative to Fibonacci heaps [6]. Pairing heaps maintain a multi-ary tree whose nodes (each with an associated key) are in heap order. Similarly to Splay trees, pairing heaps only perform key-comparisons and simple local transformations on the underlying tree, with no auxiliary data stored. Fredman et al. showed that in the standard pairing heap all priority queue operations take O(log n) amortized time. They also proposed a number of variants, including the particularly natural multipass pairing heap. In multipass pairing heaps, the crucial delete-min operation is implemented as follows. After the root of the heap (i.e., the minimum) is deleted, repeated pairing rounds are performed on the new top-level roots, reducing their number until a single root remains. In each pairing round, neighboring pairs of nodes are linked. Linking two nodes makes the one with the larger key the leftmost child of the other (Figure 2).

Pairing heaps perform well in practice [18, 14, 12]. However, Fredman [4] showed that all of their standard variants (including the multipass variant described above) fall short of matching the theoretical guarantees of Fibonacci heaps (in particular, assuming O(log n) cost for delete-min, the average cost of decrease-key may be Ω(log log n), in contrast to the O(1) guarantee for Fibonacci heaps). The exact complexity of the standard pairing heap on sequences of intermixed delete-min, insert, and decrease-key operations remains an intriguing open problem, with significant progress through the years (see e.g., [8, 15]). For the multipass variant, however, even the basic question of whether deleting the minimum takes amortized O(log n) time remains open, the best upper bound to date being the O(log n · log log n / log log log n) originally shown by Fredman et al. Similarly to the case of path-balanced BSTs, we thus have a basic combinatorial transformation on trees whose complexity is not well understood.

In this paper we show that in multipass pairing heaps delete-min takes amortized time O(log n · 2^{log* n} · log* n), the first improvement since the original paper of Fredman et al. (To keep the presentation simpler, we only focus on delete-min operations, omitting the extension of the result to other operations.) The improvement is, from a practical perspective, not significant. Nonetheless, it reduces the gap to the theoretical optimum from Θ(log log n / log log log n) to a quantity smaller than log^{(k)} n (the k-times iterated logarithm) for any fixed k.

The reader may notice that the old bounds for multipass pairing heaps and path-balanced BSTs are the same. The two data structures are, indeed, quite similar: if one views multipass pairing heaps as binary trees (see e.g., [10, § 2.3.2]), the multipass re-adjustment is equivalent to balancing the right-spine of a binary tree. (We note that the previous analysis of path-balanced BSTs [1] did not use this correspondence; by connecting the two data structures, we also simplify, to some extent, the proof of [1].) The multipass analysis, however, does not immediately transfer to path-balanced BSTs; the fact that the BST search path may be arbitrary (not necessarily right-leaning) complicates the argument for path-balanced BSTs.

Our analysis of multipass pairing heaps (§ 2) is based on a new, fine-grained scaling of the sum-of-logs potential function used by Sleator and Tarjan in the analysis of Splay trees, and by Fredman et al. in the analysis of pairing heaps. At a high level, we argue that certain link operations are information-theoretically efficient, and that such links happen sufficiently often. The subsequent, rather intricate analysis notwithstanding, we believe that the ideas of the proof may have further applications in the analysis of data structures.

In § 3 we show our result for path-balanced BSTs. Informally, we decompose the path-balancing operation into several stages, each of which resembles the multipass transformation, allowing us to adapt and reuse the result of § 2.

## 2 Multipass pairing heaps

A pairing heap is a multi-ary heap, storing a key in each node, with the regular (min-)heap condition: the key of a node is smaller than the keys of its children. Priority queue operations are implemented using the unit-cost linking step. Given nodes x and y, link(x, y) "hangs" the node with the larger key as the leftmost child of the other. The operations insert, meld, and decrease-key can be implemented in a straightforward way using a single link (we refer to [5] for details). The only nontrivial operation is delete-min. Here, after deleting the root, we are left with a number of top-level nodes, which we combine into a single tree via a sequence of links. In multipass pairing heaps we achieve this by performing repeated pairing rounds, until a single top-level node remains (i.e., the new root of the heap). A single pairing round is as follows. Let x_1, …, x_k be the top-level nodes, ordered left-to-right, before the round. For all i ≤ ⌊k/2⌋ we perform link(x_{2i−1}, x_{2i}). Observe that if k is odd, then the rightmost node is unaffected in the current round. The number of rounds is ⌈log k⌉, where k is the number of children of the (deleted) root. (The function log is base 2 everywhere; the base-e logarithm is written as ln.) (See Figure 2.)
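
The pairing rounds can be sketched in a few lines (a minimal Python sketch, assuming a child-list representation; `HeapNode`, `link`, and `multipass_combine` are our own illustrative names, not the paper's notation):

```python
class HeapNode:
    def __init__(self, key):
        self.key = key
        self.children = []    # ordered left to right

def link(x, y):
    """Make the node with the larger key the leftmost child of the other."""
    if x.key > y.key:
        x, y = y, x
    x.children.insert(0, y)
    return x

def multipass_combine(roots):
    """Repeat pairing rounds until a single root remains."""
    while len(roots) > 1:
        # link neighboring pairs, left to right
        nxt = [link(roots[i], roots[i + 1])
               for i in range(0, len(roots) - 1, 2)]
        if len(roots) % 2 == 1:
            nxt.append(roots[-1])   # odd: rightmost node skips this round
        roots = nxt
    return roots[0]

def delete_min(root):
    """Remove the minimum (the root) and combine its children."""
    return multipass_combine(root.children) if root.children else None
```

With k top-level nodes, each round roughly halves their number, so the loop runs for ⌈log k⌉ rounds and performs k − 1 links in total.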

We now analyse delete-min operations implemented by multipass pairing heaps. Let k be the number of children of the deleted root; we define k to be the real cost of the operation (observe that the number of links is exactly k − 1). Let n be the size of the heap before the operation. We use the binary tree view of multi-ary heaps, where the leftmost-child and next-sibling pointers are interpreted as left-child and right-child pointers. A single link operation is shown in Figure 3. Let a, b, c denote the sizes of subtrees A, B, and C, respectively.

We define a potential function that refines the Sleator-Tarjan "sum-of-logs" potential [17]. Let Φ(H) = Σ_x φ(x), over all nodes x of the heap H, where

 φ(x) = H(x) / log²(2 + H(x)),  and  H(x) = log( s(p(x)) / s(x) ),

where s(x) denotes the size of the subtree rooted at x, and p(x) is the parent of x. (Using φ(x) = H(x) instead would essentially recover the original "sum-of-logs" potential. Such an "edge-based" potential function was used earlier, e.g., in [7, 13].) Note that both subtrees and parents are meant in the binary tree view.

For convenience, define the functions

 f(x) = log x / log²(2 + log x),   and  g(x) = x / log²(2 + x).

With this notation, φ(x) = f( s(p(x)) / s(x) ), and f(x) = g(log x). Both f and g are positive, monotone increasing, and concave for all sufficiently large arguments.
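
Numerically, the two functions are easy to experiment with (a small Python sketch; the `potential` helper and its input format are our own convenience, not notation from the paper):

```python
from math import log2

def g(x):
    # g(x) = x / log^2(2 + x)
    return x / log2(2 + x) ** 2

def f(x):
    # f(x) = log x / log^2(2 + log x), so that f(x) = g(log x)
    return log2(x) / log2(2 + log2(x)) ** 2

def potential(pairs):
    """Phi as a sum of f(s(p(x)) / s(x)) over all non-root nodes x,
    with each node given as a (parent_subtree_size, node_subtree_size)
    pair in the binary-tree view."""
    return sum(f(sp / sx) for sp, sx in pairs)
```

The scaling by log²(2 + ·) is what distinguishes this potential from the plain sum-of-logs potential, and it is the source of the fine-grained savings exploited below.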

By simple arithmetic, the increase in potential due to a single link (as in Figure 3) is:

 ΔΦ = f((a+b+1)/a) + f((a+b+1)/b) + f((a+b+c+2)/(a+b+1)) + f((a+b+c+2)/c) − f((a+b+c+2)/a) − f((a+b+c+2)/(b+c+1)) − f((b+c+1)/b) − f((b+c+1)/c). (1)

For a suitably large constant α, we consider suitably scaled versions of the subtree sizes a, b, and c. We distinguish different kinds of links, depending on the ordering of the three scaled quantities (breaking ties arbitrarily). We first look at the cases when the scaled size of A, respectively of C, is the largest (called type-(1) and type-(2) links, respectively), and show that the possible increase in potential due to such links is small: for type-(1) links, the increase ΔΦ is dominated by a single small term, and for type-(2) links the positive and negative contributions essentially cancel out. The proofs use standard (although somewhat delicate) analysis; we defer most of the calculations to Appendix B.

[B.1] A type-(1) link () increases the potential by at most , where the term is a constant independent of , , , , and .

[B.2]

A type-(2) link () increases by at most .

The case when is the greatest of the three quantities (called type-(3) link) is the most favorable. Here, the potential of , before the linking is (roughly) the logarithm of (very large) divided by ; after the linking, the potential becomes (essentially) the logarithm of the ratio between and (much smaller), resulting in a significant saving in potential. We use this saving to “pay” for the operations. First we make the following, easier claim.

[B.3] A type-(3) link () can not increase .

It remains to balance the decrease in potential due to type-(3) links and the increase in potential due to all other links. First, we show that almost all links are type-(3).

There are at most O(log n) type-(1) and type-(2) links within a pairing round.

###### Proof.

Let , , denote the subtree-sizes corresponding to the -th link from left to right, see Figure 3(right). Let the subsequences , , , be the subtree-sizes corresponding to type-(1) and type-(2) links. Observe that . If the -th link is of type-(1) or type-(2), then , since in each of these cases or . Since , and the claim follows. ∎

All type-(1) and type-(2) links within a single pairing round increase the potential by at most O(log n).

###### Proof.

Look at a single round of pairing. Let , , () be as in the proof of Lemma 2 and recall that . If the -th link is type-(1), then by Lemma 3, the increase in potential is at most .

Otherwise, if the -th link is type-(2), then by Lemma 3, the increase in potential is at most , which we can write as , for a suitable constant .

Let denote , or , corresponding to the -th link (according to its type). We have (for a fixed constant ), since the sum of the terms telescopes, and the additive (or ) terms appear at most times.

By the concavity of g, the total increase in potential is maximized if all of the arguments of g are equal. We thus obtain the following bound on the total increase in potential in the pairing round:

 ΔΦ ≤ 2m · g( α·(log n)/m ) = 2α·log n / log²(2 + α·(log n)/m) = O(log n). ∎

The last proof yields, in fact, the following stronger claim.

All type-(1) and type-(2) links within the last pairing rounds increase the potential by at most O(log n) in total.

###### Proof.

Observe that the j-th-to-last pairing round consists of at most 2^j links. Thus, as in Lemma 2, we obtain:

 ΔΦ ≤ 2α·log n / log²(2 + α·(log n)/m) ≤ 2α·log n / log²(α·(log n)/2^j) = 2α·log n / ((log log n + log α) − j)².

Note that the second inequality holds since m ≤ 2^j. The sum of this expression over all rounds is O(log n), using the fact that Σ_j 1/((log log n + log α) − j)² converges to a constant. ∎

Now we estimate more carefully the decrease in potential due to type-(3) links. Let u and v be nodes as denoted in Figure 3. We want to express the potential change in terms of the quantities H_A and H_B, taken before the link operation.

Among type-(3) links we distinguish two subtypes: type-(3A), in which H_A ≥ H_B, and type-(3B), in which H_B > H_A. We have the following two (symmetric) observations:

[B.4] A type-(3A) link decreases the potential by at least

 Ω(1) · H_A / log²(2 + H_B) − O(1).

It follows that for some constant d, if H_A ≥ d · log²(2 + H_B), then the link decreases the potential by at least a positive constant.

###### Proof.

Let x = c/b and y = b/a. Then, recalling Equation (1):

 −ΔΦ = f(1 + y/(1+xy)) + f(1 + 1/y + x) + f(1 + 1/(xy)) + f(1 + xy) − f(1 + y) − f(1 + 1/y) − f(1 + 1/x + 1/(xy)) − f(1 + xy/(y+1)) − O(1).

We have , and . Note, .

Collecting constant terms, we have:

 −ΔΦ ≥ f(1 + xy) + f(1 + x) − f(1 + y) − f(1 + xy/(y+1)) − O(1).

As , we further simplify: .

It is now sufficient to show:

 f(1 + xy) − f(1 + y) ≥ Ω(1) · log(1 + x + 1/y) / log²(2 + log(1 + xy)) − O(1).

(We defer the detailed calculations to B.4.) ∎

[B.5] A type-(3B) link decreases the potential by at least

 Ω(1) · H_B / log²(2 + H_A) − O(1).

It follows that for some constant d, if H_B ≥ d · log²(2 + H_A), then the link decreases the potential by at least a positive constant.

There exists a constant d such that all type-(3A) links with H_A ≥ d·log²(2+H_B) and all type-(3B) links with H_B ≥ d·log²(2+H_A) decrease the potential by at least a positive constant. We now define the category of a node with respect to its H-value. Intuitively, nodes of the same category are those that, when linked, release the most potential. Let us denote h(x) = d·log²(2+x). Using the notation of function composition, let

 h^{(0)}(x) = x,  h^{(i)}(x) = h(h^{(i−1)}(x)).

The category of a node is based on the values h^{(i)}(log n). Note that h^{(i)}(log n) = O(1) already for i = log* n + O(1), where the O(1) terms depend on d, since h behaves like a (squared) logarithm. [Category] Let u be a node. For i ≥ 1, we let cat(u) = i if:

 H(u.left) ∈ ( h^{(i)}(log n), h^{(i−1)}(log n) ].

Otherwise (if H(u.left) lies below all these intervals) we say that u is of category 0.
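
For illustration, the iterates and the resulting categories can be computed as follows (a Python sketch; we take h(x) = log²(2+x), a placeholder with the right flavor, since the paper's h involves a constant d, so the numbers are only indicative):

```python
from math import log2

def h(x):
    # placeholder contraction of the form h(x) = log^2(2 + x)
    return log2(2 + x) ** 2

def h_iter(i, x):
    """h^(i)(x): the i-fold composition of h, with h^(0) the identity."""
    for _ in range(i):
        x = h(x)
    return x

def category(value, logn, max_cat=100):
    """Return i >= 1 with value in (h^(i)(logn), h^(i-1)(logn)],
    or None if value lies below every such interval."""
    for i in range(1, max_cat + 1):
        lo, hi = h_iter(i, logn), h_iter(i - 1, logn)
        if lo < value <= hi:
            return i
        if lo >= hi:          # iterates stopped decreasing
            break
    return None
```

Since each application of h collapses its argument to roughly the square of its logarithm, only about log* n iterations are possible before the iterates become constant, which is where the log* n terms in the bounds originate.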

The following crucial observations connect categories and savings in potential.

Let the link of u and v be of type-(3). If cat(u) = cat(v) ≥ 1, then the link decreases the potential by at least a positive constant.

###### Proof.

Note that if cat(u) = cat(v) = i, then

 H(u.left) ≥ h^{(i)}(log n) ≥ d · log²(2 + H(v.left)),
 H(v.left) ≥ h^{(i)}(log n) ≥ d · log²(2 + H(u.left)).

Thus, by Corollary 2, the claim follows. ∎

In each pairing round there are at most nodes of category .

###### Proof.

Let be of category , then . Denoting , ), we get . Therefore, , an occurrence that can happen at most times in each round (by the same argument as in Lemma 2). ∎

Let denote the “winner” of linking and (neither of category ), i.e., is the one with the smaller key. Then .

###### Proof.

Let , ) as in Figure 3. We have that , , and .

Clearly , finishing the proof. ∎

As seen in Figure 2, a delete-min operation transforms the "spine" of the heap (in binary view) into a balanced tree. We denote this tree by T. Each level of T corresponds to a pairing round; specifically, the level of T consisting of nodes at distance j from the leaves contains the losers of the j-th pairing round. The following lemma captures the potential reduction that yields the main result.

Let T′ be a subtree of T of depth log* n + O(1), whose leaves correspond to consecutive link operations. If T′ contains only type-(3) links and no links involving nodes of category 0, then the total decrease in potential caused by the links of T′ is at least a positive constant.

###### Proof.

Assume towards contradiction that there is no link between two nodes of the same category in . By Lemma 2 in each round the minimal overall category increases by at least 1, leaving us with two nodes of maximal category in the last round, a contradiction. By Lemma 2, a link between nodes of equal category decreases the potential by at least . ∎

The amortized time of delete-min in multipass pairing heaps is O(log n · 2^{log* n} · log* n).

###### Proof.

Let the real cost (number of link operations) be k. Note that there are at most ⌈log k⌉ ≤ log n pairing rounds.

Thus, if , then there are at most rounds. Using Lemma 2 we get that the first pairing rounds increase the potential by at most . Also, as shown in Lemma 2, the total increase in potential for the last levels is . Thus, the total potential increase is at most + .

To analyse the case , we use the potential decrease of type-(3) links. First, we look at the first pairing rounds.

By Lemma 2, the links in every complete subtree of of depth , in which there are only type-(3) links and no category- nodes, decrease the potential by at least .

In the first levels of we can find disjoint subtrees of this size. In these levels there are at most type-(1),(2) links, or links containing category- nodes (Lemmas 2 and 2). Thus, at least of the subtrees answer the conditions of Lemma 2, decreasing the potential by at least . Also, the total increase in potential caused by type-(1),(2) links is at most (Lemma 2). Therefore, the first levels give us a decrease in potential of at least .

Note that by using the same argument on the next levels, we get a decrease in potential of at least , where is the number of links in level . Thus, levels which contain links only decrease the potential.

We repeat this argument until we reach a level in containing links. Now, applying the same argument as for the first case, we get that the total increase in potential for the last levels (starting from the level of links) is at most .

Summarizing, the total amortized time (in both cases) is at most

 k + O(log n · log* n) − ( k / 2^{log* n} − log* n · log n ).

Scaling the potential by 2^{log* n}, we get that the amortized time is O(log n · 2^{log* n} · log* n). ∎

## 3 Path-balanced binary search trees

Consider the operation of accessing a node x in a BST T with n nodes (we refer interchangeably to a node and its key). Let P denote the search path to x (i.e., the path from the root of T to x). The path-balance method re-arranges P into a complete balanced BST (with all levels complete, except possibly the lowest). Subtrees hanging off P are re-attached in the unique way given by the key-order (Figure 1). There are multiple ways to implement this transformation such that the number of pointer moves and pointer changes is linear in the length of the search path. For instance, we may first rotate the search path into a monotone path, then apply a multipass transformation (described next) to this monotone path.

#### Multipass transformation.

A multipass transformation of a monotone path P (of which the deepest node might not be a leaf) converts P into a balanced tree (in which the last level may be incomplete) by a sequence of pairing rounds. In each pairing round we rotate every other edge in a prefix of the current path (i.e., within a subpath of the shallowest nodes on the path). Each rotation pushes one node off the path. We denote by P_i the path remaining of P after i pairing rounds. The pairing rounds are defined as follows. We assume that the path consists of right-child pointers; in the case it consists of left-child pointers everything is symmetric.

Let ℓ denote the length of P (i.e., the number of nodes on P). In the first round we do just enough rotations so that the length of the path after the round (i.e., |P_1|) is one less than a power of 2. Specifically, we do ℓ − 2^t + 1 rotations, where t is the largest integer such that 2^t − 1 ≤ ℓ. In each subsequent round we rotate every other edge of the remaining path, roughly halving its length. We maintain the invariant that after i rounds all the nodes that were pushed off P (excluding those that were pushed off at the first round) are arranged in balanced binary trees of height at most i − 1, hanging as children of the nodes of P_i.

The proof of the following theorem is analogous to the proof of Theorem 2 (one can verify that all steps of the proof still hold for the slightly modified pairing rounds of the multipass transformation, replacing rotations by links).

For every monotone path with , the change in caused by applying a multipass transformation on is bounded by where is the size of the subtree of the root of .

#### Warm-up: a simplified path-balance.

We first look at an easier-to-analyse variant of path-balance, where, instead of a complete balanced tree, we build an almost balanced tree out of the search path P, as follows: we first make the accessed item x the root, then turn the parts of P containing items smaller (resp. larger) than x into balanced subtrees rooted at the left (resp. right) child of x. The depth of this tree is at most one larger than the depth of a complete balanced tree built from P.

For the purpose of the analysis, we view the simplified path-balance transformation as a two-step process (Figure 5). The actual implementation may be different, but the analysis applies as long as the transformation takes time O(|P|).

Step 1. Rotate the accessed element x all the way to the root. (Observe that after this step, P is split into two monotone paths: one to the left of x, consisting only of "right child" pointers, and one to the right of x, consisting only of "left child" pointers.)

Step 2. Apply a multipass transformation to each of the two monotone paths.

We show that the amortized time of an access using simplified path-balance is O(log n · 2^{log* n} · log* n). We use the same potential function as in § 2, and we assume the two-step implementation described above. We first state an easy observation.

Let P be a path in T rooted at a node r. Then Φ(P) = O(log s(r)), where Φ(P) denotes the sum of the potentials of the nodes of P and s(r) is the size of the subtree of r.

###### Proof.

Denote ℓ = |P|, and let a_1 ≤ a_2 ≤ ⋯ ≤ a_ℓ be the subtree-sizes of the nodes on P, from the deepest node to r. Then

 Φ(P) = Σ_{k=1}^{ℓ−1} f(a_{k+1}/a_k) = Σ_{k=1}^{ℓ−1} g( log(a_{k+1}/a_k) ) ≤ ℓ · g( (log s(r)) / ℓ ) = O(log s(r)),

due to g's concavity and since the terms log(a_{k+1}/a_k) telescope, summing to at most log s(r). ∎

We proceed with the analysis. We argue that rotating x to the root (Step 1) increases Φ by at most O(log n). To see this, observe first that the potential of the subtrees hanging off the nodes of P, excluding the children of x, can only decrease. This is because their subtrees remain the same, whereas the subtrees of their parents (nodes on the search path) can only lose elements (see Figure 5). The two children of x may increase the potential by at most O(log n).

For the nodes on the search path, we look at the potential after the transformation. We have two separate paths (see Figure 6, middle), and by Lemma 3 the potential of each path is bounded by O(log n). This concludes the analysis for Step 1.

In Step 2, as we apply the multipass transformation to both monotone paths, Theorem 3 applies and bounds the change in Φ. The claim on the amortized running time follows by scaling the potential as in § 2 and adding the change in potential to the actual cost (the length of P). This concludes the proof.

#### Analysis of path-balance.

The original path-balance heuristic (where we insist on building a complete balanced tree) is trickier to analyse. Here, instead of moving the accessed item to the root, we move the median item m of the search path to the root. Here, "median" is meant with respect to the ordering of keys; m is, in general, not the node with median depth on P. It is instructive to prove the earlier O(log n · log log n / log log log n) bound first, by re-using parts of the Fredman et al. proof for multipass. We do this in Appendix C. In the remainder of this section we prove the new, stronger result.

The amortized time of search using path-balance is O(log n · 2^{log* n} · (log* n)²).

For the purpose of the analysis, we view the path-balance transformation as a sequence of recursive calls on search paths in some subtree of T. The total real cost is proportional to the original length of the search path to x, which we denote by ℓ. We define a threshold on the path length, and distinguish between recursive calls on paths shorter than the threshold ("short paths") and recursive calls on paths longer than it ("long paths").

A long path P is processed as follows. We rotate the median m of the nodes on P to the root, splitting P into two paths of (roughly) equal lengths. One of these paths contains the path from m to x in T, and the other path, which is monotone, contains either the elements smaller than m on P or the elements larger than m on P (depending upon whether x is in the right or left subtree of m). In the sequel we assume without loss of generality that the monotone part contains all elements larger than m. We perform a multipass transformation on the monotone part, and make a recursive call on the other (non-monotone) path, which ends with x (i.e., it becomes the search path of the next recursive call); see Figure 6.

A short path P is transformed into a balanced binary tree in two phases, as follows. In the first phase, we rotate up the median m of P until it becomes the root of the subtree rooted at the shallowest node of P. This decomposes P into a monotone path and a general path, one starting at the left child of m and the other at the right child of m. We repeat this recursively with the median of the general path, and so on, until we get a general path of length 1. After this transformation, the medians form a path, each having the next median as one child and a monotone path as the other child. The lengths of these monotone paths decrease exponentially by a factor of 2. In the second phase we apply a multipass transformation on each of the monotone paths, obtaining a complete balanced tree; see Figure 7.

Before we analyse each case, we argue that Theorem 3 also holds with a modified potential (defined below). As we only use the new potential from now on, there is no risk of confusion. The modification consists in changing the exponent of the logarithmic term in the denominator from 2 to 3, and changing the additive constant inside the logarithm to make sure that φ is still increasing everywhere.

Formally, Φ(T) = Σ_x φ(x), where φ(x) = H(x) / log³(c + H(x)) for a suitable constant c, and H(x) = log( s(p(x)) / s(x) ). As earlier, s(x) is the size of the subtree rooted at x, and p(x) is the parent of x. For convenience, we define the functions f(x) = log x / log³(c + log x), and g(x) = x / log³(c + x). As before, φ(x) = f( s(p(x)) / s(x) ), and f(x) = g(log x).

We show in Appendix E that the entire analysis in § 2 extends to this new potential. Therefore, Theorem 3 holds also for the modified potential function .

Now, the analysis of transforming long paths is straightforward. For short paths, we need two new observations.

The total increase in potential for performing a multipass transformation on a path of length k, where s is the size of the subtree of the root of the path, is at most

 Σ_{j=1}^{log k} O(log n) / (log log n + 1 − j)³.

The proof is identical to that of Lemma 2. As before, the sum can be bounded by O(log n), but here we use the summation explicitly inside another sum, where the exponent 3 in the denominator will be crucial. The next observation can be shown in a way similar to Lemma 3.

[Appendix D] Given a search path P of length ℓ, the total increase in Φ due to recursively rotating all medians of P to the root is O(log n).

We are ready to prove Theorem 3. We split the proof into three cases according to the length of the search path, denoted by ℓ.

Short paths (ℓ ≤ log n). Recall that in the first phase, we repeatedly rotate up the medians, decomposing the path into monotone paths of lengths roughly ℓ/2, ℓ/4, and so on. By Lemma 3, the total increase in potential due to this transformation is at most O(log n).

In the second phase, we do a multipass transformation on each of these monotone paths. By Lemma 3, a multipass transformation on the j-th monotone path increases Φ by at most Σ_{i=1}^{j} α·log n / (log log n + 1 − i)³, for some fixed constant α. Thus, the multipass transformations increase the potential by at most

 Σ_{j=1}^{log log n} Σ_{i=1}^{j} α·log n / (log log n + 1 − i)³ = Σ_{s=1}^{log log n} α·log n / s² = O(log n).

The first equality holds since the term α·log n / (log log n + 1 − i)³ appears in the above sum exactly s = log log n + 1 − i times. Thus, the total increase in Φ is, in this case, O(log n).

Longish paths (). Notice that .

We perform recursive calls and a final call on a search path of length . The final call increases by at most , by the analysis in the previous case. The recursive calls consist of rotating the current median up to the root and applying the multipass transformation on a monotone path. As before, rotating the median up increases by at most ). Also, each multipass transformation is performed on a path of length . By Theorem 3, the increase in potential is at most . Therefore, the recursive calls increase by at most , which also bounds the total increase in .

Long paths. We look at the potential change due to the first recursive call. Again, rotating the median to the root increases Φ by at most O(log n). The path splits into a monotone part and a non-monotone part. By Theorem 3, the multipass transformation on the monotone part decreases Φ.

By the same argument, Φ decreases during all of the subsequent recursive calls on long paths.

We continue until we have a recursive call on a path of size at most , which, by the previous case, increases by at most . Thus, we obtain that the total decrease in in this case is at least .

Combining the three cases, after scaling the potential by 2^{log* n} · log* n, we conclude that the amortized time of the access is O(log n · 2^{log* n} · (log* n)²), as required.

## Appendix B Additional proofs for § 2

We start with a multi-part technical lemma, to be used in the proofs of other claims.

1. For every , it holds that

2. For every , it holds that

3. For every it holds that

 f′(x) ≥ 1 / ( 3x · log²(2 + log x) ).
4. For every , it holds that

5. has only one maximum point at in , and two global minima.

6. Fix . If , then:

 f((a+b+1)/a) + f((a+b+1)/b) ≤ 0.95.
###### Proof.

Part (i):

By Lagrange theorem, , for . Due to the concavity of , we get .

Part (ii):

It is enough to show g(x+y) ≤ g(x) + g(y), which holds since

 g(x+y) = g(x) + ∫₀ʸ g′(x+t) dt ≤ g(x) + ∫₀ʸ g′(t) dt = g(x) + g(y),

where the inequality holds since g′ is decreasing (g is concave).

Part (iii):

Taking the derivative:

 f′(x) = 1 / ( x · ln 2 · log²(2 + log x) ) − 2·log x / ( (log x + 2) · x · (ln 2)² · log³(2 + log x) )
 ≥ 1 / ( x · ln 2 · log²(2 + log x) ) − 2 / ( x · (ln 2)² · log³(2 + log x) )
 = ( 1 / ( x · log²(2 + log x) ) ) · [ 1/ln 2 − 2 / ( (ln 2)² · log(2 + log x) ) ]
 ≥ ( 1 / ( x · log²(2 + log x) ) ) · [ 1/ln 2 − 2 / ( (ln 2)² · log(2 + log γ) ) ] ≥ 0.335 / ( x · log²(2 + log x) ).

Part (iv):

It is equivalent to prove , i.e., that is monotone decreasing. This holds since ( is concave).

Part (v):

Because of symmetry around , it suffices to prove that has only one minimum in . This minimum is at . A plot of this function is shown in Figure 8. We omit the tedious analytical derivation.

Part (vi):

For we verified the inequality by computer (the maximal value is , at ). Thus, assume . Denote . Using Lemma B(i), we get:

 f(a+b+1a)+