Binary search trees (BSTs) are among the best-studied structures in computer science, supporting the efficient storage and retrieval of items from a totally ordered set. The possible set of items, i.e. the “search space” of the BST, is typically assumed to be the set of integers $\{1, \dots, n\}$. One may also take as the search space the collection of nodes on a path, with the obvious ordering between nodes.
This view suggests a broad generalization of BSTs where the underlying search space is, instead of a path, a general tree $T$. The goal of a search is to locate a certain node of this tree. Searching for a node $v$ proceeds via oracle calls, where $v$ is “compared” to some stored node $u$. The oracle either answers $u = v$ (in which case the search can stop), or identifies the connected component that contains node $v$ after the removal of node $u$ from $T$. The search then continues recursively within the identified connected component.
We may view a search strategy of this kind as a secondary tree $S$ on the nodes of $T$, built as follows. The root of $S$ is an arbitrary node $r$ of $T$. The children of $r$ are the roots of trees built recursively on the connected components of $T \setminus r$. We refer to such a tree $S$ as an STT (search tree on tree) on $T$. Oracle calls are assumed to take constant time; the time for searching a node $v$ in $S$ is thus proportional to the length of the search path from the root of $S$ to $v$. See Figure 1 for illustration. Note that $S$ is rooted, while $T$ is unrooted. Further note that the edge-sets of $S$ and $T$ may differ, and that the number of children of every node $v$ in $S$ is at most the degree of $v$ in $T$. It is easy to see that in the special case where $T$ is a path, the STT $S$ is, in fact, a BST.
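The recursive construction above can be sketched as follows. This is an illustrative implementation, not the paper's own code: the tree $T$ is an adjacency dict, an STT is a nested pair `(root, [child STTs])`, the `choose_root` policy and all helper names are hypothetical, and `contains` plays the role of the oracle.

```python
def components(adj, nodes, removed):
    """Connected components of the subgraph induced by `nodes`, minus `removed`."""
    remaining = set(nodes) - {removed}
    comps, seen = [], set()
    for s in remaining:
        if s in seen:
            continue
        comp, stack = set(), [s]
        seen.add(s)
        while stack:
            u = stack.pop()
            comp.add(u)
            for w in adj[u]:
                if w in remaining and w not in seen:
                    seen.add(w)
                    stack.append(w)
        comps.append(comp)
    return comps

def build_stt(adj, nodes, choose_root):
    """STT on the subtree induced by `nodes`, as a nested pair (root, [children])."""
    r = choose_root(nodes)
    return (r, [build_stt(adj, c, choose_root)
                for c in components(adj, nodes, r)])

def contains(stt, v):
    """Oracle: does the subtree rooted at this STT node contain v?"""
    r, kids = stt
    return r == v or any(contains(k, v) for k in kids)

def search_depth(stt, v):
    """Number of oracle calls to locate v, i.e. the length of its search path."""
    r, kids = stt
    if r == v:
        return 1
    child = next(k for k in kids if contains(k, v))
    return 1 + search_depth(child, v)
```

On a path with the "always pick the smallest node" policy this degenerates into a left-spine BST; rooting a star at its center gives every leaf depth 2.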
One can further generalize STTs to allow searching in arbitrary graphs. Search trees on graphs (and trees) have been extensively studied in various settings. Given a graph $G$, the minimal height of a search tree on $G$ is known as the treedepth of $G$ (see e.g. [NdM12, § 6] for a comprehensive treatment). In other contexts, search trees on graphs have been studied as tubings [CD06], vertex rankings [DKKM94, BDJ98], ordered colorings [KMS95], or elimination trees [Liu90, PSL90, AH94, BGHK95] with applications in matrix factorization, see e.g. [DER17, § 12]. In polyhedral combinatorics, search trees on trees are seen as vertices of a tree associahedron, a special case of graph associahedra [CD06, Dev09, Pos09], and a generalization of the classical associahedron. The associahedron (whose vertices correspond to BSTs or other equivalent Catalan-structures) is a central and well-studied object of combinatorics and discrete geometry, see e.g. the recent book [MHPS12] or survey [CSZ15] for a broad overview of this remarkable structure, its history, and further references.
Finding a search tree of minimal height on a graph is, in general, NP-hard [Pot88], but solvable in polynomial time for some special classes of graphs [AH94, DKKM94]. In particular, the minimum-height search tree on a tree can be found in linear time by Schäffer’s algorithm [Sch89], rediscovered multiple times during the past decades. An STT of logarithmic depth (analogous to a balanced BST) can be obtained via centroid decomposition, an idea that goes back to the 19th-century work of Camille Jordan [Jor69].
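The centroid-decomposition construction can be sketched as follows. This is a naive $O(n^2)$ illustration (the classical construction is much faster); the tree is an adjacency dict, the STT a nested pair `(root, [children])`, and all helper names are hypothetical.

```python
def components(adj, nodes, removed):
    """Connected components of the subgraph induced by `nodes`, minus `removed`."""
    remaining = set(nodes) - {removed}
    comps, seen = [], set()
    for s in remaining:
        if s in seen:
            continue
        comp, stack = set(), [s]
        seen.add(s)
        while stack:
            u = stack.pop()
            comp.add(u)
            for w in adj[u]:
                if w in remaining and w not in seen:
                    seen.add(w)
                    stack.append(w)
        comps.append(comp)
    return comps

def centroid(adj, nodes):
    """Node whose removal minimizes the size of the largest remaining component."""
    return min(nodes,
               key=lambda v: max((len(c) for c in components(adj, nodes, v)),
                                 default=0))

def centroid_stt(adj, nodes):
    """Balanced STT: recurse on the components left by the centroid."""
    c = centroid(adj, nodes)
    return (c, [centroid_stt(adj, comp) for comp in components(adj, nodes, c)])

def height(stt):
    r, kids = stt
    return 1 + max((height(k) for k in kids), default=0)
```

Since the centroid leaves components of at most half the size, the resulting STT has logarithmic height.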
In the context of searching, minimum height is a very limited form of optimality, only bounding the worst-case cost of a single search. For a given distribution of searches, the shape of the optimal tree may be very different from a minimum height tree. In the special case of BSTs, finding the optimal search tree for a given distribution is a well-understood problem. Knuth’s textbook dynamic programming algorithm solves this task in $O(n^2)$ time [Knu71], and linear-time constant approximations have also been known for half a century [Meh75, Meh77].
Recently, Bose, Cardinal, Iacono, Koumoutsos, and Langerman [BCI20] studied STTs in the context of searching and explored whether the techniques developed for BSTs extend to STTs. They remark that no analogue of Knuth’s algorithm is known for STTs, and it is not even clear whether the optimum search tree problem is polynomial-time solvable in this broader setting. Intuitively, the main difficulty is that, whereas BSTs consist of subtrees built over polynomially many candidate sets (corresponding to contiguous intervals of the search space), STTs consist of subtrees built over subtrees of the search space, whose number is in general exponential.
Our first result is a polynomial-time approximation scheme (PTAS) for the optimal STT problem. In the special case of $2$-approximating the optimum we obtain a particularly simple algorithm. To our knowledge, no constant-factor approximation was previously known.
Theorem 1. Let $X$ be a search sequence over the nodes of a tree $T$ on $n$ nodes, and let $\mathrm{OPT}(X)$ be the minimum cost of serving $X$ in a fixed search tree on $T$. For every integer $k \geq 2$ we can find in time $n^{O(k)}$ a search tree on $T$ that serves $X$ with cost at most $(1 + O(1/k)) \cdot \mathrm{OPT}(X)$.
The result combines a number of observations. The first, due to Bose et al., is that a restricted class of STTs, called Steiner-closed trees in [BCI20] (defined in § 2), contains a tree of cost at most twice the optimum. In § 3 we generalize Steiner-closed trees to a class that approximates the optimum with arbitrary accuracy. Finally, we show that, if we restrict attention to search trees from our restricted class, then the number of admissible subproblems becomes polynomial, and thus an optimization similar to Knuth’s algorithm can be carried out.
Optimal trees are still far from the full story of efficient search. For BSTs, the standard rotation primitive allows one to restructure the tree between searches, so as to adapt to regularities in the search sequence. Tree restructuring has led to a rich theory of adaptive BST algorithms that achieve improved performance on broad families of search sequences. The most prominent data structure of this kind is the Splay tree, introduced by Sleator and Tarjan [ST85]. Splay trees react to searches by local re-arrangements on the search path, with no apparent concern for global structure; such strategies are also called self-adjusting. Splay trees have powerful adaptive properties [ST85, Tar85, CMSS00, Col00, Sun89, Pet08, LT19a, LT19b]; for instance, they asymptotically match the cost of the optimal tree, without a priori knowledge of the search distribution (a property known as static optimality, shown by Sleator and Tarjan [ST85]). The stronger dynamic optimality conjecture (one of the long-standing open questions of theoretical computer science) speculates that Splay trees are competitive with any self-adjusting strategy on any search sequence [ST85].
The dynamic optimality conjecture has inspired four decades of research, leading to powerful adaptive algorithms, instance-specific upper and lower bounds, and structural insights about the BST model (see [Iac13, Koz16, LT19a] for recent surveys). In recent work, Bose, Cardinal, Iacono, Koumoutsos, and Langerman [BCI20] initiated the study of adaptive STTs; it is thus very natural to ask which of the results obtained for BSTs in the past decades can be generalized to the broader setting of STTs.
The rotation primitive readily extends from BSTs to STTs (Figure 2), and this opens the way for adaptive STT strategies. Bose et al. [BCI20] show that (surprisingly) a lower bound from the BST model due to Wilber [Wil89] can be extended to STTs. Building on this result, they obtain an STT analogue of Tango trees [DHIP07]. Like Tango trees for BSTs, the structure of Bose et al. is $O(\log \log n)$-competitive with the optimal adaptive STT strategy.
Bose et al. note several difficulties in achieving an arguably more natural goal: adapting Splay trees to the STT setting. Conjectured to be $O(1)$-competitive, Splay trees are in many ways preferable to Tango trees. They have several proven distribution-sensitive properties (including static optimality) and are simple and efficient, both in theory and in practice.
For BSTs, another well-studied adaptive strategy is Greedy, introduced independently by Lucas [Luc88] and Munro [Mun00]. Greedy can be viewed as a powerful offline algorithm, that (essentially) re-arranges the search path in order of future search times. Strikingly, Demaine et al. [DHI09] have shown that Greedy can be turned into an online algorithm with only a constant-factor slowdown. More recently, with a better understanding of its behaviour, Greedy emerged as another promising candidate for dynamic optimality (see e.g. [CGK15a, IL16, GG19]).
There appears to be a major difficulty in transferring techniques from BSTs to STTs, in particular, in generalizing Splay and Greedy. An essential feature of the BST model is that any tree of size $n$ can be transformed into any other tree of size $n$ with $O(n)$ rotations [STT88, Pou12]. This fact affords great flexibility in designing and analyzing algorithms, as the cost of restructuring a subtree can be charged to the cost of its traversal, and the actual details of the rotations can be abstracted away. By contrast, as shown recently by Cardinal, Langerman and Pérez-Lantero [CLP18], the rotation-diameter of STTs is $\Theta(n \log n)$. This fact makes it unclear how direct analogues of Splay and Greedy may work in the STT model.
We overcome this barrier by showing that the rotation-diameter is, in fact, linear, as long as we restrict ourselves to the already mentioned class of Steiner-closed trees.
Theorem 2. Given two Steiner-closed search trees $S_1$ and $S_2$ on the same tree $T$ of size $n$, we can transform $S_1$ into $S_2$ through a sequence of $O(n)$ rotations. Moreover, starting with a pointer at the root, we can transform $S_1$ into $S_2$ through a sequence of $O(n)$ pointer moves and rotations at the pointer. All intermediate trees are Steiner-closed.
The proof of Theorem 2 can be seen as mimicking a corresponding classical argument for BSTs. For BSTs, the fact that the rotation-diameter is linear can be shown by rotating both trees to a canonical path shape [CW82]. For Steiner-closed STTs we show that both trees can be rotated (with a linear number of rotations) to the canonical shape of the underlying tree $T$.
Steiner-closed STTs thus appear to form a connected, small-diameter core of tree associahedra, preserving useful properties of BSTs, while remaining good approximators of depth. By restricting ourselves to Steiner-closed trees, we regain part of the toolkit from BSTs. In particular, linear rotation distance allows us to implement natural transformations of the search path.
As our main result, we define the SplayTT algorithm, a generalization of Splay to the setting of STTs. If the underlying search space is a path, SplayTT becomes the classical Splay tree. SplayTT keeps the search tree at all times in a Steiner-closed shape. Maintaining this property under dynamic updates poses a number of technical difficulties. The main challenge is that the “search path” may (counter-intuitively) contain branchings of degree higher than two when viewed in the underlying search space; a condition that does not arise in classical BSTs. We deal with this issue by first splaying the higher-degree branching nodes of the search path, followed by splaying the searched node itself.
We expect SplayTT to have distribution-sensitive properties that extend those of Splay trees to the STT setting. As a first result in this direction, we prove that SplayTT satisfies the analogue of static optimality for Splay trees.
Theorem 3. Let $T$ be a tree of size $n$ and let $X$ be a sequence of searches over the nodes of $T$. Let $\mathrm{OPT}(X)$ denote the minimum cost of serving $X$ in a static search tree on $T$. Then the cost of SplayTT for serving $X$ is $O(\mathrm{OPT}(X))$, plus an additive term that depends only on $n$.
Despite the similarity between Splay and SplayTT, extending static optimality from Splay to SplayTT is not trivial. One of the obstacles, already noted in [BCI20], is that Shannon entropy, a natural measure of BST-efficiency, cannot accurately capture the cost in the STT setting. The classical analysis of Splay trees via the access lemma [ST85] appears closely tied to this quantity. To avoid this pitfall, we sidestep the access lemma and prove the static optimality of SplayTT directly, through a combinatorial argument.
We remark that the additive term of Theorem 3 is independent of the length of the search sequence and depends only on the tree size. In fact, under the mild assumption that every node is searched at least once, the additive term can be removed. Theorem 3 is stronger than Theorem 1 in the sense that SplayTT does not require a priori knowledge of $X$. Just like in the BST setting, the static optimality of SplayTT also implies logarithmic amortized cost (SplayTT is competitive with every tree, in particular, with the centroid decomposition tree). For STTs, however, static optimality is a significantly stronger claim; if the underlying search space is of small treedepth, e.g. if it is a star, then the amortized cost of searches can even be constant.
The strongest form of optimality for a self-adjusting search tree (and indeed, for any algorithm in any model) is instance-optimality. In the case of search trees this is usually understood as matching the cost of the optimal adaptive strategy on every search sequence, up to some constant factor. As mentioned, this (conjectured) property of Splay trees is called dynamic optimality. Since the generalization of Splay to SplayTT appears quite natural, we propose the following conjecture that subsumes classical dynamic optimality.
SplayTT is dynamically optimal in the STT model.
A natural question is whether Greedy BST can be similarly extended to the STT setting. The linear-time transformation between Steiner-closed trees (Theorem 2) suggests a straightforward generalization, but the analysis of that algorithm appears to require the development of further tools, which we leave as future work.
Further related work.
As mentioned, concepts related to STTs, and more broadly to search trees on graphs, have been studied in various contexts by different communities. Apart from the work of Bose et al. [BCI20], which is closest in spirit to ours, earlier work has largely focused on minimum height, i.e. the problem of computing the treedepth, or considered different models where queries are for edges, see e.g. [BFN99, LN01, LM11, MOW08, OP06, CJLM14, CKL16]. Other related, but not directly comparable work includes searching in posets [LS85b, LS85a, CDKL04, HIT11], searching with weighted queries [DKUZ17], searching with an oracle that identifies a shortest-path-edge towards the target [EKS16], or searching with errors (stochastic or adversarial) [BK93, FRPU94, KK07, FGI07, BKR18]. Search trees on graphs and trees have also been motivated with practical applications including file system synchronisation, information retrieval, and software testing [BFN99, MOW08, LN01].
We use standard terminology on trees and graphs. A subtree of an undirected tree is a connected subgraph. The set of nodes of a tree $T$ is denoted $V(T)$. The subgraph of $T$ induced by node set $U \subseteq V(T)$ is denoted $T[U]$. By $T \setminus v$ we denote the forest obtained by deleting node $v$ in tree $T$. We say that $u$ separates $v$ and $w$ if $v$ and $w$ fall into different connected components of $T \setminus u$, or equivalently, if $u$ is on the path between $v$ and $w$ in $T$. The convex hull of a set of nodes $U \subseteq V(T)$, denoted $\mathrm{Conv}(U)$, is the subtree of $T$ induced by the union of all paths between nodes in $U$.
For a rooted tree $S$ and a node $v$ we denote by $S_v$ the subtree of $S$ rooted at $v$. The search path of $v$ in $S$ is the unique path from the root of $S$ to $v$. The number of nodes on the search path of $v$ in $S$ is denoted $\mathrm{depth}_S(v)$. Denoting the root of $S$ as $r$, we have $\mathrm{depth}_S(r) = 1$.
Search trees on trees.
Here we mostly follow the notation of Bose et al. [BCI20].
Definition 1 (Search tree on tree (STT)).
Given an unrooted tree $T$, a search tree on $T$ is a rooted tree $S$ with $V(S) = V(T)$ and root $r \in V(T)$, where the subtrees $S_c$ of $S$ are search trees on the connected components of $T \setminus r$, for all children $c$ of $r$.
Note that the ordering of children in an STT is irrelevant. See Figure 1 for illustration. The following observation is a direct consequence of the definition.
If $S$ is a search tree on $T$, then $T[V(S_v)]$ is a subtree of $T$ for all $v \in V(T)$. Furthermore, $S_v$ is a search tree on $T[V(S_v)]$.
Definition 2 (Rotation in STTs).
Consider a node $v$ with parent $u$ in a search tree $S$ on $T$. A rotation of the edge $(u, v)$ in $S$, alternatively called a rotation at node $v$ in $S$, results in a tree $S'$ obtained as follows:
$u$ and $v$ swap places,
if $v$ has a child $w$ whose subtree contains a node adjacent to $u$ in $T$, then $w$ becomes the child of $u$,
all other children of $u$ and $v$ preserve their parent.
The following observation is immediate.
If $S$ is a search tree on $T$, then the tree $S'$ obtained from $S$ by an arbitrary rotation is a search tree on $T$.
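The rotation primitive can be sketched as follows, representing an STT as a nested pair `(root, [child STTs])` over an adjacency-dict tree. This is an illustrative implementation with hypothetical helper names, not the paper's own code; it rotates a child of the root to the top, moving the (at most one) child whose subtree touches the old root in the underlying tree.

```python
def nodes_of(stt):
    """All nodes in the subtree rooted at this STT node."""
    r, kids = stt
    out = {r}
    for k in kids:
        out |= nodes_of(k)
    return out

def rotate_up(stt, adj, x):
    """Rotate child x of the root p to the top; `adj` is the underlying tree T."""
    p, kids = stt
    (xr, xkids) = next(k for k in kids if k[0] == x)
    rest = [k for k in kids if k[0] != x]
    # the (at most one) child of x whose subtree contains a T-neighbor of p
    moved = [c for c in xkids if nodes_of(c) & set(adj[p])]
    kept = [c for c in xkids if not (nodes_of(c) & set(adj[p]))]
    # x and p swap places; the moved child re-attaches under p
    return (xr, kept + [(p, rest + moved)])
```

On the path 1–2–3, rotating 2 above root 1 yields the balanced tree rooted at 2; rotating 1 above root 3 hands 1's child 2 over to 3, as 2 is adjacent to 3 in the path.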
Bose et al. introduced an important property of STTs that also plays an essential role in our results. We review this concept next.
Definition 3 (Steiner-closed set [BCI20]).
A set of nodes $U \subseteq V(T)$ is Steiner-closed if every node in $\mathrm{Conv}(U) \setminus U$ is connected to exactly two nodes of $\mathrm{Conv}(U)$.
Observe that for all $U$, the nodes in $\mathrm{Conv}(U) \setminus U$ are connected to at least two nodes of $\mathrm{Conv}(U)$. Therefore, if $U$ is not Steiner-closed, then there is a node in $\mathrm{Conv}(U) \setminus U$ that is connected to at least three nodes of $\mathrm{Conv}(U)$. See Figure 3 for illustration. The following observation is immediate.
If $P$ is a path in $T$, then $V(P)$ is a Steiner-closed set.
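The Steiner-closed property of Definition 3 can be checked naively by computing the convex hull as a union of pairwise paths and inspecting degrees. This is an illustrative sketch (hypothetical helper names; trees as adjacency dicts), not the paper's code.

```python
from collections import deque

def path(adj, a, b):
    """Unique a-b path in the tree, via BFS with parent pointers."""
    par = {a: None}
    q = deque([a])
    while q:
        u = q.popleft()
        if u == b:
            break
        for w in adj[u]:
            if w not in par:
                par[w] = u
                q.append(w)
    out = [b]
    while par[out[-1]] is not None:
        out.append(par[out[-1]])
    return out[::-1]

def convex_hull(adj, U):
    """Conv(U): union of all paths between pairs of nodes of U."""
    U = list(U)
    hull = set(U)
    for i in range(len(U)):
        for j in range(i + 1, len(U)):
            hull |= set(path(adj, U[i], U[j]))
    return hull

def is_steiner_closed(adj, U):
    """Every hull node outside U must have exactly two neighbors in the hull."""
    hull = convex_hull(adj, U)
    return all(len([w for w in adj[v] if w in hull]) == 2
               for v in hull - set(U))
```

On a star, two leaves form a Steiner-closed set (the center is a degree-2 Steiner point of the hull), while three leaves do not (the center branches with degree 3).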
We next define Steiner-closed STTs.
Definition 4 (Steiner-closed STT [BCI20]).
An STT $S$ on $T$ is Steiner-closed if, for all $v \in V(T)$, the set of nodes on the search path of $v$ is a Steiner-closed set.
One can obtain a canonical Steiner-closed STT by using the underlying search space itself.
Let $T$ be an unrooted tree, and let $S$ be the rooted tree obtained by picking an arbitrary root of $T$. Then $S$ is a Steiner-closed STT on $T$.
Steiner-closed STTs are useful in particular due to the following observation.
Lemma 1 ([BCI20, Lemma 4.2]).
Given an STT $S$ on $T$, we can find, in polynomial time, a Steiner-closed STT $S'$ on $T$ so that $\mathrm{depth}_{S'}(v) \leq 2 \cdot \mathrm{depth}_S(v)$, for all $v \in V(T)$.
Note that Lemma 1 is stated by Bose et al. for maximum depth, but it is explicitly observed in their proof that during the transformation from $S$ to $S'$ the depth of every node at most doubles. An efficient algorithm is implicit in the proof of Bose et al., with standard data structuring. We omit this proof and prove a more general statement in § 3.
Static and dynamic STT model.
We now discuss the cost model of STTs, as a straightforward extension of the BST cost model (see e.g. [Wil89, DHIP07]). Let $S$ be an STT on $T$. The cost of searching a node $v$ in $S$ is $\mathrm{depth}_S(v)$. The cost of serving a sequence $X = (x_1, \dots, x_m)$ of searches in $S$ is $\sum_{i=1}^{m} \mathrm{depth}_S(x_i)$.
If re-arrangements of the tree are allowed, the model is as follows. An algorithm $\mathcal{A}$ starts with an initial search tree $S_0$ on $T$, and $S_i$ denotes the state of the tree after the $i$-th search. At the start of the $i$-th search, a pointer is at the root of $S_{i-1}$, and $\mathcal{A}$ can perform an arbitrary number of steps of (1) rotating the edge between the node at the pointer and its parent, and (2) moving the pointer from the current node to its parent or to one of its children. When serving the search $x_i$, the pointer must visit, at least once, the node $x_i$.
Both types of operations have the same unit cost. An additional unit cost is charged for performing each search. The cost of an algorithm $\mathcal{A}$ for executing $X$, denoted $\mathrm{cost}_{\mathcal{A}}(X)$, is thus the total number of pointer moves and rotations, plus an additive term $m$. An algorithm is offline if it knows the entire sequence $X$ in advance, and online if it receives $x_i$ only after the $(i-1)$-st search has finished.
In both the static and the dynamic case we only account for operations in the model. Algorithms that are to be considered efficient should, however, also spend only polynomial time outside the model (i.e. for deciding which rotations and pointer moves to perform).¹

¹Whether unbounded computation outside the model can improve the competitiveness of online algorithms is an intriguing theoretical question for both BSTs and STTs. In case of our generalization of Splay, the time spent outside the model is linear in the model cost.
3 Almost optimal search trees on trees
Let $T$ be a tree on $n$ nodes and consider a search sequence $X$ with the function $f : V(T) \to \mathbb{N}$ denoting the frequencies of searches, so that each node $v$ appears $f(v)$ times in $X$.
We want to find a search tree $S$ on $T$ in which $X$ is served with the smallest possible cost. More precisely, we want $S$ so as to minimize
$$\mathrm{cost}(S) = \sum_{v \in V(T)} f(v) \cdot \mathrm{depth}_S(v).$$
Observe that the ordering of the searches in $X$ is, in this case, irrelevant, and the cost depends only on the frequencies $f(v)$.
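Evaluating this cost objective for a fixed STT is straightforward; the following illustrative sketch (hypothetical helper names; STTs as nested pairs `(root, [children])`) computes all depths and the frequency-weighted cost.

```python
def depths(stt, d=1):
    """Map every node to its depth (root has depth 1)."""
    r, kids = stt
    out = {r: d}
    for k in kids:
        out.update(depths(k, d + 1))
    return out

def static_cost(stt, f):
    """Sum over nodes of frequency times depth."""
    return sum(f.get(v, 0) * d for v, d in depths(stt).items())
```

For example, on the path 1–2–3 with frequencies $f(1)=1$, $f(2)=5$, $f(3)=1$, the balanced tree rooted at 2 has cost $5 + 2 + 2 = 9$, while the chain rooted at 1 costs $1 + 10 + 3 = 14$.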
In this section we show how to find, in polynomial time, a search tree on $T$ whose cost is at most $(1 + \varepsilon) \cdot \mathrm{OPT}$, for arbitrarily small $\varepsilon > 0$, i.e. we give a polynomial-time approximation scheme (PTAS) for the optimal STT problem. The result is based on $k$-cut trees, a generalization of Steiner-closed trees that we introduce (Steiner-closed trees correspond to the special case $k = 2$). Before presenting the algorithm, we need some definitions.
Cuts and boundaries.
The cut in $T$ of a nonempty set of nodes $W \subseteq V(T)$, denoted $\mathrm{cut}_T(W)$, or simply $\mathrm{cut}(W)$, is the set of (directed) pairs of nodes $(u, w)$, where $u \notin W$, $w \in W$, and $(u, w)$ is an edge of $T$. In words, the cut is the set of edges connecting the remainder of the tree to $W$, indicating the direction. Observe that $\mathrm{cut}(W) = \emptyset$ if and only if $W = V(T)$. Moreover, $\mathrm{cut}(W)$ uniquely determines the set $W$, and given $\mathrm{cut}(W)$, we can find $W$ through a linear-time traversal of $T$.
The boundary $\mathrm{bd}_T(W)$, or simply $\mathrm{bd}(W)$, is the set of nodes outside $W$ that define the cut. More precisely, $u \in \mathrm{bd}(W)$ if and only if $(u, w) \in \mathrm{cut}(W)$ for some $w \in W$. Observe that if $W$ is connected (i.e. a subtree), then $|\mathrm{bd}(W)| = |\mathrm{cut}(W)|$. We call this quantity the boundary size of $W$. To simplify notation, for subtrees $T'$ of $T$ we let $\mathrm{bd}(T')$ denote $\mathrm{bd}(V(T'))$.
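The cut and boundary of a node set can be computed directly from the definitions. A minimal sketch (hypothetical helper names; trees as adjacency dicts):

```python
def cut(adj, W):
    """Directed cut: edges (u, w) of T with u outside W and w inside W."""
    W = set(W)
    return {(u, w) for w in W for u in adj[w] if u not in W}

def boundary(adj, W):
    """Nodes outside W that define the cut."""
    return {u for u, _ in cut(adj, W)}
```

On the path 1–2–3–4–5 with $W = \{2, 3\}$, the cut is $\{(1,2), (4,3)\}$ and the boundary is $\{1, 4\}$; since $W$ is connected, the two sets have the same size.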
Definition 5 (-cut tree).
For $k \geq 1$, an STT $S$ on $T$ is a $k$-cut tree if, for all $v \in V(T)$, the boundary size of $V(S_v)$ is at most $k$.
It is easy to verify that $1$-cut trees are exactly the STTs obtained by rooting $T$ at some vertex (as in Observation 4). A more involved argument (Appendix C) shows that $2$-cut trees are exactly the Steiner-closed trees.
As the number of possible cut edges in a tree is $2(n-1)$, the following observation is immediate, implying that the number of possible subtrees of a $k$-cut tree is polynomial, rather than exponential.
The number of subsets $W \subseteq V(T)$ with boundary size at most $k$ is $O(n^k)$.
The following technical lemma relates the boundary sets before and after the removal of a node, and will be useful in the remainder of the section.
Let $T$ be a tree, let $T'$ be a subtree of $T$, and let $v \in V(T')$. Let $N$ be the set of neighbors of $v$ in $T'$, and let $C_1, \dots, C_t$ be the connected components of $T' \setminus v$. Then $\mathrm{bd}(C_i) \subseteq \mathrm{bd}(T') \cup \{v\}$ for all $i$; moreover, each node of $\mathrm{bd}(T')$ belongs to $\mathrm{bd}(C_i)$ for at most one $i$.
Let for some . Then there is an edge with . Either , or the unique path from to in contains , and therefore . If , then for all , as otherwise and were connected in . It follows that , so .
Conversely, let . If , then for all . Otherwise, let , and let such that . By assumption, . Let be the connected component of that contains . Then .
The rest of this section is dedicated to the proof of Theorem 1 and is organized as follows. In § 3.1 we show that an optimal $k$-cut STT approximates an optimal general STT within a factor that tends to $1$ as $k$ grows. In § 3.2 we generalize Knuth’s dynamic programming algorithm and show that, due to Observation 5, an optimal $k$-cut STT can be found in polynomial time.
3.1 $k$-cut trees approximate depth
In this subsection we show that an arbitrary STT $S$ can be transformed into a $k$-cut STT $S'$, so that the depth of every node increases by no more than a factor that tends to $1$ as $k$ grows. The proof is based on a similar idea as the proof of Lemma 1 by Bose et al. [BCI20]: problematic nodes are fixed one-by-one, carefully controlling the depth-increase of every node. The general case requires, however, a number of further ideas. In particular, we make use of the leaf centroid of a tree, defined next.
Definition 6 (leaf centroid).
Let $T$ be a tree on $n$ nodes, having $\ell$ leaves. A non-leaf node $c$ is a leaf centroid of $T$ if every connected component of $T \setminus c$ has at most $\lceil \ell/2 \rceil + 1$ leaves, one of which is adjacent to $c$ in $T$, and at most $\lceil \ell/2 \rceil$ that are also leaves of $T$.
The existence of a leaf centroid follows by a similar argument as the existence of the classical centroid (see e.g. [Sla78, Wan15]): start at an arbitrary non-leaf node $c$ and, as long as $T \setminus c$ has a component with more than $\lceil \ell/2 \rceil$ leaves of the original tree, move $c$ along the edge towards that component. By standard data structuring, a leaf centroid can be found in linear time; for completeness we give a proof in Appendix A.
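The walk just described can be sketched naively as follows. This is an illustrative sketch under assumptions (hypothetical helper names; trees as adjacency dicts with at least one internal node); it stops at a node none of whose components holds a strict majority of the original leaves, which is a slightly weaker guarantee than the full definition above.

```python
def component(adj, start, removed):
    """Nodes of the component of T \\ removed that contains `start`."""
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w != removed and w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def leaf_centroid(adj):
    """Walk towards any component holding a strict majority of original leaves."""
    leaves = {v for v in adj if len(adj[v]) == 1}
    ell = len(leaves)
    v = next(u for u in adj if len(adj[u]) > 1)  # arbitrary non-leaf start
    while True:
        heavy = next((w for w in adj[v]
                      if 2 * len(component(adj, w, v) & leaves) > ell), None)
        if heavy is None:
            return v
        v = heavy
```

The walk never steps onto a leaf (a single-leaf component cannot hold a strict majority when $\ell \geq 2$), and it cannot return to a previous node, since the component left behind holds a minority of the leaves; hence it terminates.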
The following observation connects leaf-sets with boundaries.
Let $T'$ be a subtree of $T$. Then the set of leaves of $\mathrm{Conv}(\mathrm{bd}(T'))$ is $\mathrm{bd}(T')$.
Let $u \in \mathrm{bd}(T')$, and suppose that $u$ is not a leaf of $\mathrm{Conv}(\mathrm{bd}(T'))$. Then $u$ is an inner node of the path between two nodes $a, b \in \mathrm{bd}(T')$. Let $a', b' \in V(T')$ be the neighbors of $a$ and $b$ in $T'$, respectively. As $T'$ is connected, there is a path between $a'$ and $b'$ that lies completely within $T'$. The path between $a$ and $b$ consists of exactly this path, with $a$ prepended and $b$ appended. This means that $u \in V(T')$, a contradiction.
Conversely, let $u \in \mathrm{Conv}(\mathrm{bd}(T'))$. If $u \notin \mathrm{bd}(T')$, then $u$ must lie strictly inside a path between two nodes of $\mathrm{bd}(T')$. This means that $u$ is not a leaf of $\mathrm{Conv}(\mathrm{bd}(T'))$.
We proceed with the main lemma of this subsection.
Given a search tree $S$ on $T$, for arbitrary $k \geq 3$ we can find in time $O(n^2)$ a $k$-cut search tree $S'$ on $T$, so that $\mathrm{depth}_{S'}(v) \leq c_k \cdot \mathrm{depth}_S(v)$ for all $v \in V(T)$, where $c_k = 1 + O(1/k)$.
Algorithm 1 transforms $S$ into $S'$ (with an initial call at the root of $S$). The basic idea is the following: for a given node $v$ of $S$ (initially the root), we check whether $V(S_v)$ has boundary size smaller than $k$. If yes, we simply recurse on the subtrees. Otherwise, we transform $S_v$ by replacing its root with a node $c$ (by rotating $c$ to the top), so as to minimize the maximum boundary size of the subtrees rooted at the children. (Note that when the boundary size is exactly $k$, node $c$ may happen to be the current root, in which case no rotation is necessary.) After the transformation, we recurse on the children of the new tree.
The rest of this subsection is dedicated to the proof of Lemma 3. We remark that we only need Algorithm 1 as an existence proof for good -cut trees, and its running time does not affect the running time of our approximation algorithm.
We consider the boundary size of the subtrees in each recursive call.
Let $v \in V(S)$ and let $u$ be a child of $v$ in $S$. Then $|\mathrm{bd}(S_u)| \leq |\mathrm{bd}(S_v)| + 1$.
By Lemma 2, we have $\mathrm{bd}(S_u) \subseteq \mathrm{bd}(S_v) \cup \{v\}$, so $|\mathrm{bd}(S_u)| \leq |\mathrm{bd}(S_v)| + 1$.
Observe that . The set is a connected component of the forest . Let be the set of nodes such that for some . Each is contained in , so it is a leaf of by Observation 6. Moreover, all are in the same component of (the one that contains ). As is a leaf centroid of , we have .
Finally, observe that by Lemma 2 and the definition of , so .
We now bound the increase of depth due to the transformation for each node in . Intuitively, when following the search path in the resulting tree , we have a newly added node (compared to ) whenever the boundary size of the current tree is , which by Lemma 5 can only happen every steps. We proceed with the formal proof.
Let be a search tree on with root and let . Let be the search path in of an arbitrary node and let be the search path of in . Let be the indices of nodes in that are not in . We want to show that the number of such nodes is
As and , this shows the bound stated in Lemma 3.
for all .
Let for . As is not in , at some point, Algorithm 1 must have rotated up. This means that in some recursive call, is the leaf centroid that is rotated up in Line 13 and, in particular, . Similarly, . Let .
As , we also have . As such, we can uniquely assign “direct predecessor” nodes in to each node in . This proves the upper bound for .
In each recursive call we compute the boundary size of the current subtree in linear time, e.g. by finding the ancestors of its root in $S$ and then traversing from them. Furthermore, we may rotate one node to the root, which requires linear time. As each node corresponds to exactly one recursive call (which returns a subtree rooted at that node), the total running time is $O(n^2)$. This concludes the proof of Lemma 3.
When $k$ is even, Lemma 3 can be slightly strengthened, at the cost of a slightly more involved procedure. In particular, this extends the statement to the case $k = 2$. Without the improvement, the running time stated in Theorem 1 would be correspondingly higher.
Intuitively, the improvement comes from the observation that the root-replacement in Line 13 of Algorithm 1 is too cautious. When the boundary size condition is matched with equality, it may be too early to rotate the replacement-root to the top, as the boundary size may recover as we go further down, if the current root happens to split the tree in a reasonably balanced way. We defer the details of this small improvement to Lemma 17, Appendix B. With the improved bound and the observation that $2$-cut trees are exactly the Steiner-closed trees (Appendix C), the result of this subsection directly generalizes Lemma 1.
3.2 Finding an optimal $k$-cut STT
Let $S$ be a $k$-cut STT on $T$ that serves the search sequence $X$ of length $m$ with minimal cost among all $k$-cut STTs. Let $r$ be the root of $S$, let $S_1, \dots, S_t$ be the subtrees rooted at the children of $r$, and let $X_i$ be the subsequence of $X$ consisting of searches to nodes of $S_i$. Then $\mathrm{cost}(S) = m + \sum_{i=1}^{t} \mathrm{cost}(S_i)$, where $S_i$ serves $X_i$ with minimal cost among all $k$-cut STTs on $T[V(S_i)]$, for all $i$.
By definition, the trees $S_i$ are $k$-cut trees. Our strategy is to find $r$ and to recursively compute $k$-cut STTs on the components of $T \setminus r$ that achieve minimal cost for their respective sequences $X_i$, i.e. for the relevant frequencies $f(v)$, for all $i$. We then return the tree obtained by letting the roots of these trees be children of the root $r$.
The cost equation translates into the straightforward dynamic program of Algorithm 2. We call a set of nodes $W$ admissible if $T[W]$ is connected and $W$ has boundary size at most $k$. We call a node $v \in W$ an admissible root of $W$ if the node sets of all connected components of $T[W] \setminus v$ are admissible.
Procedure OPT-STT in Algorithm 2 computes an optimal $k$-cut tree for an admissible set $W$ and the relevant search frequencies. The initial call is OPT-STT$(V(T))$. Only the root and the total cost are returned; the full tree can be reconstructed by collecting the roots from the recursive calls with standard bookkeeping.
Lines 4–5 are the base case (a tree of a single node). In Line 6 the admissible root of the current subset is selected, and in Lines 7–9 the optimal subtrees are found. In Line 10 the total cost is computed with the chosen root and the roots of the optimal subtrees as its children. The first term counts the number of times the root is accessed, and the second term adds the cost of accesses in the subtrees. The correctness of the algorithm follows from the preceding discussion.
The dynamic program is over all nonempty admissible subsets of . In a preprocessing step we enumerate all these sets, indexed by their cuts. As only cuts of size at most are relevant, we can iterate through them by traversing the tree with pointers. For each cut, we do another traversal of the tree, enumerating the set of nodes in the corresponding admissible subset. Observe that some cuts lead trivially to an empty set of nodes (when the cut-edges point away from each other), and some cuts contain redundant edges. We can easily detect and remove these cases.
We now discuss the finding of admissible roots (Line 6).
Let $W$ be an admissible set and assume $|W| \geq 2$.
(i) If $|\mathrm{bd}(W)| < k$, then every node $v \in W$ is an admissible root of $W$.
(ii) If $|\mathrm{bd}(W)| = k$, then the set of admissible roots of $W$ is $W \cap \mathrm{Conv}(\mathrm{bd}(W))$.
(i) Let $v \in W$, and let $C$ be a connected component of $T[W] \setminus v$. Then, by Lemma 2, $\mathrm{bd}(C) \subseteq \mathrm{bd}(W) \cup \{v\}$, and thus $|\mathrm{bd}(C)| \leq k$. Thus, $v$ is an admissible root of $W$.
(ii) Let $|\mathrm{bd}(W)| = k$.
Let $v$ be an admissible root. If two or more boundary nodes of $W$ are neighbors of $v$, then $v$ is in $\mathrm{Conv}(\mathrm{bd}(W))$. Otherwise, at least one boundary node $a$ is not a neighbor of $v$; by Lemma 2, there must be some connected component $C$ of $T[W] \setminus v$ such that $a \in \mathrm{bd}(C)$. Then, by assumption, there is some $b \in \mathrm{bd}(W) \setminus \mathrm{bd}(C)$ (otherwise, by Lemma 2, $C$ has boundary size $k + 1$ and is not admissible). Node $b$ is either a neighbor of $v$ or a boundary node of a component of $T[W] \setminus v$ other than $C$. The path between $a$ and $b$ must pass through $v$, implying $v \in \mathrm{Conv}(\mathrm{bd}(W))$.
Conversely, assume $v \in W \cap \mathrm{Conv}(\mathrm{bd}(W))$. Let $a, b \in \mathrm{bd}(W)$ be two distinct nodes such that $v$ is on the path between $a$ and $b$. Thus, $a$ and $b$ are in different connected components of $T \setminus v$. Now Lemma 2 implies that the boundary of each connected component of $T[W] \setminus v$ is a proper subset of $\mathrm{bd}(W) \cup \{v\}$, and thus each component has boundary size at most $k$.
Given Lemma 7, the enumeration of admissible roots in Line 6 is straightforward, via a traversal of the subtree $T[W]$. In case (i) we traverse the entire tree $T[W]$, in case (ii) we traverse the union of paths from some boundary node to the other boundary nodes (found e.g. with a breadth-first search). The cuts of the components can be found in linear time by straightforward data structuring.
In the preprocessing stage we enumerate $O(n^k)$ cuts, and for each cut we do a linear-time traversal to find the corresponding admissible set, all within time $O(n^{k+1})$.
The recursive calls of OPT-STT are for smaller admissible sets. Therefore, during the preprocessing phase we sort the admissible sets by size, and in the dynamic programming table we fill in the entries in increasing order of size. It remains to show that filling in one entry takes polynomial time, from which the overall running time of $n^{O(k)}$ follows.
Lines 4 and 5 take time. In Line 6 we iterate over all admissible roots, which, by the preceding discussion, takes time . In Line 7 we read out the connected components indexed by their cuts, computed during preprocessing. Line 10 takes time, as the first term can be precomputed for all admissible sets, and the second term is collected from the recursive calls.
Line 9 is nested in two loops (iterating through possible root nodes, and through each connected component after the removal of the root). Nonetheless, it is easy to see that it is executed at most twice for each edge in (once for each orientation). The total number of recursive calls is therefore at most , as is the cost of taking the minimum in Line 11.
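For intuition, the shape of the recursion is easiest to see in the special case where the underlying tree is a path, where STTs coincide with BSTs and the admissible sets are simply intervals of nodes. The following sketch is our own simplification (the paper's algorithm handles general trees, cuts, and boundary sets); it minimizes the total search cost, i.e. the sum of search-path lengths, with the root at depth 1.

```python
from functools import lru_cache

def opt_path_stt(n):
    """Minimum total search cost (sum of search-path lengths) of a
    BST on a path of n nodes; the interval [i, j] plays the role of
    an admissible set, and every node of [i, j] may serve as root."""
    @lru_cache(maxsize=None)
    def cost(i, j):
        if i > j:
            return 0
        # Each node of [i, j] has the chosen root on its search path,
        # so the interval contributes its size once per level.
        return (j - i + 1) + min(
            cost(i, r - 1) + cost(r + 1, j) for r in range(i, j + 1)
        )
    return cost(0, n - 1)

print(opt_path_stt(3))  # 5: root the middle node, children at depth 2
print(opt_path_stt(7))  # 17: balanced BST, depths 1,2,2,3,3,3,3
```

The memoization mirrors the dynamic-programming table: entries for smaller intervals (smaller admissible sets) are computed before larger ones, just as the sorted-by-size fill order above requires.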
Using Lemma 17 and setting , we obtain:
It is tempting to try extending the approximation algorithm (with some ratio ) to the easiest case, i.e. when the STT is a rooted version of . Unfortunately, -cut trees cannot give an -approximation of the STT optimum. To see this, take to be a path, and observe that every rooted version of has average depth , whereas a BST on (which is, in particular, a -cut tree) has maximum depth .
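The gap claimed above is easy to check numerically: rooting a path at an endpoint forces linear average depth, while a balanced BST on the same node set has logarithmic maximum depth. A small illustrative sketch (helper names are our own):

```python
def avg_depth_rooted_path(n):
    # Rooting the path at an endpoint puts node i at depth i,
    # so the average depth is (n - 1) / 2.
    return sum(range(n)) / n

def max_depth_balanced_bst(lo, hi, depth=0):
    # Build a BST on keys lo..hi by recursive medians and
    # return its maximum node depth (root at depth 0).
    if lo > hi:
        return depth - 1
    mid = (lo + hi) // 2
    return max(max_depth_balanced_bst(lo, mid - 1, depth + 1),
               max_depth_balanced_bst(mid + 1, hi, depth + 1))

n = 1023
print(avg_depth_rooted_path(n))          # 511.0 -- linear in n
print(max_depth_balanced_bst(0, n - 1))  # 9 -- about log2(n)
```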
4 Rotations in Steiner-closed trees
As discussed in § 1, an essential feature of the classical BST model is that the rotation-distance between two trees of size is . (The number of rotations needed to transform one size- BST into another is at most , and there are pairs of trees requiring this many rotations for all [STT88, Pou12]. As BSTs are trivially Steiner-closed, a similar lower bound for the rotation distance of Steiner-closed STTs follows.) In particular, if we only do rotations on the search path, as in most natural algorithms, then the cost of rotations can be charged to the cost of searching, i.e. of simply traversing the search path. In STTs the situation is different, as there are pairs of trees of size that are rotations apart [CLP18].
STTs are in bijection with vertices of tree associahedra, whose edges correspond to rotations. Our result can be interpreted as follows. While the skeleton of a tree associahedron (for a tree of size ) has diameter , its vertices corresponding to Steiner-closed trees induce a connected subgraph of diameter .
We show the first half of the statement first, i.e. we allow rotations at arbitrary nodes of the tree. Denote by the rooted tree obtained from by setting the root. By Observation 4, is a Steiner-closed STT on .
Denote and . We split the sequence of rotations from to into three parts. First we rotate from to , then from to , and finally from to . We start with the easier, second part.
There is a sequence of at most rotations that transforms into , for arbitrary nodes . All intermediate trees are Steiner-closed.
Let be the search path of in , with and . We rotate at the nodes (in this order).
As , we clearly make at most rotations. We show inductively that after rotating at , for all , the obtained tree is . The claim follows, as the last rotation is at .
Consider the tree after rotating at . By the inductive claim, is the root, and since is an edge of , node is the child of the root. The next rotation brings to the root, making it the parent of . All other nodes whose parent changes must be in the subtree of delimited by and , but since and are connected by an edge in , there are no such nodes. Thus, the edge set of the tree is not changed by the rotation and equals the edge set of . Since all intermediate trees are of the form , they are, in particular, Steiner-closed, by Observation 4.
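In the BST special case, the rotation sequence of the lemma is the familiar move-to-root transformation: rotating at each successive node of the search path walks the target up to the root, one rotation per edge, without changing the in-order sequence. A self-contained sketch under our own representation (not the paper's), assuming the key is present in the tree:

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def move_to_root(root, key):
    """Bring `key` to the root by single rotations along its search
    path (one rotation per edge of the path). Assumes key is present."""
    if root is None or root.key == key:
        return root
    if key < root.key:
        root.left = move_to_root(root.left, key)
        child = root.left                     # rotate right at the root
        root.left, child.right = child.right, root
    else:
        root.right = move_to_root(root.right, key)
        child = root.right                    # rotate left at the root
        root.right, child.left = child.left, root
    return child

def inorder(t):
    return [] if t is None else inorder(t.left) + [t.key] + inorder(t.right)

# A BST on keys 1..5; bring 4 to the root.
t = Node(2, Node(1), Node(5, Node(3, None, Node(4))))
t = move_to_root(t, 4)
print(t.key, inorder(t))  # 4 [1, 2, 3, 4, 5]
```

The unchanged in-order sequence is the BST analogue of the edge-set argument above: each rotation only swaps a parent with its child along the search path.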
We next describe the rotation sequence from to . The analogous construction can be applied (in reverse) for rotating from to . We construct a sequence of trees , where , and , so that is obtained from by a single rotation, for all .
For the remainder of this section, let denote the fact that is an ancestor of in the target tree .
Given a tree in the sequence, the top tree is a subset of nodes , defined as follows: the root is in , and any non-root is in if and only if its parent in is in , and . In words, the top tree forms a maximal root-containing subtree of , whose nodes preserve the ancestor/descendant relations of . Intuitively, is a set of nodes “in the right order”.
Observe that implies that , i.e. that we have reached the target. Indeed, the root of and must be the same (), as otherwise the parent of in could not be its ancestor in , contradicting . The same argument applies recursively to the subtrees built on the components of in and .
We now construct from . Let be an arbitrary node whose parent is in . (If there is no such node then and we are done.) In words, is an edge of the current tree that hangs just below the top tree. We let be the tree obtained by rotating the edge in . The crucial observation is that with this rotation, the top tree size increases with the addition of node and possibly other nodes.
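The top tree of the construction can be computed directly from parent pointers. In the sketch below (our own encoding; since the paper's symbols are elided here, we take the membership condition to be that a node's parent in the current tree is among its ancestors in the target tree, matching the "right order" intuition), a rooted tree is a dict mapping each node to its parent, with `None` for the root.

```python
def ancestors(parent, v):
    """Set of proper ancestors of v in the rooted tree `parent`."""
    out = set()
    u = parent[v]
    while u is not None:
        out.add(u)
        u = parent[u]
    return out

def top_tree(cur, target):
    """Maximal root-containing subtree of `cur` whose parent edges
    agree with the ancestor relation of `target` (assumed condition)."""
    root = next(v for v, p in cur.items() if p is None)
    top = {root}
    changed = True
    while changed:   # grow downwards until maximal
        changed = False
        for v, p in cur.items():
            if v not in top and p in top and p in ancestors(target, v):
                top.add(v)
                changed = True
    return top

# Target tree: 1 -> 2 -> 3 (node 1 is the root).
target = {1: None, 2: 1, 3: 2}
print(top_tree({1: None, 2: 1, 3: 2}, target))  # {1, 2, 3}: target reached
print(top_tree({2: None, 1: 2, 3: 2}, target))  # {2, 3}: node 1 out of order
```

When the top tree covers all nodes, the current tree equals the target, matching the observation above; a node hanging just below the top tree is exactly a candidate for the next rotation.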
We prove, inductively, that the following invariants hold throughout the sequence.
For all , we have:
Lemma 9, together with Lemma 8, implies Theorem 2. To see this, observe that implies , so by invariant (ii), must hold. The total number of rotations in the three parts is thus at most . By invariant (iii) all intermediate trees are Steiner-closed.
Proof of Lemma 9.
We first show that the invariants hold when , i.e. for .
Invariant (i) holds, since, by definition . Invariant (ii) holds, since contains the root and the children of the root, so . Invariant (iii) holds, as we require to be Steiner-closed.
We next show that the invariants hold as we go from to .
By our choice of , the parent of cannot be the root , as in that case would imply . As the rotation of leaves the root in place, invariant (i) is maintained.
To show invariant (ii), we argue that with the rotation , node enters the top tree and no node leaves the top tree, i.e. .
Let denote the parent of in . After the rotation , becomes the parent of , and becomes the parent of (see Figure 4).
We will show , and claim that this implies . Indeed, observe that (as and the search path of does not change from to ), which together with implies that .
For other nodes , the search path may change from to only with the addition of (e.g. tree in Figure 4) or with the removal of (e.g. trees and in Figure 4). Nodes of the first kind that are in remain in as and . Nodes of the second kind have as ancestor in , therefore are not in . Some of them may, in fact, enter , e.g. tree in Figure 4. All other nodes in have their search paths unchanged, and thus remain in .
Proof of .
We know that . Observe that would imply , contradicting our choice of .
Suppose that and . Then there is a proper lowest common ancestor of and in . Let denote the set of nodes on the search path of in . If , then separates and , contradicting the fact that appear in this order on a search path. Otherwise we have in , and since has degree at least in , it must be the case that , as otherwise would not be Steiner-closed. But implies that is an ancestor of in , and since separates and , the fact that appear in this order on a search path is a contradiction.
Suppose that and . Then there is a proper lowest common ancestor of and in . By the earlier argument, must be an ancestor of in , and since separates and , the fact that appear in this order on a search path is a contradiction.
Suppose that . Then separates and , contradicting the fact that appear in this order on a search path. The only remaining case is .
It remains to show invariant (iii), which we separate into a lemma.
If is Steiner-closed then is Steiner-closed.
Again, we need to consider only nodes whose set of ancestors changes due to the rotation. These are of the following type (see Figure 4 for an illustration):
Observe that nodes in a subtree rooted at some child of in that changes parent to in are not affected, as they have the same ancestors in and (tree in Figure 4).
We argue that the search paths of all three types of nodes remain Steiner-closed, which proves the claim.
(1) Nodes and are in , so the nodes on their search path in are on a path in (and therefore in ), so by Observation 3 they form Steiner-closed sets.
(2) Let be a node of this type, let be the set of nodes on its search path in , and let be the set of nodes on its search path in . Since and separates and in , we have , so and the Steiner-closed property of the search path does not change.
(3) Let be a node of this type, let be the set of nodes on its search path in , and let be the set of nodes on its search path in