1 Introduction
A Search Tree (ST) evolves upon insertions/deletions according to the nodes’ key field value. This field works as a kind of reference that helps place a new unique value v in the (sub)tree rooted at a node with key value k. For a Binary Search Tree (BST), v is placed in the left or right subtree of k if v < k or v > k, respectively. This definition forces the reference value to always be the key field of a previously inserted node. Hence, if a sequence consisting of ‘bad’ keys is given as input, the resulting worst-case search performance is linear, missing the opportunity to build a tree of height O(log n).
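To make the placement rule concrete, here is a minimal BST insertion sketch. The node type and function names (`bstInsert`, `bstHeight`) are ours, for illustration only; feeding the function an increasing key sequence produces the linear-height worst case just mentioned.

```c
#include <stdlib.h>

/* Minimal BST node; bstInsert applies the rule described above:
 * a new key v goes to the left subtree of k if v < k, right if v > k. */
typedef struct Node { int key; struct Node *left, *right; } Node;

Node *bstInsert(Node *r, int v) {
    if (r == NULL) {
        Node *n = malloc(sizeof *n);
        n->key = v;
        n->left = n->right = NULL;
        return n;
    }
    if (v < r->key)
        r->left = bstInsert(r->left, v);
    else
        r->right = bstInsert(r->right, v);
    return r;
}

/* Height counted in nodes: a chain of h nodes has height h. */
int bstHeight(Node *r) {
    if (r == NULL) return 0;
    int hl = bstHeight(r->left), hr = bstHeight(r->right);
    return 1 + (hl > hr ? hl : hr);
}
```

Inserting the sorted sequence 1, 2, ..., 8 yields a degenerate right chain of height 8: the ‘annoying (linear) possibility’.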
A classical way to prevent a BST from unbalancing consists in walking the insertion/deletion path back to the root to check/update height-related information and trigger node rotation(s) if necessary: the so-called self-balanced BSTs, e.g. AVL [1] and Red-Black [4]. Those tasks increase programming complexity in comparison to BSTs. Also, they might cause the self-balanced BSTs to perform worse than a common BST when the sequence of insertion keys leads the latter to be “naturally” balanced. Indeed, Knuth [5] shows that a BST requires only O(log n) comparisons on average if keys are inserted in a random order. For this dilemma, he suggests a ‘balanced attitude’: considering self-balanced trees for large n – due to the BST’s ‘annoying (linear) possibility’ – and BSTs for smaller n because of their reduced overhead and simpler programming.
In this paper we wonder about the condition(s) under which (if any) it is possible to design a rotation-free BST with height O(log n) regardless of the size-n sequence of insertion keys. To achieve that, we generalize the definition of “search tree” by enabling reference values other than the key values of previously inserted nodes. We refer to them as hidden reference values because they guide search procedures but require no kind of permanent storage. To derive the proposed tree’s worst-case search, insertion and deletion time complexities we consider the Random-Access Machine (RAM) model [2] under the same assumptions taken for AVL and Red-Black. With that, after the maximum number of keys n in the tree is determined, the maximum number of bits B to represent a key is computed as a constant bounded to O(log n). Otherwise, if n is not known in advance, B cannot be assumed constant. Thus, the constant time assumed for a comparison between keys “clearly becomes an unrealistic scenario” [3] and the O(log n) complexity no longer holds for the self-balanced BSTs.
Given an input key, we rely on B to calculate hidden reference values for each node over the search path. These values correspond to the ideal sequence of insertion keys to build a balanced BST with at most 2^B nodes. Because new incoming keys are placed in the tree based on those reference values – rather than on the values of previously inserted keys – the classic “search tree” property does not hold. However, the resulting tree can be viewed as a “search tree” in the sense that the hidden reference value of an arbitrary node is always greater (less) than any key value in its own left (right) subtree (the insertion case where the input key is equal to the hidden reference value is a matter of design choice). For this reason we term this data structure the Hidden Binary Search Tree (HBST). We present elementary algorithms that maintain the HBST’s height bounded to O(B) = O(log n). The algorithms assume neither specially ordered keys nor any kind of self-balancing rotation procedure.
2 The Reference Hidden BST Algorithm
Let n be the number of nodes of a BST, each of which is uniquely identified by a key from the integer interval [0, 2^B − 1]. In several practical scenarios, B is determined in advance when the data structure programmer chooses a data type with B bits for the nodes’ key field. From the interval [min, max], one can build a balanced BST following an idea reminiscent of the Merge sort algorithm. The resulting BST is illustrated in Fig. 1. Firstly, the algorithm takes the interval as input and chooses its midpoint (min + max)/2 to insert in the BST. The same idea applies recursively to the root’s left and right subtrees with the intervals [min, (min + max)/2 − 1] and [(min + max)/2 + 1, max], respectively. Note that min = 0 and max = 2^B − 1 in the first iteration.
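The midpoint construction just described can be sketched as follows. This is our own illustrative helper (`buildBalanced` is not part of the paper’s code), assuming keys drawn from an integer interval:

```c
#include <stdlib.h>

typedef struct Node { unsigned key; struct Node *left, *right; } Node;

/* Build a balanced BST containing every key of [min, max]: insert the
 * midpoint first, then recurse on the two half-intervals, exactly the
 * merge-sort-like idea described above. */
Node *buildBalanced(unsigned min, unsigned max) {
    Node *n = malloc(sizeof *n);
    unsigned mid = min + (max - min) / 2;
    n->key = mid;
    n->left  = (mid > min) ? buildBalanced(min, mid - 1) : NULL;
    n->right = (mid < max) ? buildBalanced(mid + 1, max) : NULL;
    return n;
}

int height(Node *r) {
    if (r == NULL) return 0;
    int hl = height(r->left), hr = height(r->right);
    return 1 + (hl > hr ? hl : hr);
}
```

For B = 3, the call buildBalanced(0, 7) yields the 8-key balanced tree rooted at key 3, with height 4 = ceil(log2(8 + 1)).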
The idea just described may not seem valuable because the insertion sequence is not known a priori. However, one can benefit from it if the “search tree” property can be relaxed (actually, generalized) to include reference values other than previously inserted keys. These reference values need not be stored in nodes. They can be computed, for example, taking as reference an ideal insertion sequence to guide which subtree the search must follow in each iteration (or recursive call). Then, for a given reference search value, all nodes in its left subtree have values less than it whereas all nodes in its right subtree have values greater than it. The insertion case where the input key is equal to the hidden reference value is a matter of design choice. A binary tree that satisfies the search property this way we name a Hidden Binary Search Tree (HBST).
An HBST built from a sample insertion sequence is illustrated in Fig. 2, where values equal to the reference value are inserted to the right. The first insertion is the trivial case. After that, each input consists of the key along with the interval [min, max]. The algorithm checks that there is a node at the current level (the root, in this case), and calculates the hidden search reference value (shown in the center of the node’s interval in the figure) from the given interval by computing (min + max)/2. The same idea applies recursively to the root’s left and right subtrees with the intervals [min, (min + max)/2] and [(min + max)/2, max], respectively. Note that min = 0 and max = 2^B in the first iteration, and the interval signs are merely illustrative.
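The descent just described can be traced with a small helper. This is our own illustrative function (not part of the paper’s code), assuming the [min, max) halving rule with max = 2^B and equal keys sent to the right, as in Fig. 2:

```c
/* Record the hidden reference values met while descending toward `key`,
 * halving [min, max) at each level: the left child keeps [min, ref),
 * the right child keeps [ref, max). */
int tracePath(unsigned key, unsigned min, unsigned max,
              unsigned refs[], int maxDepth) {
    int depth = 0;
    while (depth < maxDepth) {
        unsigned hiddenRef = (min + max) / 2;
        refs[depth++] = hiddenRef;
        if (key < hiddenRef)
            max = hiddenRef;   /* descend left  */
        else
            min = hiddenRef;   /* descend right */
    }
    return depth;
}
```

With B = 4 (max = 16), a search for key 5 visits the hidden reference values 8, 4, 6, 5, in that order.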
2.1 HBST’s Search Property
In a BST or variants thereof, the ST property is always checked considering the same field of different nodes, usually the key field. That said, it is clear the ST property does not hold in an HBST, as one can easily see in Fig. 2. However, the hidden reference tree associated with the reference values of the interval [0, 2^B − 1] does satisfy the property. Besides that, and most important, if h is found to be the root’s hidden reference value of an HBST (sub)tree T, then the HBST’s insertion rule for an arbitrary key k mandates that k must be inserted into the left subtree of T if k < h, or into the right subtree otherwise.
3 HBST: First Elementary Functions in C
In this section we present the first elementary functions insert, search and lazyDel for inserting, searching and deleting a given input key in the HBST. The deletion function employs a lazy strategy: the node is only removed logically (its key field is assigned the flag −1) such that the space can be reused later by the insertion function. Without loss of generality, the insertion function assumes the given new key is not already in the tree.
A ‘hard deletion’ function (not shown here) works just like in a standard BST unless the node to be removed has two children. In an HBST there is no need to find the minimum of the right subtree nor the maximum of the left subtree: the substitute can be any descendant leaf node. We choose mnemonic names for the nodes’ fields just as in a typical BST. The remaining assumptions are embedded as comments in the code itself.
All functions calculate the hidden reference values considering the quantity of bits implied by the data type chosen for the key field, a B-bit integer in this case. Since the interval used to calculate the hidden reference value halves from one recursive call (or iteration) to another, the size of the interval decreases by one binary order of magnitude each time, e.g., 2^B, 2^(B−1), 2^(B−2), 2^(B−3). This assures the number of iterations is bounded by the number of bits B of the chosen data type. One variation of the insertion algorithm may consider calculating a specific upper interval per iteration instead of passing it across iterations (recursions). In this case, if the key at the subtree’s root in the i-th iteration is k, then its hidden upper bound is the largest value possible to generate with the minimum number of bits required to represent k, i.e. 2^⌈log2(k+1)⌉ − 1. With this, a new node can be inserted in between previously inserted nodes.
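The halving argument above can be checked directly. A minimal sketch (the function name is ours, for illustration):

```c
/* Number of halvings needed to shrink an interval of size 2^B down to a
 * single value: exactly B, which bounds the depth of any HBST search. */
int levelsToSingleton(unsigned long long intervalSize) {
    int levels = 0;
    while (intervalSize > 1) {
        intervalSize /= 2;  /* the interval halves per iteration */
        levels++;
    }
    return levels;
}
```

For a 32-bit key space the interval 2^32 shrinks to a singleton after exactly 32 halvings, matching the bound O(B).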
/* Assumptions:
 * - unique key values;
 * - HBSTNode is a typical BST node structure;
 * - r is a valid reference (pointer to pointer) to the root;
 * - allocateNewNode is a function that allocates and connects a new node;
 * - keys are signed 32-bit integers (negatives discarded), so valid keys
 *   lie in [0, 2^31 - 1]. First call: min = 0, max = 1u << 31.
 */
#include <stdlib.h>

HBSTNode *insert(HBSTNode **r, int newKey, unsigned int min, unsigned int max)
{
    if (*r == NULL)
        /* allocate a new node as in a BST; return pointer to it or NULL */
        return allocateNewNode(r, newKey);
    /* OPTIONAL: make use of the space released by function lazyDel.
     * For simplicity we assume newKey is not currently in the tree. */
    if ((*r)->key == -1) {
        (*r)->key = newKey;
        return *r;
    }
    unsigned int hiddenRef = (min + max) / 2;
    if ((unsigned int)newKey < hiddenRef)
        return insert(&((*r)->left), newKey, min, hiddenRef);
    else
        return insert(&((*r)->right), newKey, hiddenRef, max);
}

HBSTNode *search(HBSTNode *r, int key, unsigned int min, unsigned int max)
{
    if (r == NULL || min > max)
        return NULL;
    if (r->key == key)
        return r;
    unsigned int hiddenRef = (min + max) / 2;
    if ((unsigned int)key < hiddenRef)
        return search(r->left, key, min, hiddenRef);
    else
        return search(r->right, key, hiddenRef, max);
}

/*
 * Assume a dynamic environment. Avoid expensive
 * memory management: employ memory reuse.
 */
HBSTNode *lazyDel(HBSTNode *r, int key, unsigned int min, unsigned int max)
{
    HBSTNode *killMe = search(r, key, min, max);
    if (killMe == NULL || killMe->key == -1)
        return NULL;
    killMe->key = -1;
    return killMe;
}
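A short self-contained usage sketch of the three functions follows. The node type, the allocator and the [0, 2^31) bounds are our assumptions (negative keys are discarded, so 31 bits of key space remain); the function bodies mirror the listing above:

```c
#include <stdlib.h>

typedef struct HBSTNode {
    int key;
    struct HBSTNode *left, *right;
} HBSTNode;

/* Hypothetical allocator matching the assumption in the comments above. */
static HBSTNode *allocateNewNode(HBSTNode **r, int newKey) {
    *r = malloc(sizeof **r);
    if (*r == NULL) return NULL;
    (*r)->key = newKey;
    (*r)->left = (*r)->right = NULL;
    return *r;
}

static HBSTNode *insert(HBSTNode **r, int newKey,
                        unsigned int min, unsigned int max) {
    if (*r == NULL)
        return allocateNewNode(r, newKey);
    if ((*r)->key == -1) {          /* reuse a lazily deleted slot */
        (*r)->key = newKey;
        return *r;
    }
    unsigned int hiddenRef = (min + max) / 2;
    if ((unsigned int)newKey < hiddenRef)
        return insert(&(*r)->left, newKey, min, hiddenRef);
    return insert(&(*r)->right, newKey, hiddenRef, max);
}

static HBSTNode *search(HBSTNode *r, int key,
                        unsigned int min, unsigned int max) {
    if (r == NULL || min > max) return NULL;
    if (r->key == key) return r;
    unsigned int hiddenRef = (min + max) / 2;
    if ((unsigned int)key < hiddenRef)
        return search(r->left, key, min, hiddenRef);
    return search(r->right, key, hiddenRef, max);
}

static HBSTNode *lazyDel(HBSTNode *r, int key,
                         unsigned int min, unsigned int max) {
    HBSTNode *killMe = search(r, key, min, max);
    if (killMe == NULL) return NULL;
    killMe->key = -1;               /* logical removal only */
    return killMe;
}
```

After a handful of insertions, a search for a present key succeeds, a search for an absent key fails, and a lazily deleted key is no longer found even though its node still occupies the tree.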
4 Complexity
The worst-case order of growth T of the HBST performance is dominated by the search strategy common to all functions presented in Section 3. It can be readily obtained by the recurrence equation (1), where max is the input parameter max. Recall that we are assuming the RAM model [2] in which the word size cannot grow arbitrarily after n is chosen [3]. This is the same assumption under which AVL and Red-Black get O(log n) running time, where a constant-time comparison between integers is performed at each of the O(log n) levels of the tree. As in binary search, AVL and Red-Black, each round of an HBST function solves approximately half the input size at an O(1) time cost. Once the number of bits B is assigned to the key field, the maximum asymptotic height of the HBST is logarithmic on the input (Eq. 2) or, alternatively, linear on B (Eq. 3). This performance requires no kind of balancing procedure nor “good” insertion sequences.
(1) T(max) = T(max/2) + O(1)
(2) T(max) = O(log max) = O(log 2^B)
(3) T(max) = O(B)
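Unrolling Eq. (1) makes the bound explicit; a standard telescoping sketch, with c denoting the constant cost per level:

```latex
T(\mathrm{max}) = T(\mathrm{max}/2) + c
                = T(\mathrm{max}/4) + 2c
                = \dots
                = T(1) + c \log_2 \mathrm{max}
                = O(\log \mathrm{max}) = O(\log 2^B) = O(B)
```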
4.1 Practical Considerations
The hidden tree underlying an HBST is nothing but a balanced tree composed of all values from [0, 2^B − 1], where 2^B ≥ n. Since the resulting tree is balanced, its height is O(log 2^B). This reveals the tree’s worst case is bounded to O(B). Considering a practical example in which the key field is declared as a 64-bit integer, the data structure supports no more than 2^64 distinct keys and a comparison between two keys takes O(1). For any quantity n ≤ 2^64, a balanced BST has its complexity bounded to O(log n) while the HBST is bounded to O(B), i.e. at most 64 iterations to find/insert/delete a key in this case. Thus, HBST’s performance may degrade if, for example, n grows on demand, i.e. the value of the (really) largest key cannot be estimated in advance. In this case, a single comparison takes more than constant time and some kind of technique to increase B on demand is required.

5 Conclusion and Future Work
In this work we showed that it is possible to relax the definition of a search tree while keeping almost unchanged the main elementary functions of a typical Binary Search Tree (BST). We achieved that by generalizing the “search tree” property, allowing it to consider values other than the key field of previously inserted nodes. This concept based the design of the “Hidden Binary Search Tree” (HBST), a balanced rotation-free tree data structure. To successfully build a search path, the HBST compares the input key against “hidden” reference values of a reference balanced BST ideally built on the interval [0, 2^B − 1], where B is the size of the nodes’ keys in bits and n ≤ 2^B is the size of the input, i.e. the number of insertion keys.
We presented search, insertion and deletion algorithms that keep the HBST’s height bounded to O(B). Since B is dimensioned according to the insertion sequence size n in such a way that n ≤ 2^B, the HBST achieves logarithmic worst-case running time under the assumption that B is fixed once n is given. This is the same assumption under which AVL and Red-Black BSTs achieve O(log n) worst-case time. In fact, as pointed out by [3], if B can grow arbitrarily, a ‘simple’ key comparison (as assumed by AVL and Red-Black) becomes unrealistic, preventing those trees’ complexities from being solely explained by O(log n). Under this same assumption, the HBST achieves O(log n) time with no need to perform any kind of balancing/rotation (as required by AVL and Red-Black) nor to assume a special order on the input, as required by a BST to achieve logarithmic performance.
An important question left open in this work is the feasibility of a linear-time in-order traversal on the HBST. Regarding the presented functions, lots of interesting refinements can be performed, such as an adaptive B according to the given key value, top-down insertion, and hybrid BST-HBST structures. Finally, it would be interesting to check whether the “hidden search” property can improve the performance of other kinds of structures, such as external data structures (e.g. B-tree) and priority queues.
6 Acknowledgements
I would like to thank Edimar Bauer, our teaching advisor for the courses of algorithms and data structures during 2017/2. Thank you for embracing the idea in such an enthusiastic way, pointing out improvements and performing lots of tests!
References
[1] G. M. Adelson-Velsky and E. M. Landis. An algorithm for the organization of information. In Proceedings of the USSR Academy of Sciences, volume 146, pages 263–266, 1962.

[2] Stephen A. Cook and Robert A. Reckhow. Time-bounded random access machines. In Proceedings of the Fourth Annual ACM Symposium on Theory of Computing, STOC ’72, pages 73–80, New York, NY, USA, 1972. ACM.
[3] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Third Edition. The MIT Press, 2009.
[4] L. J. Guibas and R. Sedgewick. A dichromatic framework for balanced trees. In 19th Annual Symposium on Foundations of Computer Science (SFCS 1978), pages 8–21, Oct 1978.
[5] Donald E. Knuth. The Art of Computer Programming, Volume 3: Sorting and Searching (2nd ed.). Addison Wesley Longman Publishing Co., Inc., Redwood City, CA, USA, 1998.