Given a string $S = S[1..n]$ of $n$ integers, a range minimum query $\mathrm{RMQ}(i,j)$ returns the index of the smallest integer in $S[i..j]$. A range minimum data structure consists of a preprocessing algorithm and a query algorithm. The preprocessing algorithm takes as input the string $S$ and constructs the data structure, whereas the query algorithm takes as input the indices $i \le j$ and, by accessing the data structure, returns $\mathrm{RMQ}(i,j)$. The range minimum problem is one of the most fundamental problems in stringology, and as such has been extensively studied, both in theory and in practice (see e.g. Fischer and Heun [2011] and references therein).
Range minimum data structures fall into two categories. Systematic data structures store the input string $S$, whereas non-systematic data structures do not. A significant amount of attention has been devoted to devising RMQ data structures that answer queries in constant time and require as little space as possible. There are succinct systematic structures that answer queries in constant time and require fewer than $2n$ bits in addition to the bits required to represent $S$ [Fischer and Heun, 2011]. Similarly, there are succinct non-systematic structures that answer queries in constant time and require $2n + o(n)$ bits [Fischer and Heun, 2011, Davoodi et al., 2012].
The Cartesian tree $C$ of $S$ is a rooted ordered binary tree with $n$ nodes. It is defined recursively. Let $S[m]$ be the smallest element of $S$ (if the smallest element appears multiple times in $S$, let $S[m]$ be the first such appearance). The Cartesian tree of $S$ is composed of a root node whose left subtree is the Cartesian tree of $S[1..m-1]$, and whose right subtree is the Cartesian tree of $S[m+1..n]$. See Figure 1. By definition, the character $S[i]$ corresponds to the $i$'th node in an inorder traversal of $C$ (we will refer to this node as node $i$). Furthermore, for any nodes $i$ and $j$ in $C$, their lowest common ancestor $\mathrm{LCA}(i,j)$ in $C$ corresponds to $\mathrm{RMQ}(i,j)$ in $S$. It follows that the Cartesian tree of $S$ completely characterizes $S$ in terms of range minimum queries. Indeed, two strings return the same answers for all possible range minimum queries if and only if their Cartesian trees are identical. This well known property has been used by many RMQ data structures, including the succinct structures mentioned above. Since there are $\frac{1}{n+1}\binom{2n}{n} = \Theta(4^n/n^{3/2})$ distinct rooted binary trees with $n$ nodes, there is an information theoretic lower bound of $2n - \Theta(\log n)$ bits for RMQ data structures. In this sense, the above mentioned $2n + o(n)$ bits data structures [Fischer and Heun, 2011, Davoodi et al., 2012] are nearly optimal.
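To make the correspondence concrete, the following minimal sketch (ours, not an efficient RMQ structure) builds the Cartesian tree with the standard stack-based construction and answers a range minimum query by walking up to the LCA:

```python
def cartesian_tree(a):
    """Standard stack-based Cartesian tree construction in O(n).
    Node i corresponds to a[i]; i is also its inorder number. The strict
    comparison keeps the leftmost of equal values highest in the tree,
    matching the tie-breaking rule used in the text."""
    n = len(a)
    parent, left, right = [-1] * n, [-1] * n, [-1] * n
    stack = []  # the current rightmost path, root at the bottom
    for i in range(n):
        last = -1
        while stack and a[stack[-1]] > a[i]:
            last = stack.pop()
        if last != -1:
            # the popped prefix of the rightmost path becomes i's left subtree
            parent[last], left[i] = i, last
        if stack:
            parent[i], right[stack[-1]] = stack[-1], i
        stack.append(i)
    return stack[0], parent, left, right

def rmq_via_lca(a, i, j):
    """Answer RMQ(i, j) as the LCA of nodes i and j (naive parent walk)."""
    root, parent, _, _ = cartesian_tree(a)
    ancestors = set()
    x = i
    while x != -1:
        ancestors.add(x)
        x = parent[x]
    x = j
    while x not in ancestors:
        x = parent[x]
    return x  # inorder number = index of the leftmost minimum in a[i..j]
```

For example, `rmq_via_lca([3, 1, 4, 1, 5], 2, 4)` returns `3`, the position of the leftmost minimum of the queried range.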
1.1 Our results and techniques
In this work we present RMQ data structures whose size can be sublinear in the size of the input string and that answer queries in $O(\log n)$ time. This is achieved by using compression techniques, and by developing data structures that can answer RMQ/LCA queries directly on the compressed objects. Since we aim for sublinear size data structures, we focus on non-systematic data structures. We consider two different approaches to achieve this goal. The first approach is to use string compression to compress $S$, and devise an RMQ data structure on the compressed representation. This approach has also been suggested in [Abeliuk et al., 2013, Section 7.1] in the context of compressed suffix arrays. See also [Davoodi et al., 2012, Theorem 2], [Fischer and Heun, 2011, Theorem 4.1], and Barbay et al. [2012] for steps in this direction. The second approach is to use tree compression to compress the Cartesian tree $C$ of $S$, and devise an LCA data structure on the compressed representation. To the best of our knowledge, this is the first time such an approach has been suggested. Note that the two approaches are not equivalent. For example, consider a sorted sequence of $n$ arbitrary distinct integers. As a string this sorted sequence is not compressible (any SLP for it has $\Omega(n)$ rules, since all its symbols are distinct), but its Cartesian tree is an (unlabeled) path, which is highly compressible. In a nutshell, we show that the tree compression approach can exponentially outperform the string compression approach. Furthermore, it is never worse than the string compression approach by more than an $O(\sigma)$ factor, where $\sigma$ is the size of the alphabet, and this factor is unavoidable. We next elaborate on these two approaches.
Using string compression
In Section 2.1, we show how to answer range minimum queries on a grammar compression of the input string $S$. A grammar compression of $S$ is a context-free grammar $G$ that generates only $S$. The grammar is represented as a straight line program (SLP), i.e., the right-hand side of each rule in $G$ either consists of the concatenation of two non-terminals or of a single terminal symbol. The size $g$ of the SLP is defined as the number of rules in $G$. Ideally, $g \ll n$. Computing the smallest possible SLP is NP-hard [Charikar et al., 2005], but there are many theoretically and practically efficient compression schemes for constructing $G$ [Charikar et al., 2005, Jez and Lohrey, 2014, Goto et al., 2013, Takabatake et al., 2017] that reasonably approximate the optimal SLP. In particular, Rytter [2003] showed how to construct an SLP of depth $O(\log n)$ (the depth of an SLP is the depth of its parse tree) whose size is larger than that of the optimal SLP by at most a multiplicative $O(\log(n/g^*))$ factor, where $g^*$ denotes the size of the optimal SLP.
In Abeliuk et al. [2013], it was shown how to support range minimum queries on $S$ with a data structure of size $O(g)$ in time proportional to the depth of the SLP. Bille et al. [2015b] designed a data structure of size $O(g)$ that supports random access to $S$ (i.e., retrieving the $i$'th symbol $S[i]$) in $O(\log n)$ time (i.e., regardless of the depth of the SLP). We show how to simply augment their data structure, within the same size bound, to answer range minimum queries in $O(\log n)$ time (i.e., how to avoid the logarithmic overhead incurred by using the solution of Abeliuk et al. [2013] on Rytter's SLP).
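For intuition, here is a minimal sketch (our own toy encoding, not the actual representation of Abeliuk et al. or of Bille et al.) of the simpler depth-proportional approach: store, for each rule, the length and the leftmost minimum of its expansion, for $O(g)$ space total:

```python
def preprocess(slp, order):
    """For each symbol, compute bottom-up the triple
    (length, min value, position of leftmost min) of its expansion."""
    info = {}
    for sym in order:  # topological order: children before parents
        rhs = slp[sym]
        if isinstance(rhs, int):          # terminal integer symbol
            info[sym] = (1, rhs, 0)
        else:
            l1, m1, p1 = info[rhs[0]]
            l2, m2, p2 = info[rhs[1]]
            if m1 <= m2:                  # leftmost minimum wins ties
                info[sym] = (l1 + l2, m1, p1)
            else:
                info[sym] = (l1 + l2, m2, l1 + p2)
    return info

def rmq(slp, info, sym, i, j):
    """Return (min value, leftmost position) over positions [i, j] of the
    expansion of sym; runs in time proportional to the SLP's depth."""
    length, m, p = info[sym]
    if i <= 0 and j >= length - 1:        # query covers the whole rule
        return m, p
    a, b = slp[sym]
    l1 = info[a][0]
    if j < l1:                            # entirely in the left child
        return rmq(slp, info, a, i, j)
    if i >= l1:                           # entirely in the right child
        v, p2 = rmq(slp, info, b, i - l1, j - l1)
        return v, l1 + p2
    v1, p1 = rmq(slp, info, a, i, l1 - 1) # spans both children
    v2, p2 = rmq(slp, info, b, 0, j - l1)
    return (v1, p1) if v1 <= v2 else (v2, l1 + p2)

# toy SLP deriving S = 3 1 2 3 1 2
slp = {'a': 3, 'b': 1, 'c': 2, 'X': ('a', 'b'), 'Y': ('X', 'c'), 'Z': ('Y', 'Y')}
info = preprocess(slp, ['a', 'b', 'c', 'X', 'Y', 'Z'])
```

After the first split, each side is a prefix/suffix query that follows a single root-to-leaf path, hence the depth-proportional running time that Theorem 1.1 improves to $O(\log n)$.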
Theorem 1.1. Given a string $S$ of length $n$ and an SLP-grammar compression $G$ of $S$, there is a data structure of size $O(|G|)$ that answers range minimum queries on $S$ in $O(\log n)$ time.
Using tree compression
In Section 2.2, we give a data structure for answering LCA queries on a compressed representation of the Cartesian tree $C$ of $S$. By the discussion above, this is equivalent to answering range minimum queries on $S$. We use DAG compression of the top-tree of the Cartesian tree of $S$. We now explain these concepts.
A top-tree [Alstrup et al., 2005] of a tree $T$ is a hierarchical decomposition of the edges of $T$ into clusters. Each cluster is a connected subgraph of $T$ with the property that any two crossing clusters (i.e., clusters whose intersection is nonempty and neither cluster contains the other) share at most two vertices: the root of the cluster (called the top boundary node) and a leaf of the cluster (called a bottom boundary node). Such a decomposition can be described by a rooted ordered binary tree $\mathcal{T}$, called a top-tree, whose leaves correspond to clusters consisting of individual edges of $T$, and whose root corresponds to the entire tree $T$. The cluster corresponding to a non-leaf node of $\mathcal{T}$ is obtained from the clusters of its two children by either identifying their top boundary nodes (horizontal merge) or by identifying the bottom boundary node of the left child with the top boundary node of the right child (vertical merge). See Figure 1.
A DAG compression [Downey et al., 1980] of a tree $\mathcal{T}$ is a representation of $\mathcal{T}$ by a DAG whose nodes correspond to nodes of $\mathcal{T}$. All nodes of $\mathcal{T}$ with the same subtree are represented by the same node of the DAG. Thus, the DAG has two sinks, corresponding to the two types of leaf nodes of $\mathcal{T}$ (a single edge cluster, either left or right), and a single source, corresponding to the root of $\mathcal{T}$. If $w$ is the parent of $u$ and $v$ in $\mathcal{T}$, then the node in the DAG representing the subtree of $\mathcal{T}$ rooted at $w$ has edges leading to the two nodes of the DAG representing the subtree rooted at $u$ and the subtree rooted at $v$. Thus, repeated rooted subtrees in $\mathcal{T}$ are represented only once in the DAG. See Figure 1.
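DAG compression can be sketched by hashing subtrees bottom-up (a common way to implement the idea of Downey et al.; the `Node` type and names are ours):

```python
class Node:
    def __init__(self, label, left=None, right=None):
        self.label, self.left, self.right = label, left, right

def dag_compress(root):
    """Bottom-up hashing: all identical subtrees map to one DAG node id,
    so the DAG has one node per distinct rooted subtree."""
    ids = {}  # signature (label, left id, right id) -> dag node id
    def visit(node):
        if node is None:
            return None
        key = (node.label, visit(node.left), visit(node.right))
        if key not in ids:
            ids[key] = len(ids)
        return ids[key]
    return visit(root), len(ids)

# a 7-node tree whose two children are identical subtrees
t = Node('v', Node('h', Node('l'), Node('l')),
               Node('h', Node('l'), Node('l')))
root_id, dag_size = dag_compress(t)  # compresses to 3 DAG nodes
```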
A top-tree compression [Bille et al., 2015a] of a tree $T$ is a DAG compression of $T$'s top-tree $\mathcal{T}$. Bille et al. [2015a] showed how to construct a data structure whose size is linear in the size of the DAG of $\mathcal{T}$ and that supports navigational queries on $T$ in time linear in the depth of $\mathcal{T}$. In particular, given the preorder numbers of two vertices $u$ and $v$ in $T$, their data structure can return the preorder number of $\mathrm{LCA}(u,v)$ in $T$. We show that their data structure can be easily adjusted to work with inorder numbers instead of preorder numbers, so that, given the inorder numbers of two vertices in $T$, one can return the inorder number of their LCA in $T$. This is precisely $\mathrm{RMQ}(i,j)$ when $T$ is taken to be the Cartesian tree of $S$.
Theorem 1.2. Given a string $S$ of length $n$ and a top-tree compression $\mathcal{D}$ of the Cartesian tree $C$ of $S$, we can construct a data structure of size $O(|\mathcal{D}|)$ that answers range minimum queries on $S$ in $O(\log n)$ time.
We already mentioned that, on some RMQ instances, top-tree compression can be much better than any string compression technique. As an example, consider the string $S = 1\ 2\ 3 \cdots n$. Its Cartesian tree is a single (rightmost, and unlabeled) path, which compresses using top-tree compression into size $O(\log n)$. On the other hand, since every one of the $n$ distinct symbols of $S$ requires its own rule, $S$ is uncompressible with an SLP. By Theorem 1.2, this shows that the tree compression approach to the RMQ problem can be exponentially better than the string compression approach. In fact, for any string over an alphabet of size $\sigma$, any SLP must have $g \ge \sigma$, while for top-trees $t = O(n/\log_\sigma^{0.19} n)$ [Bille et al., 2015a]. In Section 3.1 we show that, for small alphabets, the top-tree cannot be much larger nor much deeper than $G$ for any SLP $G$ of $S$.
Theorem 1.3. Given a string $S$ of length $n$ over an alphabet of size $\sigma$, for any SLP-grammar compression $G$ of $S$ there is a top-tree compression of the Cartesian tree $C$ of $S$ with size $O(\sigma \cdot |G|)$ and depth $O(\log\sigma \cdot \mathrm{depth}(G))$.
Theorem 1.4. Given a string $S$ of length $n$ over an alphabet of size $\sigma$, let $g^*$ denote the size of the smallest possible SLP-grammar compression of $S$. There is a top-tree compression of the Cartesian tree of $S$ with size at most $O(\sigma \cdot g^* \log(n/g^*))$ and depth $O(\log\sigma \cdot \log n)$, and there is a data structure of size $O(\sigma \cdot g^* \log(n/g^*))$ that answers range minimum queries on $S$ in $O(\log\sigma \cdot \log n)$ time.
Finally, observe that the size bound of Theorem 1.4 can be larger than $g^*$ by an $O(\sigma \log(n/g^*))$ multiplicative factor, which can be large for large alphabets. It is tempting to try and improve this. However, in Section 3.2 we prove a tight lower bound, showing that the $\Omega(\sigma)$ factor is unavoidable.
Theorem 1.5. For every sufficiently large $n$ and $\sigma$, there exists a string $S$ of $n$ integers in $[1,\sigma]$ that can be described with an SLP of size $g$, such that any top-tree compression of the Cartesian tree of $S$ is of size $\Omega(\sigma \cdot g)$.
2 RMQ on Compressed Representations
2.1 Compressing the string
Given an SLP compression $G$ of $S$, Bille et al. [2015b] presented a data structure of size $O(g)$ that can report any $S[i]$ in $O(\log n)$ time. The proof of Theorem 1.1 is a rather straightforward extension of this data structure to support range minimum queries.
The key technique used in Bille et al. [2015b] is an efficient representation of the heavy path decomposition of the SLP's parse tree. For each node $v$ in the parse tree, we select the child of $v$ that derives the longer string to be a heavy node. The other child is light. Heavy edges are edges going into a heavy node and light edges are edges going into a light node. The heavy edges decompose the parse tree into heavy paths. The number of light edges on any path from a node $v$ to a leaf is $O(\log N(v))$, where $N(v)$ denotes the length of the string derived from $v$. A traversal of the parse tree from its root to the $i$'th leaf enters and exits at most $O(\log n)$ heavy paths. Bille et al. show how to simulate this traversal in $O(\log n)$ time on a representation of the heavy path decomposition that uses only $O(g)$ space (note that we cannot afford to store the entire parse tree, as its size is $\Theta(n)$, which can be exponentially larger than $g$). We do not go into the internals of their representation, but it is important to note that, for each heavy path $P$ encountered during the traversal, their structure computes the total size (number of leaves) of all subtrees hanging with light edges from the left (respectively, right) of $P$ between the entry point and exit point in $P$. This is achieved with a binary search tree (called an interval biased search tree) that ensures that collecting these values (as well as finding the entry and exit points) on all encountered heavy paths telescopes to a total of $O(\log n)$ time (rather than $O(\log^2 n)$).
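The heavy/light classification itself is computable directly on the SLP in $O(g)$ time, without ever materializing the parse tree, since all parse-tree nodes labeled by the same nonterminal derive the same string. A minimal sketch (our own dict-based encoding, not Bille et al.'s representation):

```python
def heavy_children(slp, order):
    """For each binary rule X -> (Y, Z), mark the heavy child: the one
    deriving the longer string (ties broken to the left). Runs in O(g)
    on the SLP itself; the Theta(n)-size parse tree is never built."""
    length, heavy = {}, {}
    for sym in order:                 # topological order: children first
        rhs = slp[sym]
        if not isinstance(rhs, tuple):
            length[sym] = 1           # terminal rule derives one symbol
        else:
            y, z = rhs
            length[sym] = length[y] + length[z]
            heavy[sym] = y if length[y] >= length[z] else z
    return length, heavy

# toy SLP (our own encoding): Z derives a string of length 5
slp = {'a': 'a', 'b': 'b', 'X': ('a', 'b'), 'Y': ('X', 'b'), 'Z': ('Y', 'X')}
length, heavy = heavy_children(slp, ['a', 'b', 'X', 'Y', 'Z'])
```

Following a light edge at least halves the derived length, which is exactly why any root-to-leaf traversal crosses only $O(\log n)$ heavy paths.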
In order to extend their structure to support range minimum queries we need only the following two changes: (1) in the interval biased search tree, apart from storing for each node the number of leaves in its subtree, we also store the location of the minimum value leaf. This means that apart from accumulating subtree sizes we can also compare their minimums. (2) for each heavy path $P$ in their representation we add a standard linear-space constant-query-time RMQ data structure [Bender and Farach-Colton, 2000] over the left (respectively, right) hanging subtree minimums. This RMQ structure will be queried only on the unique heavy path containing the lowest common ancestor of the $i$'th and $j$'th leaves in the parse tree.
2.2 Compressing the Cartesian tree
We next prove Theorem 1.2, i.e., how to support range minimum queries on $S$ using a compressed representation of the Cartesian tree $C$ of $S$ [Vuillemin, 1980]. Recall that the Cartesian tree of $S$ is defined as follows: if the smallest character of $S$ is $S[m]$ (in case of a tie we choose the leftmost position), then the root of $C$ corresponds to $S[m]$, its left child is the Cartesian tree of $S[1..m-1]$, and its right child is the Cartesian tree of $S[m+1..n]$. By definition, the $i$'th character of $S$ corresponds to the node in $C$ with inorder number $i$ (we will refer to this node as node $i$). Observe that for any nodes $i < j$ in $C$, the lowest common ancestor of these nodes in $C$ corresponds to $\mathrm{RMQ}(i,j)$ in $S$. This implies that, without storing $S$ explicitly, one can answer range minimum queries on $S$ by answering LCA queries on $C$. In this section, we show how to support LCA queries on $C$ using a top-tree compression [Bille et al., 2015a] of $C$. The query time is linear in the depth of the top-tree, which can be made $O(\log n)$ using the (greedy) construction of Bille et al. [2015a]. We first briefly restate the construction of Bille et al., and then extend it to support LCA queries.
The top-tree of a tree $T$ (in our case $T$ will be the Cartesian tree $C$) is a hierarchical decomposition of $T$ into clusters. Let $v$ be a node in $T$ with children $v_1$ and $v_2$. (Bille et al. considered trees of arbitrary degree, but since our tree is a Cartesian tree we can focus on binary trees.) Define $T(v)$ to be the subtree of $T$ rooted at $v$. Define $F(v)$ to be the forest $T(v)$ without the node $v$. A cluster with top boundary node $v$ can be either (1) $T(v)$, (2) $\{v\} \cup T(v_1)$, or (3) $\{v\} \cup T(v_2)$. For any node $u \neq v$ in a cluster with top boundary node $v$, deleting from the cluster all descendants of $u$ (not including $u$ itself) results in a cluster with top boundary node $v$ and bottom boundary node $u$. The top-tree is a binary tree defined as follows (see Figure 1):
The root of the top-tree is the cluster $T$ itself.
The leaves of the top-tree are (atomic) clusters corresponding to the edges of $T$. An edge $(v,u)$ of $T$ is a cluster where $v$ (the parent) is the top boundary node. If $u$ is a leaf then there is no bottom boundary node; otherwise, $u$ is a bottom boundary node. If $u$ is the right child of $v$ then we label the cluster as a right edge cluster, and otherwise as a left edge cluster.
Each internal node of the top-tree is a merged cluster of its two children. Two edge-disjoint clusters $A$ and $B$ whose nodes overlap on a single boundary node can be merged if their union is also a cluster (i.e., contains at most two boundary nodes). If $A$ and $B$ share their top boundary node then the merge is called horizontal. If the top boundary node of $B$ is the bottom boundary node of $A$ then the merge is called vertical, and in the top-tree $A$ is the left child and $B$ is the right child.
Bille et al. [2015a] proposed a greedy algorithm for constructing the top-tree: start with $n-1$ clusters, one for each edge of $T$, and at each step merge all possible clusters. More precisely, at each step, first do all possible horizontal merges and then do all possible vertical merges. After constructing the top-tree, the actual compression is obtained by representing the top-tree as a directed acyclic graph (DAG) using the algorithm of Downey et al. [1980]. Namely, all nodes in the top-tree that have a child with an identical subtree will point to the same copy of that subtree (see Figure 1). Bille et al. [2015a] showed that using the above greedy algorithm, one can construct a DAG whose size can be as small as $O(\log n)$ (when the input tree is highly repetitive) and in the worst case is at most $O(n/\log_\sigma^{0.19} n)$. Dudek and Gawrychowski [2018] have recently improved the worst-case bound to $O(n/\log_\sigma n)$ by merging in the $i$'th step only clusters whose size is at most $\alpha^i$ for some constant $\alpha$. Using either one of these merging algorithms to obtain the top-tree and its DAG representation, a data structure of size linear in the DAG can then be constructed to support various queries on $T$. In particular, given nodes $i$ and $j$ in $T$ (specified by their position in a preorder traversal of $T$), Bille et al. showed how to find the (preorder number of the) node $\mathrm{LCA}(i,j)$ in $O(\log n)$ time. Therefore, the only change required in order to adapt their data structure to our needs is the representation of nodes by their inorder rather than preorder numbers.
The local preorder number $pre(u,X)$ of a node $u$ in $T$ with respect to a cluster $X$ containing $u$ is the preorder number of $u$ in a preorder traversal of the cluster $X$. To find the preorder number of $\mathrm{LCA}(i,j)$ in $O(\log n)$ time, Bille et al. showed that it suffices if, for any node $u$ and any cluster $C$ obtained by merging clusters $A$ and $B$, we can compute in constant time $pre(u,C)$ from $pre(u,A)$ or $pre(u,B)$ (the local preorder numbers of $u$ in the clusters $A$ and $B$) and vice versa. In Lemma 6 of Bille et al. [2015a] they show that indeed they can compute this in constant time. The following lemma is a modification of that lemma to work when the local numbers involved are inorder numbers $in(u,A)$ and $in(u,B)$.
Lemma 2.1 (Modified Lemma 6 of Bille et al. [2015a])
Let $q$ be an internal node of the top-tree corresponding to the cluster $C$ obtained by merging clusters $A$ and $B$. For any node $u$ in $C$, given its local inorder number $in(u,C)$ we can tell in constant time if $u$ is in $A$ (and obtain $in(u,A)$), in $B$ (and obtain $in(u,B)$), or in both. Similarly, if $u$ is in $A$ or in $B$, we can obtain $in(u,C)$ in constant time from $in(u,A)$ or $in(u,B)$.
Proof. We show how to obtain $in(u,A)$ or $in(u,B)$ (the local inorder numbers of $u$ in $A$ and in $B$) when $in(u,C)$ is given. Obtaining $in(u,C)$ from $in(u,A)$ or $in(u,B)$ is done similarly. For each node of the top-tree, corresponding to a cluster $C$ obtained by merging clusters $A$ and $B$, we store the following information:
$f_A$ ($l_A$): the first (last) node visited in an inorder traversal of $C$ that is also a node in $A$.
$f_B$ ($l_B$): the first (last) node visited in an inorder traversal of $C$ that is also a node in $B$.
$n_A$ and $n_B$: the number of nodes in $A$ and in $B$.
$in(b,A)$: the local inorder number in $A$ of $b$, where $b$ is the common boundary node of $A$ and $B$.
Consider the case where $C$ is obtained by merging $A$ and $B$ vertically (when the bottom boundary node $b$ of $A$ is the top boundary node of $B$), and where $B$ includes vertices that are in the left subtree of this boundary node; the other case is handled similarly:
if $in(u,C) < in(b,A)$ then $u$ is a node in $A$ and $in(u,A) = in(u,C)$.
if $in(b,A) \le in(u,C) \le in(b,A) + n_B - 1$ then $u$ is a node in $B$ and $in(u,B) = in(u,C) - in(b,A) + 1$. For the special case when $in(u,C) = in(b,A) + n_B - 1$, then $u$ is also the bottom boundary node in $A$ and $in(u,A) = in(b,A)$.
if $in(u,C) > in(b,A) + n_B - 1$ then $u$ is a node in $A$ visited after visiting all the nodes in $B$, and then $in(u,A) = in(u,C) - n_B + 1$.
When $C$ is obtained by merging $A$ and $B$ horizontally (when $A$ and $B$ share their top boundary node and $A$ is to the left of $B$):
if $in(u,C) \le n_A$ then $u$ is a node in $A$ and $in(u,A) = in(u,C)$.
if $in(u,C) \ge n_A$ then $u$ is a node in $B$ and $in(u,B) = in(u,C) - n_A + 1$. For the special case when $in(u,C) = n_A$, then $u$ is also the top boundary node in $A$ and $in(u,A) = n_A$.∎
3 Compressing the String vs. the Cartesian Tree
In this section we compare the size of an SLP compression $G$ of $S$ with the size of a top-tree compression of the Cartesian tree $C$ of $S$.
3.1 An upper bound
We now show that given any SLP $G$ of $S$ of height $d$, we can construct a top-tree compression $\mathcal{T}$ based on $G$ (i.e., non-greedily) such that $|\mathcal{T}| = O(\sigma \cdot |G|)$ and the height of $\mathcal{T}$ is $O(d \cdot \log\sigma)$. Using $\mathcal{T}$, we can then answer range minimum queries on $S$ in time linear in the height of $\mathcal{T}$, as done in Section 2.2. Furthermore, we can construct $\mathcal{T}$ using Rytter's SLP [Rytter, 2003] as $G$. Then, the height of $\mathcal{T}$ is $O(\log n \cdot \log\sigma)$ and the size of $G$ is larger than that of the optimal SLP by at most a multiplicative $O(\log(n/g^*))$ factor. Combined with Rytter's SLP, and since every unlabeled tree has a top-tree compression of size $O(n/\log_\sigma^{0.19} n)$ and height $O(\log n)$ [Bille et al., 2015a], we obtain Theorem 1.3.
Consider a rule $X \to AB$ in the SLP. We will construct a top-tree (a hierarchy of clusters) of $C_X$ (i.e., of the Cartesian tree of the string derived by the SLP variable $X$) assuming we have the top-trees of $C_A$ and of $C_B$. We show that the top-tree of $C_X$ contains only $O(\sigma)$ new clusters that are not clusters in the top-trees of $C_A$ and of $C_B$, and that its height is only $O(\log\sigma)$ larger than the height of the top-tree of $C_A$ or the top-tree of $C_B$. To achieve this, for any variable of the SLP, we will make sure that certain clusters (associated with its rightmost and leftmost paths) must be present in its top-tree. See Figure 2.
We first describe how the Cartesian tree $C_X$ of the string derived by a variable $X \to AB$ can be described in terms of the Cartesian trees $C_A$ and $C_B$. We label each node in a Cartesian tree with its corresponding character in the string. These labels are only used for the sake of this description; the actual Cartesian tree is an unlabeled tree. By definition of the Cartesian tree, the labels are monotonically non-decreasing as we traverse any root-to-leaf path. Let $L_X$ (respectively $R_X$) denote the path in $C_X$ starting from the root and following left (respectively right) edges. Since we break ties by taking the leftmost occurrence of the same character, the path $L_X$ is strictly increasing (the path $R_X$ is just non-decreasing).
Let $\alpha$ be the label of the root of $C_B$. To simplify the presentation we assume that the label of the root of $C_A$ is smaller than or equal to $\alpha$ (the other case is handled similarly). Split $C_A$ by deleting the edge connecting the last occurrence of $\alpha$ on $R_A$ with its right child (again, for simplicity of presentation we assume without loss of generality that this node exists). The resulting two subtrees are the Cartesian trees $C_{A_1}$ and $C_{A_2}$ of a prefix $A_1$ and a suffix $A_2$ whose concatenation is $A$. Split $C_B$ by deleting the edge connecting the root to its left child. The resulting two subtrees are the Cartesian trees $C_{B_1}$ and $C_{B_2}$ of a prefix $B_1$ and a suffix $B_2$ of $B$. The Cartesian tree $C_X$ of the concatenation $AB$ is obtained as follows. Compute recursively the Cartesian tree $C'$ of the concatenation of $A_2$ and $B_1$, and attach $C'$ as the left child of the root of $C_{B_2}$. Then attach $C_{B_2}$ as the right child of the rightmost leaf in $C_{A_1}$. See Figure 2.
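The fact that concatenation only restructures the rightmost path $R_A$ (everything else is inherited unchanged) can be checked empirically with the stack-based construction, where the stack is exactly the rightmost path (toy strings and code of our own):

```python
def ctree_parents(a):
    """Stack-based Cartesian tree construction; returns the parent array
    and the final stack, which is exactly the rightmost path."""
    parent = [-1] * len(a)
    stack = []
    for i in range(len(a)):
        last = -1
        while stack and a[stack[-1]] > a[i]:
            last = stack.pop()
        if last != -1:
            parent[last] = i   # popped part of the path becomes i's left subtree
        if stack:
            parent[i] = stack[-1]
        stack.append(i)
    return parent, stack

# concatenating a second string only re-attaches nodes on the rightmost
# path of the first tree (x and b are arbitrary toy strings)
x, b = [2, 5, 3, 4], [1, 6]
px, rpath = ctree_parents(x)
pxy, _ = ctree_parents(x + b)
changed = [i for i in range(len(x)) if px[i] != pxy[i]]
assert set(changed) <= set(rpath)  # only nodes on R were re-attached
```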
We move on to describing the clusters of the top-tree. For a node with label $c$ appearing on $L_X$, we define $L_X^c$ to be the subtree rooted at the node's right child. We do this for all nodes except for the first node of $L_X$ (i.e., the root of $C_X$). Next consider the path $R_X$. For every label $c$ there can be multiple vertices with label $c$, and they are consecutive on $R_X$. We define $R_X^c$ to be the union of all vertices of $R_X$ that have label $c$ together with the subtrees rooted at their left children. Again, we treat the first node of $R_X$ (i.e., the root of $C_X$) differently: if its label is $c$ then $R_X^c$ includes neither this vertex (the root) nor its left subtree. See Figure 2 (left).
We define the top-tree recursively by describing how to obtain the clusters for the top-tree of the Cartesian tree $C_X$ from the top-trees of $C_A$ and $C_B$. For each variable (say $A$) of the SLP of $S$, we require that in the top-tree of $C_A$ there is a cluster for every $L_A^c$ and every $R_A^c$. We will show how to construct all the $L_X^c$ and $R_X^c$ clusters of $C_X$ by merging clusters of $C_A$ and $C_B$ while introducing only $O(\sigma)$ new clusters, and with an $O(\log\sigma)$ increase in height. First observe that for every $c$ we have $L_X^c = L_A^c$, so we already have these clusters. Next consider the clusters $R_X^c$. Let $\alpha$ denote the label of the root of $C_B$. It is easy to see that $R_X^c = R_A^c$ for every $c < \alpha$ and that $R_X^c = R_B^c$ for every $c > \alpha$. Therefore, the only new cluster we need to create is $R_X^\alpha$.
The cluster $R_X^\alpha$ is composed of the following components. First, it contains the cluster $R_A^\alpha$. Then, the root of $C_B$ (denoted $r$, and whose label is $\alpha$) is connected as the right child of the bottom boundary node of $R_A^\alpha$. The right child of $r$ in $C_X$ is the top boundary node of the clusters hanging below $R_X^\alpha$, and all of $C_{B_2}$ is contained in clusters already present in the top-tree of $C_B$. The left child of $r$ in $C_X$ is the top boundary node of a single new cluster consisting of $O(\sigma)$ existing clusters.
This new cluster $N$ (whose top boundary node is the left child of $r$) consists of clusters $L_B^c$ and clusters $R_A^c$ for labels $c > \alpha$. More precisely, let $c'$ denote the smallest label larger than $\alpha$ such that $c'$ appears on $R_A$. Starting from top to bottom, $N$ first contains a leftmost path that is a prefix of $L_B$; more precisely, it is the prefix of $L_B$ containing all nodes with labels $c$ for $\alpha < c < c'$. For each such node, its right subtree is the cluster $L_B^c$. After this leftmost path, $N$ then continues with a rightmost path that is a subpath of $R_A$ consisting of all nodes in $R_A$ with labels $c$ for $c' \le c < c''$. Here $c''$ is the smallest label greater than or equal to $c'$ such that $c''$ appears on $L_B$. In this way, $N$ keeps alternating between subpaths of $L_B$ and of $R_A$ (along with the subtrees hanging from these subpaths). Overall, $N$ decomposes into $O(\sigma)$ clusters consisting of single edges, clusters $L_B^c$, and clusters $R_A^c$. We merge these clusters into the single cluster $N$ by first doing a horizontal merge of every $L_B^c$ or $R_A^c$ with a single edge cluster, and then greedily doing vertical merges for all clusters of the path. This adds $O(\sigma)$ new clusters and adds $O(\log\sigma)$ to the height of the cluster hierarchy. Finally, we obtain $R_X^\alpha$ by merging $R_A^\alpha$, the edge to $r$, and $N$.
To conclude, once we have all clusters of the SLP's start variable, we merge them into a single cluster (i.e., obtain the top-tree of the entire Cartesian tree of $S$) by greedily merging all its clusters (introducing $O(\sigma)$ new clusters and increasing the height by $O(\log\sigma)$) similarly to the above. This concludes the proof of Theorem 1.3.
3.2 A lower bound
We now prove Theorem 1.5. That is, for every sufficiently large $n$ and $\sigma$, we will construct a string $S$ of $n$ integers in $[1,\sigma]$ that can be described with an SLP of size $g$, such that any top-tree compression of the Cartesian tree of $S$ is of size $\Omega(\sigma \cdot g)$.
Let us first describe the high-level intuition. The shuffle of two strings $x = x_1 x_2 \cdots x_k$ and $y = y_1 y_2 \cdots y_k$ is defined as $x_1 y_1 x_2 y_2 \cdots x_k y_k$. It is not very difficult to construct a small SLP describing a collection of many strings $x^{(i)}$ and $y^{(j)}$ of equal length, and choose pairs $(i,j)$ such that every SLP describing all shuffles of $x^{(i)}$ and $y^{(j)}$ contains many nonterminals. However, our goal is to show a lower bound on the size of a top-tree compression of the Cartesian tree, not on the size of an SLP. This requires designing the strings $x^{(i)}$ and $y^{(j)}$ so that a top-tree compression of the Cartesian tree of $S$ roughly corresponds to an SLP describing the shuffles of $x^{(i)}$ and $y^{(j)}$.
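For concreteness, the shuffle operation reads as follows (a hypothetical helper for equal-length strings):

```python
def shuffle(x, y):
    """Interleave two equal-length strings as x1 y1 x2 y2 ... xk yk."""
    assert len(x) == len(y)
    return ''.join(a + b for a, b in zip(x, y))
```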
Let $\sigma$ be the size of the alphabet and let $\ell$ be a parameter, both to be fixed later. We start with constructing distinct auxiliary strings $B_1, B_2, \ldots$ over $[1,\sigma]$, each of the same length $\ell$, chosen so that the Cartesian trees corresponding to the $B_i$'s are all distinct. There is an SLP, of size linear in the total length of the $B_i$'s, that contains a nonterminal deriving every $B_i$. Next, let $\hat B_i$ denote the string $B_i$ followed by a separator character. By construction, the Cartesian trees corresponding to the $\hat B_i$'s are all distinct, and all the $\hat B_i$'s are of the same length.
The second and the third step are symmetric. We construct strings of the form
$X_i = \hat B_{i_1}\, \hat B_{i_2} \cdots \hat B_{i_k}$
for every $i$. There are several such strings $X_i$, and there is an SLP of size proportional to their number and length that contains a nonterminal deriving every $X_i$ (on top of the nonterminals deriving the $\hat B_i$'s).
Similarly, we construct strings of the form
$Y_j = \hat B_{j_1}\, \hat B_{j_2} \cdots \hat B_{j_k}$.
Finally, for every chosen pair $(i,j)$ we concatenate the corresponding strings $X_i$ and $Y_j$, in the lexicographical order of the pairs $(i,j)$, to obtain $S$. The total size of an SLP that generates $S$ is $g$, the total number of rules introduced above. It remains to analyze the size of a top-tree compression of the Cartesian tree of $S$.
We first need to understand the structure of the Cartesian tree of $S$. Because all the strings $X_iY_j$ are separated by separator characters, the Cartesian tree of $S$ consists of a right path, with the Cartesian tree of the $k$-th string $X_iY_j$ attached as the left subtree of the $k$-th node of the path. The Cartesian tree of a string $X_iY_j$ consists of a path starting at the root and consisting of nodes $u_1, v_1, u_2, v_2, \ldots$ such that $v_k$ is the left child of $u_k$ and $u_{k+1}$ is the right child of $v_k$. For every $k$, the right subtree of $u_k$ is the Cartesian tree of one of the auxiliary strings composing $Y_j$, and the left subtree of $v_k$ is the Cartesian tree of one of the auxiliary strings composing $X_i$. See Figure 3.
We define a zigzag to be an edge $(u,v)$ such that $v$ is the left child of $u$. Furthermore, for some $i$ and $j$, the right subtree of $u$ should be the Cartesian tree of an auxiliary string of $Y_j$, while the left subtree of $v$ should be the Cartesian tree of an auxiliary string of $X_i$.
Proposition 3.1. The Cartesian tree of $S$ contains $\Omega(\sigma \cdot g)$ distinct zigzags. Furthermore, any zigzag occurs in exactly one of the trees corresponding to the strings $X_iY_j$.
Lemma 3.2. If $\mathcal{T}$ is a top-tree compression of a tree with $z$ distinct zigzags then $|\mathcal{T}| = \Omega(z)$.
Proof. We associate each distinct zigzag with a smallest cluster of $\mathcal{T}$ that contains it. We claim that each cluster $C$ obtained by merging clusters $A$ and $B$ is associated with $O(1)$ zigzags. Consider a zigzag $(u,v)$ associated with $C$. Hence, $(u,v)$ is contained neither in $A$ nor in $B$. We consider two cases.
$A$ and $B$ are merged horizontally. Then $A$ and $B$ share the top boundary node, and in fact this shared node must be $u$. It follows that $(u,v)$ is the only zigzag in $C$ that is not in $A$ nor in $B$.
$A$ and $B$ are merged vertically. Then the top boundary node $b$ of $B$ is the bottom boundary node of $A$. Then either $b = u$, $b = v$, or $b$ is a node of the Cartesian tree of some auxiliary string attached as the right subtree of $u$ or the left subtree of $v$. Each of the first two possibilities gives us one zigzag associated with $C$ that is not in $A$ nor in $B$. In the remaining two possibilities, because the size of the Cartesian tree of every auxiliary string is the same, we can determine $u$ or $v$, respectively, by navigating up from $b$ as long as the size of the current subtree is too small, and proceed as in the previous two cases.∎
Combining Proposition 3.1 and Lemma 3.2, we conclude that any top-tree compression of the Cartesian tree of $S$ is of size $\Omega(\sigma \cdot g)$. Recall that $S$ is generated by an SLP of size $O(g)$, with the parameters chosen as above. Given a sufficiently large $n$ and $\sigma$, we choose the parameter $\ell$ and the number of auxiliary strings so that $S$ is of length $n$; this choice is possible because of the assumption that $n$ is sufficiently large. We thus construct a string generated by an SLP of size $O(g)$ such that any top-tree compression of its Cartesian tree has size $\Omega(\sigma \cdot g)$. This concludes the proof of Theorem 1.5.
- Abeliuk et al.  A. Abeliuk, R. Cánovas, and G. Navarro. Practical compressed suffix trees. Algorithms, 6(2):319–351, 2013.
- Alstrup et al.  S. Alstrup, J. Holm, K. de Lichtenberg, and M. Thorup. Maintaining information in fully dynamic trees with top trees. ACM Transactions on Algorithms, 1(2):243–264, 2005.
- Barbay et al.  J. Barbay, J. Fischer, and G. Navarro. LRM-trees: Compressed indices, adaptive sorting, and compressed permutations. Theor. Comput. Sci., 459:26–41, 2012.
- Bender and Farach-Colton  M. A. Bender and M. Farach-Colton. The LCA problem revisited. In 4th Latin American Symposium on Theoretical Informatics (LATIN), pages 88–94, 2000.
- Bille et al. [2015a] P. Bille, I. L. Gørtz, G. M. Landau, and O. Weimann. Tree compression with top trees. Inf. Comput., 243:166–177, 2015a.
- Bille et al. [2015b] P. Bille, G. M. Landau, R. Raman, K. Sadakane, S. R. Satti, and O. Weimann. Random access to grammar-compressed strings and trees. SIAM J. Comput., 44(3):513–539, 2015b.
- Charikar et al.  M. Charikar, E. Lehman, D. Liu, R. Panigrahy, M. Prabhakaran, A. Sahai, and A. Shelat. The smallest grammar problem. IEEE Trans. Information Theory, 51(7):2554–2576, 2005.
- Davoodi et al.  P. Davoodi, R. Raman, and S. R. Satti. Succinct representations of binary trees for range minimum queries. In 18th Annual International Computing and Combinatorics Conference (COCOON), pages 396–407, 2012.
- Downey et al.  P. J. Downey, R. Sethi, and R. E. Tarjan. Variations on the common subexpression problem. J. ACM, 27(4):758–771, 1980.
- Dudek and Gawrychowski  B. Dudek and P. Gawrychowski. Slowing down top trees for better worst-case compression. In 29th Annual Symposium on Combinatorial Pattern Matching (CPM), pages 16:1–16:8, 2018.
- Fischer and Heun  J. Fischer and V. Heun. Space-efficient preprocessing schemes for range minimum queries on static arrays. SIAM Journal on Computing, 40(2):465–492, 2011.
- Goto et al.  K. Goto, H. Bannai, S. Inenaga, and M. Takeda. Fast q-gram mining on SLP compressed strings. J. Discrete Algorithms, 18:89–99, 2013.
- Jez and Lohrey  A. Jez and M. Lohrey. Approximation of smallest linear tree grammar. In 31st International Symposium on Theoretical Aspects of Computer Science (STACS), pages 445–457, 2014.
- Rytter  W. Rytter. Application of Lempel-Ziv factorization to the approximation of grammar-based compression. Theor. Comput. Sci., 302(1-3):211–222, 2003.
- Takabatake et al.  Y. Takabatake, T. I, and H. Sakamoto. A space-optimal grammar compression. In 25th Annual European Symposium on Algorithms (ESA), pages 67:1–67:15, 2017.
- Vuillemin  J. Vuillemin. A unifying look at data structures. Commun. ACM, 23(4):229–239, 1980.