1 Introduction
This note is a public reply to a paper recently accepted to SWAT 2020 [1] that claims to have fixed an error in our papers [3, 2] by proposing a solution with worse performance. Since our work has apparently aroused interest, to make it more readable we detail here the lattice fat-vertex case, which in the original paper was left to the reader since it is, mutatis mutandis, the dual of the case of forests of thin vertices. In the following sections, we describe in detail Sections 4.2 (Cluster collection) and 5 (Data Structure and Space Complexity) of [3] to show the implementation of the data structure.
2 Cluster Collection
We now revise some parts of the paper to clarify the implementation that gave rise to the misunderstanding. We deal only with the case and the theorems that concern fat vertices; refer to [3, 2] for the decomposition induced by lattice good vertices and thin vertices and the corresponding proofs of correctness.
Dummy nodes arise in our work in the case of fat nodes. Recall Definition 4.1 in [3], in which a classification of vertices is given.
Definition 1.
A vertex is:

good if

fat if it is not good and one of the following two conditions holds:

; or

;


thin if it is neither good nor fat and
Loosely speaking, a fat vertex is a vertex all of whose children induce clusters that are too small to be good clusters (clusters of the right size, that is, of size ). We then group the clusters induced by the children, in an order that we describe below, until the cardinality of each group is within and . To manage these groups in a way similar to the other blocks derived from good and thin vertices, we add a fictitious vertex (the dummy node) on top, as shown in Figure 1. In the following we describe the construction of dummy clusters.
As said above, dummy nodes arise in the case of fat nodes. By definition, a fat node is the top of a large set of small clusters: if an external node (external to the cluster induced by the fat node and, hence, external to all the small clusters induced by its children) is connected to two or more nodes of the cluster then, by the lattice property, these nodes cannot belong to an antichain. Let be a fat vertex (refer to Figure 2): it is the parent (w.l.o.g. we consider an upward orientation of the edges, as in the figure) of a set of nodes that induce small clusters. We order these vertices in non-increasing order of the sizes of the clusters they induce, that is, , such that .
Before explaining how dummy clusters are built, observe that, differently from other clusters (good or thin), the cluster collection induced by dummy nodes may have overlapping internal trees. More precisely, with reference to Figure 3, it may happen that an internal tree rooted at is also connected to ; then a node belonging to the external tree rooted at is connected to through . This case is managed as in the case of forests of thin clusters, where the overlapping structures were the external trees.
We define the internal vertex set of a small cluster , denoted by , as the union of the internal vertex sets of all vertices . By the above observation, might not be strictly contained in ; it could happen that , for . We extend the notation to a dummy cluster by:
We now have all the definitions necessary to explain the decomposition of a cluster induced by a fat vertex into a collection of clusters induced by dummy nodes. We create the first dummy cluster starting from and adding small clusters in non-increasing order of size to until one of the following conditions holds:

, or (the dummy node is not considered in counting the size of the cluster)

where and is the number of small clusters added to .
All the other clusters induced by dummy nodes are built in the same way, obtaining a partition of the cluster into clusters induced by dummy nodes . The advantage of this decomposition is that the external trees of all vertices are pairwise disjoint by the lattice property (see Figure 3).
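The greedy grouping described above can be sketched as follows. This is an illustrative sketch only: the function name and the two bounds `size_bound` and `count_bound` are hypothetical stand-ins for the paper's symbolic termination conditions, and clusters are represented just by their sizes.

```python
def build_dummy_clusters(sizes, size_bound, count_bound):
    """Group small clusters (given by their sizes) into dummy clusters.

    Clusters are scanned in non-increasing order of size; the current
    group is closed as soon as one of the two termination conditions
    holds, mirroring the construction in the text. The dummy node
    itself is not counted in the group size.
    """
    sizes = sorted(sizes, reverse=True)  # non-increasing order of size
    groups, current, total = [], [], 0
    for s in sizes:
        current.append(s)
        total += s
        # close the current dummy cluster when either condition holds
        if total >= size_bound or len(current) >= count_bound:
            groups.append(current)
            current, total = [], 0
    if current:  # leftover small clusters form the last dummy cluster
        groups.append(current)
    return groups
```

For instance, with cluster sizes `[5, 4, 3, 2, 1, 1]` and `size_bound=6`, the first two clusters already reach the bound and form one group, the next three form a second group, and the remaining cluster is left as a last, possibly undersized, group — the case handled by the technical lemmas below.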
3 Implementation: Data Structure and Space Complexity
In this section we describe in detail the data structure of [3] and analyze its complexity, paying special attention to the case of fat nodes, which perhaps was not treated in sufficient detail; except for the relationships between a fat node and its children, however, the analysis is similar to that of forests of thin vertices.
Figure 4 shows the data structure implementing our decomposition strategy in order to achieve constant-time reachability queries.
Data structure A stores, for each vertex , the identifier of the unique cluster to which it belongs, which can be: 1) a good cluster; 2) a cluster induced by a dummy node , if is not null; or 3) a thin cluster , if is not null. Note that, since the cluster collection is a partition, only one of the three cases may occur.
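A minimal sketch of data structure A, under the assumption that each entry carries one identifier per cluster kind with the unused fields set to null; the field and class names below are hypothetical, not the paper's notation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AEntry:
    """One entry of lookup table A: exactly one field is non-null,
    since the cluster collection is a partition of the vertices."""
    good_id: Optional[int] = None   # id of a good cluster
    dummy_id: Optional[int] = None  # id of a cluster induced by a dummy node
    thin_id: Optional[int] = None   # id of a thin cluster

    def cluster(self):
        """Return (kind, id) of the unique cluster containing the vertex."""
        for kind, cid in (("good", self.good_id),
                          ("dummy", self.dummy_id),
                          ("thin", self.thin_id)):
            if cid is not None:
                return kind, cid
        raise ValueError("every vertex belongs to exactly one cluster")

# A is simply a table indexed by vertex, answering in O(1) time.
A = {0: AEntry(good_id=7), 1: AEntry(dummy_id=3)}
```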
If is either a good or a thin cluster, then data structure stores, for each vertex and for each double tree of the double-tree decomposition of , the coordinates of whenever belongs to ; otherwise, it contains a null value.
If is a cluster induced by a dummy node , then is indexed on the decomposition into dummy clusters of the cluster induced by the fat node to which belongs. For each dummy cluster it stores the identifier of a vector, which stores connectivity information between and . If is not connected to , then it stores a null value.
Data structure is again a set of lookup tables, each one associated to a vertex . For each forest , if , then data structure maintains the identifier of a fourth kind of table, , which stores connectivity information between and . If is not connected to , then it stores a null value.
Data structure D is a set of lookup tables, each one associated to a vertex and a cluster forest . Table exists if and only if . For each cluster in the cluster forest , the corresponding field of the lookup table stores the identifier of the unique double tree associated to to which belongs as an external vertex, together with 's coordinates with respect to this double-tree representation.
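Once both endpoints are resolved to coordinates in the same double tree, the reachability test itself is a constant-time comparison. As an illustration only, the sketch below assumes (as is standard for trees, not necessarily the exact encoding of [3]) that a vertex's "coordinates" are its Euler-tour interval, so that one vertex reaches another iff the second interval is nested in the first.

```python
def reaches(coord_u, coord_v):
    """Constant-time reachability inside one tree of a double tree,
    assuming coordinates are Euler-tour intervals (enter, exit):
    u reaches v iff v's interval is nested in u's interval."""
    (u_in, u_out), (v_in, v_out) = coord_u, coord_v
    return u_in <= v_in and v_out <= u_out
```

The query as a whole then amounts to a constant number of table lookups (A to find the cluster, then the appropriate table to find the shared double tree and coordinates) followed by this comparison.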
4 Space complexity
Consider the overall sequence of clusters (good and dummy) and cluster forests
where:

are the clusters induced by good vertices;

are the clusters induced by dummy vertices;

are cluster forests induced by thin vertices.
From [3] we have that the subsequences and , together with the corresponding double-tree decompositions, require -space.
Let us now analyze the subsequence of clusters induced by dummy vertices. The proof proceeds as in the dual case of cluster forests, mutatis mutandis, but we rewrite it here for the sake of precision.
Recall that clusters induced by dummy nodes are generated by choosing small clusters in non-increasing order of size until one of the following conditions holds:

, or

where and is the number of small clusters added to .
If we denote , then the overall space complexity of data structure is , since only vertices in dummy clusters have the corresponding lookup table, each one of size . Hence, the second condition is used to bound each term of the summation.
We now show that, thanks to the way we group the small clusters of a fat cluster into dummy clusters, the number of dummy nodes for each fat vertex is less than .
Obviously, if it is always possible to generate a dummy cluster satisfying both conditions, then the space complexity of the overall data structure is . Unfortunately, the second condition could prevent the generation of a collection of dummy clusters, that is, each dummy cluster could have size less than . The following technical lemmas show how to manage this case.
Lemma 1.
Let be a dummy cluster where are the children of a fat vertex , then:
Proof.
The proof easily follows by observing that dummy clusters are generated by adding small clusters in non-increasing order of size. ∎
In the following, we denote the size of a small cluster belonging to a dummy cluster induced by as follows:
(1) 
where .
In fact, the ordered sequence of clusters composing a dummy cluster has monotone non-increasing sizes and, by hypothesis, each size is less than .
Hence,
(2) 
Let us suppose that the -th generated dummy cluster of a fat node satisfies the following conditions:

;

;
and let be the last small cluster added. Then we have:
Lemma 2.
where
Proof.
From Lemma 1, if then the number of internal vertices belonging to , with , related to is at most . As a consequence, we have:
(3) 
(4) 
Hence, by condition , we get:
(5) 
where the last inequality follows from the dummy cluster termination condition.
Additionally,
(6) 
and, from relation 5 above:
(7) 
hence,
Dividing both terms by , we have:
(8) 
The left-hand side of inequality 8 is, by definition, the size of . ∎
With reference to the sequence of dummy nodes , let ; we have:
Lemma 3.
.
Proof.
The proof trivially follows from Lemma 2 by observing that clusters are taken in non-increasing order of size. ∎
From the above technical lemmas, the following easily follows:
Lemma 4.
The decomposition strategy returns a collection of clusters induced by dummy nodes.
From Lemma 4, we have:
Theorem 1.
The data structure for the representation of DAGs satisfying the lattice property has space complexity and allows reachability queries to be performed in constant time.
References
 [1] J. Ian Munro, Bryce Sandlund, and Corwin Sinnamon. Space-efficient data structures for lattices. ArXiv, abs/1902.05166, 2019.
 [2] M. Talamo and P. Vocca. A data structure for lattice representation. Theoretical Computer Science, 175(2):373–392, 1997.
 [3] Maurizio Talamo and Paola Vocca. An efficient data structure for lattice operations. SIAM Journal on Computing, 28(5):1783–1805, 1999.