1 Introduction
In modern algorithm development, we observe two drastically opposing trends. Even though memory capacities keep increasing and memory prices keep falling day by day, the sizes of the input data sets being stored are growing at a much faster pace, driven by the ongoing digital transformation of business and society in general. In many application areas, e.g., social networks, web mining, and video streaming systems, a tremendous amount of data already exists and is only increasing. In these domains, the underlying data sets are most naturally represented as graphs, and these graphs are becoming massive. To process such huge graphs and extract useful information from them, we need to answer, among others, the following two concrete questions: (1) can we store these massive graphs in compressed form using the minimum amount of space? and (2) can we build space-efficient indexes for these huge graphs so that we can extract useful information by executing efficient query algorithms on the index itself? The field of succinct data structures aims to answer exactly these questions satisfactorily, and it has been one of the key contributions to the algorithms community in the past two decades, both theoretically and practically. More specifically, given a class of combinatorial objects, say C, the main objective is to store any arbitrary member x of C using the information-theoretic lower bound of log |C| bits (plus a lower-order additive term), where the logarithm is to the base 2 throughout the paper, along with efficient support of a relevant set of operations on x.
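To make the lower bound concrete, here is a small illustration (ours, not from the original text): rooted ordered trees on n nodes are counted by the Catalan numbers, so the ITLB for this class is about 2n bits, which succinct tree representations match up to lower-order terms.

```python
# Illustration: the information-theoretic lower bound (ITLB) for a class C
# is ceil(log2 |C|) bits.  Rooted ordered trees on n nodes are counted by
# the Catalan number C_n = binom(2n, n) / (n + 1), so their ITLB is ~2n bits.
from math import comb, log2, ceil

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def itlb_bits(count):
    return ceil(log2(count))
```

For n = 100, itlb_bits(catalan(100)) evaluates to just under 200 = 2n bits.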
There already exists a large body of work on representing various combinatorial structures succinctly with fast query support: for example, succinct data structures for rooted ordered trees [15, 16, 20, 21], chordal graphs [17], graphs of bounded treewidth [10], separable graphs [2], and interval graphs [1]. Following a similar trend, in this work we provide succinct data structures for series-parallel multigraphs [23], block-cactus graphs [13] and 3-leaf power graphs [4]. We defer the definitions of these graph classes to the individual sections where their succinct data structures are proposed. These graphs are important because not only are they theoretically appealing to study, but they also show up in practical application domains; e.g., series-parallel graphs are used to model electrical networks, and cacti are useful in computational biology. To the best of our knowledge, our work provides succinct data structures with optimal query support for these classes for the first time in the literature (although there exists a succinct data structure for simple series-parallel graphs [2], such a structure is not known for series-parallel multigraphs).
1.1 Previous work
Series-Parallel (SP) graphs. The information-theoretic lower bound (ITLB) for encoding a simple SP graph with n vertices is Θ(n) bits [3], whereas the ITLB for encoding an SP multigraph with m edges is Θ(m) bits [26]. Since an SP graph is separable, one can obtain a succinct representation of any SP graph by using the result of Blelloch and Farzan [2] while supporting some navigation queries efficiently. However, this only works for simple SP graphs [18], since one cannot store the lookup table for all possible micro-graphs (containing SP multigraphs with any fixed number of vertices) within the limited space, as the number of edges is not bounded. (Note that one can encode an SP multigraph by encoding the underlying simple graph using Blelloch and Farzan's encoding, along with a bit string of length O(m) representing the multiplicities of the edges; however, the space usage is not succinct in this case.) Also, since simple SP graphs are exactly the graphs of treewidth at most 2, one can use the data structure of Farzan and Kamali [10] for representing SP graphs, but again, this only works for simple SP graphs. For the multigraph case with m edges, Uno et al. [26] present an encoding for SP multigraphs whose size is close to the ITLB, but it does not support any navigational queries efficiently.
Block-Cactus and 3-Leaf Power graphs. The ITLBs for encoding a block-cactus graph and a 3-leaf power graph with n vertices are Θ(n) bits [28] and Θ(n) bits [6], respectively. Note that the class of block-cactus graphs contains both the cactus and block graph classes. As any cactus graph is planar, and hence separable, one can again use the result of Blelloch and Farzan [2] to encode it optimally while supporting the navigation queries efficiently. However, this approach does not work for block or block-cactus graphs since they are not separable.
1.2 Our Main Contribution
We design succinct data structures for (i) series-parallel multigraphs in Section 3, (ii) block-cactus graphs in Section 4, and finally (iii) 3-leaf power graphs in Section 5, supporting the following queries. Given a graph G and two vertices u, v: (i) degree(v) returns the number of edges incident to v in G, (ii) adjacent(u, v) returns true if u and v are adjacent in G, and false otherwise, and finally (iii) neighborhood(v) returns the set of all (distinct) vertices that are adjacent to v in G. The following theorem summarizes our main results on these graphs.
Theorem 1.1
There exists a succinct data structure that supports the adjacent and degree queries in O(1) time, and the neighborhood query in O(degree(v)) time, for (1) series-parallel multigraphs, (2) block-cactus graphs, and (3) 3-leaf power graphs.
The reason for considering these three (seemingly unrelated) graph classes is that any graph in each of these classes has a corresponding tree-based representation, and hence these graphs can be encoded succinctly by encoding the corresponding tree. In what follows, we briefly discuss the high-level idea of how to succinctly represent the graphs of our interest. Roughly speaking, given a graph G (which could be series-parallel, block-cactus or 3-leaf power), we first convert it to a labeled tree T from which G can be decoded. We then represent G by encoding T using the tree covering (TC) algorithm of Farzan and Munro [11], which supports various tree navigation queries in O(1) time. However, we cannot directly obtain a succinct representation of G with efficient navigation queries from the tree covering of T. More specifically, the tree covering algorithm first decomposes the input tree and encodes each decomposed tree separately; thus, a lot of information about G can be lost in each of the decomposed trees. For example, the decomposed trees may not even belong to the graph class that we started with in the first place (in stark contrast to the situation when designing succinct data structures for trees). Thus, we need to apply nontrivial local changes (catering to each graph class separately) to these decomposed trees and argue that (i) these changes convert them back to the original graph class without consuming too much space, and (ii) navigation queries on G can be supported efficiently as tree queries on T. As a consequence, one salient feature of our approach is that, for the graphs we consider in this paper, it is not necessary to know the exact information-theoretic lower bound in order to design succinct data structures: it suffices to know the asymptotic growth of the number of nonisomorphic graphs in the class with a given number of vertices.
Note that the overall idea of 'encoding the graph as a tree-based representation and using the TC algorithm to encode the tree so as to support the navigation operations on the graph' was subsequently used in [5] to obtain a succinct representation for graphs of small clique-width. The other main contribution of this paper is to construct suitable tree-based encodings and to show how to adapt the TC representation to support the operations.
2 Preliminaries and Main Techniques
Throughout the paper, we assume familiarity with succinct/compact data structures (as given in [19]), basic graph-theoretic terminology (as given in [8]), and graph algorithms (as given in [7]). All graphs in our paper are assumed to be connected and unlabeled, i.e., we can number the vertices arbitrarily. Moreover, we assume the usual model of computation, namely a Θ(log N)-bit word RAM model, where N is the size of the input (N is the number of vertices in the case of graphs, and the number of edges in the case of multigraphs). We start by sketching a modification of the tree covering algorithm of Farzan and Munro [12].
2.1 Tree covering
The high-level idea of the tree covering algorithm is to decompose the tree into subtrees called minitrees (in the rest of the paper, we use "subtree" to denote any connected subgraph of a given tree), and to further decompose the minitrees into yet smaller subtrees called microtrees. The microtrees are small enough to be stored in a compact table. The root of a minitree can be shared by several other minitrees. To represent the tree, we only have to represent the connections and links between the subtrees. We summarize the main result of Farzan and Munro [12] in the following theorem:
Theorem 2.1 ([12])
For a rooted ordered tree with n nodes and a positive integer ℓ, we can obtain a tree covering satisfying the following: (1) each subtree contains at most 2ℓ nodes, (2) the number of subtrees is O(n/ℓ), and (3) apart from the edges leaving the root of a subtree, each subtree has at most one other outgoing edge.
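A simplified greedy variant of such a decomposition (our own sketch; the actual algorithm of [12] is more careful and also guarantees property (3) on boundary edges) can be written as follows. It produces connected subtrees of size at most 2ℓ, with roots possibly shared between subtrees:

```python
# Simplified greedy tree covering: decompose a rooted tree (given as a
# children dictionary) into connected subtrees of at most 2*ell nodes,
# closing a component whenever it has grown to at least ell nodes.
def tree_cover(children, root, ell):
    covers = []
    def visit(v):
        open_nodes = [v]                 # component currently growing at v
        for c in children.get(v, []):
            open_nodes.extend(visit(c))
            if len(open_nodes) >= ell:   # large enough: close this component
                covers.append(open_nodes)
                open_nodes = [v]         # v stays behind as a shared root
        return open_nodes                # fewer than ell nodes remain open
    covers.append(visit(root))
    return covers
```

Every closed component has between ℓ and 2ℓ − 2 nodes, so O(n/ℓ) components suffice, matching properties (1) and (2) of the theorem.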
For each subtree after the decomposition of Theorem 2.1, the unique node that has an outgoing edge is called the boundary node of the subtree, and the edge is called the boundary edge of the subtree. The subtree may have multiple outgoing edges from its root node (in this case, we call it a shared root node), and those edges are called root boundary edges.
To obtain a succinct representation, we first apply Theorem 2.1 with ℓ = Θ(log² n) to obtain O(n/log² n) minitrees (here and in the rest of the paper, we ignore all floors and ceilings that do not affect the main result). The tree obtained by contracting each minitree into a vertex is referred to as the tree over minitrees. If more than one minitree shares a common root, we create a dummy node in the tree and make the nodes corresponding to those minitrees children of the dummy node. We also set the parent of the dummy node to be the node corresponding to the parent minitree. (See Figure 1 for an example.) This tree has O(n/log² n) vertices and can therefore be represented in O(n/log n) bits using a pointer-based representation. Then, for each minitree, we again apply Theorem 2.1 with parameter ℓ′ = Θ(log n) to obtain O(n/log n) microtrees in total. The tree obtained from each minitree by contracting each microtree into a node, and adding dummy nodes for microtrees sharing a common root (as in the case of the tree over minitrees), is called the minitree over microtrees. Each minitree over microtrees has O(log n) vertices, and can be represented by O(log log n)-bit pointers. For each nonroot boundary edge of a microtree t, we encode from which vertex of t it comes out and its rank among all children of that vertex; the position where the boundary edge is inserted can be encoded in O(log log n) bits. Note that in our modified tree decomposition, each node of the tree is in exactly one microtree.
For each microtree, we define its representative as its root node if the root is not shared with other microtrees, or as the next node of the root in preorder if it is shared. Then we mark the bits of the balanced parentheses representation [16] of the entire tree corresponding to the representatives. If we extract the marked bits, they form a balanced parentheses (BP) sequence representing the minitree over microtrees. The positions of the marked bits can be encoded in o(n) bits because there are only O(n/log n) marked bits in the BP representation of 2n bits. The BP representation is partitioned into many variable-length blocks, each of length O(log n). We can decode each block in constant time.
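To make the BP representation concrete, here is a minimal sketch (ours, not the paper's machinery): a tree is written as nested parentheses in preorder, and navigation such as subtree size reduces to finding matching parentheses. The actual structure supports this in O(1) time via rank/select, rather than by scanning as below.

```python
# Balanced-parentheses (BP) encoding of an ordered tree: write '(' when
# entering a node in preorder and ')' when leaving it.
def to_bp(children, root):
    bp = []
    def visit(v):
        bp.append('(')
        for c in children.get(v, []):
            visit(c)
        bp.append(')')
    visit(root)
    return ''.join(bp)

def find_close(bp, i):
    # position of the ')' matching the '(' at position i (linear scan here;
    # succinct structures answer this in O(1) time)
    depth = 0
    for j in range(i, len(bp)):
        depth += 1 if bp[j] == '(' else -1
        if depth == 0:
            return j

def subtree_size(bp, i):
    # the subtree of the node opening at i spans the matched parentheses
    return (find_close(bp, i) - i + 1) // 2
```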
To support basic tree navigational operations such as parent, i-th child, child rank, degree, LCA (lowest common ancestor), level ancestor, depth, subtree size, leaf rank, etc. in constant time, we use the data structure of [20]. Note that we slightly modify the data structure because each block is now of variable length. We need to store these lengths, but this is done using the positions of the marked bits.
The total space for all minitrees over microtrees is o(n) bits. Finally, the microtrees are stored as two-level pointers (storing the size, and an offset within all possible trees of that size) into a precomputed table that contains the representations of all possible microtrees. The space for encoding all the microtrees using this representation can be shown to be 2n + o(n) bits.
2.2 Graph Representation Using Tree covering
This section describes the high-level idea of obtaining succinct encodings for the graph classes that we consider. Let C be one of the graph classes among series-parallel multigraphs, block-cactus graphs, and 3-leaf power graphs. Then the following property holds.

For any connected graph G in C with n vertices, there exists a labeled tree T(G) of O(n) nodes such that G can be uniquely decoded from T(G).
By the above property, one can represent any graph G in C by encoding the tree covering of T(G) (with ℓ = Θ(log² n) and ℓ′ = Θ(log n)). Unfortunately, a tree covering of T(G) does not directly give a succinct encoding of G, since the number of all nonisomorphic graphs in C can be much smaller than the number of all nonisomorphic labeled trees of the same size (for example, multiple labeled trees can correspond to the same graph). To solve this problem, we maintain a precomputed table of all nonisomorphic graphs in C up to the maximum microtree size, along with their corresponding trees in canonical representation. By representing each microtree as an index of the corresponding graph in the precomputed table, we can store all the microtrees of T(G) in succinct space. If a microtree t does not have a corresponding graph in C (i.e., there is no corresponding graph in the precomputed table), we first extend t to a tree t′ by adding some dummy nodes, and encode t as the index of t′ along with the information about the dummy nodes. Since we add only a few dummy nodes for each microtree, all the additional information can be stored within succinct space. In the following sections, we describe how to add such dummy nodes for series-parallel, block-cactus, and 3-leaf power graphs.
Finally, for the case when G is not connected, we extend the above idea as follows. We first encode all the connected components of G separately, and then encode the sizes of the connected components using the encoding of [9, 25], with lower-order additional bits. This implies that we can still encode G in succinct space even if G is not connected. In the rest of this paper, we assume that all graphs are connected.
3 Series-Parallel Graphs
Series-parallel graphs [23] (SP graphs in short) are undirected multigraphs defined recursively as follows.

A single edge is an SP graph. We call its two endpoints terminals.

Given two SP graphs G₁ with terminals (s₁, t₁) and G₂ with terminals (s₂, t₂),

their series composition, the graph made by identifying t₁ = s₂, is an SP graph with terminals (s₁, t₂); and

their parallel composition, the graph made by identifying s₁ = s₂ and t₁ = t₂, is an SP graph with terminals s₁ and t₁.

From this construction, we can obtain a binary tree B representing an SP graph G as follows. Each leaf of B corresponds to an edge of G. Each internal node v of B has a label S (or P), which represents the SP graph made by the series (or parallel) composition of the two SP graphs represented by the two child subtrees of v. We convert B into a multi-ary SP tree T by merging vertically consecutive nodes with identical labels into a single node. More precisely, while scanning all the nodes of B bottom-up, we contract every edge (u, v) whose endpoints u and v have the same label. Then all the internal nodes at the same depth have the same label, and the labels alternate between levels. See Figure 1 for an example. Note that any two nonisomorphic SP graphs have different SP trees.
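The construction can be sketched as follows (our illustrative code, using an ad-hoc tuple encoding: ('edge',) for leaves and (label, children) for internal nodes); merging a child that carries its parent's label is exactly the edge contraction described above.

```python
# Build a multi-ary SP tree while composing: a child with the same label as
# its new parent is merged into it (edge contraction), so S and P labels
# alternate along every root-to-leaf path.
EDGE = ('edge',)

def compose(label, left, right):
    kids = []
    for t in (left, right):
        if t[0] == label:      # same label: absorb the child's children
            kids.extend(t[1])
        else:
            kids.append(t)
    return (label, kids)
```

For example, composing three edges in series, then the result in parallel with a fourth edge, yields the tree ('P', [('S', [EDGE, EDGE, EDGE]), EDGE]), in which S and P labels alternate.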
Succinct representation. Let n and m be the numbers of vertices and edges of G, respectively. Then T has m leaves and O(m) nodes. First, we construct the SP tree T from an SP graph G. If the root of T is a P node, we add a dummy parent r labeled S with three children, and make the original root the middle child of r; the first and last children of r correspond to dummy edges. If the root of T is an S node, we also add two leaves as the leftmost and rightmost children of the root, corresponding to dummy edges. We refer to this modified tree as T as well, and let M = O(m) denote its number of nodes. We then apply the tree covering algorithm with parameters ℓ = Θ(log² M) and ℓ′ = Θ(log M).
It is easy to see that each microtree without its dummy leaf nodes represents an SP graph. For the graph corresponding to each microtree, we use a linear-time algorithm [27] to obtain a canonical representation of the microtree. Note that if the graphs corresponding to two microtrees are isomorphic, then those two microtrees have the same canonical representation. We create a table storing all nonisomorphic SP graphs up to the maximum microtree size, and encode each microtree as a pointer into this table. To reconstruct the original graph from the graphs corresponding to the microtrees, we need additional information on how to combine these graphs. More specifically, assume an SP graph consists of a series composition of graphs G₁ and G₂, whose terminals are (s₁, t₁) and (s₂, t₂), respectively. Then one can construct two different graphs by identifying (i) t₁ with s₂, or (ii) t₂ with s₁. Thus, for each microtree, we add one extra bit to store this information.
For each S node of T, we assign inorder numbers [24] (we assign inorder numbers only to S nodes). Inorder numbers in a rooted tree are given during a preorder traversal from the root: if a node v is visited from one of its children and another child of v is visited next, we assign one inorder number to v. Thus, if a node has k children, we assign k − 1 inorder numbers to it (unary nodes are not assigned any inorder number). If a node has more than one inorder number, we use the smallest value as its representative inorder number. We now consider two operations: (i) inorder-rank(v, i): return the i-th inorder number of the S node v (given as a preorder number), and (ii) inorder-select(j): given an inorder number j of an S node, return the pair (v, i), where v is the preorder number of the node having inorder number j, and i is the number such that j is the i-th inorder number of that node. The following describes how to support both queries in O(1) time using o(M) bits of additional space.
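The numbering scheme can be sketched as follows (our illustration, reusing the tuple representation of SP trees from above): a global counter is advanced between consecutive children of S nodes during the preorder traversal, so a node with k children receives k − 1 numbers.

```python
# Assign inorder numbers to the S nodes of an SP tree: during a preorder
# traversal, an S node receives one number between each pair of its
# consecutive children.  Nodes are ('edge',) leaves or (label, children).
def inorder_numbers(tree):
    counter = [0]
    numbers = {}                           # id(node) -> its inorder numbers
    def visit(node):
        label = node[0]
        kids = node[1] if label != 'edge' else []
        for i, c in enumerate(kids):
            visit(c)
            if i + 1 < len(kids) and label == 'S':
                counter[0] += 1            # advanced in traversal order
                numbers.setdefault(id(node), []).append(counter[0])
    visit(tree)
    return numbers
```

Because the counter follows the traversal, a node's numbers need not be consecutive: numbers given to S nodes inside an intermediate child subtree are interleaved.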
One can observe that for each microtree (or minitree) t of T, all the inorder numbers corresponding to the S nodes in t form two intervals. Note that all the intervals corresponding to the minitrees or microtrees partition the interval [1, K], where K is the largest inorder number in T. We construct a dictionary that stores the right endpoints of all the intervals corresponding to the minitrees, where with each element of the dictionary we store a pointer to the minitree corresponding to that interval as satellite information. The number of elements in this dictionary is proportional to the number of minitrees, with universe size at most K, and hence it can be represented as an FID [22] using o(M) bits to support membership, rank and select queries in O(1) time. The satellite information can also be stored in o(M) bits, supporting O(1)-time access. For each minitree, we also construct a dictionary that stores the right endpoints of all the intervals corresponding to its microtrees, where with each element we associate a pointer to the corresponding microtree as satellite information. The space usage of the dictionaries corresponding to all the minitrees adds up to o(M) bits in total.
In addition, for each minitree of T, we store its two corresponding intervals using O(log M) bits; we call the left endpoints of these intervals the offsets corresponding to the minitree. Also, for each microtree contained in the minitree, we store its intervals relative to the minitree intervals they belong to (i.e., as offsets with respect to those intervals). Since all the endpoints of these relative intervals are bounded by the size of the minitree, we can store all such intervals using O(log log M) bits each, and o(M) bits in total. The total space usage is thus o(M) bits.
To compute inorder-rank(v, i), we first find the microtree t containing the node v. Then we decode the interval corresponding to t using the interval stored with its minitree as well as the offsets corresponding to t, and return the i-th smallest value within the interval. To compute inorder-select(j), we first find the microtree that contains the answer by rank queries on the dictionaries of the minitrees and microtrees. Finally, we compute the answer within the microtree in O(1) time using the intervals stored with it.
Next, we assign labels to the vertices of the graph. Any vertex of the graph corresponds to a common terminal of two SP graphs combined by a series composition. For each vertex x, let v be an S node of T representing such a series composition. Then we assign one inorder number of v as the label of x (note that any two subgraphs sharing a common terminal correspond to the subtrees at consecutive children of v). For example, vertex 5 in the graph of Figure 1 corresponds to the common terminal of the following two subgraphs: (i) the subgraph consisting of the edge from 4 to 5, and (ii) the subgraph corresponding to the subtree rooted at the minitree consisting of a single P node, which contains four edges. The inorder number 5 is assigned to the S node corresponding to that minitree when we traverse from the subtree corresponding to (i) to the subtree corresponding to (ii) (during the preorder traversal of T).
Also, we define a label for each node of T, which is an ordered pair (x, y) of the two terminals of the subgraph corresponding to the subtree rooted at that node. We call x and y the left and right labels of the node. The label of a P node v can be computed in O(1) time as follows. (1) If v is the leftmost child of its parent, its left label is the inorder number given when the corresponding series composition is first visited. To obtain it, we traverse the SP tree up from v until we reach an S node u such that v does not belong to the leftmost subtree of u. We can compute the node u in O(1) time as follows. If u is in the same microtree as v, we can find u using a table lookup. Otherwise, if u is in the same minitree as v, we store u with the root of the microtree containing v. Finally, if u is not in the same minitree, we explicitly store u with the root of the minitree containing v. (2) If v is the rightmost child of its parent, its right label is the inorder number given when the corresponding series composition is visited for the last time. To obtain it, we traverse the SP tree up from v until we reach an S node u such that v does not belong to the rightmost subtree of u; we use a data structure similar to that in (1) to compute the answer. (3) In all other cases, x and y are the inorder numbers of the parent p of v, given immediately before visiting v from p and immediately after visiting the next sibling of v from p, respectively. The label of an S node is the same as that of its parent P node (we do not assign a label to the root S node). The label of a leaf is determined by the same algorithm for P or S nodes, depending on whether its parent is an S or a P node. Note that, from the above definition, the label of a P node is the same as the label of any of its child S nodes. For an S node v, suppose c₁, …, c_k are its children, with left and right labels (x₁, y₁), …, (x_k, y_k). Then y_i = x_{i+1} for all 1 ≤ i < k, and the label of v is (x₁, y_k).
We also define left(x) and right(x) for each vertex x of the graph, as follows. Suppose that during the preorder traversal of the tree we visit nodes u and w, in this order, and assign the inorder number x to their parent between these two visits. Then we define left(x) = u and right(x) = w. Equivalently, if inorder-select(x) returns the pair (v, i), then left(x) and right(x) are the i-th and the (i+1)-st children of node v, respectively. Thus, left(x) and right(x) can be computed in O(1) time.
This completes the description for encoding of SP graphs.
Supporting navigation queries.

First we find the nodes left(u), right(u), left(v), and right(v). (1) If right(u) = left(v), call this node w; the subgraph corresponding to w has terminals with labels u and v. Therefore u and v are adjacent if w is a leaf (this corresponds to the edge (u, v)) or w has a leaf child (w is a P node and it has a leaf child corresponding to the edge (u, v)). (2) If right(v) = left(u), the case is analogous. (3) Otherwise, find the labels of left(v) and right(v). Let (a, b) be the label of left(v) and (c, d) be the label of right(v). One of a, b, c, d must be u if u and v are adjacent. Assume w.l.o.g. that a = u. Then u and v are adjacent iff left(v) is a leaf or has a leaf child. (4) The case in which one of the labels of left(u) or right(u) equals v is analogous. In all four cases, the query can be supported in O(1) time.

First we find left(v) and right(v). Then we explore all the neighbors of v by executing the two procedure calls report(left(v)) and report(right(v)), where report is the following procedure.
report(w): if w is a leaf with label (v, x) or (x, v), then output x. If w is an S node, then call report(w′), where w′ is the leftmost (resp. rightmost) child of w if v is the left (resp. right) label of w. If w is a P node, then call report(w′) for all children w′ of w. The running time of this procedure is proportional to the size of the output. Note that if we do not want to report the same neighbor multiple times, we can define a canonical ordering between the children of P nodes such that all the leaf children appear after the nonleaf children (S nodes), and only report the first leaf child of the node.
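The reporting procedure can be sketched as follows (our illustration on an explicitly labeled SP tree, where each node stores its terminal pair; in the actual structure the labels are computed from inorder numbers as described above):

```python
# report(w, v): collect all neighbors of vertex v inside the subgraph
# corresponding to SP-tree node w, assuming v is a terminal of w.
def report(w, v, out):
    a, b = w['label']              # the two terminals of the subgraph at w
    if w['type'] == 'leaf':        # a single edge: report its other endpoint
        if a == v:
            out.append(b)
        elif b == v:
            out.append(a)
    elif w['type'] == 'P':         # both terminals shared: recurse everywhere
        for c in w['children']:
            report(c, v, out)
    else:                          # S node: only the child touching terminal v
        if a == v:
            report(w['children'][0], v, out)
        elif b == v:
            report(w['children'][-1], v, out)
```

Calling report on the two subtrees adjacent to v visits only nodes that contribute a neighbor, so the total time is proportional to the output size.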

Let t and t′ be the microtree and minitree containing the nodes corresponding to v, respectively. Then the degree of v is the sum of (i) the number of adjacent vertices in t, (ii) the number of adjacent vertices not in t but in t′, and (iii) the number of adjacent vertices not in t′, denoted by d₁, d₂, and d₃ respectively. Here an adjacent vertex of v refers to a vertex x such that (v, x) or (x, v) is the label of some leaf node. If v is not one of the labels of the boundary node of t (of t′), then d₂ = 0 (respectively, d₃ = 0). We compute the three quantities as follows. First, d₁ can be computed in O(1) time using a precomputed table. The value d₂ (resp. d₃) can be stored with the root of the microtree (resp. minitree) whose parent is the boundary node in t (resp. t′). Note that in the above scheme, we only need to store two values, corresponding to the two labels of the root, for each microtree/minitree root. Thus the space usage for storing these values is o(M) bits.
4 Block/Cactus/Block-Cactus Graphs
A block graph (also known as a clique tree or a Husimi tree [14]) is an undirected graph in which every block (i.e., maximal biconnected component) is a clique. A cactus graph (the same as an almost-tree(1) [13]) is a connected graph in which every two simple cycles have at most one vertex in common (equivalently, every block is a cycle). A block-cactus graph is a graph in which every block is either a cycle or a complete graph.
Any graph that belongs to one of these three graph classes can be converted into a tree as follows. Replace each block (either a clique or an induced cycle) with k vertices by a star graph K_{1,k}, by introducing a dummy node that is connected to the k nodes corresponding to the vertices of the block. The remaining edges and vertices of the graph are simply copied into the tree. See Figure 2 for an example. Note that the number of dummy nodes is always less than the number of nondummy nodes.
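A sketch of this conversion (our code; it assumes the blocks and the remaining bridge edges have already been identified, e.g., by a standard biconnected-components algorithm):

```python
def blocks_to_tree(n, blocks, bridges):
    """Vertices are 0..n-1.  Each block (a vertex list) is replaced by a
    star through a fresh dummy node; bridge edges are copied unchanged.
    Returns the tree as adjacency sets, plus the list of dummy nodes."""
    adj = {v: set() for v in range(n)}
    dummies = []
    for blk in blocks:
        d = n + len(dummies)         # fresh dummy node id
        dummies.append(d)
        adj[d] = set(blk)            # dummy connected to all block vertices
        for v in blk:
            adj[v].add(d)
    for u, v in bridges:             # non-block edges survive as-is
        adj[u].add(v)
        adj[v].add(u)
    return adj, dummies
```

On a triangle {0, 1, 2} with a pendant edge (2, 3), this produces a tree on five nodes (four original vertices plus one dummy) with four edges.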
In the following, we describe a succinct encoding for block-cactus graphs; it is easy to obtain succinct encodings for block graphs and cactus graphs using the same ideas.
Succinct representation.
Let G be the input block-cactus graph, and let T be the corresponding tree obtained by replacing each block with a star graph, as described above. We apply the tree covering algorithm of Theorem 2.1 on T with minitrees and microtrees of size Θ(log² n) and Θ(log n), respectively.
It is easy to see that each micro/minitree obtained by the tree covering algorithm corresponds to a block-cactus graph, although it may not be a subgraph of the original graph G. By storing some additional information with each micro/minitree along with its representation, we can give a bijective map between the vertices in G and the nodes in T, which we use in describing the query algorithms.
We first note that when we convert a block (a cycle C_k or a clique K_k) into a star graph K_{1,k}, the neighbors of the dummy node can be ordered in multiple ways when we consider the resulting graph as an ordered tree. In particular, if the ordered tree is rooted at a dummy node corresponding to a cycle, then its children can be ordered in either the clockwise or the anticlockwise order of the cycle, and the first child can be any vertex on the cycle. When the root of a microtree is a dummy node corresponding to a cycle, the cycle corresponding to the dummy node is cut into two or more pieces, and the piece inside the microtree represents a shorter cycle. The microtree is then encoded as a canonical representation of the modified subgraph, and it loses the information of how it was connected to the rest of the graph. To recover this information, it is enough to store, for the microtree, one vertex of the shorter cycle that is connected to the outside, together with the direction (clockwise or anticlockwise) of the cycle. The vertex is encoded in O(log log n) bits, and the direction in one bit. We need the same information for the nonroot boundary node of the microtree. This additional information enables us to reconstruct the cycle in the original graph from the subgraphs corresponding to the microtrees. Note that if the dummy node corresponds to a clique, we do not need this information.
Each microtree is encoded as a two-level pointer into a precomputed table that stores the set of all possible block-cactus graphs up to the maximum microtree size.
Note that the number of dummy nodes we need to handle is O(n/log n), since we can delete all the dummy nodes which are not boundary nodes of microtrees.
We also store one bit with each of these dummy nodes, indicating whether it corresponds to a clique or a cycle. Thus each microtree is represented optimally, apart from O(log log n) bits of additional information. Hence the overall space usage is succinct.
This completes the description for the succinct encoding of blockcactus graphs.
Supporting navigation queries.

If there is an edge in T between the nodes corresponding to u and v, then u and v are adjacent in the graph (since we only delete some edges from the original graph, and all the edges added are incident to some dummy node). Otherwise, u and v are adjacent if they are connected to the same dummy node d, and either (a) d corresponds to a clique, or (b) u and v are "adjacent" around d in the tree, i.e., they are adjacent siblings, or one of them is the parent of d and the other is either the first or the last child of d. Since all these conditions can be checked in O(1) time using the tree representation, we can support the query in O(1) time.
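The case analysis can be sketched as follows (our code on the plain tree representation; `dummy_kind` maps each dummy node to 'clique' or 'cycle', and `cycle_order` stores, for each cycle dummy, its block's vertices in cyclic order — information the succinct structure recovers from the tree shape and the stored direction bits):

```python
def adjacent(u, v, adj, dummy_kind, cycle_order):
    # a direct tree edge between two nondummy nodes is an original edge
    if v in adj[u]:
        return True
    # otherwise u and v must share a dummy neighbor d
    for d in (adj[u] & adj[v]) & set(dummy_kind):
        if dummy_kind[d] == 'clique':
            return True                        # every pair in a clique block
        order = cycle_order[d]
        i, j = order.index(u), order.index(v)
        if (i - j) % len(order) in (1, len(order) - 1):
            return True                        # consecutive around the cycle
    return False
```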

The algorithm for this follows essentially from the conditions for checking adjacency. More specifically, to report neighborhood(v), we first output all the nondummy nodes adjacent to v in the tree. If v is adjacent to any dummy node d, then we also output all the vertices (a) that are connected to d, if d corresponds to a clique, or (b) that are "adjacent" to v around d in the tree, if d corresponds to a cycle. This can be done in time proportional to the output size.

From the algorithm for the neighborhood query, we observe that the degree of a node v can be computed by adding two quantities: (1) the number of nondummy neighbors of v, and (2) the number of nodes that are adjacent to v through a dummy neighbor. It is easy to compute (1) and (2) within a microtree in constant time using precomputed tables. In addition, we may need to add the contributions from outside the microtree if v is either a boundary node or is adjacent to a boundary node which is dummy. For each such dummy boundary node, we need to add either 1 or 2 (if the dummy node corresponds to a cycle) or k − 1 (if the dummy node corresponds to a clique of size k). Since there are at most two such boundary nodes which can be adjacent to v, this can be computed in constant time. Also, for the roots of the mini (micro) trees, which are nondummy, we store their degrees (within the minitree) explicitly. Thus, we can compute the degree query in O(1) time.
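Ignoring the microtree decomposition, the degree computation amounts to the following (our sketch on the full tree representation; `block_size` maps a clique dummy to the size of its clique):

```python
def degree(v, adj, dummy_kind, block_size):
    deg = 0
    for w in adj[v]:
        if w not in dummy_kind:
            deg += 1                      # an original (non-block) edge
        elif dummy_kind[w] == 'cycle':
            deg += 2                      # exactly two neighbors on a cycle
        else:
            deg += block_size[w] - 1      # clique: adjacent to all others
    return deg
```

The succinct structure evaluates the same sum piecewise: the in-microtree part by table lookup, and the out-of-microtree part from the constantly many dummy boundary nodes touching v.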
5 Leaf Power Graphs
A graph G with n vertices is a k-leaf power if there exists a tree T with n leaves where each leaf node corresponds to a vertex of G, and any two vertices in G are adjacent if and only if the distance between their corresponding leaves in the tree is at most k. The tree T is called a k-leaf root of G (see Figure 3 for an example).
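The definition can be checked directly on a small example (our sketch; the succinct structure never materializes pairwise distances like this, but it is useful to see the definition in action):

```python
# Recover a k-leaf power from a k-leaf root: two vertices are adjacent
# exactly when their leaves lie at distance <= k in the tree.
from collections import deque

def leaf_distances(adj, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    return dist

def graph_from_leaf_root(adj, leaves, k):
    edges = set()
    for u in leaves:
        du = leaf_distances(adj, u)     # BFS from each leaf
        for v in leaves:
            if u < v and du[v] <= k:
                edges.add((u, v))
    return edges
```

For a 3-leaf root with two adjacent internal nodes 'a' and 'b', leaves 1 and 2 below 'a' and leaf 3 below 'b', the distances are 2, 3 and 3, so the 3-leaf power is the triangle on {1, 2, 3}.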
In this section, we consider the succinct representation of leaf powers for the special case k = 3.
Succinct representation.
Our representation of 3-leaf power graphs is based on the following lemma.
Lemma 1 (Brandstädt and Le [4])
For any connected and nonclique 3-leaf power G with n vertices, one can construct a unique 3-leaf root T of G with O(n) nodes.
Note that we can make T a rooted tree as follows. Because T contains an internal node (otherwise T consists of just an edge with two nodes, which corresponds to the clique K₂), we regard such an internal node as the root of T. We store the root of every microtree of T explicitly, using O(n) bits in total.
Now consider the 3-leaf root T of G. If G is not a clique, one can construct the unique 3-leaf root T by Lemma 1. Otherwise, we fix T as a canonical 3-leaf root of the clique (Case (iii) below). For any non-leaf node x of T, we order the children of x in non-decreasing order of the sizes of the subtrees rooted at them (thus, all the leaf children of x appear before the non-leaf children of x), to support the navigation queries efficiently. We then apply the tree covering algorithm on T with minitree and microtree size parameters L and ℓ, where ℓ = Θ(log n) for a suitably small constant. We build a precomputed table of o(n) bits which stores all non-isomorphic non-clique 3-leaf powers of size at most ℓ along with their 3-leaf roots constructed from the algorithm of Lemma 1.
We use the following properties of 3-leaf roots: (1) if G is connected, every internal node of T has at least one leaf child, and (2) the graphs corresponding to the microtrees created by applying the tree covering algorithm to T are connected. The proofs are as follows. For (1), assume to the contrary that there is an internal node x with no leaf children. Then any vertex corresponding to a leaf descendant of x is not connected to any vertex corresponding to a leaf node outside of the subtree rooted at x, since the distance between them (in T) is at least 4, contradicting the connectivity of G. For (2), consider a microtree t1 with a boundary edge connecting a node u in t1 and a node v which is the root of another microtree t2. From (1), u has a leaf child w. If w belongs to t1, the graph corresponding to t1 is connected. If w belongs to t2, the root of t2 must be w, which contradicts the assumption that (u, v) is a boundary edge. Note that root boundary edges do not affect the connectivity of the graph corresponding to the microtree.
Thus each microtree of T falls into one of three cases: (i) a 3-leaf root of a non-clique, (ii) a single non-leaf node, or (iii) a 3-leaf root of a clique.
For Case (i), we encode the microtree as an index into the precomputed table.
For Case (ii), we add one extra entry into the precomputed table, which is used to encode this case.
Finally, for Case (iii), note that there are only O(k) distinct 3-leaf roots corresponding to the clique of size k, each of which can be constructed by connecting the two non-leaf nodes of K_{1,i} and K_{1,k-i} for some 0 <= i <= floor(k/2) (assuming K_{1,0} corresponds to the empty graph).
Thus, we add extra entries into the precomputed table which indicate cliques of size at most the microtree size, each with an additional index specifying the split. Overall, the total space of the encoding is succinct.
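The construction of these clique roots can be sketched as follows: joining the centers of two stars puts every pair of leaves at distance 2 (same center) or 3 (different centers), so the resulting 3-leaf power is a clique. The identifiers below are ours, for illustration only.

```python
from collections import deque
from itertools import combinations

def clique_3leaf_root(k, i):
    """3-leaf root of the clique K_k: join the centers c1, c2 of the stars
    K_{1,i} and K_{1,k-i} (take 1 <= i <= k - 1; the i = 0 split degenerates
    to a single star, matching the convention that K_{1,0} is empty)."""
    adj = {'c1': {'c2'}, 'c2': {'c1'}}
    for j in range(k):
        center = 'c1' if j < i else 'c2'
        adj['l%d' % j] = {center}
        adj[center].add('l%d' % j)
    return adj

def tree_dist(adj, s, t):
    # Breadth-first search distance between s and t in the tree.
    seen, q = {s: 0}, deque([s])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if y not in seen:
                seen[y] = seen[x] + 1
                q.append(y)
    return seen[t]

# Any split i gives a valid 3-leaf root of K_5: all leaf pairs lie within
# distance 3 (2 under the same center, 3 across the two centers).
adj = clique_3leaf_root(5, 2)
leaves = [v for v in adj if len(adj[v]) == 1]
print(all(tree_dist(adj, u, v) <= 3 for u, v in combinations(leaves, 2)))  # True
```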
Supporting navigation queries. For the navigation queries, we refer to each vertex by the leaf rank of the corresponding leaf node in T. For a vertex u, let p be the parent node of u, and let L and N be the sets of leaf and non-leaf children of p, respectively.

By the definition of a 3-leaf root, u and v are adjacent if and only if (i) they have the same parent, or (ii) the parent of u is the parent of the parent of v, or vice versa. Since parents (and hence these conditions) can be computed in O(1) time [12], we can answer the adjacency query in O(1) time.
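A minimal sketch of this adjacency test, assuming direct access to parent pointers (which the tree covering representation of [12] provides in O(1) time); the dictionary format and names are ours:

```python
def adjacent(parent, u, v):
    """Leaves u, v are adjacent in the 3-leaf power iff they share a parent
    (distance 2) or one leaf's parent is the parent of the other's parent
    (distance 3). `parent` maps each non-root node to its parent."""
    pu, pv = parent[u], parent[v]
    return pu == pv or parent.get(pu) == pv or parent.get(pv) == pu

# Toy rooted 3-leaf root: r has leaf child a and internal child x;
# x has leaf children b, c and internal child y; y has leaf child d.
parent = {'a': 'r', 'x': 'r', 'b': 'x', 'c': 'x', 'y': 'x', 'd': 'y'}
print(adjacent(parent, 'b', 'c'))  # True  (same parent, distance 2)
print(adjacent(parent, 'a', 'b'))  # True  (parents adjacent, distance 3)
print(adjacent(parent, 'a', 'd'))  # False (distance 4)
```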

Let p be the parent node of u. Then v belongs to neighborhood(u) if and only if v is a (i) leaf child node of the parent of p, (ii) node in L, or (iii) leaf child node of a node in N. To return all the leaf children of a node, we scan from its leftmost child and return each child as long as it is a leaf node (recall that leaf children appear before non-leaf children). This can be done in O(1) time per node by using the O(1)-time tree navigation queries in [12]. Next, we scan all the children of p. While scanning a child x, if x is a leaf node, we return x (this returns all the nodes in case (ii)). Otherwise, we return all the leaf children of x (this returns all the nodes in case (iii)). Again, all these nodes can be reported in O(1) time per node by the same argument as above. Thus, we can return neighborhood(u) in time proportional to the output size.
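The scan above can be sketched as follows; in this toy encoding (our own, for illustration) a node is a leaf exactly when it has no children entry, and the children lists are ordered leaves-first as described earlier, so each scan may stop at the first non-leaf child:

```python
def leaf_children(children, x):
    out = []
    for c in children.get(x, []):
        if c in children:      # first non-leaf child: leaves-first order lets us stop
            break
        out.append(c)
    return out

def neighborhood(parent, children, u):
    p = parent[u]
    res = []
    q = parent.get(p)
    if q is not None:
        res += leaf_children(children, q)        # case (i): leaf children of p's parent
    for x in children[p]:
        if x not in children:                    # leaf sibling of u
            if x != u:
                res.append(x)                    # case (ii)
        else:
            res += leaf_children(children, x)    # case (iii)
    return res

# Same toy 3-leaf root as before, children ordered leaves-first.
parent = {'a': 'r', 'x': 'r', 'b': 'x', 'c': 'x', 'y': 'x', 'd': 'y'}
children = {'r': ['a', 'x'], 'x': ['b', 'c', 'y'], 'y': ['d']}
print(sorted(neighborhood(parent, children, 'b')))  # ['a', 'c', 'd']
print(sorted(neighborhood(parent, children, 'd')))  # ['b', 'c']
```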

We count the number of (i) leaf child nodes of the parent of p (where p is the parent node of u), (ii) nodes in L, and (iii) leaf children of the nodes in N separately, and return the sum of these as the answer to the degree query. Now we describe how to compute (iii) in O(1) time (note that (i) and (ii) can also be computed in O(1) time analogously). Let t (resp. m) be the microtree (resp. minitree) which contains p. We first consider the case that t does not contain the boundary node. In this case, we compute (iii) in O(1) time using the precomputed table if p is not a boundary node of t. If p is a boundary node of t (resp. m), we compute (iii) in O(1) time by referring to the answer stored at the root of t (resp. m). Note that we can store all of these answers using at most o(n) bits in total. Next, we consider the case that t contains the boundary node. In this case, we additionally store the number of leaf children of the root node of each microtree using at most o(n) bits in total. Then we can compute (iii) in O(1) time by computing (iii) without the number of leaf children of the boundary node of t, and adding the number of leaf children of the microtree whose root node is the child of the boundary node of t.
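Ignoring the microtree bookkeeping (which only serves to make each part O(1)-time), the three-part degree count can be sketched as follows, on the same toy encoding as above (our own, for illustration):

```python
def count_leaf_children(children, x):
    n = 0
    for c in children.get(x, []):
        if c in children:          # leaves come first, so stop at a non-leaf
            break
        n += 1
    return n

def degree(parent, children, u):
    p = parent[u]
    q = parent.get(p)
    part_i = count_leaf_children(children, q) if q is not None else 0
    part_ii = count_leaf_children(children, p) - 1      # leaf siblings, minus u itself
    part_iii = sum(count_leaf_children(children, x)
                   for x in children[p] if x in children)
    return part_i + part_ii + part_iii

# Toy rooted 3-leaf root: r has leaf child a and internal child x;
# x has leaf children b, c and internal child y; y has leaf child d.
parent = {'a': 'r', 'x': 'r', 'b': 'x', 'c': 'x', 'y': 'x', 'd': 'y'}
children = {'r': ['a', 'x'], 'x': ['b', 'c', 'y'], 'y': ['d']}
print(degree(parent, children, 'b'))  # 3 (neighbors a, c, d)
print(degree(parent, children, 'd'))  # 2 (neighbors b, c)
```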
6 Conclusions
We present in this work succinct representations of series-parallel, block-cactus and 3-leaf power graphs along with optimal support for basic navigational queries. We conclude with some possible future directions for further exploration. Following the works of [1, 10], is it possible to support shortest path queries efficiently on these graphs while using the same space as in this paper? Is it possible to design space-efficient algorithms for various combinatorial problems on these graphs? Can we generalize the data structure of Section 5 to construct a succinct representation of k-leaf power graphs for k > 3? Finally, can we prove a lower bound on the trade-off between the query time and the extra space, i.e., the redundancy, of our data structures?
References
 [1] Acan, H., Chakraborty, S., Jo, S., Satti, S.R.: Succinct data structures for families of interval graphs. In: WADS. pp. 1–13 (2019)
 [2] Blelloch, G.E., Farzan, A.: Succinct representations of separable graphs. In: CPM. pp. 138–150 (2010)
 [3] Bodirsky, M., Giménez, O., Kang, M., Noy, M.: Enumeration and limit laws for series-parallel graphs. Eur. J. Comb. 28(8), 2091–2105 (2007)
 [4] Brandstädt, A., Le, V.B.: Structure and linear time recognition of 3-leaf powers. Inf. Process. Lett. 98(4), 133–138 (2006)
 [5] Chakraborty, S., Jo, S., Sadakane, K., Satti, S.R.: Succinct data structures for small clique-width graphs. In: 31st Data Compression Conference, DCC 2021, Snowbird, UT, USA, March 23–26, 2021. pp. 133–142. IEEE (2021)
 [6] Chauve, C., Fusy, É., Lumbroso, J.O.: An exact enumeration of distance-hereditary graphs. In: ANALCO 2017, Barcelona, Spain, Hotel Porta Fira, January 16–17, 2017. pp. 31–45 (2017)
 [7] Cormen, T.H., Leiserson, C.E., Rivest, R.L., Stein, C.: Introduction to Algorithms (3. ed.). MIT Press (2009)
 [8] Diestel, R.: Graph Theory, 4th Edition, Graduate texts in mathematics, vol. 173. Springer (2012)
 [9] El-Zein, H., Lewenstein, M., Munro, J.I., Raman, V., Chan, T.M.: On the succinct representation of equivalence classes. Algorithmica 78(3), 1020–1040 (2017)
 [10] Farzan, A., Kamali, S.: Compact navigation and distance oracles for graphs with small treewidth. Algorithmica 69(1), 92–116 (2014)
 [11] Farzan, A., Munro, J.I.: Succinct encoding of arbitrary graphs. Theor. Comput. Sci. 513, 38–52 (2013)
 [12] Farzan, A., Munro, J.I.: A uniform paradigm to succinctly encode various families of trees. Algorithmica 68(1), 16–40 (Jan 2014)
 [13] Gurevich, Y., Stockmeyer, L.J., Vishkin, U.: Solving NP-hard problems on graphs that are almost trees and an application to facility location problems. J. ACM 31(3), 459–473 (1984)
 [14] Husimi, K.: Note on Mayers' theory of cluster integrals. The Journal of Chemical Physics 18(5), 682–684 (1950)
 [15] Jacobson, G.J.: Succinct static data structures. PhD thesis, Carnegie Mellon University (1988)
 [16] Munro, J.I., Raman, V.: Succinct representation of balanced parentheses and static trees. SIAM J. Comput. 31(3), 762–776 (2001)
 [17] Munro, J.I., Wu, K.: Succinct data structures for chordal graphs. In: ISAAC. pp. 67:1–67:12 (2018)
 [18] Munro, J.I., Nicholson, P.K.: Compressed representations of graphs. In: Encyclopedia of Algorithms, pp. 382–386 (2016)
 [19] Navarro, G.: Compact Data Structures  A Practical Approach. Cambridge University Press (2016)
 [20] Navarro, G., Sadakane, K.: Fully functional static and dynamic succinct trees. ACM Trans. Algorithms 10(3), 16:1–16:39 (2014)
 [21] Raman, R., Rao, S.S.: Succinct representations of ordinal trees. In: Space-Efficient Data Structures, Streams, and Algorithms. Lecture Notes in Computer Science, vol. 8066, pp. 319–332. Springer (2013)
 [22] Raman, R., Raman, V., Satti, S.R.: Succinct indexable dictionaries with applications to encoding k-ary trees, prefix sums and multisets. ACM Trans. Algorithms 3(4), 43 (2007)
 [23] Riordan, J., Shannon, C.E.: The number of two-terminal series-parallel networks. Journal of Mathematics and Physics 21(1–4), 83–93 (1942). https://doi.org/10.1002/sapm194221183
 [24] Sadakane, K.: Compressed suffix trees with full functionality. Theory of Computing Systems 41(4), 589–607 (2007)
 [25] Sumigawa, K., Sadakane, K.: Storing partitions of integers in sublinear space. Rev. Socionetwork Strateg. 13(2), 237–252 (2019)
 [26] Uno, T., Uehara, R., Nakano, S.: Bounding the number of reduced trees, cographs, and series-parallel graphs by compression. Discrete Mathematics, Algorithms and Applications 05(02), 1360001 (2013)
 [27] Valdes, J., Tarjan, R.E., Lawler, E.L.: The recognition of series parallel digraphs. SIAM J. Comput. 11(2), 298–313 (1982)
 [28] Voblyi, V.A., Meleshko, A.K.: Enumeration of labeled block-cactus graphs. Journal of Applied and Industrial Mathematics 8(3), 422–427 (2014)