
Succinct Data Structures for Series-Parallel, Block-Cactus and 3-Leaf Power Graphs

We design succinct encodings of series-parallel, block-cactus and 3-leaf power graphs while supporting the basic navigational queries such as degree, adjacency and neighborhood optimally in the RAM model with logarithmic word size. One salient feature of our representation is that it can achieve optimal space even though the exact space lower bound for these graph classes is not known. For these graph classes, we provide succinct data structures with optimal query support for the first time in the literature. For series-parallel multigraphs, our work also extends the works of Uno et al. (Disc. Math. Alg. and Appl., 2013) and Blelloch and Farzan (CPM, 2010) to produce optimal bounds.





1 Introduction

In modern algorithm development, we observe two drastically opposing trends. Even though memory capacities are increasing and their prices are dropping day by day, the input data sets being stored are growing at a much faster pace, driven by the ongoing digital transformation of business and society in general. In many application areas, e.g., social networks, web mining, and video streaming systems, there already exists a tremendous amount of data, and it is only increasing. In these domains, the natural representation of the underlying data sets is most often a graph, and with each passing day these graphs are becoming massive. To process such huge graphs and extract useful information from them, we need to answer, among others, the following two concrete questions: (1) can we store these massive graphs in compressed form using the minimum amount of space? and (2) can we build space-efficient indexes for these huge graphs so that we can extract useful information about them by executing efficient query algorithms on the index itself? The field of succinct data structures aims to answer exactly these questions satisfactorily, and it has been one of the key contributions to the algorithms community in the past two decades, both theoretically and practically. More specifically, given a class of combinatorial objects, say C, from a universe U, the main objective is to store any arbitrary member G of C using the information-theoretic lower bound of log |C| bits (in addition to o(log |C|) bits), along with efficient support of a relevant set of operations on G. (Throughout the paper, we use logarithms to the base 2.)
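
As a toy illustration of this space benchmark, the information-theoretic lower bound for a class of a known number of distinct objects can be computed directly (a Python sketch; the function name is ours, not from the paper):

```python
import math

def itlb_bits(num_objects):
    """Information-theoretic lower bound: distinguishing num_objects
    distinct objects requires at least ceil(log2(num_objects)) bits."""
    return math.ceil(math.log2(num_objects))
```

For instance, a class containing 2^k distinct objects needs exactly k bits per object in the worst case.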

There already exists a large body of work on representing various combinatorial structures succinctly along with fast query support: succinct data structures for rooted ordered trees [15, 16, 20, 21], chordal graphs [17], graphs with treewidth at most k [10], separable graphs [2], and interval graphs [1] are some examples. Following a similar trend, in this work we provide succinct data structures for series-parallel multigraphs [23], block-cactus graphs [13] and 3-leaf power graphs [4]. We defer the definitions of these graph classes to the individual sections where their succinct data structures are proposed. These graphs are important because not only are they theoretically appealing to study, but they also show up in important practical application domains; e.g., series-parallel graphs are used to model electrical networks, and cacti are useful in computational biology. To the best of our knowledge, our work provides succinct data structures with optimal query support for these classes for the first time in the literature (although there exists a succinct data structure for simple series-parallel graphs [2], such a structure is not known for series-parallel multigraphs).

1.1 Previous work

Series-Parallel (SP) graphs. The information-theoretic lower bound (ITLB) for encoding a simple SP graph with n vertices is c₁n + o(n) bits for a constant c₁ [3], whereas the ITLB for encoding an SP multigraph with m edges is c₂m + o(m) bits for a constant c₂ [26]. Since an SP graph is separable, one can obtain a succinct representation of any SP graph by using the result of Blelloch and Farzan [2] while supporting some navigation queries efficiently. However, this only works for simple SP graphs [18], since one cannot store the look-up table for all possible micro-graphs (containing SP multigraphs with any fixed number of vertices) within the limited space, as the number of edges is not bounded. (Note that one can encode SP multigraphs by encoding the underlying simple graph using Blelloch and Farzan's encoding, along with a bit string representing the multiplicities of the edges; however, the space usage is not succinct in this case.) Also, since simple SP graphs are exactly the class of graphs with treewidth at most 2, one can use the data structure of Farzan and Kamali [10] for representing SP graphs, but again this only works for simple SP graphs. For the multigraph case with m edges, Uno et al. [26] present an encoding for SP multigraphs whose size matches the ITLB up to lower-order terms, but without supporting any navigational queries efficiently.

Block-Cactus and 3-Leaf Power graphs. The ITLBs for encoding a block-cactus graph and a 3-leaf power graph with n vertices are c₃n + o(n) [28] and c₄n + o(n) [6] bits respectively, for constants c₃ and c₄. Note that the class of block-cactus graphs contains both the cactus and the block graph classes. As any cactus graph is planar, and hence separable, one can again use the result of Blelloch and Farzan [2] to encode it optimally while supporting the navigation queries efficiently. However, this approach does not work for block or block-cactus graphs since they are not separable.

1.2 Our Main Contribution

We design succinct data structures for (i) series-parallel multigraphs in Section 3, (ii) block-cactus graphs in Section 4, and finally (iii) 3-leaf power graphs in Section 5, supporting the following queries. Given a graph G and two vertices u, v, (i) degree(v) returns the number of edges incident to v in G, (ii) adjacent(u, v) returns true if u and v are adjacent in G, and false otherwise, and finally (iii) neighborhood(v) returns the set of all (distinct) vertices that are adjacent to v in G. The following theorem summarizes our main results on these graphs.

Theorem 1.1

There exists a succinct data structure that supports adjacent and degree queries in O(1) time, and the neighborhood(v) query in O(degree(v)) time, for (1) series-parallel multigraphs, (2) block-cactus graphs, and (3) 3-leaf power graphs.

The reason for considering these three (seemingly unrelated) graph classes is that any graph in each of these three classes has a corresponding tree-based representation - and hence these graphs can be encoded succinctly by encoding the corresponding tree. In what follows, we briefly discuss a high-level idea of how to succinctly represent the graphs of our interest. Roughly speaking, given a graph G (where G could be series-parallel, block-cactus or 3-leaf power), we first convert it to a labeled tree T which can be used to decode G. We then represent G by encoding T using the tree covering (TC) algorithm of Farzan and Munro [11], which supports various tree navigation queries in O(1) time. However, we cannot directly obtain a succinct representation of G with efficient navigation queries from the tree covering of T. More specifically, the tree covering algorithm first decomposes the input tree and encodes each decomposed tree separately. Thus, a lot of the information of G can be lost in each of the decomposed trees. For example, the decomposed trees may not even correspond to graphs of the class that we originally started with in the first place (and this is in stark contrast to the situation that arises when designing succinct data structures for trees). Thus, we need to apply non-trivial local changes (catering to each graph class separately) to these decomposed trees and argue that (i) these changes convert them back to the original graph class without consuming too much space, and (ii) navigation queries on G can be supported efficiently as tree queries on T. As a consequence, one salient feature of our approach is that, for the graphs we consider in this paper, it is not necessary to know the exact information-theoretic lower bound in order to design succinct data structures; it suffices to know the asymptotic growth of the number of non-isomorphic graphs in the class with a given number of vertices.
Note that the overall idea of 'encoding the graph as a tree-based representation and using the TC algorithm to encode the tree to support the navigation operations on the graph' was subsequently used in [5] to obtain succinct representations for graphs of small clique-width. The other main contribution of this paper is to construct suitable tree-based encodings and to show how to adapt the TC representation to support the operations.

2 Preliminaries and Main Techniques

Throughout our paper, we assume familiarity with succinct/compact data structures (as given in [19]), basic graph-theoretic terminology (as given in [8]), and graph algorithms (as given in [7]). All the graphs in our paper are assumed to be connected and unlabeled, i.e., we can number the vertices arbitrarily. Moreover, we assume the usual model of computation, namely a Θ(log N)-bit word RAM model, where N is the size of the input (N is the number of vertices in the case of graphs, and the number of edges in the case of multigraphs). We start by sketching a modification to the tree covering algorithm of Farzan and Munro [12].

2.1 Tree covering

The high-level idea of the tree covering algorithm is to decompose the tree into subtrees called mini-trees (in the rest of the paper, we use subtree to denote any connected subgraph of a given tree), and to further decompose the mini-trees into yet smaller subtrees called micro-trees. The micro-trees are small enough to be stored in a compact table. The root of a mini-tree can be shared by several other mini-trees. To represent the tree, we only have to represent the connections and links between the subtrees. We summarize the main result of Farzan and Munro [12] in the following theorem:

Theorem 2.1 ([12])

For a rooted ordered tree with n nodes and a positive integer ℓ, we can obtain a tree covering satisfying: (1) each subtree contains at most 2ℓ nodes, (2) the number of subtrees is O(n/ℓ), and (3) each subtree has at most one outgoing edge, apart from those from the root of the subtree.
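
A greatly simplified sketch of such a decomposition, illustrating only the size guarantee of property (1) (the actual algorithm of Farzan and Munro is more careful and also enforces properties (2) and (3); all names are ours):

```python
def tree_cover(children, root, ell):
    """Greedy decomposition of a rooted tree into connected subtrees of
    at most ell nodes each.  children maps a node to its ordered list
    of children; the result is a list of node lists."""
    components = []

    def dfs(v):
        comp = [v]                      # open component rooted at v
        for c in children.get(v, []):
            sub = dfs(c)
            if len(comp) + len(sub) > ell:
                components.append(sub)  # close the child's component
            else:
                comp.extend(sub)        # absorb the child's component
        return comp

    components.append(dfs(root))
    return components
```

On a path of 20 nodes with ell = 4, this produces five connected pieces of four nodes each.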

For each subtree after the decomposition of Theorem 2.1, the unique node that has an outgoing edge is called the boundary node of the subtree, and the edge is called the boundary edge of the subtree. The subtree may have multiple outgoing edges from its root node (in this case, we call it a shared root node), and those edges are called root boundary edges.

To obtain a succinct representation, we first apply Theorem 2.1 with ℓ = log² n, to obtain O(n / log² n) mini-trees (here and in the rest of the paper, we ignore all floors and ceilings which do not affect the main result). The tree obtained by contracting each mini-tree into a vertex is referred to as the tree over mini-trees. If more than one mini-tree shares a common root, we create a dummy node in the tree and make the nodes corresponding to these mini-trees children of the dummy node. We also set the parent of the dummy node to be the node corresponding to the parent mini-tree. (See Figure 1 for an example.) This tree has O(n / log² n) vertices and therefore can be represented in O(n / log n) bits using a pointer-based representation. Then, for each mini-tree, we again apply Theorem 2.1 with parameter Θ(log n) to obtain O(n / log n) micro-trees in total. The tree obtained from each mini-tree by contracting each micro-tree into a node, and adding dummy nodes for micro-trees sharing a common root (as in the case of the tree over mini-trees), is called the mini-tree over micro-trees. Each mini-tree over micro-trees has O(log n) vertices, and can be represented by O(log log n)-bit pointers. For each non-root boundary edge of a micro-tree t, we encode from which vertex of t it comes out, and its rank among all children of that vertex. One can encode the position where the boundary edge is inserted in O(log log n) bits. Note that in our modified tree decomposition, each node in the tree is in exactly one micro-tree.

For each micro-tree, we define its representative as its root node if it is not shared with other micro-trees, or as the next node of the root node in preorder if it is shared. Then we mark the bits of the balanced parentheses representation [16] of the entire tree corresponding to the representatives. If we extract the marked bits, they form a balanced parentheses (BP) sequence, and it represents the mini-tree over micro-trees. The positions of the marked bits can be encoded in o(n) bits because there are only O(n / log n) marked bits in the BP representation of 2n bits. The BP representation is partitioned into many variable-length blocks, each of length O(log n). We can decode each block in constant time.
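
The BP encoding referred to here writes '(' when a node is entered in preorder and ')' when it is left; a minimal sketch (node representation is ours):

```python
def to_bp(node):
    """Balanced-parentheses (BP) encoding of an ordered tree given as
    nested dicts: '(' on entering a node in preorder, ')' on leaving."""
    return '(' + ''.join(to_bp(c) for c in node.get('children', [])) + ')'
```

A tree with n nodes thus occupies exactly 2n parentheses.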

To support basic tree navigational operations such as parent, i-th child, child rank, degree, LCA (lowest common ancestor), level ancestor, depth, subtree size, leaf rank, etc. in constant time, we use the data structure of [20]. Note that we slightly modify the data structure because each block is now of variable length. We need to store those lengths, but this can be done using the positions of the marked bits.

The total space for all mini-trees over micro-trees is o(n) bits. Finally, the micro-trees are stored as two-level pointers (storing the size, and an offset within all possible trees of that size) into a precomputed table that contains the representations of all possible micro-trees. The space for encoding all the micro-trees using this representation can be shown to be 2n + o(n) bits.

2.2 Graph Representation Using Tree covering

This section describes the high-level idea for obtaining succinct encodings of the graph classes that we consider. Let C be one of the graph classes among series-parallel multigraphs, block-cactus graphs, and 3-leaf power graphs. Then the following properties hold.

  • The ITLB for representing any graph G ∈ C is cn + o(n) bits for some constant c [26, 28, 6], where n is the number of vertices (block-cactus and 3-leaf power graphs) or edges (series-parallel multigraphs) in G.

  • For any connected graph G ∈ C, there exists a labeled tree T of O(n) nodes such that G can be uniquely decoded from T.

By the above properties, one can represent any graph G ∈ C by encoding the tree covering of T (with the mini-tree and micro-tree parameters chosen as in Section 2.1). Unfortunately, the tree covering of T does not directly give a succinct encoding of G, since the number of all non-isomorphic graphs in C can be much smaller than the number of all non-isomorphic labeled trees of the same size (for example, multiple labeled trees can correspond to the same graph). To solve this problem, we maintain a precomputed table of all non-isomorphic graphs in C up to the micro-tree size, along with their corresponding trees in canonical representation. By representing each micro-tree as an index of the corresponding graph in the precomputed table, we can store all the micro-trees of T in succinct space. If a micro-tree t does not have a corresponding graph in C (i.e., there is no corresponding graph in the precomputed table), we first extend t to a slightly larger tree t′ by adding some dummy nodes, and encode t as the index of t′, along with the information about the dummy nodes. Since we only add a small number of dummy nodes for each micro-tree, all the additional information can be stored within succinct space. In the following sections, we describe how to add such dummy nodes for series-parallel, block-cactus, and 3-leaf power graphs.

Finally, for the case when G is not connected, we extend the above idea as follows. We first encode all the connected components of G separately, and then encode the sizes of the connected components using the encoding of [9, 25], using at most o(n) additional bits. This implies that we can still encode G in succinct space even if G is not connected. In the rest of this paper, we assume that all the graphs are connected.

3 Series-Parallel Graphs

Series-parallel graphs [23] (SP graphs in short) are undirected multigraphs which are defined recursively as follows.

  • A single edge is an SP graph; we call its two endpoints its terminals.

  • Given two SP graphs G₁ with terminals (s₁, t₁) and G₂ with terminals (s₂, t₂),

    • their series composition, the graph made by identifying t₁ = s₂, is an SP graph with terminals (s₁, t₂); and

    • their parallel composition, the graph made by identifying s₁ = s₂ and t₁ = t₂, is an SP graph with terminals s₁ and t₁.

From this construction, we can obtain a binary tree representing an SP graph G as follows. Each leaf of the binary tree corresponds to an edge of G. Each internal node v has a label S (or P), which represents the SP graph made by the series (or parallel) composition of the two SP graphs represented by the two child subtrees of v. We convert this binary tree into a multi-ary SP tree T by merging vertically consecutive nodes with identical labels into a single node. More precisely, while scanning all the nodes bottom-up, we contract every edge (v, parent(v)) if v and parent(v) have the same label. Then all the internal nodes at the same depth have the same label, and the labels alternate between the levels. See Figure 1 for an example. Note that any two non-isomorphic SP graphs have different SP trees.
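
The merging step can be sketched as follows, with our own tuple-based node representation (a node is ('leaf',) or (label, children)):

```python
def normalize(node):
    """Merge vertically consecutive nodes with identical labels, so that
    S and P labels alternate between levels of the SP tree."""
    if node[0] == 'leaf':
        return node
    label, children = node
    flat = []
    for c in map(normalize, children):
        if c[0] == label:
            flat.extend(c[1])  # contract the edge: lift the grandchildren
        else:
            flat.append(c)
    return (label, flat)
```

An S node whose child is also an S node is flattened into a single multi-ary S node, while alternating S/P nestings are left untouched.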

Succinct representation. Let n and m be the numbers of vertices and edges of G, respectively. Then T has m leaves and O(m) nodes. First, we construct the SP tree T from the SP graph G. If the root of T is a P node, we add a dummy parent r labeled S with three children, and make the original root the middle child of r; the first and the last children of r correspond to dummy edges. If the root of T is an S node, we add two leaves as the leftmost and rightmost children of the root, corresponding to dummy edges. We refer to this modified tree as T as well. Let N be the number of nodes in T. Then we apply the tree covering algorithm with mini-tree parameter log² N and micro-tree parameter Θ(log N).

Figure 1: Example of an SP graph (left), its SP tree representation with tree covering (middle), and the tree over mini-trees (right). The roots of mini-trees G and K are dummy nodes. Numbers below S nodes are inorder numbers. Numbers beside internal nodes of the SP tree are the left and right labels. Leaves of the tree also have left and right labels, which are vertex labels of the SP graph.

It is easy to see that each micro-tree without its dummy leaf nodes represents an SP graph. For each graph corresponding to a micro-tree, we use a linear-time algorithm [27] to obtain a canonical representation of the micro-tree. Note that if the graphs corresponding to two micro-trees are isomorphic, then those two micro-trees have the same canonical representation. We create a table storing all non-isomorphic SP graphs up to the micro-tree size, and encode each micro-tree as a pointer into this table. To reconstruct the original graph from the graphs corresponding to the micro-trees, we need additional information describing how to combine these graphs. More specifically, assume an SP graph consists of a series composition of graphs G₁ and G₂, whose terminals are (s₁, t₁) and (s₂, t₂) respectively. Then one can construct two different graphs by (i) identifying t₁ with s₂, or (ii) identifying t₂ with s₁. Thus, for each micro-tree, we add one extra bit to store this information.

For each S node of T, we assign inorder numbers [24] (we only assign inorder numbers to S nodes). Inorder numbers in a rooted tree are given during a preorder traversal from the root: if a node v is visited from one of its children and another child of v is visited next, we assign one inorder number to v. Thus, if a node has k children, we assign k − 1 inorder numbers to it (unary nodes are not assigned any inorder number). If a node has more than one inorder number, we use the smallest value as its representative inorder number. Now we consider two operations: (i) inorder_rank(v, i): return the i-th inorder number of the S node v (given as a preorder number), and (ii) inorder_select(j): given an inorder number j of an S node, return the pair (v, i), where v is the preorder number of the node with inorder number j, and i is the number such that j is the i-th inorder number of v. The following describes how to support both queries in O(1) time using o(N) bits of additional space.
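
The inorder-number assignment can be sketched as follows (our own dictionary-based node representation, not code from the paper):

```python
def assign_inorder(node, ctr=None):
    """Assign inorder numbers to S nodes: during a preorder traversal,
    a number is given each time the traversal returns from one child of
    an S node and moves on to the next child, so a node with k >= 2
    children receives k - 1 numbers (P nodes and leaves receive none)."""
    if ctr is None:
        ctr = [0]                        # shared counter across the tree
    node['inorder'] = []
    kids = node.get('children', [])
    for i, c in enumerate(kids):
        assign_inorder(c, ctr)
        if node['label'] == 'S' and i < len(kids) - 1:
            ctr[0] += 1
            node['inorder'].append(ctr[0])
    return node
```

For an S root with a leaf, a P subtree containing another S node, and a leaf, the root receives two numbers and the inner S node receives the one in between.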

One can observe that, for each micro-tree (or mini-tree) t of T, all the inorder numbers corresponding to the S nodes in t form two intervals. Note that all the intervals corresponding to the mini-trees or micro-trees partition the interval [1, K], where K is the largest inorder number in T. We construct a dictionary D that stores the right end points of all the intervals corresponding to the mini-trees, where with each element of the dictionary we store a pointer to the mini-tree corresponding to that interval as satellite information. The number of elements in this dictionary is small compared to the universe size, which is at most K, and hence D can be represented as an FID [22] using o(N) bits to support membership, rank and select queries in O(1) time. The satellite information can also be stored in o(N) bits, to support O(1)-time access. For each mini-tree M, we also construct a dictionary D_M that stores the right end points of all the intervals corresponding to its micro-trees, where with each element we associate a pointer to the corresponding micro-tree as satellite information. The space usage of the dictionaries corresponding to all the mini-trees adds up to o(N) bits in total.
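
For intuition, the FID interface used here can be illustrated with a naive stand-in (the real structure of [22] answers these queries in O(1) time within compressed space; this sketch only shows the interface):

```python
class NaiveFID:
    """Naive stand-in for a fully indexable dictionary over a bit
    vector, supporting rank and select by linear scans."""

    def __init__(self, bits):
        self.bits = list(bits)

    def rank(self, i):
        """Number of 1s among positions 0..i (inclusive)."""
        return sum(self.bits[:i + 1])

    def select(self, j):
        """Position of the j-th 1 (1-based), or -1 if there is none."""
        seen = 0
        for pos, b in enumerate(self.bits):
            seen += b
            if b and seen == j:
                return pos
        return -1
```

Storing the right end points of the intervals as 1 bits lets a rank query locate the mini-tree (or micro-tree) whose interval contains a given inorder number.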

In addition, for each mini-tree M of T, we store its two corresponding intervals explicitly, using O(log N) bits in total; we call their left end points the offsets corresponding to M. Also, for each micro-tree contained in M, we store its two intervals relative to the mini-tree intervals they belong to (i.e., as offsets with respect to those intervals). Since all the end points of these relative intervals are bounded by the mini-tree size, we can store all such intervals using o(N) bits in total. The total space usage is thus o(N) bits.

To compute inorder_rank(v, i), we first find the micro-tree t which contains the node v. Then we decode the interval corresponding to t using the interval stored at the mini-tree containing t, as well as the offsets corresponding to t, and return the i-th smallest value within the interval. To compute inorder_select(j), we first find the micro-tree that contains the answer by rank queries on the dictionaries for the mini-trees and the micro-trees. Finally, we compute the answer within the micro-tree in O(1) time using the intervals stored with it.

Next, we assign labels to the vertices of the graph. Any vertex in the graph corresponds to a common terminal of two SP graphs which are combined by a series composition. For each vertex v, let s(v) be an S node in T which represents such a series composition. Then we assign one inorder number of s(v) as the label of v (note that any two subgraphs which have a common terminal correspond to the subtrees at consecutive child nodes of s(v)). For example, vertex 5 in the graph of Figure 1 corresponds to the common terminal of the following two subgraphs: (i) the subgraph consisting of the edge from 4 to 5, and (ii) the subgraph corresponding to the subtree rooted at the mini-tree consisting of a single P node, which contains four edges. Note that the inorder number 5 is assigned to the S node corresponding to this mini-tree when we traverse from the subtree corresponding to (i) to the subtree corresponding to (ii) (during the preorder traversal of T).

Also, we define a label for each node of T, which is an ordered pair (ℓ(v), r(v)) of the two terminals of the subgraph corresponding to the subtree rooted at that node. We call ℓ(v) and r(v) the left and the right label of the node v. The label of a P node v can be computed in O(1) time as follows. (1) If v is the leftmost child of its parent p, then r(v) is equal to the smallest inorder number of p, given when p is visited from v. To obtain ℓ(v), we traverse the SP tree up from v until we reach an S node u such that v does not belong to the leftmost subtree of u. We can compute the node u in O(1) time as follows. If u is in the same micro-tree as v, then we can find u using a table lookup. Otherwise, if u is in the same mini-tree as v, then we store u with the root of the micro-tree containing v. Finally, if u is not in the same mini-tree, then we explicitly store u with the root of the mini-tree containing v. (2) If v is the rightmost child of its parent p, then ℓ(v) is equal to the inorder number of p given when p is visited for the last time before visiting v. To obtain r(v), we traverse the SP tree up from v until we reach an S node u such that v does not belong to the rightmost subtree of u. We use a data structure similar to that of (1) to compute the answer. (3) In all other cases, ℓ(v) and r(v) are the inorder numbers of the parent p of v assigned immediately before visiting v from p, and immediately after returning to p from v (before visiting the next sibling of v), respectively.

The label of an S node is the same as that of its parent P node (we do not assign a label to the root S node). The label of a leaf can be determined by the same algorithm as for P or S nodes, depending on whether its parent is an S or a P node. Note that, from the above definition, the label of a P node is the same as the label of any of its child S nodes. For an S node v, suppose v₁, …, v_k are its children, and (ℓ₁, r₁), …, (ℓ_k, r_k) are their left and right labels. Then it holds that rᵢ = ℓᵢ₊₁ for 1 ≤ i < k, and the label of v is (ℓ₁, r_k).

We also define leftnode(v) and rightnode(v) for each vertex v of the graph, as follows. Suppose that during the preorder traversal of the tree we visit nodes x, p, y in this order, and we assign the inorder number v to p. Then we define leftnode(v) = x and rightnode(v) = y. If inorder_select(v) returns the pair (p, i), then leftnode(v) and rightnode(v) are the i-th and the (i+1)-st children of the node p, respectively. Thus, leftnode(v) and rightnode(v) can be computed in O(1) time. This completes the description of the encoding of SP graphs.

Supporting navigation queries.

  1. adjacent(u, v): First we find the nodes x₁ = leftnode(u), x₂ = rightnode(u), y₁ = leftnode(v) and y₂ = rightnode(v). (1) If x₂ = y₁, the subgraph corresponding to this node has terminals with labels u and v. Therefore u and v are adjacent if x₂ is a leaf (this corresponds to the edge (u, v)) or x₂ has a leaf child (x₂ is a P node and it has a leaf child that corresponds to the edge (u, v)). (2) If y₂ = x₁, the case is analogous to the previous one. (3) If x₂ ≠ y₁ and y₂ ≠ x₁, find the labels of y₁ and y₂. Let (a, v) be the label of y₁ and (v, b) be the label of y₂. One of a, b must be u if u and v are adjacent. Assume w.l.o.g. that a = u. Then u and v are adjacent iff y₁ is a leaf or y₁ has a leaf child. (4) The case b = u is analogous to the previous one. In all four cases, the query can be supported in O(1) time.

  2. neighborhood(v): First we find y₁ = leftnode(v) and y₂ = rightnode(v). Then we explore all the neighbors of v by executing the two procedure calls report(y₁, v) and report(y₂, v), where the procedure report is defined as follows.

    report(x, v): if x is a leaf with label (v, w) or (w, v), then output w.
    If x is an S node, then call report(y, v), where y is the leftmost (rightmost) child of x if v = ℓ(x) (v = r(x)). If x is a P node, then call report(y, v) for all the children y of x.

    The running time of this procedure is proportional to the size of the output. Note that if we do not want to report the same neighbour multiple times, we can define a canonical ordering between the children of P nodes such that all the leaf children appear after the non-leaf children (S nodes), and only report the first leaf child of the node.

  3. degree(v): Let t and M be the micro-tree and the mini-tree containing the node s(v), respectively. Then the degree of v is the sum of (i) the number of adjacent vertices in t, (ii) the number of adjacent vertices not in t but in M, and (iii) the number of adjacent vertices not in M, denoted by d₁, d₂, and d₃ respectively. Here an adjacent vertex of v refers to a vertex w such that (v, w) or (w, v) is the label of some leaf node. If v is not one of the labels of the boundary node of t (of M), then d₂ = 0 (respectively, d₃ = 0). Now we consider the three quantities as follows. First, d₁ can be computed in O(1) time using a precomputed table. The value d₂ (d₃) can be stored with the root of the micro-tree (mini-tree) whose parent is the boundary node in t (M). Note that in the above scheme, we only need to store two values, corresponding to the two labels of the root, for each micro-tree/mini-tree root. Thus the space usage for storing these values is o(N) bits.
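
The recursive report procedure of item 2 can be sketched as follows, assuming each node of the SP tree stores its left and right labels (the dictionary-based representation and field names are ours):

```python
def report_neighbors(node, v, out):
    """Report the neighbors of vertex v found in the subgraph of
    'node', assuming v is one of the node's two terminals."""
    l, r = node['left'], node['right']
    if not node.get('children'):        # a leaf is an edge of the graph
        if v in (l, r):
            out.append(r if v == l else l)
        return
    if node['label'] == 'S':
        # in a series chain, only the first (last) child touches the
        # left (right) terminal of the chain
        child = node['children'][0] if v == l else node['children'][-1]
        report_neighbors(child, v, out)
    else:                               # P node: v is a terminal of all children
        for c in node['children']:
            report_neighbors(c, v, out)
```

For a triangle on vertices 1, 2, 3 (edge (1, 3) in parallel with the series chain (1, 2), (2, 3)), reporting from the root with v = 1 yields the neighbors 2 and 3.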

4 Block/Cactus/Block-Cactus Graphs

A block graph (also known as a clique tree or a Husimi tree [14]) is an undirected graph in which every block (i.e., maximal biconnected component) is a clique. A cactus graph (same as almost tree(1) [13]) is a connected graph in which every two simple cycles have at most one vertex in common (equivalently every block is a cycle). A block-cactus graph is a graph in which every block is either a cycle or a complete graph.

Any graph that belongs to one of these three graph classes can be converted into a tree as follows. Replace each block (either a clique or an induced cycle) on k vertices by a star graph: introduce a dummy node that is connected to the k nodes corresponding to the vertices of the block. The remaining edges and vertices of the graph are simply copied into the tree. See Figure 2 for an example. Note that the number of dummy nodes is always less than the number of non-dummy nodes.
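
The block-to-star conversion can be sketched as follows, assuming the blocks are already given as vertex lists (computing them is a standard biconnected-components task; the representation is ours). For cycles, the order of the vertex list is kept, since the children of the dummy node must preserve the cyclic order:

```python
def blocks_to_tree(blocks):
    """Replace every block of size >= 3 (a clique or a cycle, given as
    a vertex list) by a star: a fresh dummy node joined to each vertex
    of the block.  Bridges (blocks of size 2) are copied as edges."""
    edges, dummies = [], 0
    for blk in blocks:
        if len(blk) == 2:
            edges.append((blk[0], blk[1]))  # bridge: keep the edge
        else:
            d = ('dummy', dummies)          # fresh dummy node
            dummies += 1
            edges.extend((d, v) for v in blk)
    return edges
```

A triangle block {1, 2, 3} plus the bridge (3, 4) becomes a tree on five nodes (one dummy) with four edges.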

Figure 2: An example of a block-cactus graph (left) and its tree representation (right). Squares are dummy vertices.

In the following, we describe a succinct encoding for block-cactus graphs; it is easy to obtain succinct encodings for block graphs and cactus graphs using the same ideas.

Succinct representation. Let G be the input block-cactus graph, and let T be the corresponding tree obtained by replacing each block with a star graph, as described above. We apply the tree covering algorithm of Theorem 2.1 on T with mini-trees and micro-trees of size log² n and Θ(log n), respectively.

It is easy to see that each micro/mini-tree obtained by the tree covering algorithm corresponds to a block-cactus graph, although it may not be a subgraph of the original graph G. By storing some additional information with each micro/mini-tree along with its representation, we obtain a bijective map between the vertices in G and the non-dummy nodes in T, which we use in describing the query algorithms.

We first note that when we convert a block into a star graph, the neighbors of the dummy node can be ordered in multiple ways when we consider the resulting graph as an ordered tree. In particular, if the ordered tree is rooted at a dummy node corresponding to a cycle, then its children can be ordered in either the clockwise or the anti-clockwise order of the cycle, and the first child can be any vertex on the cycle. When the root of a micro-tree is a dummy node corresponding to a cycle, the cycle corresponding to the dummy node is cut into two or more pieces, and the piece inside the micro-tree represents a shorter cycle. The micro-tree is then encoded as a canonical representation of the modified subgraph, and it loses the information of how it was connected to the other parts of the graph. To recover this information, it is enough to store, for the micro-tree, one vertex in the shorter cycle that is connected to the outside, and the direction (clockwise or anti-clockwise) of the cycle. The vertex is encoded in O(log log n) bits, and the direction in one bit. We need the same information for the non-root boundary node of the micro-tree. This additional information enables us to reconstruct the cycle in the original graph from the subgraphs corresponding to the micro-trees. Note that if the dummy node corresponds to a clique, then we do not need this information.

Each micro-tree is encoded as a two-level pointer into a precomputed table that stores the set of all possible block-cactus graphs up to the micro-tree size. Note that the number of dummy nodes we need to keep is O(n / log n), since we can delete all the dummy nodes which are not boundary nodes of micro-trees. We also store one bit with each of these dummy nodes, indicating whether it corresponds to a clique or a cycle. Thus each micro-tree is represented optimally, apart from O(log log n) bits of additional information. Hence the overall space usage is succinct. This completes the description of the succinct encoding of block-cactus graphs.

Supporting navigation queries.

  1. If there is an edge in the tree between the nodes corresponding to u and v, then u and v are adjacent in the graph (since we only delete some edges from the original graph, and all the edges added are incident to some dummy node). Otherwise, u and v are adjacent if they are connected to the same dummy node d, and either (a) d corresponds to a clique, or (b) u and v are “adjacent” in the tree – i.e., they are adjacent siblings, or one of them is the parent of d and the other is either the first or last child of d. Since all these conditions can be checked in O(1) time using the tree representation, we can support the adjacency query in O(1) time.

  2. The algorithm for this query follows essentially from the conditions for checking adjacency. More specifically, to report the neighborhood of v, we first output all the non-dummy nodes adjacent to v in the tree. If v is adjacent to any dummy node d, then we also output all the vertices: (a) that are connected to d, if d corresponds to a clique, and (b) that are “adjacent” to v in the tree, if d corresponds to a cycle. This can be done in time proportional to the output size.

  3. From the algorithm for the neighborhood query, we observe that the degree of a node v can be computed by adding two quantities: (1) the number of non-dummy neighbors of v, and (2) the number of nodes that are adjacent to v through a dummy neighbor. It is easy to compute (1) and (2) within a micro-tree in constant time using precomputed tables. In addition, we may need to add the contributions from outside the micro-tree if v is either a boundary node or is adjacent to a boundary node which is dummy. For each such dummy boundary node, we need to add the number of neighbors of v lying outside: at most two if the dummy node corresponds to a cycle, or the number of remaining clique vertices if the dummy node corresponds to a clique. Since there are at most two such boundary nodes which can be adjacent to v, this can be computed in constant time. Also, for the roots of the mini (micro) trees which are non-dummy, we store their degrees (within the mini-tree) explicitly. Thus, we can compute the degree query in O(1) time.
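The three query algorithms above can be sketched in plain Python over the star-transformed tree. The explicit dictionaries (children of each dummy node listed in cycle order, with the dummy's parent closing the cycle) and the function names are our own illustration; the sketch ignores the micro-tree machinery, which only affects the stated time and space bounds.

```python
def cycle_adjacent(u, v, d, children, parent):
    """u, v both attach to cycle-dummy d: adjacent iff consecutive on the cycle."""
    ch = children[d]
    if any({a, b} == {u, v} for a, b in zip(ch, ch[1:])):      # adjacent siblings
        return True
    p = parent.get(d)                                          # parent closes the cycle
    return p is not None and {u, v} in ({p, ch[0]}, {p, ch[-1]})

def adjacent(u, v, tree_edges, children, parent, dummy_kind):
    if (u, v) in tree_edges or (v, u) in tree_edges:           # edge kept from the graph
        return True
    for d, kind in dummy_kind.items():                         # shared dummy neighbour
        block = set(children[d]) | ({parent[d]} if d in parent else set())
        if u in block and v in block:
            if kind == 'clique' or cycle_adjacent(u, v, d, children, parent):
                return True
    return False

def neighborhood(v, tree_edges, children, parent, dummy_kind):
    out = {b if v == a else a for a, b in tree_edges if v in (a, b)}
    for d, kind in dummy_kind.items():
        cyc = ([parent[d]] if d in parent else []) + children[d]
        if v not in cyc:
            continue
        if kind == 'clique':
            out |= set(cyc) - {v}                              # every other block vertex
        else:
            i = cyc.index(v)                                   # the two cyclic neighbours
            out |= {cyc[i - 1], cyc[(i + 1) % len(cyc)]}
    return out

def degree(v, tree_edges, children, parent, dummy_kind):
    deg = sum(1 for e in tree_edges if v in e)                 # non-dummy neighbours
    for d, kind in dummy_kind.items():
        block = set(children[d]) | ({parent[d]} if d in parent else set())
        if v in block:                                         # cycle adds 2, clique k-1
            deg += 2 if kind == 'cycle' else len(block) - 1
    return deg
```

Note how a cycle dummy contributes exactly two neighbours to each incident vertex, while a clique dummy on a block of size k contributes k − 1, matching the degree computation above.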

5 3-Leaf Power Graphs

A graph G with n vertices is a k-leaf power if there exists a tree T with n leaves where each leaf node corresponds to a vertex of G, and any two vertices in G are adjacent if and only if the distance between their corresponding leaves in T is at most k. The tree T is called a k-leaf root of G (see Figure 3 for an example). In this section, we consider the succinct representation of k-leaf powers for the special case k = 3.
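As a concrete check of the definition, the following sketch (our own illustrative code, with the tree given as an adjacency dictionary) builds the k-leaf power of a tree by comparing leaf distances:

```python
from collections import deque

def leaf_power(tree_adj, leaves, k):
    """Edges of the k-leaf power: leaves u, v are adjacent iff their
    distance in the tree is at most k (computed by BFS from each leaf)."""
    def dists(s):
        d = {s: 0}
        q = deque([s])
        while q:
            x = q.popleft()
            for y in tree_adj[x]:
                if y not in d:
                    d[y] = d[x] + 1
                    q.append(y)
        return d
    return {(u, v) for u in leaves for v in leaves
            if u < v and dists(u)[v] <= k}
```

For example, two adjacent internal nodes each carrying two leaves give the clique K_4 (all leaf distances are at most 3), while leaves hanging off a path of three internal nodes are adjacent only to the leaf of the neighbouring internal node.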

Succinct representation. Our representation of 3-leaf power graphs is based on the following lemma.

Lemma 1 (Brandstädt and Le [4])

For any connected and non-clique 3-leaf power G on n vertices, one can construct a unique 3-leaf root T of G with O(n) nodes.

Note that we can make T a rooted tree as follows. Because T contains an internal node (otherwise T consists of just an edge with two nodes, which corresponds to the clique K_2), we regard it as the root of T. We store the root of every micro-tree of T explicitly, using o(n) bits in total.

Now consider the 3-leaf root T of G. If G is not a clique, one can construct the unique 3-leaf root T by Lemma 1. Otherwise, we fix T to be a canonical 3-leaf root of the clique. For any non-leaf node x, we order the children of x in non-decreasing order of the sizes of the subtrees rooted at them (thus, all the leaf children of x appear before the non-leaf children of x), to support the navigation queries efficiently. We then apply the tree covering algorithm on T, with mini-tree and micro-tree size parameters of order log² n and log n, respectively. We build a precomputed table of o(n) bits which stores all non-isomorphic non-clique 3-leaf powers of at most micro-tree size, along with their 3-leaf roots constructed by the algorithm of Lemma 1.

We use the following properties of 3-leaf roots: (1) if G is connected, every internal node of T has at least one leaf child, and (2) the graphs corresponding to the micro-trees created by applying the tree cover algorithm to T are connected. The proofs are as follows. For (1), assume to the contrary that there is an internal node u with no leaf child. Then any vertex corresponding to a leaf descendant of u is not connected to any vertex corresponding to a leaf node outside the subtree rooted at u, since the distance between them (in T) is at least 4, contradicting the connectivity of G. For (2), consider a micro-tree t with a boundary edge connecting a node u in t to a node which is the root of another micro-tree t′. From (1), u has a leaf child w. If w belongs to t, the graph corresponding to t is connected. If w belongs to t′, the root of t′ must be w, which contradicts the assumption that the edge is a boundary edge. Note that root boundary edges do not affect the connectivity of the graph corresponding to the micro-tree.
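Property (1) can be illustrated concretely: in the sketch below (our own code; the rooted tree is given as a children dictionary), a tree containing an internal node with no leaf child yields leaves at distance at least 4, so the corresponding 3-leaf power is disconnected.

```python
from collections import deque

def internal_nodes_have_leaf_child(children):
    """Property (1): every internal node has at least one leaf child."""
    leaf = lambda x: x not in children
    return all(any(leaf(w) for w in ws) for ws in children.values())

def leaf_power_edges(children, k=3):
    """Edges of the k-leaf power defined by a rooted tree (distances via BFS)."""
    adj = {}
    for x, ws in children.items():
        for w in ws:
            adj.setdefault(x, []).append(w)
            adj.setdefault(w, []).append(x)
    leaves = [x for x in adj if x not in children]
    def dists(s):
        d = {s: 0}
        q = deque([s])
        while q:
            x = q.popleft()
            for y in adj[x]:
                if y not in d:
                    d[y] = d[x] + 1
                    q.append(y)
        return d
    return {(u, w) for u in leaves for w, dv in dists(u).items()
            if w in leaves and u < w and dv <= k}
```

Here an internal chain a–b–c with leaves only at its two ends forces leaf distance 4, producing an edgeless (hence disconnected) 3-leaf power, exactly as the proof of (1) argues.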

Thus each micro-tree t of T falls into one of three cases: (i) the 3-leaf root of a non-clique, (ii) a single non-leaf node, or (iii) the 3-leaf root of a clique. For Case (i), we encode t as an index into the precomputed table. For Case (ii), we add one extra entry to the precomputed table, which is used to encode this case. Finally, for Case (iii), note that there are only O(k) distinct 3-leaf roots corresponding to the clique of size k, each of which can be constructed by connecting the two non-leaf (center) nodes of two stars with i and k − i leaves, for some i (where a star with 0 leaves corresponds to the empty graph). Thus, we add extra entries to the precomputed table which indicate cliques of size at most the micro-tree size bound, together with the additional index i. Overall, the total space of the encoding is succinct.
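Case (iii) can be checked concretely. The sketch below (our own illustrative code; 'c1' and 'c2' name the two star centres) builds the double-star roots and verifies by BFS that each one is a 3-leaf root of the clique K_k; up to swapping the two centres, the number of distinct roots grows only linearly in k.

```python
from collections import deque

def clique_leaf_root(k, i):
    """A 3-leaf root of K_k: two adjacent star centres carrying i and k-i
    leaves (a single star when i is 0 or k)."""
    adj = {}
    def add(a, b):
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    if 0 < i < k:
        add('c1', 'c2')
    for v in range(k):
        add(v, 'c1' if v < i else 'c2')
    return adj

def is_clique_3_power(adj, k):
    """Check that all k leaves are pairwise within distance 3."""
    def dists(s):
        d = {s: 0}
        q = deque([s])
        while q:
            x = q.popleft()
            for y in adj[x]:
                if y not in d:
                    d[y] = d[x] + 1
                    q.append(y)
        return d
    return all(dists(u).get(v, 99) <= 3 for u in range(k) for v in range(u + 1, k))
```

Since the roots for i and k − i coincide up to relabelling, only about k/2 shapes are distinct, which is why an O(log n)-bit index i per clique micro-tree suffices.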

Figure 3: An example of a 3-leaf power graph (left) and its 3-leaf root (right).

Supporting navigation queries. For the navigation queries, we refer to each vertex by the leaf-rank of the corresponding leaf node in T. Given a vertex v, let p be the parent node of the leaf corresponding to v, and let L and N be the sets of leaf and non-leaf children of p, respectively.

  1. By the definition of a 3-leaf root, u and v are adjacent if and only if (i) u and v have the same parent, or (ii) the parent of u is the parent of the parent of v, or vice versa. Since parent and leaf-rank queries can be computed in O(1) time [12], we can answer the adjacency query in O(1) time.

  2. Let q be the parent node of p. Then u belongs to the neighborhood of v if and only if u corresponds to (i) a leaf child of q, (ii) a node in L, or (iii) a leaf child of a node in N. To return all the leaf children of q, we scan from the leftmost child of q and return each child that is a leaf node; since leaf children precede non-leaf children, we can stop at the first non-leaf child. This can be done in O(1) time per node by using the O(1)-time tree navigation queries of [12]. Next, we scan all the children of p. While scanning a node w, if w is in L, we return w (this reports all the nodes in case (ii)). Otherwise, we return all the leaf children of w (this reports all the nodes in case (iii)). Again, all these nodes can be reported in O(1) time per node by the same argument as above. Thus, we can return the neighborhood of v in time proportional to its size.

  3. We count the numbers of (i) leaf children of q (the parent of p), (ii) nodes in L, and (iii) leaf children of the nodes in N separately, and return their sum as the answer to the degree query. We now describe how to compute (iii) in O(1) time (note that (i) and (ii) can also be computed in O(1) time analogously). Let t (resp. m) be the micro-tree (resp. mini-tree) which contains p. We first consider the case that t does not contain a boundary node. In this case, we compute (iii) in O(1) time using the precomputed table if p is not a boundary node of t. If p is a boundary node of t (resp. m), we compute (iii) in O(1) time by referring to the answer stored at the root of t (resp. m); we can store all of these answers using o(n) bits in total. Next, we consider the case that t contains a boundary node. In this case, we additionally store the number of leaf children of the root node of each micro-tree of T, using o(n) bits in total. Then we can compute (iii) in O(1) time by computing (iii) without the contribution of the boundary node of t, and adding the number of leaf children of the micro-tree whose root node is the child of the boundary node of t.
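Ignoring the micro/mini-tree bookkeeping, the three queries reduce to the parent/children conditions above. A minimal sketch follows (our own dictionary encoding, with leaf children listed before non-leaf children, as in the ordering fixed earlier); the early stop at the first non-leaf child is what makes the scans output-sensitive.

```python
def adjacent(u, v, parent):
    """Leaves u, v are adjacent iff they share a parent (distance 2) or one's
    parent is the other's grandparent (distance 3)."""
    pu, pv = parent[u], parent[v]
    return pu == pv or parent.get(pu) == pv or parent.get(pv) == pu

def leaf_children(x, children):
    """Leaf children of x; internal nodes are exactly the keys of 'children',
    and leaf children come first, so we stop at the first non-leaf child."""
    out = []
    for w in children.get(x, []):
        if w in children:
            break
        out.append(w)
    return out

def neighborhood(v, parent, children):
    p = parent[v]
    out = leaf_children(parent[p], children) if p in parent else []   # case (i)
    for w in children[p]:
        if w not in children:                                         # case (ii)
            if w != v:
                out.append(w)
        else:                                                         # case (iii)
            out.extend(leaf_children(w, children))
    return out

def degree(v, parent, children):
    p = parent[v]
    d = len(leaf_children(p, children)) - 1                           # (ii), minus v
    if p in parent:
        d += len(leaf_children(parent[p], children))                  # (i)
    d += sum(len(leaf_children(w, children))
             for w in children[p] if w in children)                   # (iii)
    return d
```

In the succinct structure, each of these counts is read off a precomputed table (or from the values stored at micro/mini-tree roots near boundary nodes) rather than by an explicit scan.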

6 Conclusions

We presented in this work succinct representations of series-parallel, block-cactus and 3-leaf power graphs that support the basic navigational queries optimally. We conclude with some possible directions for future exploration. Following the works of [1, 10], is it possible to support shortest-path queries efficiently on these graphs while using the same space as in this paper? Is it possible to design space-efficient algorithms for various combinatorial problems on these graphs? Can we generalize the data structure of Section 5 to construct a succinct representation of k-leaf power graphs for k > 3? Finally, can we prove a lower bound on the trade-off between the query time and the extra space, i.e., the redundancy, of our data structures?


  • [1] Acan, H., Chakraborty, S., Jo, S., Satti, S.R.: Succinct data structures for families of interval graphs. In: WADS. pp. 1–13 (2019)
  • [2] Blelloch, G.E., Farzan, A.: Succinct representations of separable graphs. In: CPM. pp. 138–150 (2010)
  • [3] Bodirsky, M., Giménez, O., Kang, M., Noy, M.: Enumeration and limit laws for series-parallel graphs. Eur. J. Comb. 28(8), 2091–2105 (2007)
  • [4] Brandstädt, A., Le, V.B.: Structure and linear time recognition of 3-leaf powers. Inf. Process. Lett. 98(4), 133–138 (2006)
  • [5] Chakraborty, S., Jo, S., Sadakane, K., Satti, S.R.: Succinct data structures for small clique-width graphs. In: 31st Data Compression Conference, DCC 2021, Snowbird, UT, USA, March 23-26, 2021. pp. 133–142. IEEE (2021)
  • [6] Chauve, C., Fusy, É., Lumbroso, J.O.: An exact enumeration of distance-hereditary graphs. In: ANALCO 2017, Barcelona, Spain, Hotel Porta Fira, January 16-17, 2017. pp. 31–45 (2017)
  • [7] Cormen, T.H., Leiserson, C.E., Rivest, R.L., Stein, C.: Introduction to Algorithms (3. ed.). MIT Press (2009)
  • [8] Diestel, R.: Graph Theory, 4th Edition, Graduate texts in mathematics, vol. 173. Springer (2012)
  • [9] El-Zein, H., Lewenstein, M., Munro, J.I., Raman, V., Chan, T.M.: On the succinct representation of equivalence classes. Algorithmica 78(3), 1020–1040 (2017)
  • [10] Farzan, A., Kamali, S.: Compact navigation and distance oracles for graphs with small treewidth. Algorithmica 69(1), 92–116 (2014)
  • [11] Farzan, A., Munro, J.I.: Succinct encoding of arbitrary graphs. Theor. Comput. Sci. 513, 38–52 (2013)
  • [12] Farzan, A., Munro, J.I.: A uniform paradigm to succinctly encode various families of trees. Algorithmica 68(1), 16–40 (Jan 2014)
  • [13] Gurevich, Y., Stockmeyer, L.J., Vishkin, U.: Solving NP-Hard problems on graphs that are almost trees and an application to facility location problems. J. ACM 31(3), 459–473 (1984)
  • [14] Husimi, K.: Note on Mayers’ theory of cluster integrals. The Journal of Chemical Physics 18(5), 682–684 (1950)
  • [15] Jacobson, G.J.: Succinct static data structures. PhD thesis, Carnegie Mellon University (1988)
  • [16] Munro, J.I., Raman, V.: Succinct representation of balanced parentheses and static trees. SIAM J. Comput. 31(3), 762–776 (2001)
  • [17] Munro, J.I., Wu, K.: Succinct data structures for chordal graphs. In: ISAAC. pp. 67:1–67:12 (2018)
  • [18] Munro, J.I., Nicholson, P.K.: Compressed representations of graphs. In: Encyclopedia of Algorithms, pp. 382–386 (2016)
  • [19] Navarro, G.: Compact Data Structures - A Practical Approach. Cambridge University Press (2016)
  • [20] Navarro, G., Sadakane, K.: Fully functional static and dynamic succinct trees. ACM Trans. Algorithms 10(3), 16:1–16:39 (2014)
  • [21] Raman, R., Rao, S.S.: Succinct representations of ordinal trees. In: Space-Efficient Data Structures, Streams, and Algorithms. Lecture Notes in Computer Science, vol. 8066, pp. 319–332. Springer (2013)
  • [22] Raman, R., Raman, V., Satti, S.R.: Succinct indexable dictionaries with applications to encoding k-ary trees, prefix sums and multisets. ACM Trans. Algorithms 3(4),  43 (2007)
  • [23] Riordan, J., Shannon, C.E.: The number of two-terminal series-parallel networks. Journal of Mathematics and Physics 21(1-4), 83–93 (1942).
  • [24] Sadakane, K.: Compressed Suffix Trees with Full Functionality. Theory of Computing Systems 41(4), 589–607 (2007)
  • [25] Sumigawa, K., Sadakane, K.: Storing partitions of integers in sublinear space. Rev. Socionetwork Strateg. 13(2), 237–252 (2019)
  • [26] Uno, T., Uehara, R., Nakano, S.: Bounding the number of reduced trees, cographs, and series-parallel graphs by compression. Discrete Mathematics, Algorithms and Applications 05(02), 1360001 (2013)
  • [27] Valdes, J., Tarjan, R.E., Lawler, E.L.: The recognition of series parallel digraphs. SIAM J. Comput. 11(2), 298–313 (1982)
  • [28] Voblyi, V.A., Meleshko, A.K.: Enumeration of labeled block-cactus graphs. Journal of Applied and Industrial Mathematics 8(3), 422–427 (2014)