1 Introduction and Related Work
A number of data management tasks related to query evaluation are computationally intractable when rich query languages or complex tasks are involved, even when the query is assumed to be fixed (that is, when we consider data complexity [59]). For example:

query evaluation of Boolean monadic second-order (MSO) queries is hard for every level of the polynomial hierarchy [4];

computing the probability of conjunctive queries (CQs) over tuple-independent databases, a very simple model of probabilistic databases, is #P-hard [25];

unless standard complexity-theoretic assumptions fail, there is no polynomial-time algorithm to construct a deterministic decomposable negation normal form (d-DNNF) representation of the Boolean provenance of some CQs [25, 40]; furthermore, there is no polynomial bound on the size of a structured d-DNNF representation of the Boolean provenance of unions of conjunctive queries with disequalities [11, Theorem 33].
Other problems yield complexity classes usually considered tractable, such as AC0 for Boolean FO query evaluation [1], but may still result in impractical running times on large database instances.
To face this intractability and practical inefficiency, one possible approach has been to determine conditions on the structure of databases that ensure tractability, often through a series of algorithmic meta-theorems [44]. This has led, for instance, to the introduction of the notion of locally tree-decomposable structures for near-linear-time evaluation of Boolean FO queries [33], or to that of structures of bounded expansion for constant-delay enumeration of FO queries [41].
Treewidth
A particularly simple and widely used way to restrict database instances that ensures a wide class of tractability results is to bound the treewidth of the instance (this is actually a special case of both notions of locally tree-decomposable structures and structures of bounded expansion). Treewidth [55] is a graph-theoretic parameter that characterizes how tree-like a graph, or more generally a relational instance, is, and hence whether it can reasonably be transformed into a tree structure (a tree decomposition). Indeed:

computing the probability of MSO queries over a bounded-treewidth tuple-independent database is linear-time assuming constant-time rational arithmetic [8];
These results mostly stem from the fact that, on trees, MSO queries can be rewritten to tree automata [57], though this process is nonelementary in general (which impacts the combined complexity [59], but not the data complexity). We can see these results as fixed-parameter tractability, with a complexity in f(q, w) · n, where n is the size of the database, w its treewidth, q the size of the query, and f some computable function. Note that another approach for tractability, out of the scope of this paper, is to restrict the queries instead of the instances, e.g., by enforcing low treewidth on the queries [37] or on the provenance of the queries [39].
Such results have been, so far, of mostly theoretical interest – mainly due to the high complexity of the function f of q and w. However, algorithms that exploit the low treewidth of instances have been proposed and successfully applied to real-world and synthetic data: for shortest path queries in graphs [61, 53], distance queries in probabilistic graphs [47], or ad-hoc queries compiled to tree automata [49]. In other domains, low treewidth is an indicator for efficient evaluation of quantified Boolean formulas [54].
Sometimes, treewidth even seems to be the sole criterion that may render an intractable problem tractable, under some technical assumptions: [45] shows that, unless the exponential-time hypothesis is false, MSO query evaluation is intractable over subinstance-closed families of instances of treewidth strongly unbounded polylogarithmically; [34, 9] show that MSO query evaluation is intractable over subinstance-closed families of instances of treewidth that are densely unbounded polylogarithmically (a weaker notion); [9] shows that counting MSO query results is intractable over subinstance-closed families of instances of unbounded treewidth that are treewidth-constructible (an even weaker notion, simply requiring large-treewidth instances to be efficiently constructible, see [9, Definition 4.1]); finally, [8, 9] shows that one can exhibit FO queries whose probability evaluation is polynomial-time on structures of bounded treewidth, but #P-hard on any treewidth-constructible family of instances of unbounded treewidth.
For this reason, and because of the wide variety of problems that become tractable on boundedtreewidth instances, treewidth is an especially important object of study.
Treewidth of real-world databases
If there is hope for practical applicability of treewidth-based approaches, one needs to answer the following two questions: Can one efficiently compute the treewidth of real-world databases? and What is the treewidth of real-world data? The latter question is the central problem addressed in this paper.
The answer to the former is that, unfortunately, treewidth cannot reliably be computed efficiently in practice. Indeed, computing the treewidth of a graph is an NP-hard problem [13] and, in practice, exact computation of treewidth is possible only for very small instances, with no more than dozens of vertices [18]. An additional theoretical result is that, given a width w, it is possible to check whether a graph has treewidth at most w and, if so, produce a tree decomposition of the graph in linear time [17]; however, the large constant factors make this procedure impractical. Known exact treewidth computation algorithms [18] may be usable on small graphs, but they are impossible to apply for our purposes. Indeed, in [18], the largest graph for which the algorithms finished running had a mere 40 vertices.
A more realistic approach is to compute estimations of treewidth, i.e., an interval formed of a lower bound and an upper bound on the treewidth. Upper bound algorithms (surveyed in [19]) use multiple approaches for estimation, which all output a tree decomposition. One particularly important class of methods for generating tree decompositions relies on elimination orderings, that also appear in junction tree algorithms used in belief propagation [46]. For lower bounds (surveyed in [20]), where no decomposition can be obtained, one can use degreebased or minorbased measures on graphs, which themselves act as proxies for treewidth.
Some upper bound and lower bound algorithms have been implemented and experimented with in [58, 19, 12, 20]. However, in all cases these algorithms were evaluated on graphs that are either very small (of the order of dozens of vertices), as in [58], or slightly larger synthetic graphs generated with exact treewidth values in mind, as in [19, 20]. The main purpose of these experiments was to evaluate the estimators' performance. Recently, the PACE challenge has had a track dedicated to the estimation of treewidth [26]: exact treewidth on relatively small graphs, upper bounds on treewidth on larger graphs. Local improvements of upper bounds have also been evaluated on small graphs in [31]. Since all these works aim at comparing estimation algorithms, they do not investigate the actual treewidth of real-world data.
Another relevant work is [3], which studied the core–periphery structure of social networks by building tree decompositions via node elimination ordering heuristics, but without establishing any treewidth bounds. In this work, we use the same heuristics to compute bounds on treewidth.
Finally, there has been some work on analyzing properties of real-world queries. Queries are usually much smaller than database instances, but it turns out that they are also much simpler in structure: [52] shows that an astounding 99.99% of conjunctive patterns present in a SPARQL query log are acyclic, i.e., of treewidth 1. [22] similarly showed that the overwhelming majority of graph pattern queries in SPARQL query logs had treewidth 1, less than 0.003% had treewidth 2, and a single one (out of more than 15 million) had treewidth 3. We shall see that the situation is quite different for the treewidth of database instances. Note that, in many settings, low treewidth of queries does not suffice for tractability: in probabilistic databases, for instance, #P-hardness holds even for acyclic queries [25].
Contributions
In this experimental study, our contributions are twofold.
First, using previously studied algorithms for treewidth estimation, we set out to find classes of real-world data that may exhibit relatively low values of treewidth, thus identifying potential cases in which treewidth-based approaches are of practical interest. For this, after formally defining tree decompositions and treewidth (Section 2), we select the algorithms that are able to deal with large-scale data instances, for both lower- and upper-bound estimations (Section 3). Our aim here is not to propose new algorithms for treewidth estimation, nor to exhaustively evaluate existing treewidth estimation algorithms, but rather to identify algorithms that can give acceptable treewidth estimates in reasonable time, in order to apply them to real-world data. Then, we use these algorithms to obtain lower and upper bound intervals on treewidth for 25 databases from 8 different domains (Section 4). We mostly consider graph data, for which the notion of treewidth was initially designed (the treewidth of an arbitrary relational instance is simply defined as that of its Gaifman graph). The graphs we consider, all obtained from real-world applications, have between several thousands and several millions of vertices. To the best of our knowledge, this is the first comprehensive study of the treewidth of real-world data of large scale from a variety of application domains.
Our finding is that, generally, the treewidth is too large to be able to use treewidthbased algorithms directly with any hope of efficiency.
Second, from this finding, we investigate how a relaxed (or partial) decomposition can be used on real-world graphs. In short, we no longer look for complete tree decompositions; instead, we allow the graph to be only partially decomposed. In complex networks, there often exists a dense core together with a tree-like fringe structure [51]; it is hence possible to decompose the fringe into a tree, and to place the rest of the graph in a dense "root". It has been shown that this approach can improve the efficiency of some graph algorithms [61, 5, 47]. In Section 9, we analyze its behavior on real-world graphs. We conclude the paper in Section 13 with a discussion of lessons learned, as to which real-world data admit (full or partial) low-treewidth tree decompositions, and how this impacts query evaluation tasks.
2 Preliminaries on Treewidth
To make the concepts in the following clear, we start by formally introducing the concept of treewidth. Following the original definitions in [55], we first define a tree decomposition:
[(Tree Decomposition)] Given an undirected graph G = (V, E), where V represents the set of vertices (or nodes) and E the set of edges, a tree decomposition is a pair (T, λ), where T is a tree and λ is a labeling of the nodes of T by subsets of V (called bags), with the following properties:

1. every vertex appears in some bag: ⋃_{b ∈ T} λ(b) = V;

2. for every edge {u, v} ∈ E, there is a bag b of T s.t. {u, v} ⊆ λ(b); and

3. for every vertex v ∈ V, the set of bags b with v ∈ λ(b) induces a subtree of T.
Intuitively, a tree decomposition groups the vertices of a graph into bags so that the bags form a tree-like structure, where a link between two bags is established when they share common vertices.
Figure 1 illustrates such a decomposition. The resulting decomposition is formed of 4 bags, each containing a subset of the nodes of the graph. The bags containing the node highlighted in bold form a connected subtree of the tree decomposition.
Based on the number of vertices in a bag, we can define the concept of treewidth:
[(Treewidth)] Given a graph G, the width of a tree decomposition (T, λ) is equal to max_{b ∈ T} |λ(b)| − 1. The treewidth of G, denoted tw(G), is equal to the minimal width over all tree decompositions of G.
It is easy to see that an isolated point has treewidth 0, a tree treewidth 1, a cycle treewidth 2, and a clique of n nodes (a complete graph) treewidth n − 1.
The width of the decomposition in Figure 1 is 3. This tells us the graph has a treewidth of at most 3. The treewidth of this graph is actually exactly 3: indeed, the 4-clique, which has treewidth 3, is a minor of the graph in Figure 1 (it can be obtained by node deletions and edge contractions), and treewidth never increases when taking a minor (see, for instance, [38]).
As previously mentioned, the treewidth of an arbitrary relational instance is defined as that of its Gaifman graph, the graph whose vertices are the constants of the instance and where there is an edge between two vertices if they co-occur in the same fact. We will therefore implicitly represent relational database instances by their Gaifman graphs in what follows.
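As an illustration, the Gaifman graph of a relational instance is straightforward to build. The sketch below is our own (the function name and input representation are assumptions, not from the paper): facts are tuples of constants, and two constants become adjacent if they co-occur in a fact.

```python
from itertools import combinations

def gaifman_graph(facts):
    """Build the Gaifman graph of a relational instance.

    `facts` is an iterable of tuples of constants; two constants are
    adjacent iff they co-occur in at least one fact.
    """
    adj = {}
    for fact in facts:
        constants = set(fact)  # ignore repeated constants within a fact
        for c in constants:
            adj.setdefault(c, set())
        for u, v in combinations(constants, 2):
            adj[u].add(v)
            adj[v].add(u)
    return adj

# Example: facts R(a, b, c) and S(c, d) yield edges a-b, a-c, b-c, c-d.
g = gaifman_graph([("a", "b", "c"), ("c", "d")])
```

All treewidth estimation algorithms discussed below operate on such adjacency dictionaries.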
We are now ready to present algorithms for lower and upper bounds on treewidth.
3 Treewidth Estimation
The objective of our experimental evaluation is to obtain reasonable estimations of treewidth, using algorithms with reasonable execution time on real-world graphs.
Once we know we do not have the luxury of an exact computation of the treewidth, we are left with estimations of the range of possible treewidths, between a lower bound and an upper bound. For the purposes of this experimental survey, we restrict ourselves to the most efficient estimation algorithms from the literature. We refer the reader to [19] and [20], respectively, for a more complete survey of treewidth upper and lower bound estimation algorithms on synthetic data.
Treewidth Upper Bounds
As we have defined, the treewidth is the smallest width among all possible tree decompositions. In other words, the width of any decomposition of a graph is an upper bound of the actual treewidth of that graph. A treewidth upper bound estimation algorithm can thus be seen as an algorithm to find a decomposition whose width is as close as possible to the treewidth of the graph. To understand how one can do that, we need to introduce the classical concept of elimination ordering and to explain its connection to treewidth.
We start by introducing triangulations of graphs, which transform a graph into a graph that is chordal:
A chordal graph is a graph G such that every cycle in G of at least four vertices has a chord – an edge between two nonconsecutive vertices of the cycle.
A triangulation (or chordal completion) of a graph G is a minimal chordal supergraph of G: a graph obtained from G by adding a minimal set of edges to obtain a chordal graph.
The graph in Figure 1 is not chordal, since it contains a chordless cycle of four vertices. If one adds a chord to this cycle, as in Figure 2 (left), one can verify that the resulting graph is chordal, and thus a triangulation of the graph of Figure 1.
One way to obtain triangulations of graphs is through elimination orderings. An elimination ordering π of a graph G of n nodes is an ordering of the vertices of G, i.e., it can be seen as a bijection from V onto {1, …, n}. From this ordering, one obtains a triangulation by applying sequentially the following elimination procedure for each vertex v, in the order given by π: first, edges are added between the remaining neighbors of v as needed so that they form a clique, then v is eliminated (removed) from the graph. For every elimination ordering π, G along with all edges added to G in the elimination procedure forms a graph, denoted G⁺_π. This graph is chordal (indeed, in any cycle, the two neighbors of the first node of the cycle encountered in the elimination ordering have been connected by a chord by the elimination procedure). It is also a supergraph of G, and it can be shown to be a minimal chordal supergraph, i.e., a triangulation of G.
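The elimination procedure can be sketched in a few lines. This is a simplified implementation of ours (graphs as adjacency dictionaries, function and variable names are our own):

```python
def triangulate(adj, order):
    """Return the chordal supergraph G+_pi and the fill-in edges obtained
    by eliminating vertices in the given order."""
    g = {v: set(nbrs) for v, nbrs in adj.items()}  # working copy
    eliminated = set()
    fill = []
    for v in order:
        remaining = [u for u in g[v] if u not in eliminated]
        # connect the remaining neighbors of v into a clique
        for i in range(len(remaining)):
            for j in range(i + 1, len(remaining)):
                u, w = remaining[i], remaining[j]
                if w not in g[u]:
                    g[u].add(w)
                    g[w].add(u)
                    fill.append((u, w))
        eliminated.add(v)
    # G+_pi is the original graph plus the fill-in edges
    chordal = {v: set(nbrs) for v, nbrs in adj.items()}
    for u, w in fill:
        chordal[u].add(w)
        chordal[w].add(u)
    return chordal, fill

# Eliminating a vertex of a 4-cycle first forces a chord between its neighbors.
cycle = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}
chordal, fill = triangulate(cycle, ["a", "b", "c", "d"])
```

On the 4-cycle, eliminating vertex a adds the single chord between its neighbors b and d, after which the graph is chordal.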
Figure 2 (right) shows a possible elimination ordering of the graph of Figure 1. The elimination procedure adds a single edge, when processing one of the nodes, between two of its neighbors. The resulting triangulation is the graph on the left of Figure 2.
Elimination orderings are connected to treewidth by the following result: [19] Let G be a graph and k an integer. The following are equivalent:

1. G has treewidth at most k.

2. G has a triangulation G′ such that the maximum clique in G′ has size at most k + 1.

3. There exists an elimination ordering π such that the maximum clique size in G⁺_π is at most k + 1.
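This characterization yields a purely didactic exact algorithm: try all elimination orderings and keep the smallest width encountered. The sketch below is our own code (exponential time, only usable on toy graphs); it verifies the small facts stated in Section 2 (a tree has treewidth 1, a cycle 2, a clique of n nodes n − 1):

```python
from itertools import permutations

def ordering_width(adj, order):
    """Width of the decomposition induced by an elimination ordering:
    the maximum number of remaining neighbors at elimination time
    (equivalently, maximum clique size in G+_pi minus one)."""
    g = {v: set(nbrs) for v, nbrs in adj.items()}
    width = 0
    for v in order:
        nbrs = g.pop(v)
        width = max(width, len(nbrs))
        for u in nbrs:
            g[u].discard(v)
            g[u] |= nbrs - {u}  # turn the neighborhood of v into a clique
    return width

def exact_treewidth(adj):
    """Minimum width over all elimination orderings (exponential time)."""
    return min(ordering_width(adj, p) for p in permutations(adj))

tree = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
cycle4 = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}
k4 = {v: {"a", "b", "c", "d"} - {v} for v in "abcd"}
```

Real estimation algorithms, of course, never enumerate all n! orderings; the heuristics below construct a single good ordering instead.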
Obtaining the treewidth of the graph is thus equivalent to finding an optimal elimination ordering. Moreover, constructing a tree decomposition from an elimination ordering is a natural process: each time a vertex is processed, a new bag is created containing the vertex and its remaining neighbors. Note that, in practice, we do not need to compute the full elimination ordering: we can simply stop when the number of remaining vertices is lower than the size of the largest clique found thus far.
In the triangulation of Figure 2 (left), corresponding to the elimination ordering on the right, the maximum clique has size 4. This proves the existence of a tree decomposition of width 3. Indeed, it is exactly the tree decomposition in Figure 1 (right): each of the 4 bags is constructed when the corresponding vertex is eliminated.
Finding a “good” upper bound on the treewidth can thus be done by finding a “good” elimination ordering. This is still an intractable problem, of course, but there are various heuristics for generating elimination orderings leading to good treewidth upper bounds. One important class of such elimination ordering heuristics is that of greedy heuristics. Intuitively, the elimination ordering is generated incrementally: each time a new node has to be chosen in the elimination procedure, it is chosen using a criterion based on its neighborhood. In our study, we have implemented the following greedy criteria (with ties broken arbitrarily):

Degree. The node with the minimum degree in the remaining graph is chosen.

FillIn. The node with the minimum needed “fill-in” (i.e., the minimum number of missing edges for its neighbors to form a clique) is chosen. [19]

Degree+FillIn. The node with the minimum sum of degree and fill-in is chosen.
The elimination ordering of Figure 2 (right) is an example of the use of Degree+FillIn. Indeed, the first node chosen has value 1, the next value 2, and the next value 3. After that, the order is arbitrary since the four remaining nodes form a clique (and thus all have initial value 3).
Previous studies [58, 42, 19] have found that these greedy criteria give the closest estimations of the real treewidth. An alternative way of generating an elimination ordering is based on maximum cardinality search [42, 19]; however, it is both less precise than the greedy algorithms – due to its reliance on how the first node in the ordering is chosen – and slower to run.
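The greedy heuristics share the same skeleton and differ only in the scoring function. A minimal sketch of ours (with deterministic tie-breaking by vertex name, one of many possible tie-breaking rules):

```python
def fill_in(g, v):
    """Number of edges missing for the neighbors of v to form a clique."""
    nbrs = list(g[v])
    return sum(1 for i in range(len(nbrs)) for j in range(i + 1, len(nbrs))
               if nbrs[j] not in g[nbrs[i]])

def degree(g, v):
    return len(g[v])

def degree_fill_in(g, v):
    return degree(g, v) + fill_in(g, v)

def greedy_upper_bound(adj, score):
    """Width of the decomposition produced by greedily eliminating the
    vertex minimizing `score`; an upper bound on the treewidth."""
    g = {v: set(nbrs) for v, nbrs in adj.items()}
    width = 0
    while g:
        v = min(sorted(g), key=lambda u: score(g, u))  # ties: smallest name
        nbrs = g.pop(v)
        width = max(width, len(nbrs))
        for u in nbrs:
            g[u].discard(v)
            g[u] |= nbrs - {u}  # clique on the remaining neighbors
    return width

cycle5 = {v: set() for v in "abcde"}
for u, v in [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e"), ("e", "a")]:
    cycle5[u].add(v)
    cycle5[v].add(u)
```

On a 5-cycle, all three criteria recover the exact treewidth of 2.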
Treewidth Lower Bounds
In contrast to upper bounds, treewidth lower bounds are not obtained constructively. In other words, lower bound algorithms do not generate decompositions; instead, a lower bound is estimated by computing other measures on a graph that act as proxies for treewidth. In this study, we implement algorithms using three approaches: subgraph-based bounds, minor-based bounds, and bounds obtained by constructing improved graphs.
Given a graph G, let δ(G) be its lowest degree, and δ2(G) its second lowest degree (i.e., the degree of the second vertex when the vertices are ordered by degree). It is known that δ(G) is itself a lower bound on the treewidth [43]. This, however, is too coarse an estimation, and we need better bounds. We shall use two degeneracy measures of graphs. The first, the degeneracy of G, δD(G), is the maximum value of δ(G′) over all subgraphs G′ of G. Similarly, the δ2-degeneracy, δ2D(G), of a graph is the maximum value of δ2(G′) over all subgraphs G′.
We have the following lemma: [20] Let G be a graph, and W a set of vertices of G. The treewidth of the subgraph of G induced by W is at most the treewidth of G.
A corollary of the above lemma is that the values δD(G) and δ2D(G) are themselves lower bounds on the treewidth:
[20] For every graph G, the treewidth of G is at least δ2D(G) (and hence at least δD(G)).
To compute δD and δ2D exactly, the following natural algorithms can be used [43, 20]: repeatedly remove a vertex of smallest degree – or of smallest degree except for some fixed node v, respectively – from the graph, and keep the maximum value thus encountered. As in [20], we refer to these algorithms as Mmd (Maximum Minimum Degree) and Delta2D, respectively. Ties are broken arbitrarily.
Let us apply the Mmd algorithm to the graph of Figure 1 (left). The algorithm may remove, in order, a node of degree 1, then four nodes of degree 2, then a node of degree 1, and finally a node of degree 0. This gives a lower bound of 2 on the treewidth, which, as we saw, is not tight.
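The Mmd algorithm is a short loop; a sketch of ours (names and tie-breaking rule are assumptions):

```python
def mmd(adj):
    """Maximum Minimum Degree: repeatedly delete a vertex of smallest
    degree and record the largest minimum degree seen. The result is
    the degeneracy delta-D, a lower bound on the treewidth."""
    g = {v: set(nbrs) for v, nbrs in adj.items()}
    bound = 0
    while g:
        v = min(sorted(g), key=lambda u: len(g[u]))  # ties: smallest name
        bound = max(bound, len(g[v]))
        for u in g.pop(v):  # delete v; no clique completion here
            g[u].discard(v)
    return bound

cycle4 = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}
k4 = {v: {"a", "b", "c", "d"} - {v} for v in "abcd"}
```

Note the contrast with the upper bound algorithms: vertices are simply deleted, never completed into cliques, so the procedure only shrinks degrees and yields a lower bound.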
An equivalent of Lemma 3 on treewidth also holds for minors of a graph G: if H is a minor of G, then the treewidth of H is at most the treewidth of G [38]. A minor of a graph is a graph obtained by allowing, in addition to the edge and node deletions used when taking subgraphs, edge contractions. The concepts of contraction degeneracy, δC(G), and δ2-contraction degeneracy, δ2C(G), are then defined analogously to δD(G) and δ2D(G) by considering all minors instead of all subgraphs: [20] For every graph G, the treewidth of G is at least δ2C(G).
Unfortunately, computing δC or δ2C is NP-hard [21]; hence, only heuristics can be used. One such heuristic for δC is a simple change to the Mmd algorithm, called Mmd+ [21, 36]: at each step, instead of removing the minimum-degree vertex, one of its neighbors is chosen and the corresponding edge is contracted. Choosing the neighbor to contract with requires a heuristic as well; in line with previous studies, we choose the neighbor that has the least overlap in neighborhoods – this is called the least-c heuristic [62].
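Mmd+ with the least-c choice can be sketched as follows (our own code; contraction is implemented by merging the eliminated vertex's neighborhood into the chosen neighbor):

```python
def mmd_plus(adj):
    """Contraction-based lower bound on treewidth (a heuristic for the
    contraction degeneracy): repeatedly contract a minimum-degree vertex
    into its least-c neighbor, recording the largest minimum degree."""
    g = {v: set(nbrs) for v, nbrs in adj.items()}
    bound = 0
    while len(g) > 1:
        v = min(sorted(g), key=lambda u: len(g[u]))  # ties: smallest name
        bound = max(bound, len(g[v]))
        nbrs = g.pop(v)
        if not nbrs:
            continue  # isolated vertex: just drop it
        # least-c: the neighbor sharing the fewest common neighbors with v
        w = min(sorted(nbrs), key=lambda u: len(nbrs & g[u]))
        for u in nbrs:
            g[u].discard(v)
        for u in nbrs - {w}:  # contract edge (v, w): w inherits v's neighbors
            g[u].add(w)
            g[w].add(u)
    return bound

cycle4 = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}
k4 = {v: {"a", "b", "c", "d"} - {v} for v in "abcd"}
```

Contraction can only increase the minimum degrees seen later, which is why Mmd+ typically dominates Mmd in practice.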
Finally, another approach to treewidth lower bounds that we consider is that of improved graphs, which can be combined with any of the lower bound estimation algorithms presented so far. Consider a graph G, an integer k, and the following operation: while there are nonadjacent vertices u and v that have at least k common neighbors, add the edge {u, v} to the graph. The resulting graph is the k-neighbor improved graph of G. Improved graphs lead to lower bounds on treewidth thanks to the following property: the k-neighbor improved graph of a graph of treewidth at most k − 1 also has treewidth at most k − 1.
To use this property, one can start from an already computed lower bound (using Mmd, Mmd+, or Delta2D, for example), then generate the corresponding neighbor improved graph, estimate a new lower bound on the improved graph, and repeat the process until the bound no longer improves. This algorithm is known as Lbn in the literature [23], and can be combined with any other lower bound estimation algorithm. A refinement of Lbn that alternates improvement and contraction steps, Lbn+, has also been proposed [21].
Let us illustrate the use of Lbn together with Mmd on the graph of Figure 1 (left). As shown in Example 3, a first run of Mmd yields a lower bound of 2. We then compute a 3-neighbor improved graph by adding an edge between two nonadjacent nodes that share 3 common neighbors. Now, running Mmd one more time may remove nodes of degrees 1, 2, 2, 3, 2, 1, 0, in this order. We thus obtain a lower bound of 3 on the treewidth, which is this time tight.
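Putting the pieces together, the Lbn loop can be sketched as follows (a simplified rendering of ours, with Mmd re-implemented inline to keep the sketch self-contained; if treewidth were at most the current bound lb, the (lb+1)-neighbor improved graph would preserve it, so a larger bound on the improved graph certifies lb + 1):

```python
def mmd(adj):
    """Maximum Minimum Degree lower bound (see Section 3)."""
    g = {v: set(nbrs) for v, nbrs in adj.items()}
    bound = 0
    while g:
        v = min(sorted(g), key=lambda u: len(g[u]))
        bound = max(bound, len(g[v]))
        for u in g.pop(v):
            g[u].discard(v)
    return bound

def improve(adj, k):
    """k-neighbor improved graph: connect every nonadjacent pair with
    at least k common neighbors, until a fixpoint is reached."""
    g = {v: set(nbrs) for v, nbrs in adj.items()}
    changed = True
    while changed:
        changed = False
        nodes = sorted(g)
        for i, u in enumerate(nodes):
            for v in nodes[i + 1:]:
                if v not in g[u] and len(g[u] & g[v]) >= k:
                    g[u].add(v)
                    g[v].add(u)
                    changed = True
    return g

def lbn(adj, lower_bound):
    """Repeatedly improve the graph; raise the bound while the improved
    graph's lower bound contradicts the current one."""
    lb = lower_bound(adj)
    g = adj
    while True:
        g = improve(g, lb + 1)
        if lower_bound(g) > lb:
            lb += 1
        else:
            return lb

# K(2,3): treewidth 2; improvement with k = 3 adds the edge u1-u2.
k23 = {"u1": {"v1", "v2", "v3"}, "u2": {"v1", "v2", "v3"},
       "v1": {"u1", "u2"}, "v2": {"u1", "u2"}, "v3": {"u1", "u2"}}
```

Any other estimator (Mmd+, Delta2D) can be passed as `lower_bound` in place of Mmd.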
4 Estimation Results
We now present our main experimental study, first introducing the 25 datasets we are considering, then upper and lower bound results, running time and estimators, and an aside discussion of the treewidth of synthetic networks. All experiments were run on a server with an 8-core Intel Xeon 1.70GHz CPU and 32GB of RAM, running 64-bit Debian Linux. Each algorithm was given at least two weeks per dataset, after which it was stopped and the best lower and upper bounds were recorded.
Datasets
For our study, we have evaluated the treewidth estimation algorithms on 25 datasets from 8 different domains (see Appendix A for descriptions of how they were obtained): infrastructure networks (road networks, public transportation, power grid), social networks (explicit as in social networking sites, or derived from interaction patterns), Web-like networks, a communication network, data with a hierarchical structure (genealogy trees), knowledge bases, traditional OLTP data, as well as a biological interaction network.
type  name  nodes  edges  lower width  upper width
infrastructure  Ca  
Pa  
Tx  
Bucharest  
HongKong  
Paris  
London  
Stif  
USPowerGrid  
social  Facebook  
Enron  
WikiTalk  
CitHeph  
StackTCS  
StackMath  
LiveJournal  
web  Wikipedia  
communication  Gnutella  
hierarchy  Royal  
Math  
ontology  Yago  
DbPedia  
database  Tpch  
biology  Yeast 
5 Datasets
We describe below the source and preprocessing of the datasets used in our experimental study:
 Infrastructure.

For this domain, we have collected four types of datasets. The first are road networks of three US states, Ca, Pa, and Tx, downloaded from the site of the 9th DIMACS Challenge on shortest paths (http://www.dis.uniroma1.it/challenge9/). Second, we have extracted, from the XML export of OpenStreetMap (https://www.openstreetmap.org/), the city maps of Bucharest, HongKong, Paris, and London. The OpenStreetMap format consists of ways, which represent routes between the nodes in the graph. We have extracted all the ways appearing in each map, and eliminated the nodes which appeared in more than one way. For evaluating public transport graphs, we used the Stif dataset from the open public transit data of the Paris metropolitan area (https://opendata.stif.info/page/home/), where the nodes are the stations of the system (bus, train, metro) and an edge appears if at least one transport line exists between the two stations. Finally, we used the Western US power grid network, USPowerGrid, first used in [60], in which the edges are the major power lines and the nodes represent electrical equipment (transformers, generators, etc.).
 Social Networks.

The networks used in this domain are mainly taken from the Stanford SNAP repository (https://snap.stanford.edu/); this is the case for Facebook (ego networks), Enron (Enron email conversations), WikiTalk (conversations between Wikipedia contributors), CitHeph (citations between authors in high-energy physics), and LiveJournal (social network of the LiveJournal site). Other social datasets were extracted from the StackOverflow Q&A site (http://stackoverflow.com/), for the mathematics subdomain (StackMath) and the theoretical computer science subdomain (StackTCS). The nodes represent users of the site, and an edge exists when a reply, vote, or comment occurs between the edge's endpoints.
 Web Networks.

For Web-like graphs, we evaluated treewidth on the Wikipedia network of articles and the Google Web graph (provided by Google during its 2002 programming contest); the versions we used were downloaded from the Stanford SNAP website.
 Hierarchical.

The next category is of data that has by nature a hierarchical structure. Royal is extracted from a genealogy of royal families in Europe, originally published in GEDCOM format by Brian Tompsett, which has informally circulated on the Web since 1992 (https://www.hull.ac.uk/php/cssbct/genealogy/royal/nogedcom.html); edges are created between spouses and between a parent and their child. Math is a graph of academic advisor–advisee data in mathematics and related areas, crawled from http://www.genealogy.ams.org/; edges are created between an advisor and each of their students.
 Ontologies.

We used subsets of two of the most popular knowledge bases available on the Web: Yago (https://www.yago-knowledge.org/) and DbPedia (https://www.dbpedia.org). For Yago, we downloaded its core facts subset, and removed the semantics of the links; hence, an edge exists if there exists at least one fact between two concepts (nodes). For DbPedia, we downloaded the ontology subset containing its RDF type statements, and we generated the edges using the same approach as for Yago.
 Others.

In addition to the above, we have evaluated treewidth on other types of networks: a communication network (Gnutella) [Ripeanu et al., 2002], a protein–protein interaction network (Yeast) [Bu et al., 2003], and the Gaifman graph of the TPC-H relational database benchmark (Tpch, http://www.tpc.org/tpch/), generated using the official tools and default parameters.
Table 1 summarizes the datasets, their size, and the best treewidth estimations we have been able to compute. For reproducibility purposes, all datasets, along with the code that has been used to compute the treewidth estimations, can be freely downloaded from https://github.com/smaniu/treewidth/.
Upper Bounds
We show in Figure 3 the results of our estimation algorithms. Lower values mean better treewidth estimations. Focusing on the upper bounds only (red circular points), we notice that, in general, FillIn does give the smallest upper bound of treewidth, in line with previous findings [20]. Interestingly, the Degree heuristic is quite competitive with the other heuristics. This fact, coupled with its lower running time, means that it can be used more reliably in large graphs. Indeed, as can be seen in the figure, on some large graphs only the Degree heuristic actually finished at all; this means that, as a general rule, Degree seems the best fit for a quick and relatively reliable estimation of treewidth.
We plot the absolute values of the estimations in Figure 3a, as well as their relative values (in Figure 3b, representing the ratio of the estimation to the number of nodes in the graph), to allow for an easier comparison between networks. The absolute value, while interesting, does not yield an intuition on how the bounds can differ between network types. If we look at the relative values of treewidth, it becomes clear that infrastructure networks have a treewidth that is much lower than other networks; in general, they seem to be consistently under one thousandth of the original size of the graph. This suggests that, indeed, this type of network may have properties that make them have a lower treewidth. For the other types of networks, the estimations can vary considerably: they can go from one hundredth (e.g., Math) to one tenth (e.g., WikiTalk) of the size of the graph.
As further explained in Appendix B, the bounds obtained here on infrastructure networks are consistent with a conjectured bound on the treewidth of road networks [27]. One relevant property is their low highway dimension [2], which helps with routing queries and decomposition into contraction hierarchies. Even more relevant to our results is the fact that they tend to be "almost planar". More specifically, they are k-planar: each edge can have up to k crossings in a plane embedding. It has been shown in [28] that k-planar graphs have treewidth O(√((k+1)·n)), a relatively low treewidth that is consistent with our results.
6 Predicting Treewidth of Transport Networks
The difference observed in Section 4 between road networks and other networks raises the following question: since road networks are so well-behaved, can we predict the relation between their original size and the treewidth upper bound? To test this, we need to predict the treewidth value tw as a function of the number of nodes n of the original graph:

tw(n) = e^α · n^β

or, by taking the logarithm, solve the following linear regression problem:

log tw(n) = β · log n + α
type  α  β  R²  p-val

road  1.1874  0.3180  0.7867  0.003
social  0.6853  0.5607  0.6976  0.038
We train this model on the road and social networks, and report in Table 2 the results in terms of the coefficients α and β, the goodness-of-fit R², and the resulting p-value. The results give an indication that it is indeed easier to predict the treewidth of road networks than that of other networks. A further visual indication of the better fit for road networks is given in Fig. 4. Hence a rough estimation of road network treewidth is given by the formula:

tw(n) ≈ e^1.1874 · n^0.3180 ≈ 3.3 · n^0.32
The above result is consistent with previous findings that road networks perform well when tree-like decompositions are used for query answering [47], and with a conjectured bound on the treewidth of road networks [27].
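The fit itself is an ordinary least-squares regression in log-log space. The sketch below is our own illustration on synthetic points drawn from an assumed power law (not the paper's data); it recovers the coefficients with numpy:

```python
import numpy as np

# Synthetic (n, treewidth upper bound) points following an assumed power
# law tw = e^alpha * n^beta, with alpha = 1.19, beta = 0.32 -- these
# values are illustrative, not the paper's measurements.
n = np.array([1e4, 5e4, 1e5, 5e5, 1e6, 2e6])
tw = np.exp(1.19) * n ** 0.32

# Linear regression in log-log space: log tw = beta * log n + alpha
beta, alpha = np.polyfit(np.log(n), np.log(tw), 1)
```

On real measurements the points are noisy, and the residuals give the goodness-of-fit R² reported in Table 2.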
The treewidth of hierarchical networks is surprisingly high, but not for trivial reasons: in both Royal and Math, largest cliques have size 3. More complex structures (cousin marriages, scientific communities) impact the treewidth, along with the fact that treewidth cannot exploit the property that both networks are actually DAGs.
In upper bound algorithms, ties are broken arbitrarily, which causes nondeterministic behavior. We show in Appendix C that the variations due to this nondeterminism are of little significance.
7 Variations in Treewidth Upper Bounds
We plot, in Figure 5, the variation in treewidth estimations due to the way ties are broken in upper bound algorithms. To test this, we replaced the original tie-breaking condition (based on node id) with a random tie break, and ran the upper bound algorithm Degree 100 times. The results show that, even when ties are broken randomly, the treewidth estimate generally stays within a range of 10 between the minimal and maximal values. The only exception is Bucharest, which registers a range of 100 – but for a graph containing more than 100 000 nodes.
Lower Bounds
Figure 3 also reports on the lower bound estimations (blue rectangular points). Now, higher values represent a better estimation. The same differentiation between infrastructure networks and the other networks holds in the case of lower bounds – treewidth lower bounds for infrastructure networks are much lower than those of other networks. We observe that the variation between upper and lower bound estimations can be quite large. Generally, we find that the degree-based estimators, Mmd and Delta2D, give bounds that are very weak. The contraction-based estimator, Mmd+, however, consistently gives the best lower bounds on the treewidth, returning values much larger than the degree-based estimations.
Interestingly, in the case of the networks Ca, Pa, and Tx, the values returned by Mmd+ and Mmd are always 5 and 3, respectively. This has been remarked upon in [21]: there exist instances on which lower bound heuristics perform poorly, giving very low bounds. In our case, this only occurs for some road networks, all coming from the same data source, i.e., the DIMACS challenge on shortest paths.
In Figure 3, we have not plotted the estimations resulting from Lbn and Lbn+, because we have found that these algorithms generally give no improvement in the estimation of the lower bounds. Specifically, we have found an improvement in only two datasets, for the Delta2D heuristic: for Facebook, from 126 originally to 157 for Lbn(Delta2D) and 159 for Lbn+(Delta2D); and for StackTCS, from 27 originally to 30 for Lbn+(Delta2D). In all cases, however, Mmd+ remains competitive by itself.
Running Time
8 Computational Analysis of Estimation Algorithms
An important aspect of treewidth estimation algorithms is their computational cost, as a function of the size of the original graph.
In the case of upper bounds, the cost depends quadratically on the width w found by the decomposition. This is due to the fact that, at each step, when a node is chosen, a fill-in of up to O(w²) edges is computed between its neighbors; this bound is tight, as there can be cases in which the neighbors of the node are not previously connected. Depending on the greedy algorithm used and the criterion for generating the ordering, the complexity of updating the queue for extracting the next node is O(log n) for Degree, and O(w² log n) for the others (the neighbors of each neighbor have to be taken into account as possible updates). Hence, in the worst case, the complexity of the greedy heuristics is O(n(w² + log n)) for Degree; for FillIn and DegreeFillIn, the cost increases to O(n w² log n).
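To make the elimination process concrete, here is a minimal sketch of the Degree heuristic in Python. It uses a lazily updated heap rather than the exact queue structure discussed above; it is an illustration, not the benchmarked implementation:

```python
import heapq
from itertools import combinations

def degree_upper_bound(adj):
    """Greedy 'Degree' heuristic: repeatedly eliminate a minimum-degree node,
    connecting its remaining neighbors (fill-in).  The largest degree seen at
    elimination time is the width of the ordering, an upper bound on the
    treewidth.  `adj` maps each node to the set of its neighbors."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
    heap = [(len(nbrs), v) for v, nbrs in adj.items()]
    heapq.heapify(heap)
    width = 0
    while heap:
        d, v = heapq.heappop(heap)
        if v not in adj or d != len(adj[v]):
            continue                      # stale heap entry: skip (lazy deletion)
        nbrs = adj.pop(v)
        width = max(width, len(nbrs))
        for a, b in combinations(nbrs, 2):  # fill-in among v's neighbors
            adj[a].add(b)
            adj[b].add(a)
        for u in nbrs:
            adj[u].discard(v)
            heapq.heappush(heap, (len(adj[u]), u))
    return width

# A cycle has treewidth 2, and the heuristic attains it exactly.
cycle = {i: {(i - 1) % 8, (i + 1) % 8} for i in range(8)}
print(degree_upper_bound(cycle))  # 2
```

Replacing the degree key with the number of fill-in edges a node would create gives the FillIn variant discussed above.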
For lower bounds, the cost depends greatly on the algorithm chosen. In the case of Mmd and Mmd+, the costliest operation is to sort and update a queue of degrees, and the whole computation can be done in time quasilinear in the size of the graph. For Delta2D, the cost is known to be cubic [43]. Lbn and Lbn+ add a multiplicative factor on top of the underlying heuristic, one run per improvement iteration.
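As a concrete sketch, the Mmd heuristic itself fits in a few lines; the version below is a naive quadratic one that scans for the minimum instead of maintaining the degree queue described above (contracting the chosen node into a neighbor, rather than deleting it, yields Mmd+):

```python
def mmd_lower_bound(adj):
    """Maximum Minimum Degree heuristic: repeatedly delete a minimum-degree
    node; the maximum, over all steps, of the minimum degree of the remaining
    graph is a lower bound on the treewidth.  `adj` maps node -> neighbor set."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
    bound = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))       # minimum-degree node
        bound = max(bound, len(adj[v]))
        for u in adj.pop(v):                          # delete v from the graph
            adj[u].discard(v)
    return bound
```

On a cycle this returns 2 and on a 5-clique it returns 4, matching the treewidth of those graphs.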
Surprisingly, the best estimations do not necessarily come from the costliest algorithms. Indeed, in our experiments, the simplest algorithms tend to also give the best estimations.
We detail the full running time results in Figure 6, on a logarithmic scale. For the upper bounds (Figure 6a), there is a clear distinction between Degree on the one hand and FillIn and DegreeFillIn on the other: Degree is always at least one order of magnitude faster than the others. Moreover, for larger networks, only Degree has a chance to finish. The same occurs for lower bounds (Figure 6b): Mmd and Mmd+ have reasonable running times, while the running time of Delta2D is larger by several orders of magnitude. On the other hand, the Lbn and Lbn+ methods (Figure 6c) do not add much to the running time of Mmd and Mmd+; we conjecture that this is because few improvement iterations are performed, which also explains why they generally do not improve the estimated bounds.
The complexity of the different estimation algorithms (see Appendix D) varies a lot, ranging from quasilinear (for low-treewidth graphs) to cubic time. Even though all of the algorithms exhibit polynomial complexity, the cost can quickly become prohibitive, even for graphs of relatively small size; not all algorithms finished on all datasets within the time bound of two weeks of computation: only the fastest algorithms for each bound – Degree and Mmd, respectively – finished processing all datasets. In some cases, for upper bounds, even Degree timed out; in these cases, we took the value found at the moment of the timeout, which still represents a valid upper bound. The datasets for which this occurred are LiveJournal, Yago, DbPedia, and Tpch (Figure 9 of Appendix D shows the full running time results for the upper and lower bound algorithms).

Phase Transitions in Synthetic Networks
We have seen that graphs other than the infrastructure networks have relatively high treewidth values. An interesting question these results lead to is: is this the case for all relatively complex networks, or is it a particularity of the chosen datasets?
To give a possible answer, we have evaluated the treewidth of a range of synthetic graph models, varying their parameters:
 Random. This synthetic network model, due to Erdős and Rényi [30], is entirely random: given n nodes and a parameter p, each possible edge between the nodes is added with probability p. We have generated several networks of a fixed number of nodes, for a range of values of p.
 Preferential Attachment. This network model [6] is a generative model that aims to simulate link formation in social networks: each time a new node is added to the network, it attaches itself to m existing nodes, each link being formed with probability proportional to the number of neighbors the existing node already has. We have generated graphs of a fixed number of nodes, for a range of values of the parameter m.
 Small-World. This model [60] generates a small-world graph by the following process: first, the nodes are organized in a ring where each node is connected to its k nearest neighbors on each side; then, each edge is rewired (its endpoints are changed) with probability p. In the experiments, we have generated graphs of a fixed number of nodes, for ranges of values of k and p.
Our objective in evaluating treewidth on these synthetic instances is to check whether some parameters of these models lead to lower treewidth values, and if so, where in each model the “phase transition” occurs, i.e., when a low-treewidth regime gives way to a high-treewidth one. For our purpose, and for reasons we explain in Section 9, we consider a treewidth on the order of the square root of the number of edges, or less, to be low.

Our first finding – see Figure 7 – is that the high-treewidth regime arrives very early, relative to the parameter values. For random graphs, only relatively small values of p allow for a low treewidth; for small-world and preferential attachment networks, only trivial parameter values allow low treewidth – and, even so, the treewidth is usually 1 or 2, possibly because the graph, in those cases, is composed of several small connected components, i.e., the well-known subcritical regime of random graphs [15]. The low-treewidth regime for random networks seems limited to values of p immediately after the critical value p = 1/n; after this point, the treewidth is high, in line with findings of a linear dependency between graph size and treewidth in random graphs [35]. Moreover, we notice that there is no smooth transition for preferential attachment and small-world networks: the treewidth jumps from very low values to high values. This is understandable in scale-free networks – resulting from the preferential attachment model – where a few hubs may exist with degrees comparable to the number of nodes in the graph. Comparatively, random networks exhibit a smoother progression – one can clearly see the shift from trivial treewidth values, to relatively low values, and then to high values. Finally, the gap between the lower and upper bounds tends to increase with the parameter; this is not surprising, however, since all three graph models tend to have more edges for larger parameter values.
9 Partial Decompositions
Our results show that, in practice, the treewidths of real networks are quite high. Even in the case of road networks, which have relatively low treewidths, the values can go into the hundreds, rendering most algorithms whose running time is exponential in the treewidth (or worse) unusable. In practical applications, however, we can still adapt treewidth-based approaches to obtain data structures – not unlike indexes – which can help with some important graph queries like shortest distances and paths [61, 5] or probability estimations [47, 10].
The manner in which treewidth decompositions can be used starts from a simple observation made in studies on complex graphs: they tend to exhibit a tree-like fringe and a densely connected core [51, 50]. The tree-like fringe corresponds precisely to bounded-treewidth parts of the network. This yields an easy adaptation of the upper bound algorithms based on node orderings: given a parameter k representing the highest width allowed in the fringe, we can run any greedy decomposition algorithm (Degree, FillIn, DegreeFillIn) until only nodes of degree greater than k remain, at which point the algorithm stops. At termination, we obtain a data structure formed of a set of elements of treewidth at most k (trees) interfacing through cliques of size at most k with a core graph. The core graph contains all the nodes not removed in the bag creation process, and has unbounded treewidth. Figure 8 illustrates the notion of partial decompositions.
The resulting structure can be thought of as a partial decomposition (or relaxed decomposition), a concept introduced in [61, 5] in the context of answering shortest path queries, and used in [47] for probabilistic distance queries. A partial decomposition can be extremely useful. The treelike fringe can be used to quickly precompute answers to partial queries (e.g., precompute distances in the graph). Once the precomputation is done, these (partial) answers are added to the core graph, where queries can be answered directly. If the resulting core graph is much smaller than the original graph, the gains in running time can be considerable, as shown in [61, 5, 47]. Hence, the objective of our experiments in this section is to check how feasible partial decompositions are.
An interesting aspect of greedy upper bound algorithms is that, at any point during their execution, they define a partial decomposition, whose width equals the highest degree encountered in the greedy ordering so far. The algorithm can thus be stopped at any width, and the size of the resulting partial decomposition can be measured.
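A minimal sketch of this truncated elimination, assuming the min-degree (Degree) ordering and a naive node selection, is:

```python
from itertools import combinations

def partial_decomposition_core(adj, k):
    """Run greedy min-degree elimination only while some remaining node has
    degree <= k (the width-k fringe); return the leftover core graph.  The
    eliminated nodes form tree-like bags of at most k+1 nodes, interfacing
    with the core through the cliques created by the fill-in."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))
        if len(adj[v]) > k:
            break                        # only high-degree nodes remain: stop
        nbrs = adj.pop(v)
        for a, b in combinations(nbrs, 2):   # fill-in among v's neighbors
            adj[a].add(b)
            adj[b].add(a)
        for u in nbrs:
            adj[u].discard(v)
    return adj                           # the (possibly empty) core graph

# On a tree the whole graph is fringe already at k = 1: the core is empty.
path = {i: {j for j in (i - 1, i + 1) if 0 <= j < 10} for i in range(10)}
print(len(partial_decomposition_core(path, 1)))  # 0
```

On a dense graph such as a 5-clique, stopping at k = 2 leaves the whole clique as the core, which mirrors the fringe/core behavior measured below.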
To evaluate this, we track the size of the core graph in terms of edges. We do this because most graph algorithms have a complexity directly related to the number of edges of the graph. As discussed in the previous section, we want the size of the core graph to be as small as possible. Another aspect to keep in mind is the number of edges added by the fill-in process during the decomposition: each time a node of degree d is removed, at most d(d−1)/2 edges are added to the graph. Finally, for a graph of treewidth w, the decomposition contains a root bag with at most w(w+1)/2 edges. Hence, to ensure that the core graph is smaller than the original graph, we aim for a treewidth on the order of the square root of the number of edges of the graph. As we saw in Section 4, this is only rarely the case, and it is most likely to occur in infrastructure networks.
Indeed, if we plot the core graph size in the partial decompositions of infrastructure networks (Figures 9a, 9b), we see that their size is only a fraction of the original size. We see a large drop in size at low widths; this is logical, since the fill-in process does not increase the number of edges for widths 1 and 2. After this point, the size is relatively stable, and can go as low as 10% of the original size. In this sense, infrastructure networks seem the best fit for indexes based on decompositions, and this is further confirmation of the good behavior of these networks for hierarchical decompositions.
This desired decomposition behavior no longer occurs for social networks (Figures 9c, 9d) and the other networks in our dataset (Figures 9e, 9f). In this case, the decomposition becomes too large, even unwieldy – for some networks, the decomposition started filling the main memory of our machines (32 GB of RAM); for others, it did not finish in reasonable time (i.e., a few days of computation). For most of these networks, the resulting treewidth is much larger than our desired bound; this results in core graphs that can be hundreds of times larger than the original graph. The only exceptions are the hierarchical networks, Royal and Math (Figures 9g, 9h), which are after all supposed to be close to trees – despite their relatively high treewidth (remember Figure 3b), partial decompositions work well on them. See Appendix E for full results.
11 Partial Decompositions of Synthetic Graphs
We show in Figure 12 partial decompositions of synthetic graphs, confirming the behavior observed on real-world graphs: the sizes of the resulting core graphs increase as we increase the parameters of interest.
In practice, however, we do not need to compute full decompositions. For usable indexes, we may want to stop at low treewidths, which allow for exact computation. This may be useful for graphs whose decompositions have large core graphs. Indeed (see Appendix G for details), by stopping at widths between 5 and 10, it is often possible to significantly reduce the original size of the graph, even in cases where core graphs are large, such as for the Google graph.
Such partial decompositions are an extremely promising tool: on databases where the partial decomposition results in a core graph of reduced size, efficient algorithms can be run on the fringe, exploiting its low treewidth, while more costly algorithms are run on the core, which has reduced size. Combining results from the fringe and the core can be a challenging process, however, which is worth investigating for each separate problem. As a start, we have shown in [47] how to exploit partial decompositions to significantly improve the running time of source-to-target queries in probabilistic graphs.
12 Zoomed View of Partial Decompositions
We plot in Figure 13 decompositions up to a width of 25, for a selection of graphs, including the graphs for which no upper bound algorithm finished (Yago, DbPedia, LiveJournal, Tpch). Immediately apparent is the fact that the minimal size of the core graph occurs at very low widths: in almost all cases, around a width between 5 and 10. This is low enough that running algorithms even doubly exponential in the width on the resulting fringe graphs is feasible. The actual size can vary greatly: in road networks, it can reach as low as 10% of the original size, compared to around 50% for other graphs. The exceptions are denser networks (CitHeph and LiveJournal), where almost no benefit is visible for partial decompositions of small width. An interesting behavior occurs for Tpch, where the decomposition size changes in steps; we conjecture this is due to the fact that database instances contain many cliques – specifically, one clique for each tuple in each relation.
13 Discussion
In this article, we have estimated the treewidth of a variety of graph datasets from different domains, and we have studied the running time and effectiveness of estimation algorithms. This study was motivated by the central role treewidth plays in much theoretical work attacking query evaluation tasks that are otherwise intractable, where low treewidth is an indicator of low complexity.
Our study of the estimation algorithms leads to results that may seem surprising. For upper bounds, we discovered that greedy treewidth estimators, based on elimination orderings, provide the best cost–estimation tradeoff; they also have the advantage of outputting readily usable tree decompositions. In the case of lower bounds, we discovered that degeneracy-based bounds display the best behavior; moreover, the algorithms that aim to improve these bounds (Lbn and Lbn+) only very rarely do so.
In terms of treewidth estimations, we have discovered that, generally, the treewidth of real-world graphs is quite large. With few exceptions, most of the graphs we have tested in this study are scale-free. This may partly explain our findings: scale-free networks exhibit a number of high-degree, or hub, nodes that force high values for the treewidth. The few exceptions to this rule are infrastructure networks, where treewidths are comparatively lower. Indeed, we were able to reproduce a bound on the treewidth of road networks. We conjecture these relatively low bounds are explained by characteristics of infrastructure networks: specifically, they are similar to very sparse random networks.
Even in the case of infrastructure networks, the absolute value of the treewidth still renders algorithms that are exponential in the treewidth impractical. However, one of the main lessons of this work is that it is still possible to exploit the structure of such datasets by computing partial tree decompositions: following the approach of Section 9, it is often possible to decompose a dataset into a fringe of low treewidth and a smaller core of high treewidth. This has been used in [47], but for a very specific (and limited) application: connectivity queries in probabilistic graphs.
In brief, though low-treewidth data guarantees a wealth of theoretical results on the tractability of various data management tasks, these results are unexploitable on most real-world datasets, which do not have low enough treewidth. One direction is of course to find relaxations of the treewidth notion, such as in [33, 41]. But since low treewidth is sometimes the only notion that ensures tractability [45, 34, 9], other approaches are needed; a promising one lies in the notion of partial tree decompositions. We believe future work in database theory should study in more detail the nature of partial tree decompositions, how to obtain optimal or near-optimal partial decompositions, and how to exploit them to improve the efficiency of a wide range of data management problems, from query evaluation to enumeration, probability estimation, and knowledge compilation.
References
 [1] Serge Abiteboul, Richard Hull, and Victor Vianu. Foundations of databases. AddisonWesley, 1995.
 [2] Ittai Abraham, Amos Fiat, Andrew V Goldberg, and Renato F Werneck. Highway dimension, shortest paths, and provably efficient algorithms. In SODA, 2010.
 [3] Aaron B. Adcock, Blair D. Sullivan, and Michael W. Mahoney. Tree decompositions and social graphs. Internet Mathematics, 12(5), 2016.
 [4] M. Ajtai, R. Fagin, and L. J. Stockmeyer. The closure of monadic NP. JCSS, 60(3), 2000.
 [5] Takuya Akiba, Christian Sommer, and Ken-ichi Kawarabayashi. Shortest-path queries for complex networks: Exploiting low tree-width outside the core. In EDBT, 2012.
 [6] Réka Albert and Albert-László Barabási. Statistical mechanics of complex networks. Rev. Mod. Phys., 74, 2002.
 [7] Antoine Amarilli, Pierre Bourhis, Louis Jachiet, and Stefan Mengel. A circuit-based approach to efficient enumeration. In ICALP, 2017.
 [8] Antoine Amarilli, Pierre Bourhis, and Pierre Senellart. Provenance circuits for trees and treelike instances. In ICALP, 2015.
 [9] Antoine Amarilli, Pierre Bourhis, and Pierre Senellart. Tractable lineages on treelike instances: Limits and extensions. In PODS, 2016.
 [10] Antoine Amarilli, Silviu Maniu, and Mikaël Monet. Challenges for efficient query evaluation on structured probabilistic data. In SUM, 2016.
 [11] Antoine Amarilli, Mikaël Monet, and Pierre Senellart. Connecting width and structure in knowledge compilation. In ICDT, pages 6:1–6:17, 2018.
 [12] Eyal Amir. Approximation algorithms for treewidth. Algorithmica, 56(4):448–479, 2010.
 [13] Stefan Arnborg, Derek G. Corneil, and Andrzej Proskurowski. Complexity of finding embeddings in a k-tree. SIAM Journal on Algebraic and Discrete Methods, 8(2), 1987.
 [14] Guillaume Bagan. MSO queries on tree decomposable structures are computable with linear delay. In CSL, volume 4207, 2006.
 [15] AlbertLászló Barabási and Márton Pósfai. Network science. Cambridge University Press, 2016.
 [16] Anne Berry, Pinar Heggernes, and Geneviève Simonet. The minimum degree heuristic and the minimal triangulation process. GraphTheoretic Concepts in Computer Science, 2880(Chapter 6), 2003.
 [17] Hans L. Bodlaender. A linear-time algorithm for finding tree-decompositions of small treewidth. SIAM J. Comput., 25(6), 1996.
 [18] Hans L. Bodlaender, Fedor V. Fomin, Arie M. C. A. Koster, Dieter Kratsch, and Dimitrios M. Thilikos. On exact algorithms for treewidth. ACM TALG, 9(1), 2012.
 [19] Hans L. Bodlaender and Arie M. C. A. Koster. Treewidth computations I. Upper bounds. Information and Computation, 208(3), 2010.
 [20] Hans L. Bodlaender and Arie M. C. A. Koster. Treewidth computations II. Lower bounds. Information and Computation, 209(7), 2011.
 [21] Hans L. Bodlaender, Arie M. C. A. Koster, and Thomas Wolle. Contraction and treewidth lower bounds. In ESA, 2004.
 [22] Angela Bonifati, Wim Martens, and Thomas Timm. An analytical study of large SPARQL query logs. PVLDB, 11(2), 2017.
 [23] François Clautiaux, Jacques Carlier, Aziz Moukrim, and Stéphane Nègre. New lower and upper bounds for graph treewidth. In WEA, 2003.
 [24] Bruno Courcelle. The monadic secondorder logic of graphs. I. Recognizable sets of finite graphs. Inf. Comput., 85(1), 1990.
 [25] Nilesh N. Dalvi and Dan Suciu. The dichotomy of conjunctive queries on probabilistic structures. In PODS, 2007.
 [26] Holger Dell, Christian Komusiewicz, Nimrod Talmon, and Mathias Weller. The PACE 2017 parameterized algorithms and computational experiments challenge: The second iteration. In IPEC, 2017.
 [27] Julian Dibbelt, Ben Strasser, and Dorothea Wagner. Customizable contraction hierarchies. Journal of Experimental Algorithmics, 21(1), 2016.
 [28] Vida Dujmović, David Eppstein, and David R. Wood. Structure of graphs with locally restricted crossings. SIAM J. Discrete Math., 31(2), 2017.
 [29] Arnaud Durand and Yann Strozecki. Enumeration complexity of logical query problems with second-order variables. In CSL, 2011.
 [30] Paul Erdős and Alfréd Rényi. On random graphs I. Publicationes Mathematicae (Debrecen), 6, 1959.
 [31] Johannes K. Fichte, Neha Lodha, and Stefan Szeider. SAT-based local improvement for finding tree decompositions of small width. In Serge Gaspers and Toby Walsh, editors, Theory and Applications of Satisfiability Testing – SAT 2017, 2017.
 [32] Jörg Flum, Markus Frick, and Martin Grohe. Query evaluation via treedecompositions. J. ACM, 49(6), 2002.
 [33] Markus Frick and Martin Grohe. Deciding first-order properties of locally tree-decomposable structures. J. ACM, 48(6), 2001.
 [34] Robert Ganian, Petr Hliněnỳ, Alexander Langer, Jan Obdržálek, Peter Rossmanith, and Somnath Sikdar. Lower bounds on the complexity of MSO1 modelchecking. JCSS, 1(80), 2014.
 [35] Yong Gao. Treewidth of Erdős–Rényi random graphs, random intersection graphs, and scalefree random graphs. Discrete Applied Mathematics, 160(4–5), 2012.
 [36] Vibhav Gogate and Rina Dechter. A complete anytime algorithm for treewidth. In UAI, 2004.
 [37] Martin Grohe, Thomas Schwentick, and Luc Segoufin. When is the evaluation of conjunctive queries tractable? In STOC, 2001.
 [38] Daniel John Harvey. On Treewidth and Graph Minors. PhD thesis, The University of Melbourne, 2014.
 [39] Abhay Jha and Dan Suciu. On the tractability of query compilation and bounded treewidth. In ICDT, 2012.
 [40] Abhay Kumar Jha and Dan Suciu. Knowledge compilation meets database theory: Compiling queries to decision diagrams. Theory Comput. Syst., 52(3), 2013.
 [41] Wojciech Kazana and Luc Segoufin. Enumeration of first-order queries on classes of structures with bounded expansion. In PODS, 2013.
 [42] Daphne Koller and Nir Friedman. Probabilistic Graphical Models: Principles and Techniques. The MIT Press, 2009.
 [43] Arie M. C. A. Koster, Thomas Wolle, and Hans L. Bodlaender. Degree-based treewidth lower bounds. In WEA, 2005.
 [44] Stephan Kreutzer. Algorithmic metatheorems. CoRR, abs/0902.3616, 2009.
 [45] Stephan Kreutzer and Siamak Tazari. Lower bounds for the complexity of monadic second-order logic. In LICS, 2010.
 [46] S. L. Lauritzen and D. J. Spiegelhalter. Local computations with probabilities on graphical structures and their application to expert systems. Journal of the Royal Statistical Society Series B (Methodological), 50(2), 1988.
 [47] Silviu Maniu, Reynold Cheng, and Pierre Senellart. An indexing framework for queries on probabilistic graphs. ACM Trans. Database Syst., 42(2), 2017.

 [48] Harry M. Markowitz. The elimination form of the inverse and its application to linear programming. Management Science, 3(3), 1957.
 [49] Mikaël Monet. Probabilistic evaluation of expressive queries on bounded-treewidth instances. In SIGMOD PhD Symposium, 2016.
 [50] M. E. J. Newman, D. J. Watts, and S. H. Strogatz. Random graph models of social networks. In Proceedings of the National Academy of Sciences, 2002.
 [51] Mark E. J. Newman, Steven H. Strogatz, and Duncan J. Watts. Random graphs with arbitrary degree distributions and their applications. Phys. Rev. E, 64, Jul 2001.
 [52] François Picalausa and Stijn Vansummeren. What are real SPARQL queries like? In SWIM, 2011.

 [53] Léon Planken, Mathijs de Weerdt, and Roman van der Krogt. Computing all-pairs shortest paths by leveraging low treewidth. Journal of Artificial Intelligence Research, 43, 2012.
 [54] Luca Pulina and Armando Tacchella. An empirical study of QBF encodings: from treewidth estimation to useful preprocessing. Fundamenta Informaticae, 102:391–427, 2010.
 [55] Neil Robertson and Paul D. Seymour. Graph minors. III. Planar treewidth. J. Comb. Theory, Ser. B, 36(1), 1984.
 [56] S. Saluja, K.V. Subrahmanyam, and M.N. Thakur. Descriptive complexity of #P functions. JCSS, 50(3), 1995.
 [57] James W. Thatcher and Jesse B. Wright. Generalized finite automata theory with an application to a decision problem of second-order logic. Math. Systems Theory, 2(1), 1968.
 [58] Thomas van Dijk, Jan-Pieter van den Heuvel, and Wouter Slob. Computing treewidth with LibTW. Technical report, University of Utrecht, 2006.
 [59] M. Y. Vardi. The complexity of relational query languages (extended abstract). In STOC, 1982.
 [60] D. J. Watts and S. H. Strogatz. Collective dynamics of ‘small-world’ networks. Nature, 393, 1998.
 [61] Fang Wei. TEDI: efficient shortest path query answering on graphs. In SIGMOD, 2010.
 [62] Thomas Wolle. Computational aspects of treewidth: Lower bounds and network reliability. PhD thesis, Utrecht University, 2005.