1 Introduction
The exploitation of structural properties found in sparse graphs has a long and fruitful history in the design of efficient algorithms. Besides the long list of results on planar graphs and graphs of bounded degree (which are too numerous to fairly represent here), the celebrated structure theory of graphs with excluded minors, developed by Robertson and Seymour [60], falls into this category. It not only had an immense influence on the design of efficient algorithms (see, e.g., [19, 20]); it further introduced the now widely used notion of treewidth (see, e.g., [9]) and gave rise to the field of parameterized complexity: “In the beginning, all we did was graph minors” (M. Fellows, pers. comm.). As such, the impact of the theory of sparse graphs on algorithmic research cannot be overstated.
Many of the algorithmic results concerning classes excluding a minor or a topological minor are in some way based on topological arguments, depending on the structure theorems (e.g. decompositions) for the class under consideration. A complete paradigm shift was initiated by Nešetřil and Ossona de Mendez with their foundational work and introduction of the notions of bounded expansion [45, 46, 47] and nowhere denseness [49]. These graph classes extend and properly contain all the aforementioned sparse classes and many arguments based on topology can be replaced by more general, and surprisingly often much simpler, arguments based on density. We refer to the textbook [50] for extensive background on the theory of sparse graph classes.
The rich structural theory for bounded expansion and nowhere dense graph classes has been successfully applied to design efficient algorithms for hard computational problems on specific sparse classes of graphs, see e.g. [6, 17, 22, 23, 24, 25, 26, 30, 32, 66]. On the other hand, several results indicate that nowhere dense graph classes form a natural limit for algorithmic methods based on sparseness arguments, see e.g. [22, 24].
One core strength of the bounded expansion/nowhere dense framework is that there exists a multitude of equivalent definitions that provide complementary perspectives. Here, we study two structural properties of these classes that are of particular importance in the algorithmic context, namely the property of having bounded generalized coloring numbers and the property of being uniformly quasi-wide. The generalized coloring numbers intuitively measure reachability properties in a linear vertex ordering of a given graph. Such an ordering yields a very weak and local form of a graph decomposition which can be exploited combinatorially [25, 57] and algorithmically [6, 22, 23, 32]. Uniform quasi-wideness was originally introduced in finite model theory [16], and soon found combinatorial and algorithmic applications on nowhere dense classes [17, 25, 30, 37, 48, 55, 63].
Even though the above results render many problems tractable in theory, many of the known algorithms have worst-case running times that involve huge constant factors and combinatorial explosions with respect to the discussed parameters. The central question of our work is how the generalized coloring numbers and uniform quasi-wideness behave on real-world graphs, an endeavor which so far has only been conducted for a single notion of bounded expansion and on a smaller scale [21]. Controllable numbers would be a prerequisite for practical implementations of algorithms based on such structural approaches. We provide an experimental evaluation of several algorithms that approximate these parameters on real-world graphs.
On the theoretical side, we provide a new algorithm for uniform quasiwideness with polynomial size guarantees in graph classes of bounded expansion and show a lower bound indicating that the guarantees of this algorithm are close to optimal in graph classes with fixed excluded minor.
Organization. We give background on the theory of bounded expansion and nowhere dense graphs in Section 2. In Section 3 and Section 4 we describe our approaches to compute the weak coloring numbers and uniform quasiwideness. Our experimental setup is described in Section 5 and our results are presented in Section 6 and Section 7.
2 Preliminaries
Graphs. All graphs in this paper are finite, undirected and simple, that is, they do not have loops or multiple edges between the same pair of vertices. For a graph $G$, we denote by $V(G)$ the vertex set of $G$ and by $E(G)$ its edge set. The distance between a vertex $u$ and a vertex $v$ is the length (that is, the number of edges) of a shortest path between $u$ and $v$. For a vertex $v$ of $G$, we write $N^G(v)$ for the set of all neighbors of $v$, and for $r \ge 1$ we denote by $N_r^G[v]$ the closed $r$-neighborhood of $v$, that is, the set of vertices of $G$ at distance at most $r$ from $v$. Note that we always have $v \in N_r^G[v]$. When no confusion can arise regarding the graph we are considering, we usually omit the superscript $G$. The radius of a connected graph $G$ is the minimum integer $r$ such that there exists a vertex $v \in V(G)$ with the property that all vertices of $G$ have distance at most $r$ to $v$. A set $A \subseteq V(G)$ is $r$-independent if all distinct vertices of $A$ have distance greater than $r$.
Bounded expansion and nowhere denseness. A minor model of a graph $H$ in a graph $G$ is a family $(I_h)_{h \in V(H)}$ of pairwise vertex-disjoint connected subgraphs of $G$, called branch sets, such that whenever $uv$ is an edge in $H$, there are $u' \in I_u$ and $v' \in I_v$ for which $u'v'$ is an edge in $G$. The graph $H$ is a depth-$r$ minor of $G$, denoted $H \preceq_r G$, if there is a minor model of $H$ in $G$ such that each branch set has radius at most $r$. A class $\mathcal{C}$ of graphs is nowhere dense if there is a function $t \colon \mathbb{N} \to \mathbb{N}$ such that for all $r \in \mathbb{N}$ it holds that $K_{t(r)} \not\preceq_r G$ for all $G \in \mathcal{C}$, where $K_t$ denotes the clique on $t$ vertices. The class $\mathcal{C}$ has bounded expansion if there is a function $d \colon \mathbb{N} \to \mathbb{N}$ such that for all $r \in \mathbb{N}$ and all $H$ with $H \preceq_r G$ for some $G \in \mathcal{C}$, the edge density of $H$, i.e., $|E(H)|/|V(H)|$, is bounded by $d(r)$. Note that every class of bounded expansion is nowhere dense. The converse is not necessarily true in general [50].
3 The weak coloring numbers
The coloring number $\mathrm{col}(G)$ of a graph $G$ is the minimum integer $k$ such that there is a linear order $L$ of the vertices of $G$ for which each vertex $v$ has back-degree at most $k$, i.e., at most $k$ neighbors $u$ with $u <_L v$. It is well-known that for any graph $G$, the chromatic number satisfies $\chi(G) \le \mathrm{col}(G)$, which possibly explains the name “coloring number”.
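An order realizing the coloring number can be obtained by repeatedly removing a vertex of minimum degree and placing the removed vertices from right to left; the resulting maximum back-degree equals the degeneracy of the graph. A minimal sketch in Python (the adjacency-dict representation and function names are ours):

```python
def degeneracy_order(adj):
    """Repeatedly remove a minimum-degree vertex; placing the removed
    vertices from right to left yields an order whose maximum
    back-degree equals the degeneracy of the graph."""
    deg = {v: len(ns) for v, ns in adj.items()}
    removed = set()
    removal = []
    for _ in range(len(adj)):
        v = min((u for u in adj if u not in removed), key=deg.get)
        removed.add(v)
        removal.append(v)
        for w in adj[v]:
            if w not in removed:
                deg[w] -= 1
    removal.reverse()  # the vertex removed first ends up rightmost
    return removal

def max_back_degree(adj, order):
    """Back-degree of v: number of neighbors placed earlier than v."""
    pos = {v: i for i, v in enumerate(order)}
    return max(sum(1 for w in adj[v] if pos[w] < pos[v]) for v in adj)
```

On a tree this yields maximum back-degree 1, matching $\mathrm{col} = 2$.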
We study a generalization of the coloring number that was introduced by Kierstead and Yang [33] in the context of coloring games and marking games on graphs. The weak coloring numbers are a series of numbers $\mathrm{wcol}_r(G)$, parameterized by a positive integer $r$, which denotes the radius (the maximum length of the considered paths) of the ordering.
The invariants $\mathrm{wcol}_r(G)$ are defined in a way similar to the definition of the coloring number. Let $\Pi(G)$ be the set of all linear orders of the vertices of the graph $G$, and let $L \in \Pi(G)$. Let $u, v \in V(G)$. For a positive integer $r$, we say that $u$ is weakly $r$-reachable from $v$ with respect to $L$, if there exists a path $P$ of length $\ell$, $0 \le \ell \le r$, between $u$ and $v$ such that $u$ is minimum among the vertices of $P$ (with respect to $L$). Let $\mathrm{WReach}_r[G, L, v]$ be the set of vertices that are weakly $r$-reachable from $v$ with respect to $L$. Note that $v \in \mathrm{WReach}_r[G, L, v]$. The weak $r$-coloring number $\mathrm{wcol}_r(G)$ of $G$ is defined as
$$\mathrm{wcol}_r(G) = \min_{L \in \Pi(G)} \; \max_{v \in V(G)} \; \bigl|\mathrm{WReach}_r[G, L, v]\bigr|.$$
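For a fixed order, all weak reachability sets can be computed with one depth-bounded BFS per vertex $u$, restricted to vertices not smaller than $u$: every vertex $v$ reached this way has $u$ in $\mathrm{WReach}_r[G, L, v]$. A sketch evaluating a given order (our own naming):

```python
from collections import deque

def wcol(adj, order, r):
    """Maximum |WReach_r[G, L, v]| over all v, for the given order L.
    u is weakly r-reachable from v iff v is reached from u within r
    steps using only vertices that are not smaller than u."""
    pos = {u: i for i, u in enumerate(order)}
    size = {v: 0 for v in adj}
    for u in order:
        dist, queue = {u: 0}, deque([u])
        while queue:
            x = queue.popleft()
            if dist[x] < r:
                for y in adj[x]:
                    if y not in dist and pos[y] >= pos[u]:
                        dist[y] = dist[x] + 1
                        queue.append(y)
        for v in dist:          # u is in WReach_r[G, L, v]
            size[v] += 1
    return max(size.values())
```

On a path ordered left to right, the maximum set is $\{v\}$ plus the at most $r$ vertices preceding $v$ on the path, giving $\mathrm{wcol}_r = r + 1$.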
As proved by Zhu [70], the weak coloring numbers can be used to characterize bounded expansion and nowhere dense classes of graphs: A class $\mathcal{C}$ of graphs has bounded expansion if and only if there exists a function $f \colon \mathbb{N} \to \mathbb{N}$ such that $\mathrm{wcol}_r(G) \le f(r)$ for all $r \in \mathbb{N}$ and all $G \in \mathcal{C}$. A class $\mathcal{C}$ is nowhere dense if and only if for every real $\varepsilon > 0$ and every $r \in \mathbb{N}$ there exists an integer $n_0$ such that for all $n$-vertex graphs $H$ with $n \ge n_0$ which are subgraphs of some $G \in \mathcal{C}$ we have $\mathrm{wcol}_r(H) \le n^{\varepsilon}$.
An interesting aspect of the weak coloring numbers is that these invariants can also be seen as gradations between the coloring number and the treedepth (which is the minimum height of a depth-first search tree for a supergraph of $G$ [44]). More explicitly, for every graph $G$ we have (see [50, Lemma 6.5])
$$\mathrm{col}(G) = \mathrm{wcol}_1(G) \le \mathrm{wcol}_2(G) \le \dots \le \mathrm{wcol}_\infty(G) = \mathrm{td}(G).$$
Consequently, we also consider an algorithm for computing treedepth in our empirical evaluation.
A related notion to the weak coloring numbers are the strong coloring numbers, which were also introduced in [33]. Let $L \in \Pi(G)$, let $r$ be a positive integer and let $u, v \in V(G)$. We say that a vertex $u$ is strongly $r$-reachable from $v$ if there is a path $P$ of length at most $r$ between $u$ and $v$ such that $u = v$ or $u$ is the only vertex of $P$ smaller than $v$ (with respect to $L$). Let $\mathrm{SReach}_r[G, L, v]$ be the set of vertices that are strongly $r$-reachable from $v$ with respect to $L$. Again, $v \in \mathrm{SReach}_r[G, L, v]$. The strong $r$-coloring number is defined as $\mathrm{scol}_r(G) = \min_{L \in \Pi(G)} \max_{v \in V(G)} |\mathrm{SReach}_r[G, L, v]|$. As the weak coloring numbers converge to the treedepth with growing $r$, the strong coloring numbers converge to the treewidth [29]:
$$\mathrm{col}(G) = \mathrm{scol}_1(G) \le \mathrm{scol}_2(G) \le \dots \le \mathrm{scol}_\infty(G) = \mathrm{tw}(G) + 1.$$
The reason is that the treewidth of $G$ can be characterized by the minimum width of an elimination ordering of $G$, which is defined exactly as $\max_{v \in V(G)} |\mathrm{SReach}_\infty[G, L, v]| - 1$.
Clearly, $\mathrm{SReach}_r[G, L, v] \subseteq \mathrm{WReach}_r[G, L, v]$ for all $r$ and all $v$ (and thus $\mathrm{scol}_r(G) \le \mathrm{wcol}_r(G)$). Moreover, for all $r$ we have $\mathrm{wcol}_r(G) \le \mathrm{scol}_r(G)^r$ [33]. Since the weak coloring numbers converge to the treedepth, for every graph $G$ there is some (possibly large) integer $r$ such that $\mathrm{wcol}_r(G) = \mathrm{td}(G)$. This gives hope that an elimination ordering computed for treewidth gives a good upper bound for $\mathrm{wcol}_r$ when $r$ is large. We will evaluate orders produced by an algorithm for treewidth approximation, but interpreted as orders for the weak coloring numbers.
Concrete bounds for the weak coloring numbers on restricted graph classes are given in [29, 36, 45, 56, 65, 70]. Our approximation algorithms are based on the approaches described in [45, 56, 65], which we describe in more detail in the following subsections.
3.1 Distance-constrained Transitive Fraternal Augmentations
Given a graph $G$ and a linear order $L$ of its vertices, observe that we have the following properties:

Let $u, v, w$ be such that $u \in \mathrm{WReach}_i[G, L, w]$ and $v \in \mathrm{WReach}_j[G, L, w]$ for some numbers $i, j$ with $i + j \le r$. Then either $u \in \mathrm{WReach}_r[G, L, v]$ or $v \in \mathrm{WReach}_r[G, L, u]$.

Let $u, v, w$ be such that $u \in \mathrm{WReach}_i[G, L, v]$ and $v \in \mathrm{WReach}_j[G, L, w]$ for some numbers $i, j$ with $i + j \le r$. Then $u \in \mathrm{WReach}_r[G, L, w]$.
We can approximate the weak coloring numbers by orienting the input graph and iteratively inserting arcs so that the above reachability properties are satisfied. Introducing an arc with the aim of satisfying property 1 above is called a fraternal augmentation, while introducing an arc with the aim of satisfying property 2 is called a transitive augmentation. These operations were first studied in [46]. We are going to work with an optimized version, called distance-constrained transitive-fraternal augmentations (dtf-augmentations for short), which was introduced in [56] as a more practical variant of transitive-fraternal augmentations.
Let $G$ be an undirected graph and let $\vec G_1$ be any orientation of $G$. Then a dtf-augmentation of $\vec G_1$ is a sequence $\vec G_1 \subseteq \vec G_2 \subseteq \vec G_3 \subseteq \dots$ of directed graphs which satisfy the following two constraints:

Let $u, v, w$ be such that $(u, w)$ and $(v, w)$ are arcs of $\vec G_i$ and $\vec G_j$, respectively, with $i + j = r$. Then it follows that either $(u, v)$ or $(v, u)$ is an arc of $\vec G_r$.

Let $u, v, w$ be such that $(u, w)$ and $(w, v)$ are arcs of $\vec G_i$ and $\vec G_j$, respectively, with $i + j = r$. Then it follows that $(u, v)$ is an arc of $\vec G_r$.
Just as above, arcs added because of the first item are called fraternal and arcs added because of the second item are called transitive. To simplify notation we associate a weight function $\omega_r$ with the $r$th dtf-augmentation $\vec G_r$, where $\omega_r(u, v) = \omega_{r-1}(u, v)$ if $(u, v)$ is an arc of $\vec G_{r-1}$, and $\omega_r(u, v) = r$ if $(u, v)$ is an arc of $\vec G_r$ but not of $\vec G_{r-1}$.
In other words: if the arc $(u, v)$ is present in $\vec G_r$ but not in $\vec G_{r-1}$, then we have $\omega_r(u, v) = r$ and $\omega_{r'}(u, v) = r$ for all $r' \ge r$. It can be shown that the arcs of weight $i$ appear exactly in augmentation $\vec G_i$. These augmentations behave similarly to graph powers in the following sense: consider two vertices $u, v$ that are at distance $d$ in $G$. Then in every augmentation $\vec G_r$ for $r \ge d$ we either find the arc $(u, v)$ with $\omega_r(u, v) \le d$, or the arc $(v, u)$ with $\omega_r(v, u) \le d$, or we find a common out-neighbor $w$ of $u$ and $v$ in $\vec G_r$ such that $\omega_r(u, w) + \omega_r(v, w) \le d$.
Importantly, graph classes of bounded expansion admit dtf-augmentations in which the maximum out-degree of $\vec G_r$ is bounded by a function of the depth $r$ and the graph class in question [56] (we remark that commonly in the literature one orients the graphs to minimize in-degrees instead of out-degrees; however, for consistency with the weak coloring numbers we orient so that arcs point towards smaller vertices). The algorithm to compute such augmentations closely follows the original algorithm for tf-augmentations (described in [46, 50]): first, the orientation $\vec G_1$ is chosen to be the acyclic orientation derived from a degeneracy ordering of $G$; this orientation minimizes the maximum out-degree. Second, we can orient the fraternal arcs added in step $r$ by first collecting all potential fraternal edges in an auxiliary graph $H_r$ and then again computing an acyclic orientation of $H_r$ which minimizes the out-degree. We then insert the arcs into $\vec G_r$ according to their orientation in $H_r$.
Instead of computing fraternal arcs at step $r$ by searching for fraternal configurations in all pairs $\vec G_i$, $\vec G_j$ with $i + j = r$, it suffices to consider the pair $\vec G_1$, $\vec G_{r-1}$. The same optimization does not hold for transitive arcs, however.
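A single augmentation step can be sketched as follows. This is a naive implementation under our own data layout (arcs stored as weight maps) that checks all weight pairs with $i + j = r$, and it orients fraternal arcs simply by vertex name instead of the degeneracy orientation of the auxiliary graph used by the actual algorithm:

```python
def dtf_step(arcs, r):
    """One dtf-augmentation step: `arcs` maps u -> {v: weight} and
    represents G_{r-1}; returns the arc set of G_r.
    Transitive rule: u ->(i) w ->(j) v with i + j = r gives u ->(r) v.
    Fraternal rule:  u ->(i) w <-(j) v with i + j = r makes u, v
    fraternal; each fraternal pair is oriented afterwards."""
    new_arcs = {u: dict(vs) for u, vs in arcs.items()}
    fraternal = set()
    in_arcs = {}
    for u, vs in arcs.items():
        for w, i in vs.items():
            in_arcs.setdefault(w, []).append((u, i))
    for w, preds in in_arcs.items():
        for u, i in preds:                       # transitive: u -> w -> v
            for v, j in arcs.get(w, {}).items():
                if i + j == r and v != u and v not in arcs.get(u, {}):
                    new_arcs.setdefault(u, {}).setdefault(v, r)
        for a in range(len(preds)):              # fraternal: u -> w <- v
            for b in range(a + 1, len(preds)):
                (u, i), (v, j) = preds[a], preds[b]
                if i + j == r and v not in arcs.get(u, {}) and u not in arcs.get(v, {}):
                    fraternal.add((min(u, v), max(u, v)))
    for u, v in fraternal:                       # naive orientation
        if v not in new_arcs.get(u, {}) and u not in new_arcs.get(v, {}):
            new_arcs.setdefault(u, {})[v] = r
    return new_arcs
```

For the directed path $a \to b \to c$ the step at $r = 2$ adds the transitive arc $a \to c$ of weight 2; for $a \to c \leftarrow b$ it makes $a$ and $b$ fraternal.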
The precise connection between dtfaugmentations and orderings is presented in the following lemma.
Lemma 3.1 ([6, 30]).
Let $\vec G_r$ be the $r$th dtf-augmentation of a graph $G$ and let $H_r$ be the underlying undirected graph of $\vec G_r$. Let $L$ be an ordering of $V(G)$ such that every vertex has at most $d$ smaller neighbours in $H_r$ with respect to $L$. Then $|\mathrm{WReach}_r[G, L, v]|$ is bounded by a function of $d$ and $r$, for all $v \in V(G)$.
Therefore we can obtain an ordering with bounded weak $r$-reachability sets from the $r$th dtf-augmentation by simply computing a degeneracy ordering of the underlying undirected graph of $\vec G_r$.
3.2 Flat decompositions
The following approach for approximating the weak coloring numbers was introduced in [65] and provably yields good results on graphs that exclude a fixed minor.
A decomposition of a graph $G$ is a sequence $H_1, \dots, H_p$ of nonempty subgraphs of $G$ such that the vertex sets $V(H_1), \dots, V(H_p)$ partition $V(G)$. The decomposition is connected if each $H_i$ is connected.
A decomposition $H_1, \dots, H_p$ of a graph $G$ induces a partial order on $V(G)$ by defining $u < v$ if $u \in V(H_i)$ and $v \in V(H_j)$ for $i < j$. A decomposition yields a good order for the weak coloring numbers if we can


guarantee that the neighborhood of each $V(H_i)$ has a small intersection with $V(H_1) \cup \dots \cup V(H_{i-1})$ (then, in particular, the part of each weak reachability set that lies in earlier subgraphs is small), and

ensure that we can order the vertices inside each $H_i$ so that we have good weak reachability properties.
We call such a decomposition flat. The following procedure was proposed in [65] to compute a decomposition of a graph $G$. If $G$ excludes the complete graph $K_t$ as a minor, the resulting decomposition is flat. For a decomposition $H_1, \dots, H_p$ of a graph $G$ and $i \le p$, we denote by $G_i$ the subgraph of $G$ induced by $V(H_i) \cup \dots \cup V(H_p)$.
Without loss of generality we may assume that $G$ is connected. We iteratively construct a connected decomposition of $G$. To start, we choose an arbitrary vertex $v_1$ and let $H_1$ be the connected subgraph consisting only of $v_1$. Now assume that for some $i \ge 1$ the sequence $H_1, \dots, H_i$ has already been constructed. Fix some component $C$ of $G_{i+1}$ and denote by $H_{j_1}, \dots, H_{j_k}$ the subgraphs among $H_1, \dots, H_i$ that have a connection to $C$. Using that $K_t$ is excluded as a minor, one may argue that $k < t$. Let $v$ be a vertex of $C$ and let $T$ be a breadth-first search tree in $C$ with root $v$. We choose $H_{i+1}$ to be a minimal connected subgraph of $T$ that contains $v$ and that contains, for each $\ell$, $1 \le \ell \le k$, at least one neighbor of $H_{j_\ell}$; because $C$ is connected, such a subgraph exists. As shown in [65], if $G$ excludes $K_t$ as a minor, then the above procedure produces a linear order that certifies that $\mathrm{wcol}_r(G) \le \binom{t-1}{2} \cdot (2r+1)$.
Observe that this procedure leaves some freedom in how to pick the vertex $v$ of $C$ from which we start the breadth-first search and in which order to insert the vertices of $H_{i+1}$ into the linear order. We evaluate several options. For the choice of the root vertex, the following choices seem reasonable.

Choose a vertex that maximizes the number of neighbors in the subgraphs $H_{j_1}, \dots, H_{j_k}$, to possibly obtain a subgraph $H_{i+1}$ that is smaller than when we choose a vertex far from all of them.

Choose a vertex that has maximum degree in $C$; high-degree vertices should be low in the order.

Choose a vertex that has maximum degree in $C$, but only among those that are adjacent to some $H_{j_\ell}$.
For the order of the vertices of $H_{i+1}$, we check the following options.

The breadth-first search and the depth-first search order from the root.

Sorted by degrees, non-increasingly.

Each of the above, but reversed.
3.3 Treedepth heuristic
Since the ‘limit’ of the weak coloring numbers is exactly the treedepth of a graph, i.e., $\mathrm{wcol}_\infty(G) = \mathrm{td}(G)$, we consider simply computing a treedepth decomposition and using an ordering derived from the decomposition. Our algorithm of choice, developed by Sanchez [62] and implemented by Oelschlägel [54], recursively extracts separators from the graph. To minimize the search space, only close separators are considered, that is, separators that lie in the closed neighborhood of some vertex. Furthermore, the algorithm makes use of the following proposition.
Proposition 3.2 ([8]).
If $S$ is a minimal separator of a graph $G$ and $x \in S$, then for each connected component $C$ of $G - (S \cup N(x))$ the set $N(C)$ is a minimal separator of $G$.
Let $\mathcal{S}$ be the set of minimal separators that can be constructed from a minimal separator $S_0$ by applying the above proposition, where $S_0$ is an arbitrary minimal close separator. The algorithm then finds the separator $S \in \mathcal{S}$ which minimizes the size of the largest connected component of $G - S$
(the implementation supports other heuristics, but this heuristic turned out to have an acceptable running time for the large instances).
3.4 Treewidth heuristic
A well-known approach to compute a tree decomposition of a graph is to find a linear order of the vertices, an elimination order, of possibly small maximum back-degree. From such an order it is easy to construct a tree decomposition of width equal to the maximum back-degree (see, e.g., [10]). Let $L \in \Pi(G)$ and let $v \in V(G)$. The back-degree of $v$ is defined as the number of neighbors $u$ of $v$ with $u <_L v$ in the fill-in graph obtained by eliminating the vertices larger than $v$ in decreasing order, where eliminating a vertex removes it and turns its remaining neighborhood into a clique.
There is a number of heuristics to produce good elimination orders. We chose one that is simple, fast and gives rather good results for treewidth: the so-called minimum-degree heuristic [10].
The minimum-degree algorithm orders the vertices of the graph starting from the biggest vertex, which is chosen as one with minimum degree. Assume that we have already ordered the vertices at positions greater than $i$; we put at position $i$ a vertex with the least back-degree, that is, one of minimum degree in the graph obtained by eliminating the already ordered vertices.
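The minimum-degree heuristic can be sketched as follows, under the convention above that eliminated vertices are placed at the largest free position (representation and names are ours):

```python
def min_degree_order(adj):
    """Minimum-degree elimination: repeatedly pick a vertex of minimum
    degree in the current (partially eliminated) graph, place it at the
    largest unfilled position, remove it, and turn its neighborhood
    into a clique (fill-in).  Returns (order, width)."""
    g = {v: set(ns) for v, ns in adj.items()}
    placed = []          # in elimination order, biggest position first
    width = 0
    while g:
        v = min(g, key=lambda u: len(g[u]))
        width = max(width, len(g[v]))
        nbrs = g.pop(v)
        for a in nbrs:
            g[a].discard(v)
        for a in nbrs:
            g[a] |= nbrs - {a}          # fill edges
        placed.append(v)
    placed.reverse()     # left-to-right order, smallest vertex first
    return placed, width
```

On a cycle of length 4 the first elimination adds one fill edge and the resulting width is 2, matching the treewidth.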
3.5 Sorting by degrees and other heuristics
Apart from algorithms with theoretical guarantees we also compared several naive heuristics.

For $r = 1$ an optimal order is a degeneracy order, which can be easily computed. We can check whether this order produces reasonable results for higher values of $r$ as well.

Intuitively, it makes sense to sort vertices by descending degree (ties are broken arbitrarily), because from vertices of high degree more vertices can be reached in one step. This intuition is further supported by one popular network model, the Chung–Lu random graphs, which sample graphs with a fixed degree distribution and successfully replicate several statistics exhibited by real-world networks [13, 14]. In this model, vertices are assigned weights (corresponding to their expected degrees) and edges are sampled independently but biased according to the weights of their endpoints. Under this model, vertices of the same degree are exchangeable, and the one ordering we can choose to minimize the number of reachable vertices is simply the descending degree ordering.

A simple idea for generalizing the above heuristics to bigger values of $r$ is to apply them to the $r$th power of $G$, i.e., $G^r$ is defined as the graph with $V(G^r) = V(G)$ and $uv \in E(G^r)$ if and only if the distance between $u$ and $v$ in $G$ is at most $r$.

As a baseline we also included a random ordering of the vertices.
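The power-graph construction and the descending-degree order above can be sketched as follows (our own naming; ties in the sort are broken by Python's sort stability):

```python
from collections import deque

def power_graph(adj, r):
    """G^r: connect u and v iff their distance in G is at most r."""
    pr = {}
    for s in adj:
        dist, queue = {s: 0}, deque([s])
        while queue:
            x = queue.popleft()
            if dist[x] < r:
                for y in adj[x]:
                    if y not in dist:
                        dist[y] = dist[x] + 1
                        queue.append(y)
        pr[s] = set(dist) - {s}
    return pr

def descending_degree_order(adj):
    """Sort vertices by degree, largest first."""
    return sorted(adj, key=lambda v: len(adj[v]), reverse=True)
```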
3.6 Local search
In addition to all these approaches we can try to improve their results by local search, a technique where we make small changes to a candidate solution. We applied the following local changes and tested whether they improved the current order $L$.

Take any vertex $v$ that has the biggest set $\mathrm{WReach}_r[G, L, v]$ and swap it with a random vertex that is smaller with respect to $L$.

Take any vertex $v$ that has the biggest set $\mathrm{WReach}_r[G, L, v]$ and swap it with its direct predecessor in $L$.
Both heuristics try to place a vertex with many weakly reachable vertices to the left of them and thus to make them non-weakly-reachable. The advantage of the second rule is that when swapping $v$ with its direct predecessor $u$, the only possible changes to the sets of other vertices are that they lose $u$ (if it was there) and that they may obtain $v$; these updates are trivial to recompute, and the only computationally heavy update is the new set $\mathrm{WReach}_r[G, L, u]$. For the first rule, recomputing the sets is more expensive. However, the disadvantage of the second rule is that it does not lead to further improvements quickly; hence applications of only the first rule give better results than applications of the second rule only. In our implementation we made a few optimizations in order to improve the results of the second rule, but we refrain from describing them in detail. The final local search algorithm first performs rounds of applications of the first rule and, when they no longer improve the result, performs rounds of applications of the second rule. This combination turned out to be empirically most effective.
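The first rule can be sketched as follows, with a full recomputation of the weak reachability sets after every swap; the actual implementation updates the sets incrementally (names and the step budget are ours):

```python
import random
from collections import deque

def wreach_sizes(adj, order, r):
    """|WReach_r[G, L, v]| for every v, for the order L."""
    pos = {u: i for i, u in enumerate(order)}
    size = {v: 0 for v in adj}
    for u in order:
        dist, queue = {u: 0}, deque([u])
        while queue:
            x = queue.popleft()
            if dist[x] < r:
                for y in adj[x]:
                    if y not in dist and pos[y] >= pos[u]:
                        dist[y] = dist[x] + 1
                        queue.append(y)
        for v in dist:
            size[v] += 1
    return size

def local_search(adj, order, r, steps=50, seed=0):
    """Rule 1: swap a vertex with the largest WReach_r set with a
    random earlier vertex; keep the swap iff the maximum set size
    does not increase."""
    rng = random.Random(seed)
    order = list(order)
    best = max(wreach_sizes(adj, order, r).values())
    for _ in range(steps):
        size = wreach_sizes(adj, order, r)
        i = order.index(max(order, key=size.get))
        if i == 0:
            continue
        j = rng.randrange(i)
        order[i], order[j] = order[j], order[i]
        cand = max(wreach_sizes(adj, order, r).values())
        if cand <= best:
            best = cand
        else:
            order[i], order[j] = order[j], order[i]  # revert the swap
    return order, best
```

Starting a star from the worst order (center last), the search moves the center leftwards until the optimum $\mathrm{wcol}_1 = 2$ is reached.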
4 Uniform quasi-wideness
Intuitively, a class of graphs is wide if for every graph from the class, every radius $r$ and every large subset $A$ of vertices one can find a large subset $B \subseteq A$ of vertices which are pairwise at distance greater than $r$ (recall that such a subset is called $r$-independent). The notion of uniform quasi-wideness additionally allows to delete a small number of vertices to make $B$ $r$-independent. The following definition formalises the meaning of “large” and “small”.
Definition 4.1.
A class $\mathcal{C}$ of graphs is uniformly quasi-wide if for every $r \in \mathbb{N}$ and every $m \in \mathbb{N}$ there exist numbers $N = N(r, m)$ and $s = s(r)$ such that the following holds.
Let $G \in \mathcal{C}$ and let $A \subseteq V(G)$ with $|A| \ge N$. Then there exists a set $S \subseteq V(G)$ with $|S| \le s$ and a set $B \subseteq A$ of size at least $m$ such that for all distinct $u, v \in B$ we have $\mathrm{dist}_{G - S}(u, v) > r$.
Uniform quasi-wideness was introduced by Dawar in [16], and it was proved by Nešetřil and Ossona de Mendez in [48] that uniform quasi-wideness is equivalent to nowhere denseness. Very recently, it was shown that the function $N$ in the above definition can be chosen to be polynomial in $m$ [37, 55]. A single-exponential dependency was earlier established for classes of bounded expansion [36]. We are going to evaluate the algorithms derived from the proofs in [36, 55], as well as a new algorithm that is streamlined for bounded expansion classes and also achieves polynomial bounds. We discuss these algorithms in more detail next. We will prove in Section 8 that the bounds of our new algorithm are close to optimal.
4.1 Distance trees
We first describe the algorithm that was introduced in [55]. For simplicity, we focus on the case $r = 2$. First, observe that every graph from a nowhere dense class contains large independent sets. By the definition of a nowhere dense class, some complete graph $K_t$ is excluded as a depth-$0$ minor, that is, simply as a subgraph. Hence, Ramsey's Theorem immediately implies that if we consider any set of size at least $R(t, m)$, then there exists a subset of size $m$ which is $1$-independent (without deleting any elements). Furthermore, the proof of Ramsey's Theorem yielding this bound is constructive and can easily be implemented. The difficult part is now to find in a large $1$-independent set a large $2$-independent set, possibly after deleting a few elements (consider a star to see that deletion may be necessary).
Assume now that $A$ is a large $1$-independent set. The idea is to arrange the elements of $A$ in a binary tree $T$, which we call a distance tree, and prove that this tree contains a long path. From this path the set $B$ is extracted.
We identify the nodes of $T$ with words over the alphabet $\{0, 1\}$, where the empty word corresponds to the root, and where for a word $w$ the word $w0$ is its left and the word $w1$ is its right successor. Fix some enumeration of the set $A$. We define $T$ by processing the elements of $A$ sequentially according to the enumeration. We start with the tree that has its root labeled with the first element of $A$. For each remaining element $v$ we execute the following procedure, which results in adding a node with label $v$ to $T$.
When processing the vertex $v$, do the following. Start with $w$ being the empty word. While $w$ is a node of $T$, repeat the following step: if the distance from $v$ to the vertex which is at the position corresponding to $w$ in $T$ is at most $2$, replace $w$ by $w0$, otherwise, replace $w$ by $w1$. Once $w$ does not correspond to a node of $T$, extend $T$ by adding the node corresponding to $w$ and label it with $v$. In this way, we have processed the element $v$, and now proceed to the next element of $A$ until all elements are processed. This completes the construction of $T$. Thus, $T$ is a tree labeled with vertices of $A$, and every vertex of $A$ appears exactly once in $T$.
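The construction of the distance tree can be sketched as follows (for readability we use the letters 'L'/'R' instead of binary digits, and the distance threshold is a parameter):

```python
from collections import deque

def distances_from(adj, s):
    """BFS distances from s to all reachable vertices."""
    dist, queue = {s: 0}, deque([s])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return dist

def distance_tree(adj, elems, d):
    """Insert the elements of `elems` one by one into a binary tree:
    at an occupied node labeled x, go left if dist(v, x) <= d and
    right otherwise; label the first empty node reached with v.
    Nodes are identified with words over {'L', 'R'}."""
    tree = {}
    for v in elems:
        dist = distances_from(adj, v)
        w = ''
        while w in tree:
            w += 'L' if dist.get(tree[w], float('inf')) <= d else 'R'
        tree[w] = v
    return tree
```

A path of only right branches then corresponds to elements with pairwise distance greater than $d$.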
Now, based on the fact that some complete graph is excluded as a depth-$1$ minor of $G$, it is shown that $T$ contains a long path. This path either has many left branches or many right branches. Take a subpath that has only left branches or only right branches. Such a path corresponds to a set $B' \subseteq A$ such that all elements have pairwise distance at most $2$, or all elements have pairwise distance greater than $2$, that is, to a $2$-independent set. In the second case, we have found the set $B$ that we are looking for. In the other case, we proceed to show that there must exist an element $s$ that is adjacent to many elements of $B'$, i.e., $|N(s) \cap B'|$ is large. We add the vertex $s$ to the set $S$ of elements to delete and repeat the above tree-classification procedure with the set $B'$ in the graph $G - S$. It is shown that this process must stop after a bounded number of steps and yields a set which is $2$-independent in $G - S$.
The general case reduces to the cases $r = 1$ or $r = 2$: instead of starting with a $1$-independent set, we start with a $(2k+1)$- or $(2k+2)$-independent set and contract the disjoint $k$-neighborhoods of the elements of $A$, respectively, to single vertices. Then one iteratively finds independent sets for larger and larger radii.
4.1.1 Implementation details
We have implemented three variants of the above method, which we denote tree1, tree2 and ld_it. In all variants, we get a graph $G$, a vertex subset $A$ and a radius $r$ as input. We do not get the number $m$ as input, but we aim to find an $r$-independent subset which is as large as possible while deleting as few elements as possible.
For the odd cases (which reduce to $r = 1$ in the description above), in each variant we use a simple heuristic for finding independent sets described in Section 4.4. For the more interesting even cases (which reduce to $r = 2$ in the description above), tree2 computes a set of candidate solutions $(B_1, S_1), (B_2, S_2), \dots$. Here, $B_i$ is a set which corresponds to a long path in the distance tree and $S_i$ is the set of vertices removed so far (for this set $B_i$). At every step we compute one candidate solution $(B_i, S_i)$, remove a vertex which has the largest number of neighbors in $B_i$ (i.e., move it to $S_{i+1}$), and continue the process with the remaining set until it becomes too small. In the end, we output the best solution from the pool of collected solutions.

In the version denoted by tree1, we modify tree2 as follows. We let $B_i$ be a candidate for a large independent set which, however, we do not choose as a subset of the currently handled set, but of the original input set $A$. That is, we reclassify all distances of elements of the initial set $A$ in a distance tree, ignoring vertices that were deleted in later steps, to draw the candidate independent set from a larger pool of vertices. Finally, in the ld_it version (least degree iterated), we do not find independent sets based on the distance tree, but rather in a simple greedy manner, as an independent set in the square of the graph with the deleted vertices removed.
4.2 Weak coloring numbers and uniform quasi-wideness
We now describe the approach of [36], which is designed for classes of bounded expansion and combines the weak coloring numbers with uniform quasi-wideness.
Let a graph $G$, a set $A \subseteq V(G)$ and a radius $r$ be given. First, fix some order $L$ such that $|\mathrm{WReach}_{2r}[G, L, v]| \le c$ for every $v \in V(G)$ (for some constant $c$). Let $H$ be the graph with vertex set $A$, where we put an edge between $a$ and $b$ if and only if $a \in \mathrm{WReach}_{2r}[G, L, b]$ or $b \in \mathrm{WReach}_{2r}[G, L, a]$. Then $L$ certifies that $H$ is $c$-degenerate, and hence, assuming that $|A| \ge (c + 1) \cdot m$, we can greedily find an independent set $B$ of size $m$ in $H$. By the definition of the graph $H$, we have that $\mathrm{WReach}_{2r}[G, L, a] \cap B = \{a\}$ for each $a \in B$. Now observe that for $a \in B$, deleting $\mathrm{WReach}_r[G, L, a] \setminus \{a\}$ from $G$ leaves $a$ at distance greater than $r$ (in the resulting graph) from all the other vertices of $B$.
Based on this observation, one follows the simple approach also used to prove Ramsey's Theorem with exponential bounds. For each vertex $b$ of $B$ (in decreasing order, starting with the largest vertex with respect to $L$), we test whether $b$ is connected by a path of length at most $r$ to more than half of the remaining vertices of $B$. If this is the case, we delete the set $\mathrm{WReach}_r[G, L, b] \setminus \{b\}$ from $G$ (i.e., add it to $S$) and add the vertex $b$ to the result set $B'$. We continue with the subset of $B$ that had such a connection to $b$ (which is now separated by the deletion, though). Otherwise, $b$ is not connected to more than half of the remaining vertices of $B$, in which case we simply add $b$ to $B'$ and do not delete anything. In this case, we continue the construction with those vertices of $B$ that are not connected to $b$. It is proved that the first case can happen at most logarithmically often; hence, in total we delete at most $c \cdot \log |B|$ vertices and arrive at a set $B'$ whose vertices are pairwise at distance greater than $r$ in the remaining graph.
We have implemented exactly the algorithm outlined above. We denote it by mfcs.
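The halving procedure of Section 4.2 can be sketched roughly as follows; `wreach` is assumed to be precomputed from the ordering, the candidates are assumed to be listed in decreasing order, and all names are ours:

```python
from collections import deque

def within(adj, v, r, banned):
    """Vertices within distance r of v in the graph minus `banned`."""
    dist, queue = {v: 0}, deque([v])
    while queue:
        x = queue.popleft()
        if dist[x] < r:
            for y in adj[x]:
                if y not in dist and y not in banned:
                    dist[y] = dist[x] + 1
                    queue.append(y)
    return set(dist)

def halving_scatter(adj, cand, wreach, r):
    """Ramsey-style halving: if v is r-close to more than half of the
    remaining candidates, delete its wreach set (separating v from
    them) and keep the close half; otherwise keep the far half.
    Returns (result, deleted)."""
    cand = list(cand)
    result, deleted = [], set()
    while cand:
        v = cand.pop(0)
        if v in deleted:
            continue
        ball_v = within(adj, v, r, deleted)
        close = [u for u in cand if u in ball_v]
        far = [u for u in cand if u not in ball_v]
        result.append(v)
        if len(close) > len(cand) / 2:
            deleted |= set(wreach[v]) - {v}
            cand = close
        else:
            cand = far
    return result, deleted
```

On a star with all leaves as candidates, the first step deletes the center and every leaf survives as pairwise far.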
4.3 A new algorithm
Motivated by the rather conservative character of the algorithm of [36] described above, we propose here a new algorithm (albeit inspired by [36]). Furthermore, in Section 8 we show an almost tight lower bound for the guarantees of this algorithm on graphs excluding a fixed minor.
More formally, we show the following theorem.
Theorem 4.2.
Assume we are given a graph $G$, a set $A \subseteq V(G)$, integers $r$ and $m$, and an ordering $L$ of $V(G)$ with $\max_{v \in V(G)} |\mathrm{WReach}_r[G, L, v]| \le c$. Furthermore, assume that $|A|$ is sufficiently large in terms of $c$ and $m$. Then in polynomial time, one can compute sets $S \subseteq V(G)$ and $B \subseteq A$ such that $|S| \le c$, $|B| \ge m$, and $B$ is $r$-independent in $G - S$.
Proof.
The algorithm iteratively constructs sets $A_i \subseteq A$, $S_i \subseteq V(G)$, and $B_i \subseteq A$, starting with $A_0 = A$ and $S_0 = B_0 = \emptyset$, and maintaining the following invariants in every step $i$: the set $B_i$ is an $r$-independent set in $G - S_i$, and every vertex of $A_i$ is at distance greater than $r$ from every vertex of $B_i$ in the graph $G - S_i$.
At step $i$, given $A_i$, $S_i$, and $B_i$, the algorithm proceeds as follows.
 (growth step)

If $A_i$ is small (of size below a fixed threshold) or there exists $a \in A_i$ such that at most $t$ vertices of $A_i$ are within distance at most $r$ from $a$ in $G - S_i$ (i.e., $|\{b \in A_i : \mathrm{dist}_{G - S_i}(a, b) \le t\}| \le t$, where $t$ is a fixed threshold), then move $a$ to the independent set and delete the conflicting vertices from $A_i$; that is, set $B_{i+1} = B_i \cup \{a\}$, $S_{i+1} = S_i$, and $A_{i+1} = A_i \setminus \{b \in A_i : \mathrm{dist}_{G - S_i}(a, b) \le r\}$.
 (deletion step)

Otherwise, pick a vertex $s$ that appears in a maximum number of weakly reachable sets of vertices of $A_i$. That is, pick $s$ maximizing the quantity
$$\bigl|\{a \in A_i : s \in \mathrm{WReach}_r[G - S_i, L, a]\}\bigr|.$$
Insert $s$ into $S_{i+1}$ and restrict $A_{i+1}$ to the vertices containing $s$ in their weakly reachable sets. More formally, $S_{i+1} = S_i \cup \{s\}$, $A_{i+1} = \{a \in A_i : s \in \mathrm{WReach}_r[G - S_i, L, a]\} \setminus \{s\}$, and $B_{i+1} = B_i$.
The algorithm stops when $A_i$ becomes empty, and then returns $S = S_i$ and $B = B_i$.
Let us now analyze the algorithm. The fact that in the growth step we remove from $A_i$ the vertices that are within distance at most $r$ from $a$ preserves the invariant that the distance between $A_{i+1}$ and $B_{i+1}$ in $G - S_{i+1}$ is greater than $r$. This invariant, in turn, proves that the returned set $B$ is an $r$-independent set in $G - S$. It remains to show the bounds on the sizes of $B$ and $S$. To this end, we show the following two claims.
Claim 4.3.
At every step $i$, for every $a \in A_i$ and every $s \in S_i$, we have that $s \in \mathrm{WReach}_r[G, L, a]$.
Proof.
The claim follows directly from the fact that in the deletion step, we restrict $A_{i+1}$ to be the set of those vertices of $A_i$ that have $s$ in their weak reachability set.
Claim 4.4.
At every step $i$, if there is no vertex for which the growth step applies, then there exists a vertex $s$ such that at least $t/c$ vertices $b \in A_i$ satisfy $s \in \mathrm{WReach}_r[G - S_i, L, b]$, where $t$ is the growth threshold.
Proof.
Let $a$ be the least vertex of $A_i$ in the ordering $L$. Since the growth step is not applicable, the set $D = \{b \in A_i : \mathrm{dist}_{G - S_i}(a, b) \le r\}$ is of size at least $t$. For every $b \in D$, fix a path of length at most $r$ between $a$ and $b$ in $G - S_i$, and let $m_b$ be the minimal vertex on this path. The subpath from $m_b$ to $a$ shows that $m_b \in \mathrm{WReach}_r[G - S_i, L, a]$, and the subpath from $m_b$ to $b$ shows that $m_b \in \mathrm{WReach}_r[G - S_i, L, b]$. Since $|\mathrm{WReach}_r[G - S_i, L, a]| \le c$, while $|D| \ge t$, there exists $s \in \mathrm{WReach}_r[G - S_i, L, a]$ with
$$\bigl|\{b \in D : s \in \mathrm{WReach}_r[G - S_i, L, b]\}\bigr| \ge t/c.$$
This finishes the proof of the claim.
Consequently, when the algorithm executes the deletion step, we have $|A_{i+1}| \ge t/c - 1$ (the $-1$ comes from the case $s \in A_i$).
In particular, we have that the last step of the algorithm is a growth step: the deletion step executes only when $A_i$ is above the size threshold, and then $|A_{i+1}| \ge t/c - 1 > 0$. Let $a$ be the vertex added to $B$ in this last growth step. Then, by Claim 4.3, we have that $S \subseteq \mathrm{WReach}_r[G, L, a]$. Consequently, the algorithm executed at most $c$ deletion steps and $|S| \le c$.
For the bound on the size of the set $B$, let $i_0$ be the minimum index after which no deletion step is executed. For every step that executed a deletion step, Claim 4.4 guarantees that $|A_{i+1}| \ge t/c - 1$, while for every step that executed a growth step, at most $t + 1$ vertices are removed from $A_i$ and one vertex is added to $B$. Consequently, since the algorithm executed at most $c$ deletion steps, and every growth step after step $i_0$ adds one vertex to $B$, choosing $|A|$ and the thresholds sufficiently large in terms of $c$ and $m$ guarantees that $|B| \ge m$. This finishes the proof. ∎
The actual implementation of the above algorithm differs in a number of aspects. First, we found the threshold for the distinction between the growth step and the deletion step too small in practice, despite working well in the proof above. Moreover, experiments with this algorithm showed that it is unstable in the sense that small changes in this threshold can trigger big changes in the produced result which are, a priori, hard to predict. Because of that, our implementation executes the above algorithm with several different thresholds and chooses the best result (we will address comparing different results later).
Second, the above algorithm can be modified so that the growth step is applied only in cases where the least vertex of $A_i$ with respect to $L$ has only a small number of conflicts, in which case we use that first vertex to enlarge $B_i$. Note that such an algorithm also satisfies the theorem, because in the analysis of the algorithm we used only the fact that if the growth step is not applicable, then this condition is not satisfied for the first vertex of $A_i$. Such a variant is present in our implementation.
Third, in the proof above, the algorithm always applies the growth step when the size of $A_i$ drops below the threshold. This is a minor technical detail, and it can be omitted at the cost of some more hassle in the proof (in the analysis of the last steps of the algorithm) and somewhat worse bounds for $|S|$ and $|B|$. In the implementation, we do not have this threshold; instead, we roll back the unnecessary deletion steps that were performed by the algorithm near the end of the execution. It is straightforward (but a bit more tedious) to adapt the above analysis to this variant.
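The growth/deletion scheme can be sketched as follows (a simplified variant close to new2: conflicts are measured by distance in the remaining graph, `wreach` is assumed to come from an ordering as in Theorem 4.2, and the threshold handling is reduced to a single parameter `t`; all names are ours):

```python
from collections import Counter, deque

def ball(adj, v, r, removed):
    """Vertices within distance r of v in the graph minus `removed`."""
    dist, queue = {v: 0}, deque([v])
    while queue:
        x = queue.popleft()
        if dist[x] < r:
            for y in adj[x]:
                if y not in dist and y not in removed:
                    dist[y] = dist[x] + 1
                    queue.append(y)
    return set(dist)

def uqw_growth_deletion(adj, wreach, A, r, t):
    """Growth step: move a vertex with at most t conflicts into the
    independent set and drop its conflicting vertices.  Deletion step:
    delete a vertex shared by the most wreach sets and keep only the
    candidates whose wreach set contains it."""
    I, S, B = [], set(), list(A)
    while B:
        grown = False
        for a in B:
            close = ball(adj, a, r, S) & set(B)
            if len(close) - 1 <= t:
                I.append(a)
                B = [b for b in B if b not in close]
                grown = True
                break
        if not grown:
            counts = Counter(s for b in B for s in wreach[b] if s != b)
            s, _ = counts.most_common(1)[0]
            S.add(s)
            B = [b for b in B if s in wreach[b]]
    return I, S
```

On a star with all leaves as candidates and $r = 2$, one deletion step removes the center, after which every leaf is grown into the $2$-independent set.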
4.3.1 Implementation details
We have implemented three variants of the method described above, which we denote new1, new2 and new_ld. In the outlined algorithm, when we consider a vertex , we compute the set of vertices from conflicting with . In new1, we consider two vertices to be conflicting if their sets intersect. In new2 and new_ld, two vertices are considered to be conflicting if the distance between them in the remaining part of the graph is at most . Moreover, new_ld after every step tries to extend its partial solution with the heuristic described in Section 4.4, finding an independent set in , where is the set of already removed vertices.
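The two conflict notions can be illustrated as follows. This is a hedged sketch, not the paper's implementation: we assume the set attached to a vertex is its radius-r ball, and the distance threshold of new2 is left as an explicit parameter; all function names are ours.

```python
from collections import deque

def ball(adj, v, r):
    """All vertices at distance at most r from v (plain BFS)."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        if dist[u] == r:
            continue  # do not expand beyond radius r
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return set(dist)

def conflict_new1(adj, u, v, r):
    # new1-style conflict: the sets attached to u and v intersect
    # (assuming those sets are radius-r balls).
    return bool(ball(adj, u, r) & ball(adj, v, r))

def conflict_new2(adj, u, v, threshold):
    # new2/new_ld-style conflict: u and v are close in the remaining graph
    # (the exact threshold is an assumption, kept as a parameter).
    return v in ball(adj, u, threshold)
```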
4.4 Other naive approaches and heuristic optimizations
Since uniform quasiwideness for is exactly the problem of finding independent sets, it makes sense to include heuristics for finding independent sets as a baseline. Moreover, the problem of finding independent sets is also used as a subroutine in the approach based on distance trees. We used the following simple greedy algorithm to find independent sets: as long as the graph is nonempty, take any vertex of smallest degree, add it to the independent set, and remove it together with its neighbors from the graph.
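The greedy routine can be sketched as follows; the adjacency-dictionary representation and the function name are our choices.

```python
def greedy_independent_set(adj):
    """Greedy heuristic: repeatedly pick a vertex of minimum degree,
    add it to the independent set, and delete it and its neighbours."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    independent = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))  # vertex of smallest degree
        independent.append(v)
        removed = adj[v] | {v}
        for u in removed:          # delete v and its neighbourhood
            adj.pop(u, None)
        for u, ns in adj.items():  # drop edges into the deleted set
            ns -= removed
    return independent
```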
The following algorithm is what we came up with as a naive but reasonable heuristic for larger values of . For every number (where is some hardcoded constant), it computes the biggest independent set in the graph using the greedy procedure described above, where is a set of vertices with the largest degrees. This heuristic is based on the fact that independent sets in correspond to independent sets in . Without any other knowledge about the graph, the vertices of largest degree seem to be the best candidates for removal. In the end, we output the best solution obtained in this manner. In the following, we abbreviate this approach as ld (least degree on power graph).
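Under the assumption that the elided pieces are a radius r, the r-th power graph, and a cap on the number of deleted vertices, the heuristic can be sketched as below. All names and the cap's default are hypothetical; this is an illustration, not the paper's implementation.

```python
from collections import deque

def power_graph(adj, r):
    """Adjacency of the power graph: u ~ v iff their distance in G is
    between 1 and r (computed by a BFS from every vertex)."""
    power = {v: set() for v in adj}
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            if dist[u] == r:
                continue
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    power[s].add(w)
                    queue.append(w)
    return power

def greedy_is(adj):
    """Minimum-degree greedy independent set (see Section 4.4)."""
    adj = {v: set(ns) for v, ns in adj.items()}
    result = []
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))
        result.append(v)
        removed = adj[v] | {v}
        for u in removed:
            adj.pop(u, None)
        for u, ns in adj.items():
            ns -= removed
    return result

def ld_heuristic(adj, r, max_deleted=20):
    """For every i up to max_deleted, delete the i vertices of largest degree
    and run the greedy routine on the r-th power of the rest; keep the best."""
    by_degree = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    best_deleted, best_is = set(), []
    for i in range(max_deleted + 1):
        deleted = set(by_degree[:i])
        rest = {v: set(adj[v]) - deleted for v in adj if v not in deleted}
        candidate = greedy_is(power_graph(rest, r))
        if len(candidate) > len(best_is):
            best_deleted, best_is = deleted, candidate
    return best_deleted, best_is
```

On a star, for example, deleting the single high-degree center frees all leaves to form an independent set in any power of the remaining graph, which is exactly the behaviour the heuristic is after.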
4.5 Comparing different results
Uniform quasiwideness is a two-dimensional measure: we have to take into account both the size of the independent set that we desire to find and the number of vertices to be deleted. In order to compare the performance of the studied methods, we propose the following approach, which arises from applications of uniform quasiwideness in several algorithms [17, 22, 55, 63].
Let be an input to any of our algorithms (note that none of our algorithms takes the target size of the independent set as input) and let and such that is independent in be its output. Let us define – the distance profile of on – as the function from to so that if this distance is at most , and otherwise. The performance of the algorithms [17, 22, 55, 63] strongly depends on the size of the largest equivalence class on defined by if for .
We hence decided to use the size of the largest equivalence class in the above relation as the scoring function measuring the performance of our algorithms. Note that the number of different distance profiles is bounded by , so if is fixed and is bounded, then the number of different distance profiles is also bounded; hence, having a big independent set implies having a big subset of it with equal distance profiles on .
This well-defined scoring function makes it possible to compare the results of the algorithms. Furthermore, in our code the implementation of the scoring function can easily be exchanged, so if a different scoring function is preferred, recomputation and reevaluation are easily possible.
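The scoring function can be sketched as follows. This is a hedged sketch under our reading of the definition: a vertex's distance profile records its distance in the full graph to each deleted vertex, with distances above the radius collapsed into a single "far" value; variable names are ours.

```python
from collections import Counter, deque

def capped_distances(adj, source, cap):
    """BFS distances from `source`, exploring only up to distance `cap`."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        if dist[u] == cap:
            continue
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def score(adj, deleted, independent, r):
    """Size of the largest group of vertices of `independent` that share
    the same distance profile on the deleted set."""
    profiles = {v: [] for v in independent}
    for s in sorted(deleted):  # fix an order on the deleted vertices
        dist = capped_distances(adj, s, r)
        for v in independent:
            # distances above r are collapsed into the single value r + 1
            profiles[v].append(dist.get(v, r + 1))
    groups = Counter(tuple(p) for p in profiles.values())
    return max(groups.values(), default=0)
```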
5 Experimental setup
5.1 Hardware and Software
The experiments on generalized coloring numbers have been performed on an Asus K53SC laptop with an Intel® Core™ i3-2330M CPU @ 2.20GHz x 2 processor and with 7.7 GiB of RAM. Weak coloring numbers of a larger number of graphs for the statistics in Section 6.4 (presented without running times) were produced on a cluster at the Logic and Semantics Research Group, Technische Universität Berlin. The experiments on uniform quasiwideness have been performed on a cluster of 16 computers at the Institute of Informatics, University of Warsaw. Each machine was equipped with an Intel Xeon E3-1240 v6 3.70GHz processor and 16 GB of RAM. All machines shared the same NFS drive. Since the sizes of the inputs and outputs of the programs are relatively small, the network communication was negligible for tests with substantial running times. The dtf implementation was done in Python, while all other code is written in C++ or C. The code is available at [43, 3].
5.2 Test data
Our dataset consists of a number of graphs from different sources.

[leftmargin=*]
 Real-world data

We collected appropriately-sized networks from several collections [1, 35, 41, 7, 61, 38]. Our selection contains classic social networks [69, 12], collaboration networks [40, 52, 51], contact networks [64, 42], communication patterns [40, 59, 34, 39, 58, 4], protein-protein interaction [11], gene expression [28], infrastructure [67], tournament data [27], and neural networks [68]. We kept the names assigned to these files by the respective source.
 PACE 2016 Feedback Vertex Set

The Parameterized Algorithms and Computational Experiments Challenge is an annual programming challenge, started in 2016, that "aims to investigate the applicability of algorithmic ideas studied and developed in the subfields of multivariate, fine-grained, parameterized, or fixed-parameter tractable algorithms" (from the PACE webpage). In the first edition, one of the tracks focused on the Feedback Vertex Set problem [18], providing 230 instances from various sources and of different sizes. We have chosen a number of instances with a small feedback vertex set number, guaranteeing very strong sparsity properties (in particular, low treewidth). In our result tables, they are named fvs???, where ??? is the number in the PACE 2016 dataset.
 Random planar graphs

In their seminal paper, Alber, Fellows, and Niedermeier [5] initiated the very fruitful direction of developing polynomial kernels (preprocessing routines rigorously analyzed through the framework of parameterized complexity) in sparse graph classes by providing a linear kernel for Dominating Set in planar graphs. Dominating Set soon turned out to be the pacemaker of the development of fixed-parameter and kernelization algorithms in bounded expansion and nowhere dense graph classes [6, 17, 22, 23]. In [5], an experimental evaluation is conducted on random planar graphs generated by the LEDA library [2]. We followed their setup and included a number of random planar graphs of various sizes and average degrees. In our result tables, they are named planarN, where N stands for the number of vertices.
 Random graphs with bounded expansion

A number of random graph models have been shown to almost surely produce graphs of bounded expansion [21]. We include a number of graphs generated by O'Brien and Sullivan [53] using the following models: the stochastic block model (sb? in our dataset) [31] and the Chung-Lu model with households (clh?) and without households (cl?) [15]. We refer to [21, 53] for more discussion on these sources.
The graphs have been partitioned into four groups depending on their size: the small group contains graphs with up to edges, medium between and edges, big between and edges, and huge above edges. The random planar graphs in the four test groups have, respectively, , , , and edges. The whole dataset is available for download at [3].
6 Weak coloring numbers: results
6.1 Fine-tuning flat decompositions
option | average appx. ratio | option | average appx. ratio | option | average appx. ratio
BFS/(1) | 1.159 | DFS/(1) | 1.156 | SORT/(1) | 1.072
BFS/(2) | 1.131 | DFS/(2) | 1.117 | SORT/(2) | 1.039
BFS/(3) | 1.147 | DFS/(3) | 1.135 | SORT/(3) | 1.054
/(1) | 1.363 | /(1) | 1.368 | /(1) | 1.41
/(2) | 1.277 | /(2) | 1.291 | /(2) | 1.329
/(3) | 1.309 | /(3) | 1.324 | /(3) | 1.36
As discussed in Section 3.2, we have experimented with a number of variants of the flat decompositions approach, with regard to the choice of the next root vertex and the internal order of the vertices of the next . The results for the big dataset are presented in Table 1. They clearly indicate that (a) all reversed orders performed much worse, and (b) among the other options, the best is to sort the vertices of a new non-increasingly by degree and to choose as the next root the vertex of maximum degree. In the subsequent tests, we use this best configuration for comparison with the other approaches.
6.2 Comparison of all approaches
tests | r | dtf | flat | treedepth | treewidth | degree sort
small | 2 | 0:04.20 | 0:00.16 | 0:08.97 | 0:00.34 | 0:00.09
 | 3 | 0:05.08
 | 4 | 0:05.74
 | 5 | 0:06.55
medium | 2 | 0:27.97 | 0:01.97 | — | 0:23.64 | 0:00.56
 | 3 | 1:02.31
 | 4 | 1:53.21
 | 5 | 2:15.04
big | 2 | 0:32.82 | 0:19.08 | — | — | — | 0:03.30
 | 3 | — | —
 | 4 | — | — | —
 | 5 | — | — | —
huge | 2 | — | — | — | — | — | — | — | —
 | 3 | — | — | — | —
 | 4 | — | — | — | —
 | 5 | — | — | — | —
Table 2 presents the results of our experiments on all test instances and all approaches, summarized as follows:
 dtf

dtf-augmentations with the respective radius supplied as the distance bound;
 flat

the best configuration of the flat decompositions approach (see previous section);
 treedepth

the treedepth approximation heuristic;
 treewidt