Algorithmic Meta-Theorems for Monotone Submodular Maximization

07/12/2018 · by Masakazu Ishihata, et al.

We consider a monotone submodular maximization problem whose constraint is described by a logic formula on a graph. Formally, we prove the following three `algorithmic metatheorems.' (1) If the constraint is specified by a monadic second-order logic on a graph of bounded treewidth, the problem is solved in n^{O(1)} time with an approximation factor of O(log n). (2) If the constraint is specified by a first-order logic on a graph of low degree, the problem is solved in O(n^{1+ϵ}) time for any ϵ > 0 with an approximation factor of 2. (3) If the constraint is specified by a first-order logic on a graph of bounded expansion, the problem is solved in n^{O(log k)} time with an approximation factor of O(log k), where k is the number of variables and O(·) suppresses only constants independent of k.




1 Introduction

1.1 Problems and Results

We consider monotone submodular maximization problems whose feasible sets are subgraphs specified by a monadic second-order (MSO) formula or a first-order (FO) formula.111 The first-order logic on graphs is a language that consists of vertex variables x, y, …, the edge predicate E(x, y), and the usual predicate logic symbols (∧, ∨, ¬, →, =, ∀, ∃, etc.). The monadic second-order logic on graphs extends the first-order logic by adding vertex-subset variables X, Y, … and the vertex-inclusion predicate x ∈ X. In this paper, a first-order formula is a formula expressed by the first-order logic on graphs. A monadic second-order formula is defined similarly. Formally, we consider the following two problems.

Definition 1.1 (MSO-Constrained Monotone Submodular Maximization Problem)

Let G = (V, E) be an undirected graph, φ(X) be a monadic second-order formula with a free vertex-subset variable X, and f: 2^V → R be a nonnegative monotone submodular function.222 For a finite set V, a function f: 2^V → R is nonnegative if f(S) ≥ 0 for all S ⊆ V. f is monotone if f(S) ≤ f(T) for all S, T ⊆ V with S ⊆ T. f is submodular if f(S) + f(T) ≥ f(S ∪ T) + f(S ∩ T) for all S, T ⊆ V. Then, the MSO-constrained monotone submodular maximization problem is defined as follows.

maximize f(X) subject to G ⊨ φ(X), X ⊆ V. (1)
Definition 1.2 (FO-Constrained Monotone Submodular Maximization Problem)

Let G = (V, E) be an undirected graph, φ(x_1, …, x_k) be a first-order formula with free vertex variables x_1, …, x_k, and f: 2^V → R be a nonnegative monotone submodular function. Then the FO-constrained monotone submodular maximization problem is defined as follows.

maximize f({x_1, …, x_k}) subject to G ⊨ φ(x_1, …, x_k), x_1, …, x_k ∈ V. (2)
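As a running illustration (not part of the paper's formal development), the following Python sketch defines a toy coverage function, a standard example of a nonnegative monotone submodular function, and brute-forces the three properties stated in the footnote of Definition 1.1. The names `area` and `f` are hypothetical.

```python
from itertools import chain, combinations

# A coverage function f(S) = |union of area[v] for v in S| is a standard
# example of a nonnegative monotone submodular function (toy instance).
area = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d", "e"}}
V = set(area)

def f(S):
    return len(set().union(*(area[v] for v in S)) if S else set())

def subsets(X):
    return chain.from_iterable(combinations(X, r) for r in range(len(X) + 1))

# Brute-force check of the three properties from the footnote above.
for S in map(set, subsets(V)):
    assert f(S) >= 0                                   # nonnegative
    for T in map(set, subsets(V)):
        if S <= T:
            assert f(S) <= f(T)                        # monotone
        assert f(S) + f(T) >= f(S | T) + f(S & T)      # submodular
print("all checks passed")
```

The same brute-force pattern is useful for sanity-checking the submodular inequalities used later in the paper.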

In both problems, we regard the length of the formula as a constant. In particular, in the FO-constrained problem, we regard the number k of free variables (i.e., the cardinality of the solution) as a constant.

Both problems are very difficult, even in terms of finding feasible solutions: the MSO-constrained problem contains the three-coloring problem; therefore, unless P = NP, we cannot obtain a feasible solution in polynomial time [23]. The FO-constrained problem can be solved in O(n^k) time by an exhaustive search, where n is the number of vertices in the graph; however, it is difficult to improve this bound, because the problem contains the k-clique problem, which cannot be solved in n^{o(k)} time unless the exponential time hypothesis fails [8]. Therefore, in both problems, we have to restrict the graph classes suitably to obtain non-trivial results.

In this study, we show that the MSO- and FO-constrained monotone submodular maximization problems are well solved if the graphs are in certain classes, as follows (see Sections 2, 3, and 4 for the definitions of these graph classes). Here, we assume that the submodular function f is given by a value oracle that is evaluated in constant time. Let C be a class of graphs having bounded treewidth. Then, for each G ∈ C, the MSO-constrained monotone submodular maximization problem is solved in n^{O(1)} time with an approximation factor of O(log n).444An algorithm has an approximation factor of α if f(OPT) ≤ α f(ALG) holds, where ALG is the solution obtained by the algorithm and OPT is the optimal solution. Let C be a class of graphs having low degree. Then, for each G ∈ C, the FO-constrained monotone submodular maximization problem is solved in O(n^{1+ϵ}) time for any ϵ > 0 with an approximation factor of 2. Let C be a class of graphs having bounded expansion. Then, for each G ∈ C, the FO-constrained monotone submodular maximization problem is solved in n^{O(log k)} time with an approximation factor of O(log k). Here, O(·) suppresses only the constants independent of k.

1.2 Background and Motivation

1.2.1 Submodular maximization.

The problem of maximizing a monotone submodular function under some constraint is a fundamental combinatorial optimization problem and has many applications in machine learning and data mining [26]. This problem cannot be solved exactly in polynomially many function evaluations, even under the cardinality constraint [15]; therefore, we consider approximation algorithms.

Under several constraints, the problem can be solved in polynomial time within a reasonable approximation factor. Examples include the cardinality constraint [28], the knapsack constraint [36], and the matroid constraint [6]. The problem is also solved under some graph-related constraints, such as the connectivity constraint [27] and the s–t path constraint [7].
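For intuition, the classical greedy algorithm for the cardinality constraint [28] can be sketched as follows; it is a (1 − 1/e)-approximation for nonnegative monotone submodular functions. The toy coverage oracle (`area`, `cov`) is an assumption for illustration only.

```python
def greedy_cardinality(V, f, k):
    """Greedy for max f(S) s.t. |S| <= k, where f is a nonnegative
    monotone submodular value oracle; (1 - 1/e)-approximation [28]."""
    S = set()
    for _ in range(k):
        # pick the element with the largest marginal gain f(v | S)
        v = max(V - S, key=lambda u: f(S | {u}) - f(S), default=None)
        if v is None or f(S | {v}) - f(S) <= 0:
            break  # no remaining element improves the objective
        S.add(v)
    return S

# toy coverage oracle (hypothetical data)
area = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}, 4: {"a", "b", "c"}}
cov = lambda S: len(set().union(*(area[v] for v in S))) if S else 0
print(greedy_cardinality(set(area), cov, 2))
```

The constraints studied in this paper are far more expressive than a cardinality bound, but the marginal-gain oracle view of f is the same.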

Here, our research question is as follows:

What constraints admit efficient approximation algorithms for monotone submodular maximization problems?

One solution to this question is given by Goemans et al. [18]: if we can maximize linear functions under the constraint in polynomial time, the corresponding monotone submodular maximization problem can be solved in polynomial time with an approximation factor of O(√n log n). This factor is nearly tight, since we cannot obtain an o(√(n / log n))-approximate solution in polynomially many oracle calls [5]. If a linear programming relaxation of the constraint has a low correlation gap, we obtain an algorithm with an approximation factor that depends on the correlation gap by using the continuous greedy algorithm with the contention resolution scheme [39].

In this study, we consider another approach. As in Definitions 1.1 and 1.2, we assume that the feasible sets are subgraphs of a graph specified by a logic formula. To the best of our knowledge, no existing studies have considered this situation, and we believe that this situation is important in both practice and theory: In practice, such problems appear in sensor network design problems [7, 27]; thus, understanding classes of tractable problems helps practitioners to model problems. In theory, this may provide new algorithmic techniques because we need to combine quite different techniques in submodular maximization and mathematical logic.

1.2.2 Algorithmic metatheorem.

In the field of algorithmic metatheorems, the constraints are represented by logic formulas [20]. An algorithmic metatheorem claims that if a problem is described in a certain logic and the inputs are structured in a certain way, then the problem can be solved with a certain amount of resources [37]. There are many existing algorithmic metatheorems, and Table 1 shows some existing results.

The model checking problem on a graph G asks whether the graph satisfies a certain property φ, i.e., G ⊨ φ, or not. For MSO formulas, Courcelle [9] showed that the model checking problem can be solved in linear time for bounded treewidth graphs. This result is tight for minor-closed graph classes [33]. For first-order formulas, Seese [34] showed that the model checking problem can be solved in linear time for bounded degree graphs. Later, this result was extended to low degree graphs [19], bounded expansion graphs [21], and nowhere dense graphs [22].

The counting problem on a graph asks for the number of subgraphs that satisfy a certain property φ, which equals the cardinality of the set Φ = {S ⊆ V : G ⊨ φ(S)} in the monadic second-order case and Φ = {(v_1, …, v_k) ∈ V^k : G ⊨ φ(v_1, …, v_k)} in the first-order case. The enumeration problem on a graph outputs the elements of Φ one by one. As with the model checking problem, there are results on the monadic second-order logic with bounded treewidth graphs [3, 1], and the first-order logic with bounded degree graphs [24], low degree graphs [13], bounded expansion graphs [25], and nowhere dense graphs [32].

The linear maximization problem on a graph involves the maximization of a linear function over the set Φ defined above. Compared with the model checking, counting, and enumeration problems, this problem is less studied. There is a classical result on the monadic second-order logic with bounded treewidth graphs [2]. To the best of our knowledge, the result on the first-order logic with low degree graphs has not been explicitly stated yet. The result on the first-order logic with bounded expansion graphs has also not been explicitly stated; however, it is obtained by the same technique as that of Gajarsky et al. [17], which shows the existence of a linear-sized extended formulation. The possibility of extending these results to nowhere dense graphs is still open.

Our problems (Definitions 1.1, 1.2) generalize linear functions to monotone submodular functions. However, due to the submodularity, we need several new techniques to prove our theorems; see below.

Logic | Graph | Model Checking | Counting | Linear Max. | Submod. Max.
MSO | Bounded Treewidth | [9] | [2] | [2] | (n^{O(1)}, O(log n))
FO | Low Degree | [19] | [13] | this paper | (O(n^{1+ϵ}), 2)
FO | Bounded Expansion | [14] | [25] | [17] | (n^{O(log k)}, O(log k))
FO | Nowhere Dense | [21] | [22] | open | open
Table 1: Existing results and our results on algorithmic metatheorems; the results without citations are shown in this paper. Each entry for model checking, counting, and linear maximization shows the time complexity, and each entry in the submodular maximization column shows a pair of time complexity and approximation factor.

1.3 Difficulty of Our Problems

Difficulty of MSO-Constrained Problem on Bounded Treewidth Graphs.

A linear function can be efficiently maximized in this setting [2]. Therefore, it seems natural to extend this technique to the submodular setting. However, such an extension is difficult, as we see below.

Their method first encodes a given graph as a binary tree by a tree decomposition. Then, it converts a given monadic second-order formula into a tree automaton [38]. Finally, it solves the problem using a bottom-up dynamic programming algorithm. This gives the optimal solution in linear time.

This technique cannot be extended to the submodular setting, because the value of a monotone submodular function does not decompose over subtrees; hence, it cannot be maximized by the dynamic programming algorithm.

Difficulty of FO-Constrained Problem on Low Degree Graphs.

There are no existing studies on the linear maximization problem for low degree graphs. Therefore, we need to establish a new technique. In particular, the result on the counting problem [13] relies on an inclusion–exclusion type algorithm, and it is difficult to extend such a technique to optimization problems.

Difficulty of FO-Constrained Problem on Bounded Expansion Graphs.

To describe the difficulty of this case, we first introduce the algorithm for the linear maximization problem on bounded expansion graphs.555This is a simplified version of Gajarsky et al. [17]'s proof of their result on extended formulations. If a graph class has bounded expansion, then there exists a function N such that, for every graph G in the class and every p, there is a coloring of the vertices with N(p) colors such that any i ≤ p color classes induce a subgraph of treewidth at most i − 1. Such a coloring is referred to as a low-treewidth coloring [29, 30].

The algorithm is described as follows. First, we remove the universal quantifiers from the formula using Lemma 8.21 in [20]. Let q be the number of variables of the resulting formula, including the existentially quantified ones. Then, we find a low-treewidth coloring of G with N(q) colors. Here, we can see that q colors are enough to cover all the variables in the formula. Therefore, by solving the problem on every subgraph induced by q color classes using the algorithm for bounded treewidth graphs [2], we obtain the solution.

This technique cannot be extended to the submodular setting, because our result for bounded treewidth graphs only gives a polynomial-time O(log n)-approximation algorithm. Since we can obtain the optimal solution in n^{O(k)} time through an exhaustive search, it does not make sense to reduce the problem to the bounded treewidth case.

In fact, most of the existing results for the first-order logic use the results for bounded treewidth graphs as a subroutine [14, 21, 22, 25, 32]. However, the above discussion implies that we cannot use such a reduction for the submodular setting.

1.4 Proof Outlines

1.4.1 Proof Outline of Theorem 1.2.

We represent the feasible set as a structured decomposable negation normal form (structured DNNF) [10] using Amarilli et al. [1]'s algorithm. Here, a structured DNNF is a Boolean circuit based on the negation normal form, where the partition of the variables is specified by a tree, called a vtree.

Then, we apply the recursive greedy algorithm [7] to the structured DNNF. We split the vtree at the centroid. Then, we obtain constantly many subproblems whose numbers of variables are a constant factor smaller than that of the original problem. By solving these subproblems greedily and recursively, we obtain an O(log n)-approximate solution in n^{O(1)} time, since the recursion depth is O(log n) and the branching factor is a constant (the width of the DNNF).

1.4.2 Proof Outline of Theorem 1.3.

By using Gaifman's locality theorem [16], we decompose a given formula into multiple r-local formulas. We perform a greedy algorithm with exhaustive search over the local formulas as follows: first, we perform the exhaustive search to obtain the optimal solution for the first local formula in O(n^{1+ϵ}) time. Then, fixing the obtained solution, we proceed to the next local formula similarly. By continuing this process until all the local formulas are processed, we obtain a solution.

In the above procedure, if each r-local part of the optimal solution is feasible for the corresponding subproblem, then the obtained solution is a 2-approximate solution. Otherwise, we can guess an entry of the optimal solution. Thus, for each possibility, we call the procedure recursively. Then we obtain a recursion tree whose size depends only on k and the formula. We call this technique suspect-and-recurse. We show that at least one solution in the tree has an approximation factor of 2.

1.4.3 Proof Outline of Theorem 1.4.

We also use the suspect-and-recurse technique for this theorem; however, the tools used in each step are different.

By using the quantifier elimination procedure of Kazana and Segoufin [25], we decompose a given formula into multiple “tree” formulas. We combine the greedy algorithm with the recursive greedy algorithm over the tree formulas as follows. First, we perform the recursive greedy algorithm to obtain an O(log k)-approximate solution to the first tree formula in n^{O(log k)} time. Then, fixing the obtained solution, we proceed to the next tree formula similarly. By continuing this process until all the formulas are processed, we obtain a solution.

In the above procedure, if each tree part of the optimal solution is feasible for the corresponding subproblem, the obtained solution is an O(log k)-approximate solution. Otherwise, we can guess an entry of a forbidden pattern that specifies which assignment makes the optimal solution infeasible. For each possibility, we call the procedure recursively. Then, we obtain a recursion tree of size n^{O(log k)}. We show that there is at least one solution in the tree that has an approximation factor of O(log k).

2 Monadic Second-Order Logics on Bounded Treewidth Graphs

2.1 Preliminaries

2.1.1 Bounded Treewidth Graphs

Let G = (V, E) be a graph. A tree decomposition of G is a tree T = (I, F) with a map B: I → 2^V satisfying the following three conditions [31].

  • ⋃_{i ∈ I} B(i) = V.

  • For all (u, v) ∈ E, there exists i ∈ I such that u, v ∈ B(i).

  • For all u ∈ V, if u ∈ B(i) ∩ B(j), then u ∈ B(l) holds for all l on the path between i and j; i.e., the nodes whose bags contain u induce a connected subtree of T.

The treewidth of G is given by min_T max_{i ∈ I} |B(i)| − 1, where the minimum is taken over all tree decompositions. A graph class C has bounded treewidth if there exists a constant t such that the treewidth of every G ∈ C is at most t.
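The three conditions above can be checked mechanically. The following illustrative Python sketch (all names hypothetical) verifies whether a candidate pair of a tree and bags is a tree decomposition; the connectivity condition is tested by counting tree edges inside the set of bags containing each vertex, since a connected subtree on m nodes has exactly m − 1 edges.

```python
def is_tree_decomposition(n, edges, tree, bags):
    """Check the three tree-decomposition conditions.
    n: G has vertices 0..n-1; edges: edge list of G;
    tree: edge list of the decomposition tree (on bag indices);
    bags: list of vertex sets."""
    # (1) every vertex of G appears in some bag
    if set().union(*bags) != set(range(n)):
        return False
    # (2) every edge of G is contained in some bag
    if any(not any({u, v} <= b for b in bags) for u, v in edges):
        return False
    # (3) for each vertex, the bags containing it induce a connected subtree
    for v in range(n):
        idx = {i for i, b in enumerate(bags) if v in b}
        inner = sum(1 for i, j in tree if i in idx and j in idx)
        if inner != len(idx) - 1:   # forest is connected iff |edges| = |nodes| - 1
            return False
    return True

# path graph 0-1-2-3 with a width-1 decomposition
bags = [{0, 1}, {1, 2}, {2, 3}]
print(is_tree_decomposition(4, [(0, 1), (1, 2), (2, 3)], [(0, 1), (1, 2)], bags))
print(max(len(b) for b in bags) - 1)  # width of this decomposition: 1
```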

2.1.2 Structured Decomposable Negation Normal Form.

The decomposable negation normal form (DNNF) is a representation of a Boolean function [10]. Let V be a finite set, and {x_v : v ∈ V} be a set of Boolean variables indexed by V. Then, DNNFs are recursively defined as follows.

  • The constants ⊤ (always true) and ⊥ (always false) are in DNNF.

  • The literals x_v and ¬x_v (v ∈ V) are in DNNF.

  • For any partition V = V_1 ∪ V_2 of the variables and formulas φ_1, …, φ_w over V_1 and ψ_1, …, ψ_w over V_2 in DNNF, the following formula is in DNNF:

    ⋁_{i=1}^{w} (φ_i ∧ ψ_i). (1)

    We call φ_i and ψ_i factors of this decomposition.

By recursively applying the above decomposition to each factor, every Boolean function can be represented as a DNNF [10]. The maximum number w of disjunctions in (1) that appears in the recursion is called the width of the DNNF. A DNNF is usually represented by a Boolean circuit, which is a directed acyclic graph whose internal gates are labeled “AND” or “OR”, and whose terminals are labeled ⊤, ⊥, x_v, or ¬x_v; see Example 2.1 below.

A vtree is a rooted full binary tree666A binary tree is full if every non-leaf vertex has exactly two children. whose leaves are the Boolean variables {x_v : v ∈ V}. A DNNF respects a vtree if, for any OR-gate of the DNNF, there exists an internal node t of the vtree such that the partition (V_1, V_2) of the variables of the decomposition represented by the OR-gate coincides with the leaves of the left and right subtrees of t. A structured DNNF is a DNNF that respects some vtree [12].
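A minimal sketch of how such a circuit can be represented and evaluated, and how it induces a family of subsets, may be helpful. The circuit below (a hypothetical example, not the one from Darwiche [11]) respects a vtree whose root splits {a} from {b, c}.

```python
from itertools import product

# A tiny DNNF evaluator: circuits are nested tuples
#   ("lit", v, positive), ("and", c1, c2), ("or", c1, ..., cm),
#   ("true",), ("false",)
def evaluate(node, assignment):
    tag = node[0]
    if tag == "true":
        return True
    if tag == "false":
        return False
    if tag == "lit":
        _, v, positive = node
        return assignment[v] == positive
    vals = [evaluate(c, assignment) for c in node[1:]]
    return all(vals) if tag == "and" else any(vals)

# structured DNNF for (x_a AND x_b) OR (NOT x_a AND x_c) over V = {a, b, c}
circuit = ("or",
           ("and", ("lit", "a", True), ("lit", "b", True)),
           ("and", ("lit", "a", False), ("lit", "c", True)))

# the family of subsets S of V represented by the circuit
V = ["a", "b", "c"]
family = []
for p in product([False, True], repeat=len(V)):
    assignment = dict(zip(V, p))
    if evaluate(circuit, assignment):
        family.append({v for v in V if assignment[v]})
print(sorted(sorted(s) for s in family))
```

The decomposability of each AND-gate (its children mention disjoint variables) is what makes the divide-and-conquer in Section 2.2 possible.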

Example 2.1

This example is from Darwiche [11]. Let , and be a Boolean formula. We split into and . Then, the formula is factorized as

(2)

This is a structured DNNF. The circuit and the vtree are shown in Figures 1 and 2, respectively. The top OR-gate in Figure 1 represents the above decomposition and corresponds to the root node of the vtree shown in Figure 2.

Figure 1: Example of a structured DNNF.

Figure 2: The vtree of the structured DNNF in Figure 1.

A Boolean function f can be used to represent a family of subsets of V. We identify a subset S ⊆ V with the indicator assignment x^S, which is defined by x^S_v = 1 (v ∈ S) and x^S_v = 0 (v ∉ S). Then, f represents the family of subsets Φ = {S ⊆ V : f(x^S) = 1}. For simplicity, we say S is in D if f(x^S) = 1, where f is the Boolean function represented by the circuit D. Amarilli et al. [1] showed that a family of subsets in a bounded treewidth graph specified by a monadic second-order formula has a compact structured DNNF representation. [Amarilli et al. [1]] Let G be a bounded treewidth graph, φ(X) be a monadic second-order formula, and f_φ be the Boolean function representing the family of subsets {S ⊆ V : G ⊨ φ(S)}. Then, f_φ is represented by a structured DNNF of bounded width.777Amarilli et al. [1] did not claim the structuredness and the bounded width of the DNNF. However, by observing their construction, these two properties are immediately confirmed. The structured DNNF is obtained in polynomial time.

Remark 2.1

The structured DNNF provides “syntactic sugar” for the Courcelle-type automaton technique [9]. Actually, the theorem is proved as follows. First, a bounded treewidth graph is encoded as a labeled binary tree using a tree decomposition [4]. Then, the given formula is interpreted as a formula on labeled trees and converted into a (top-down) tree automaton using a result of Thatcher and Wright [38]. Consider the root vertex: for each tree-automaton transition, we construct structured DNNFs for the subtrees. Then, by joining the DNNFs with an AND-gate, and by joining the AND-gates with an OR-gate, we obtain the desired structured DNNF, whose width is the number of states of the automaton.

Any proof with a structured DNNF can be converted into a proof using a tree-decomposition and a tree automaton by following the above construction. However, in our case, the former approach gives a simpler proof than the latter one.

2.2 Proof of Theorem 1.2

We propose an algorithm to prove Theorem 1.2. First of all, we encode a given bounded treewidth graph and a monadic second-order formula into a structured DNNF D and the corresponding vtree T using Theorem 2.1. Then, the problem is reduced to maximizing a monotone submodular function over the family of subsets represented by D.

Our algorithm is based on Chekuri and Pal's recursive greedy algorithm [7], which was originally proposed for the s–t path-constrained monotone submodular maximization problem. In this approach, we decompose the problem into several subproblems, and solve the subproblems one by one in a greedy manner.

We use leaf separators to obtain the subproblems. An edge e of a tree is a β-leaf separator if each subtree obtained by removing e has at most a β fraction of the leaves. A full binary tree has a 2/3-leaf separator, as follows: a full binary tree with n ≥ 2 leaves has a 2/3-leaf separator e, and such e is obtained in O(n) time. The proof is almost the same as that of Lemma 3 in [40]. Let e be the edge such that the difference between the numbers of leaves of the two subtrees obtained by removing e is smallest. Such e is easily obtained in O(n) time using a depth-first search. We show that e is a 2/3-leaf separator.

Let T_1 and T_2 be the subtrees obtained by removing e, and let n_i be the number of leaves of T_i. Without loss of generality, we assume n_1 ≥ n_2. Let T_{11} and T_{12} be the subtrees of T_1 obtained by removing the other two edges adjacent to e, and let n_{11} ≥ n_{12} be their numbers of leaves; thus n_{11} + n_{12} = n_1 and n_{11} ≥ n_1 / 2. By considering the cut separating T_{11} together with the minimality of e, we have n_{12} + n_2 − n_{11} ≥ n_1 − n_2 (the opposite case n_{11} > n_{12} + n_2 would force n_{12} ≤ 0, a contradiction). Therefore, 2 n_2 ≥ n_1 + n_{11} − n_{12} ≥ n_1, i.e., n_1 ≤ 2 n_2. Since n_1 + n_2 = n, we have n_1 ≤ 2n/3. This shows that e is a 2/3-leaf separator.
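One standard way to find such a separator (a descent heuristic, not necessarily the exact procedure of the proof above) is to walk down from the root, always moving into the heavier child, until the subtree below the current edge holds at most 2/3 of the leaves. A Python sketch, assuming a full binary tree with at least two leaves stored as a child map:

```python
def leaf_separator(tree, root):
    """Return an edge (parent, child) whose removal leaves each side with
    at most 2/3 of the leaves. tree: dict node -> (left, right) or None."""
    leaves = {}
    def count(u):  # number of leaves below each node
        if tree[u] is None:
            leaves[u] = 1
        else:
            l, r = tree[u]
            count(l); count(r)
            leaves[u] = leaves[l] + leaves[r]
        return leaves[u]
    total = count(root)  # requires total >= 2
    u = root
    while True:
        l, r = tree[u]
        child = l if leaves[l] >= leaves[r] else r
        # the heavier child holds > 1/3 of the leaves whenever u holds > 2/3,
        # so once it drops to <= 2/3 the edge (u, child) is a 2/3-separator
        if leaves[child] <= 2 * total / 3:
            return (u, child)
        u = child

# full binary tree with 4 leaves
tree = {"root": ("a", "b"), "a": ("l1", "l2"), "b": ("l3", "l4"),
        "l1": None, "l2": None, "l3": None, "l4": None}
print(leaf_separator(tree, "root"))
```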

The subproblems are obtained as follows. Let e = (u, v) be a 2/3-leaf separator of the vtree T, where v is a child of u. Let V_1 ⊆ V be the variables corresponding to the leaves of the subtree rooted at v, and let V_2 = V ∖ V_1. By the definition of a structured DNNF, there are OR-gates that correspond to a factorization (1) induced by (V_1, V_2); let g_1, …, g_w be the associated AND-gates. For each i = 1, …, w, we define D_{1,i}, the structured DNNF induced by the descendants (inclusive) of the V_1-side factor of g_i; then D_{1,i} is a structured DNNF over V_1. We also define D_{2,i}, the structured DNNF obtained by replacing the V_1-side factor of g_i by ⊤ and the other gates g_j (j ≠ i) by ⊥; then D_{2,i} is a structured DNNF over V_2. Through this construction, we obtain the following two properties: the widths of D_{1,i} and D_{2,i} (i = 1, …, w) are at most that of D; and, for S_1 ⊆ V_1 and S_2 ⊆ V_2, the set S_1 ∪ S_2 is in D if and only if there exists i such that S_1 is in D_{1,i} and S_2 is in D_{2,i}.

By Lemma 2.2, for some i, the optimal solution is partitioned into S*_1 and S*_2 that satisfy D_{1,i} and D_{2,i}, respectively. In the algorithm, we guess such i. Then, we solve the problem on D_{1,i} recursively to obtain S_1. Then, by modifying the objective function to f(· | S_1) = f(· ∪ S_1) − f(S_1), we solve the problem on D_{2,i} recursively to obtain S_2. By taking the maximum over all i, we obtain a solution S_1 ∪ S_2 for some i. The precise implementation is shown in Algorithm 1. We now analyze this algorithm.

1:procedure RecursiveGreedy(D, f)
2:     Compute a 2/3-leaf separator e = (u, v) of the vtree, where v is a child of u.
3:     Let g_1, …, g_w be the AND-gates associated with e.
4:     for i = 1, …, w do
5:         Construct D_{1,i} and D_{2,i}.
6:         Let S_{1,i} = RecursiveGreedy(D_{1,i}, f).
7:         Let S_{2,i} = RecursiveGreedy(D_{2,i}, f(· | S_{1,i})).
8:     end for
9:     return the union S_{1,i} ∪ S_{2,i} that maximizes f(S_{1,i} ∪ S_{2,i}).
10:end procedure
Algorithm 1 Recursive Greedy Algorithm on a structured DNNF

Algorithm 1 runs in n^{O(1)} time with an approximation factor of O(log n).

First, we analyze the running time. Since the number of variables in each subproblem is at most 2/3 of that of the original one, the depth of the recursion is O(log n). The branching factor is 2w, where w is the width of the DNNF; here, we used Lemma 2.2. Therefore, the size of the recursion tree is (2w)^{O(log n)} = n^{O(1)}, and hence the running time is n^{O(1)}.

Next, we analyze the approximation factor. Let α(n) be the approximation factor of the algorithm when it is applied to a problem with n variables. Let S* be the optimal solution. By Lemma 2.2, S* is partitioned into S*_1 and S*_2, and these are feasible for D_{1,i} and D_{2,i} for some i, respectively. Let S_1 and S_2 be the solutions returned at the i-th step. Then, by the definition of α, we have

α(2n/3) f(S_1) ≥ f(S*_1), (3)
α(2n/3) f(S_2 | S_1) ≥ f(S*_2 | S_1). (4)

By adding these inequalities, we have

α(2n/3) f(S_1 ∪ S_2) ≥ f(S*_1) + f(S*_2 | S_1). (5)

We simplify the right-hand side of the above inequality. Here, we prove a slightly more general lemma, which will also be used in later sections. Let f be a nonnegative monotone submodular function, and let A_1, …, A_m, B ⊆ V be arbitrary subsets. Then, the following inequality holds:

f(A_1 ∪ ⋯ ∪ A_m) ≤ f(B) + ∑_{i=1}^{m} f(A_i | B). (6)

We prove the lemma by induction on m. If m = 1, the inequality is reduced to

f(A_1) ≤ f(B) + f(A_1 | B) = f(A_1 ∪ B), (7)

which holds since f is monotone. If m ≥ 2, the inductive hypothesis applied to A_2, …, A_m with the base set B ∪ A_1 gives

f(A_1 ∪ ⋯ ∪ A_m) ≤ f(B ∪ A_1) + ∑_{i=2}^{m} f(A_i | B ∪ A_1). (8)

Since

f(B ∪ A_1) + ∑_{i=2}^{m} f(A_i | B ∪ A_1) ≤(a) f(B) + ∑_{i=1}^{m} f(A_i | B), (9)

where (a) uses f(B ∪ A_1) = f(B) + f(A_1 | B) on the first term and the submodularity f(A_i | B ∪ A_1) ≤ f(A_i | B) on the remaining terms, this proves the lemma. Note also that, by submodularity and the nonnegativity of f(∅), we have f(A | B) ≤ f(A) for any A and B.
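As a sanity check of inequality (6) as reconstructed here, the following sketch verifies it numerically on random instances of a toy coverage function (all names hypothetical).

```python
from random import Random

# toy coverage oracle; coverage functions are nonnegative, monotone, submodular
area = {v: set(s) for v, s in enumerate(["ab", "bc", "cde", "ae", "d"])}
f = lambda S: len(set().union(*(area[v] for v in S))) if S else 0

rng = Random(0)
V = list(area)
for _ in range(1000):
    m = rng.randint(1, 3)
    As = [set(rng.sample(V, rng.randint(0, len(V)))) for _ in range(m)]
    B = set(rng.sample(V, rng.randint(0, len(V))))
    lhs = f(set().union(*As))
    # f(B) + sum of marginal gains f(A_i | B)
    rhs = f(B) + sum(f(A | B) - f(B) for A in As)
    assert lhs <= rhs
print("inequality verified on 1000 random instances")
```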

By using this lemma in (5), with A_1 = S*_1, A_2 = S*_2, and B = S_1, together with f(S*_1 | S_1) ≤ f(S*_1), we obtain the inequality

f(S*) ≤ f(S_1 ∪ S_2) + α(2n/3) f(S_1 ∪ S_2), (10)

which shows that the approximation factor of the algorithm satisfies the following recursion:

α(n) ≤ α(2n/3) + 1. (11)

By solving this recursion, we have α(n) = O(log n).

This lemma proves Theorem 1.2.

3 First-Order Logics on Low Degree Graphs

3.1 Preliminaries

In the first-order case, we work with tuples of variables and vertices. To simplify the notation, we use x = (x_1, …, x_k) to represent a tuple of variables and v = (v_1, …, v_k) to represent a tuple of vertices. We write v_i to represent the i-th element of v. For a set function f and a set of tuples, we apply f to the set of all vertices appearing in the tuples.

3.1.1 Low Degree Graphs.

A graph class C has low degree if, for any δ > 0, there exists n_0 such that for all G ∈ C with at least n_0 vertices, the maximum degree of G is at most |V(G)|^δ [20]. A typical example of a low degree graph class is the class of graphs of maximum degree at most Δ for some constant Δ. The low degree graph classes and the bounded expansion graph classes, which we consider in the next section, are incomparable [20].

3.1.2 Gaifman’s Locality Theorem

Let dist(u, v) be the shortest-path distance between the vertices u and v of G. For a vertex v and an integer r, we denote by N_r(v) the ball of radius r centered at v. Also, for a tuple of vertices v = (v_1, …, v_k) and an integer r, we define N_r(v) = N_r(v_1) ∪ ⋯ ∪ N_r(v_k). For tuples of vertices u and v, we define dist(u, v) = min_{i,j} dist(u_i, v_j). For variables x and y and an integer r, we denote by dist(x, y) ≤ r the first-order formula representing that the distance between x and y is less than or equal to r. For tuples of variables x and y, we denote by dist(x, y) > r the formula ⋀_{i,j} dist(x_i, y_j) > r. Note that this is a first-order formula.
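The balls N_r(v) used throughout this section can be computed by breadth-first search; note that in a graph of maximum degree Δ, |N_r(v)| ≤ 1 + Δ + ⋯ + Δ^r = Δ^{O(r)}, which is the bound exploited later. A minimal sketch (adjacency lists are a hypothetical encoding):

```python
from collections import deque

def ball(adj, v, r):
    """N_r(v): the set of vertices within distance r of v, via BFS."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        if dist[u] == r:
            continue  # do not expand past radius r
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return set(dist)

# path 0-1-2-3-4
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(ball(adj, 2, 1))
```

The ball of a tuple is simply the union of the balls of its entries.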

A first-order formula φ(x) is r-local if it satisfies the property

G ⊨ φ(v) ⟺ G[N_r(v)] ⊨ φ(v), (1)

where G[·] denotes the induced subgraph. Intuitively, a formula is r-local if its truth is determined by the r-neighborhood structure around the variables.

One of the most important theorems on first-order logic is Gaifman's locality theorem. [Gaifman [16]] Every first-order sentence888A sentence is a formula without free variables. is equivalent to a Boolean combination of sentences of the form

∃x_1 ⋯ ∃x_m (⋀_{i} ψ(x_i) ∧ ⋀_{i < j} dist(x_i, x_j) > 2r), (2)

where ψ is an r-local formula. Furthermore, such a Boolean combination can be computed from the sentence.

We can obtain a version of Gaifman's locality theorem for formulas with free variables by considering all the partitions of the variables, as follows. [Equation (1) in Segoufin and Vigny [35]] Every first-order formula φ(x) is equivalent to a formula of the form

⋁_{(P, r)} (⋀_{P ∈ P} ψ_P(x_P) ∧ δ_{P, r}(x)), (3)

where the ψ_P are r-local formulas, and δ_{P, r}(x) expresses the fact that dist(x_P, x_{P'}) > 2r for all distinct parts P, P' ∈ P, and no refinement of P satisfies this property. Furthermore, such a formula can be computed from φ.

3.2 Proof of Theorem 1.3

First of all, we transform the given formula into the form (3); as the formula length is a constant, this takes constant time. Then, we solve the problem for each disjunct of the formula, and by taking the maximum of the solutions over the disjuncts, we obtain a solution. Thus, from now on, we consider the case where the formula is of the following form:

⋀_{i=1}^{m} ψ_i(x_i) ∧ δ(x), (4)

where each ψ_i is r-local and δ enforces the distance conditions, and the optimal solution also satisfies this formula.

To design an approximation algorithm, we introduce the following concept. Write S = (S_1, …, S_m) for a solution, where S_i is the part assigned to x_i, write S^{(i)} = S_1 ∪ ⋯ ∪ S_i for its prefix, and write OPT = (OPT_1, …, OPT_m) for the optimal solution. A feasible solution S is an α-prefix dominating solution if

f(OPT_i | S^{(i−1)}) ≤ α f(S_i | S^{(i−1)}) (5)

for all i. Let S be an α-prefix dominating solution. Then, it is an (α + 1)-approximate solution. Indeed, by adding (5) over i = 1, …, m, we obtain

f(OPT) − f(S) ≤(a) ∑_{i=1}^{m} f(OPT_i | S^{(i−1)}) ≤ α ∑_{i=1}^{m} f(S_i | S^{(i−1)}) ≤ α f(S), (6)

where (a) follows from Lemma 2.2 together with the submodularity f(OPT_i | S) ≤ f(OPT_i | S^{(i−1)}). By moving the right-most term to the left-hand side, we obtain the lemma.

Lemma 3.2 implies that we only have to construct a prefix dominating solution. A natural approach is a greedy algorithm: suppose that we have a partial solution S_1, …, S_{i−1}; then, we find a solution for the i-th component by solving the corresponding subproblem. Here, the exact i-th solution is efficiently obtained as follows. For a given i, we find the exact solution to the i-th subproblem in O(n^{1+ϵ}) time for any ϵ > 0. We guess the vertex v that is assigned to the first variable of x_i (n choices). Then, by the form (4), all the other variables of x_i should be assigned vertices in the O(rk)-neighborhood of v. The number of vertices in this neighborhood is at most Δ^{O(rk)}, where Δ is the maximum degree; thus, we can check all the assignments extending v in Δ^{O(rk^2)} time. Therefore, by the low degree property, we can check all the assignments in O(n^{1+ϵ}) time.

By iterating this procedure, we obtain a (possibly partial) solution. In this procedure, if the obtained solution S is not partial, and the i-th component OPT_i of the optimal solution is feasible for the i-th subproblem for all i, then the obtained solution is a 1-prefix dominating solution, which is a 2-approximate solution. We say that OPT is prefix feasible to S in this situation. When is OPT not prefix feasible to S? By observing (3), we can see that OPT_i is infeasible for the i-th subproblem only if the distance between some already chosen vertex and some entry of OPT is at most 2r. If we knew OPT, it would be easy to avoid such a solution; here, we develop a method to avoid such a solution without knowing OPT.

Our idea is the following. Suppose that we have a (possibly partial) solution S such that OPT is not prefix feasible to S. Then, there exist i and j such that dist(S_i, OPT_j) ≤ 2r. This means that at least one entry of OPT is in N_{2r}(S_i). Since the number of such candidate assignments is small, we can suspect each of them in turn by calling the procedure recursively, until we have suspected k assignments. We call this technique suspect-and-recurse.

The detailed implementation is shown in Algorithm 3, which calls Algorithm 2 as a subroutine. The algorithm maintains a current guess of some entries of the optimal solution as a list L, i.e., (j, v) ∈ L means that we guess that the j-th entry of the optimal solution is v. Then, the feasible set of the i-th subproblem, when the solutions S_1, …, S_{i−1} and L are specified, is given by

Φ_i(S_1, …, S_{i−1}, L) = {v : G ⊨ ψ_i(v), dist(v, S_{i'}) > 2r for all i' < i, and v is consistent with the guesses in L}. (7)

Algorithm 3 runs in O(n^{1+ϵ}) time with an approximation factor of 2. First, we analyze the running time. The algorithm constructs a recursion tree whose depth is at most k and whose branching factor is at most k^2 Δ^{O(r)}. Thus, the size of the tree is at most (k^2 Δ^{O(r)})^k, which is n^{o(1)} by the low degree property. In each recursion, the algorithm calls Algorithm 2, which runs in O(n^{1+ϵ}) time by Lemma 3.2 (with a modification to handle L). Therefore, the total running time is O(n^{1+ϵ}).

Next, we analyze the approximation factor. By the above discussion, the algorithm finds at least one solution S such that OPT is prefix feasible to S. Such a solution has an approximation factor of 2 because of Lemma 3.2. Therefore, we obtain the lemma.

We obtain Theorem 1.3 almost immediately from this lemma. If n is smaller than the threshold n_0 of the low degree property, we solve the problem by an exhaustive search, which gives the exact solution in constant time. Otherwise, we apply Algorithm 3. This gives the desired result.

1:procedure GreedyLowDeg(L)
2:     for i = 1, …, m do
3:         S_i = argmax { f(v | S^{(i−1)}) : v ∈ Φ_i(S_1, …, S_{i−1}, L) }
4:     end for
5:     return S = (S_1, …, S_m)
6:end procedure
Algorithm 2 Greedy algorithm for low degree graphs.
1:procedure SuspectRecurseLowDeg(L)
2:     S = GreedyLowDeg(L)
3:     if |L| < k then
4:         for j = 1, …, k do
5:              for v ∈ N_{2r}(S) do
6:                  S_{j,v} = SuspectRecurseLowDeg(L ∪ {(j, v)})
7:              end for
8:         end for
9:     end if
10:     return the best solution among S and the S_{j,v}
11:end procedure
Algorithm 3 Suspect-and-recurse algorithm for low degree graphs.
Remark 3.1

If f is a linear function, we can obtain the exact solution using Algorithm 3, as follows. First, we enumerate all the possibilities of which variables take the same value. Then, for each possibility, we apply Algorithm 3 after removing the redundant variables. We see that, if the vertices in a 1-prefix dominating solution are pairwise distinct, then instead of (6), the following inequality holds:

f(OPT) ≤ ∑_{i=1}^{m} f(S_i | S^{(i−1)}) = f(S). (8)

This means that a 1-prefix dominating solution is the optimal solution. Therefore, this procedure gives the optimal solution.

4 First-Order Logics on Bounded Expansion Graphs

4.1 Preliminaries

4.1.1 Bounded Expansion Graphs.

Let G⃗ be a directed graph. A 1-transitive fraternal augmentation of G⃗ is a minimal supergraph H⃗ of G⃗ on the same vertex set such that

(transitivity)

if (u, v) ∈ G⃗ and (v, w) ∈ G⃗ then (u, w) ∈ H⃗, and

(fraternality)

if (u, w) ∈ G⃗ and (v, w) ∈ G⃗ then at least (u, v) ∈ H⃗ or (v, u) ∈ H⃗

holds. By the minimality condition, H⃗ contains no arcs other than those required above. A transitive fraternal augmentation of G⃗ is a sequence G⃗ = G⃗_1 ⊆ G⃗_2 ⊆ ⋯ such that each G⃗_{i+1} is a 1-transitive fraternal augmentation of G⃗_i. Note that a transitive fraternal augmentation is not determined uniquely, due to the freedom in the choice of the fraternal edges.
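One 1-transitive fraternal augmentation step can be sketched directly from the definition. Since the fraternal arcs may be oriented either way (the augmentation is not unique, as noted above), this illustrative sketch simply orients them from the smaller to the larger vertex index.

```python
def augment_step(arcs):
    """One 1-transitive fraternal augmentation step: arcs is a set of
    directed edges (u, v) on integer vertices; returns the augmented set."""
    new = set(arcs)
    for (u, v) in arcs:
        for (x, y) in arcs:
            if v == x and u != y:
                new.add((u, y))                       # transitivity: u->v->y gives u->y
            if v == y and u != x and (u, x) not in arcs and (x, u) not in arcs:
                new.add((min(u, x), max(u, x)))       # fraternality: pick one orientation
    return new

arcs = {(0, 1), (2, 1)}   # 0 -> 1 <- 2: vertices 0 and 2 form a fraternal pair
print(sorted(augment_step(arcs)))
```

Iterating this step yields a transitive fraternal augmentation; controlling the growth of the in-degrees is exactly what the bounded expansion property below guarantees.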

We say that a class C of graphs has bounded expansion [29] if there exists a function g: N → N such that, for each G ∈ C, there exists an orientation G⃗ of G and a transitive fraternal augmentation G⃗ = G⃗_1 ⊆ G⃗_2 ⊆ ⋯ with Δ⁻(G⃗_i) ≤ g(i) for all i, where Δ⁻(·) is the maximum in-degree.999There are several equivalent definitions of bounded expansion. We choose the transitive fraternal augmentation as the definition because it is a kind of degree boundedness, and therefore it looks similar to degree lowness. For a class of graphs having bounded expansion, we can compute a suitable transitive fraternal augmentation efficiently, as follows.

[Nešetřil and Ossona de Mendez [30]] For a class of graphs of bounded expansion, we can compute a transitive fraternal augmentation such that Δ⁻(G⃗_i) ≤ g'(i) for some function g'.101010g' can be a constant factor larger than the optimal g. Below, we fix the transitive fraternal augmentation computed by Theorem 4.1.1.

4.1.2 Kazana–Segoufin’s Normal Form.

Here, we introduce Kazana–Segoufin's normal form, which was proposed for the counting and enumeration problems for first-order formulas.

Let us consider the -th graph