More Effective Randomized Search Heuristics for Graph Coloring Through Dynamic Optimization

by Jakob Bossek et al.

Dynamic optimization problems have gained significant attention in evolutionary computation as evolutionary algorithms (EAs) can easily adapt to changing environments. We show that EAs can solve the graph coloring problem for bipartite graphs more efficiently by using dynamic optimization. In our approach, the graph instance is given incrementally such that the EA can reoptimize its coloring when a new edge introduces a conflict. We show that, when edges are inserted in a way that preserves graph connectivity, Randomized Local Search (RLS) efficiently finds a proper 2-coloring for all bipartite graphs. This includes graphs for which RLS and other EAs need exponential expected time in a static optimization scenario. We investigate different ways of building up the graph by popular graph traversals such as breadth-first-search and depth-first-search and analyse the resulting runtime behavior. We further show that offspring populations (e.g., a (1+λ) RLS) lead to an exponential speedup in λ. Finally, an island model using 3 islands succeeds in an optimal time of Θ(m) on every m-edge bipartite graph, outperforming offspring populations. This is the first example where an island model guarantees a speedup that is not bounded in the number of islands.




1 Introduction

Evolutionary computing techniques have been applied to a wide range of problems that involve stochastic and/or dynamic environments [1]. These methods can easily adapt to new environments, which makes them well suited to deal with dynamic changes [2, 3]. Understanding the principle of reoptimization carried out by an evolutionary algorithm for a dynamically changing problem is an important task, and we contribute to this area by studying dynamic variants of the well-known graph coloring problem. Our main message is that a static combinatorial optimization problem may be solved more efficiently in a dynamic setup than in a static one.

Studies around dynamic optimization in the context of evolutionary algorithms have focused on the type, magnitude, and frequency of changes that occur in a problem that changes dynamically over time. Different types of experimental and theoretical studies have been carried out. The experimental studies usually consider a benchmark obtained from a classical static problem by applying specific dynamic changes to the static problem formulation over time [4, 5]. A wide range of studies on the runtime behavior of evolutionary computing techniques for dynamic and stochastic problems have been carried out in recent years. We refer the reader to [6] for an overview. These studies build on a larger body of mathematical methods for the analysis of evolutionary computing techniques developed over the last 20 years (see [7, 8, 9, 6] for comprehensive presentations). Theoretical investigations in terms of runtime analysis for dynamic problems usually focus on the reoptimization time, which measures the amount of time that an algorithm needs to recompute an optimal solution after a dynamic change has happened to a static problem for which an optimal solution had been obtained. Other studies for NP-hard problems also consider the task of recomputing a good approximation after a dynamic change has occurred. Such studies include makespan scheduling [10], the minimum vertex cover problem [11, 12, 13], and dynamic constraint changes in the context of submodular optimization [5].

We investigate the classical graph coloring problem, which has already been studied in the context of evolutionary algorithms. For the static problem, Fischer and Wegener [14] considered a problem inspired by the Ising model from physics, where all vertices of a graph need to be colored with the same color. On bipartite graphs, this corresponds to the classical graph coloring problem with 2 colors. They showed that on cycles, the (1+1) EA has polynomial expected optimization time under a reasonable assumption, and that a simple (2+1) Genetic Algorithm with 2-point crossover and fitness sharing is asymptotically faster. Sudholt [15] considered the same problem on complete binary trees. He showed that, while (μ+λ) EAs take exponential expected time, the aforementioned (2+1) Genetic Algorithm finds an optimum in expected polynomial time. Sutton [16] presented bipartite graphs on which the (1+1) EA needs superpolynomial time, with high probability. Sudholt and Zarges [17] considered iterated local search algorithms in a different representation, where algorithms operate with an arbitrary number of colors, but the fitness function encourages the evolution of small color values. They considered mutation operators that can recolor large parts of a graph, based on so-called Kempe chains. Along with a local search algorithm for graph coloring, iterated local search is shown to efficiently 2-color all bipartite graphs and to color all planar graphs with maximum degree at most 6 with at most 5 colors. Recently, Bossek and Sudholt [18] also studied the performance of the (1+1) EA and RLS for the edge coloring problem, where edges instead of vertices have to be colored such that no two incident edges share the same color, and the number of colors is minimized.

Bossek et al. [19] considered a dynamic graph coloring problem where an edge is inserted into a properly colored graph. The authors analyze the expected time it takes the (1+1) EA, Randomized Local Search (RLS) and two iterated local search algorithms from [17] to rediscover a proper coloring in case the newly added edge introduces a conflict. They consider 2-coloring bipartite graphs and 5-coloring planar graphs with maximum degree at most 6 as in [17]. The authors show that dynamically adding an edge can lead to very hard symmetry problems that, in the worst case, may be harder to solve than coloring the graph from scratch. On binary trees, RLS can easily get stuck in local optima and the (1+1) EA needs exponential expected time.

1.1 Our Contribution

We consider the classical graph coloring problem and show that dynamic optimization can be helpful for this problem if the input graph is given to the algorithm incrementally based on an order determined by graph traversals. Our investigations provide additional insights to a wide range of studies of evolutionary algorithms and other search heuristics that examine the computational complexity of these methods on instances of the graph coloring problem in static and dynamic environments.

We consider an important aspect that bridges these static and dynamic studies to a certain extent. We are interested in whether giving an evolutionary algorithm the input graph in an incremental way and optimizing the resulting dynamic problem can lead to a faster optimization process than giving the algorithm the whole input at once, as done in a standard static setting. Our focus is on bipartite graphs, that is, the final graph resulting from the edge sequence is bipartite, which corresponds to the classical graph coloring problem with 2 colors. This problem is polynomial-time solvable in the context of problem-specific algorithms. On the other hand, it is NP-complete to decide if a given graph admits a k-coloring for k ≥ 3 [20]. Furthermore, even if the input graph is promised to be 3-colorable, it is NP-hard to color it with 4 colors [21].

We examine a dynamic variant of the graph coloring problem in bipartite graphs where the edges of a given static instance are made available to the algorithm over time. We show that, if the edges are provided in an order that preserves the connectivity of the graph, even the simple RLS can find proper colorings for all bipartite graphs efficiently. This is surprising since in the static setting, RLS fails badly even on simple bipartite graphs such as trees [19]. We further show that the order of edges is crucial: if edges are provided in a worst-case or random order, RLS only has an exponentially small probability of ever finding a proper 2-coloring on worst-case graph instances. Specifically, we assume that the order in which the edges are made available is determined by a graph traversal algorithm. We study the reoptimization time after a given edge has created a conflict and show that the use of graph traversals leads to an efficient optimization process for a wide range of graph classes where evolutionary algorithms for the static setting (where the whole graph is given right at the beginning) fail. We pay special attention to the popular graph traversal algorithms depth-first-search (DFS) and breadth-first-search (BFS) and show the difference that a choice between them may make with respect to the optimization time when carrying out dynamic graph coloring for bipartite graphs.

Finally, we investigate speedups that can be gained when using offspring populations and parallel dynamic reoptimization based on island models. We show that offspring populations of logarithmic size can decrease the expected optimization time by a linear factor. Island models that try to rediscover a proper coloring from the same initial coloring after adding an edge can benefit from independent evolution. It turns out that just using 3 islands leads to an asymptotically optimal runtime of Θ(m). This is one of very few examples where island models are proven to be more efficient than offspring populations, and the first example where the speedup is not bounded in the number of islands. Our results are summarized in Table 1.

The paper is structured as follows. In Section 2, we introduce the graph coloring problem and the incremental reoptimization approaches that are subject to our analysis. In Section 3, we show that RLS is efficient with any graph traversal, while Section 4 shows that not using graph traversals may be hugely inefficient. We carry out more detailed investigations when using BFS and DFS in Section 5. We show the benefit of using large enough offspring populations in Section 6 and the benefit of parallel incremental reoptimization based on island models in Section 7.

Edge insertion order | Generic RLS | Tailored (1+λ) RLS | Island model
Any connectivity-preserving | O(m + n²ℓ) [Thm 1] | O(m + nℓ/2^λ) [Thm 9] | Θ(m) [Thm 11]
DFS traversal | O(m + n²ℓ) [Thm 1] | O(m + nℓ/2^λ) [Thm 9] | Θ(m) [Thm 11]
BFS traversal | O(m + n²·diam(G)) [Thm 7] | O(m + n·diam(G)/2^λ) [Thm 9] | Θ(m) [Thm 11]
Random / worst-case insertion order | infinite (w.h.p.) [Thm 6] | – | –
Table 1: Worst-case expected times in the setting of adding edges incrementally to build up a whole bipartite graph for generic RLS (see Section 2), tailored (1+λ) RLS (see Section 6) and island models (see Section 7). We denote the length of the longest simple path by ℓ and the diameter by diam(G).

2 Preliminaries

Let G = (V, E) denote an undirected graph with vertex set V and edge set E. We denote by n = |V| the number of vertices and by m = |E| the number of edges of G. We assume in the following that all considered graphs are connected (as otherwise connected components can be colored separately). By ℓ we denote the length of the longest simple path (number of edges) between any two vertices in the graph. The diameter diam(G) is the maximum number of edges on any shortest path between any two vertices.

A vertex coloring of G is an assignment of color values to the vertices of G. Let deg(v) be the degree of a vertex v and c(v) be its color in the current coloring. Every edge {u, v} where c(u) = c(v) is called a conflict. A color is called free for a vertex v if it is not assigned to any neighbor of v. The chromatic number χ(G) is the minimum number of colors that allows for a conflict-free coloring. A coloring is called proper if there is no conflicting edge.

We use the most common representation for graph coloring: the total number of colors is fixed and the objective function is to minimize the number of conflicts. Since we only consider 2-coloring bipartite graphs, we can use the standard binary representation that assigns each vertex a color from {0, 1}. We use the notion of "flipping" vertices, by which we mean that the bit corresponding to the vertex's color is flipped.

The well-known Randomized Local Search (RLS) is defined as follows. Assume that the current solution is x ∈ {0, 1}^n. In every iteration a single vertex color is flipped to produce y. Next, x is replaced by y if the fitness of y is no worse than the fitness of its parent x (see Algorithm 1). We consider all algorithms as infinite processes as we are mainly interested in the expected number of iterations until good solutions are found or rediscovered.

1:  while optimum not found do
2:      Generate y by choosing an index i ∈ {1, …, n} uniformly at random and flipping bit x_i.
3:      If y has no more conflicts than x, let x := y.
Algorithm 1 RLS (x)
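As a concrete illustration, the loop above can be sketched in Python. This is a hypothetical minimal implementation (function and variable names are our own, not from the paper): the fitness is the number of monochromatic edges, and a flip is kept whenever it does not increase that number.

```python
import random

def num_conflicts(edges, coloring):
    """Count monochromatic edges under the given 0/1 coloring."""
    return sum(1 for u, v in edges if coloring[u] == coloring[v])

def rls(edges, coloring, max_iters=10**6):
    """Generic RLS: flip one uniformly random vertex, accept the flip if
    the number of conflicts does not increase; stop at a proper coloring."""
    n = len(coloring)
    fitness = num_conflicts(edges, coloring)
    for _ in range(max_iters):
        if fitness == 0:
            break                        # proper 2-coloring found
        v = random.randrange(n)          # uniform vertex choice
        coloring[v] ^= 1                 # flip its color
        new_fitness = num_conflicts(edges, coloring)
        if new_fitness <= fitness:
            fitness = new_fitness        # accept (plateau moves allowed)
        else:
            coloring[v] ^= 1             # reject: undo the flip
    return coloring
```

The `max_iters` cap is only a safeguard for the sketch; the paper treats RLS as an infinite process and analyzes the expected number of iterations.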

Similar to [19], we also consider a tailored RLS algorithm that only mutates vertices that are involved in conflicts (see Algorithm 2). We sometimes refer to the original RLS as generic RLS as opposed to tailored RLS.

1:  while optimum not found do
2:      Generate y by choosing a vertex v uniformly at random from all vertices that are part of a conflict, and flipping the color of v.
3:      If y has no more conflicts than x, let x := y.
Algorithm 2 Tailored RLS (x)
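In code, the only change from generic RLS is the mutation step: the flipped vertex is drawn from the conflict vertices instead of from all vertices. A minimal sketch (names are ours, not the paper's):

```python
import random

def conflicting_vertices(edges, coloring):
    """Vertices that are an endpoint of at least one monochromatic edge."""
    bad = set()
    for u, v in edges:
        if coloring[u] == coloring[v]:
            bad.update((u, v))
    return bad

def tailored_rls(edges, coloring, max_iters=10**6):
    """Tailored RLS: mutate only vertices currently involved in a conflict;
    accept the flip if the number of conflicts does not increase."""
    def conflicts(c):
        return sum(1 for u, v in edges if c[u] == c[v])
    fitness = conflicts(coloring)
    for _ in range(max_iters):
        bad = conflicting_vertices(edges, coloring)
        if not bad:
            break                        # proper coloring reached
        v = random.choice(sorted(bad))   # uniform over conflict vertices
        coloring[v] ^= 1
        new_fitness = conflicts(coloring)
        if new_fitness <= fitness:
            fitness = new_fitness        # accept
        else:
            coloring[v] ^= 1             # reject: undo
    return coloring
```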

We consider a setting of building up and re-optimizing a graph incrementally, termed incremental reoptimization (IR) in the following. To be more precise, given a graph G = (V, E) with n nodes and m edges, we start with the empty n-vertex graph G_0 = (V, ∅) and assign colors to the nodes uniformly at random. Note that initially G_0 has no edges and hence no conflicts occur regardless of the colors assigned. Next, we subsequently add single edges to the graph according to a given order of the edges e_1, …, e_m, one by one, and re-optimize with algorithm A, e.g., generic RLS, between edge insertions (see Algorithm 3).

1:  Let G_0 := (V, ∅) be the graph with n isolated vertices.
2:  Let c_0 be a coloring of all vertices, chosen uniformly at random.
3:  for i = 1 to m do
4:      Add edge e_i to G_{i-1}, obtaining G_i.
5:      Run A on G_i with c_{i-1} as the initial search point. Stop when a desired coloring has been obtained and store the final search point in c_i.
Algorithm 3 Incremental Reoptimization (IR) (A, V, e_1, …, e_m)
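The IR loop is easy to state in code. The sketch below (our own naming, with a plain RLS as the inner optimizer A) builds the graph edge by edge and re-optimizes after every insertion:

```python
import random

def simple_rls(edges, coloring, max_iters=10**6):
    """Inner optimizer A: generic RLS accepting non-worsening single flips."""
    def conflicts(c):
        return sum(1 for u, v in edges if c[u] == c[v])
    f = conflicts(coloring)
    for _ in range(max_iters):
        if f == 0:
            break
        v = random.randrange(len(coloring))
        coloring[v] ^= 1
        nf = conflicts(coloring)
        if nf <= f:
            f = nf
        else:
            coloring[v] ^= 1
    return coloring

def incremental_reoptimization(n, edge_order, reoptimize):
    """Build an n-vertex graph edge by edge; after each insertion, call
    `reoptimize(edges, coloring)` until the current partial graph is
    properly 2-colored again."""
    coloring = [random.randrange(2) for _ in range(n)]  # random initial colors
    edges = []
    for e in edge_order:
        edges.append(e)
        coloring = reoptimize(edges, coloring)
    return coloring
```

For connectivity-preserving edge orders (as defined below), each insertion introduces at most one conflict, which is the case the runtime analysis exploits.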

Graph traversal.

Let (e_1, …, e_m) be a sequence of edges with endpoints in V, and let G be the graph with edge set {e_1, …, e_m}. We will consider a special type of order that maximally preserves connectivity. More precisely, for any i, we let G_i be the edge-induced subgraph of G that is induced by the set of the first i edges e_1, …, e_i. That is, the edge set of G_i is {e_1, …, e_i} and the vertex set of G_i is the set of vertices that are endpoints of e_1, …, e_i. Note that the vertex set of G_i might be a strict subset of V. Now the order is called a graph traversal order of G if for any i, the number of connected components (CCs) of G_{i+1} is at least the number of CCs of G_i. In other words, an edge insertion can never link two CCs, which would reduce the number of CCs. Instead, the graph traversal needs to fully build one connected component before moving on to the next one. Once an edge from some CC C of G appears, the next edges gradually build a connected subgraph of C until all the edges of C have appeared. After that, a different CC will be built, and so on.

We call the order a Breadth-First-Search (BFS) traversal or order if the ordering can be obtained by first selecting some starting vertex v from each connected component, and then adding edges in the same way that a breadth-first search starting at v would explore the connected component. A Depth-First-Search (DFS) traversal or order is defined similarly, except that a depth-first search is used. Note that both BFS and DFS traversals are special cases of the graph traversal orders defined before.
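Such orders can be generated programmatically. The following Python sketch (our own helper functions, not from the paper) emits each edge when the corresponding search first examines it, so every prefix of the returned order induces a connected subgraph of the component being explored:

```python
from collections import deque

def bfs_edge_order(adj, start=0):
    """Edge order in which a breadth-first search from `start` examines the
    edges of a connected graph given as an adjacency dict."""
    emitted, seen = set(), {start}
    order, queue = [], deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            e = frozenset((u, v))
            if e not in emitted:         # emit each undirected edge once
                emitted.add(e)
                order.append((u, v))
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return order

def dfs_edge_order(adj, start=0):
    """Same idea with a depth-first search (recursive)."""
    emitted, seen, order = set(), {start}, []
    def visit(u):
        for v in adj[u]:
            e = frozenset((u, v))
            if e not in emitted:
                emitted.add(e)
                order.append((u, v))
            if v not in seen:
                seen.add(v)
                visit(v)
    visit(start)
    return order
```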

3 RLS is efficient with any graph traversal

Our main research question is whether incremental optimization leads to efficient runtimes on subclasses of bipartite graphs if A is set to RLS. Recall that in the static setting the worst-case expected time for discovering or re-discovering proper 2-colorings of bipartite graphs is infinite, as demonstrated for binary trees in [19]. The key idea to prove the latter was to complete an n-vertex binary tree by adding a single edge, which leads to strong symmetry problems if the linked parts are colored inversely.

It turns out that the order of edge insertions is crucial for IR to find proper 2-colorings of bipartite graphs efficiently. This aspect will be further investigated in Section 5. For now we formulate the following general result:

Theorem 1.

Let ℓ be the length of the longest simple path in G. On every bipartite graph G with n vertices and m edges, the total expected time of IR with generic RLS to incrementally build a proper 2-coloring is at most O(m + n²ℓ) when edges are added in an order given by a graph traversal.

To prove Theorem 1, we make use of two folklore random walk results. The presentation is adapted from [18, Lemma A.1].

Lemma 2.

Consider a fair random walk on {0, 1, …, ℓ} where 0 is an absorbing state and ℓ is a reflecting state. More formally, abbreviating p(i, j) := Pr(X_{t+1} = j | X_t = i), we have p(i, i−1) = p(i, i+1) = 1/2 for all 1 ≤ i ≤ ℓ−1, p(0, 0) = 1, and p(ℓ, ℓ−1) = 1. Let T_0 be the first hitting time of state 0 and T be the first hitting time of either state 0 or ℓ. Then the following statements hold:

  1. For all i, E(T_0 | X_0 = i) = i(2ℓ − i).

  2. For all i and all t, a corresponding tail bound on Pr(T_0 ≥ t | X_0 = i) holds [18, Lemma A.1].

  3. For all i, E(T | X_0 = i) = i(ℓ − i).

All statements also hold for a lazy random walk with a self-loop probability of 1 − p in every state, when multiplying all time bounds by 1/p.

Proof of Lemma 2.

The first two statements were shown in [18, Lemma A.1]. The third statement follows from the fair gambler's ruin scenario where one player starts with i dollars and the other player starts with ℓ − i dollars, and the game ends when either player is broke. It is well known that the expected time for the game to end is i(ℓ − i). ∎
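The closed forms in statements 1 and 3 can be checked numerically. The snippet below (illustrative only, our own code) verifies the reflecting-boundary hitting time h(i) = i(2ℓ − i) against its defining recurrence; in particular h(1) = 2ℓ − 1, the value used for walks started in state 1:

```python
def hitting_times(ell):
    """Expected time to absorb at 0 for a fair random walk on {0, ..., ell}
    reflecting at ell; closed form h(i) = i * (2*ell - i)."""
    h = [i * (2 * ell - i) for i in range(ell + 1)]
    # sanity-check the recurrence h(i) = 1 + (h(i-1) + h(i+1)) / 2 ...
    for i in range(1, ell):
        assert h[i] == 1 + (h[i - 1] + h[i + 1]) / 2
    assert h[ell] == 1 + h[ell - 1]   # ... and the reflecting boundary
    return h
```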

Proof of Theorem 1.

Note that in our setting we start with an n-vertex graph with no edges at all, each vertex having color 0 or 1 with equal probability. Now we add edges incrementally in an order given by a graph traversal. Since the graph is bipartite, adding a single edge e = {u, v} links two vertices of different partitions. This step may introduce at most one conflict, namely if c(u) = c(v). Note that this can happen only if one vertex, w.l.o.g. u, has degree one after insertion of e, i.e., u has not yet been linked to the growing connected component before. Otherwise, e closes a cycle. This cycle must be of even length since the graph is bipartite, and the path between u and v has alternating colors since the previous coloring was proper. Thus u and v must have different colors already. In this case, inserting e does not create a conflict.

Figure 1: Snapshot of an IR iteration where a random walk might take place. Here, the edge added last in the course of incremental optimization led to a single conflict. Mutating the degree-one endpoint resolves the conflict, while mutating the other endpoint moves the conflict to its other incident edge. The conflict can then propagate further to the left, where a node of degree greater than 2 serves as a reflecting node for the random walk.

Now, assume there is a conflict and let u be the vertex with degree 1. Mutating u will resolve the conflict. However, if the other endpoint v has degree 2, mutating v moves the conflict to the other edge incident with v (see Fig. 1 for an illustration). This yields a random walk that can be mapped to the integers as follows. Let dist(u, w) be the graph distance, that is, the smallest number of edges on any path between u and w. If the conflict involves an edge {w, w′} then the current state is defined as min{dist(u, w), dist(u, w′)} + 1, with an additional absorbing state 0 that is attained when the conflict is resolved. The random walk always starts in state 1 as initially e = {u, v} is the conflicting edge. The random walk is fair since flipping the endpoint that is closer to u decreases the state by 1, and flipping the other endpoint increases it by 1, if this mutation is accepted. It is accepted if and only if the mutated vertex has degree at most 2, as otherwise the number of conflicts increases. Hence, the random walk is reflected at the first vertex on the path from u that has degree greater than 2; if there is no such vertex, there is another leaf at which the conflict can be resolved. The maximum state that can be reached is bounded by ℓ, i.e., the length of the longest path in G (since the endpoint of the conflicting edge closer to u has graph distance at most ℓ − 1). This random walk requires at most 2ℓ − 1 relevant steps in expectation by Lemma 2. Each propagating step flips one of the two endpoints of the conflicting edge, which happens with probability at least 2/n, and thus has waiting time O(n).

Finally, recall that every time an edge insertion closes a cycle, no conflict is introduced at all, as argued at the beginning of the proof. In these cases A terminates after a single fitness function evaluation. As a consequence, only the cases where an isolated node is linked for the first time may introduce a conflict. There are at most n − 1 such steps. Hence the total runtime is at most m + (n − 1) · O(nℓ) = O(m + n²ℓ). ∎

The upper bound from Theorem 1 is tight on path graphs.

Theorem 3.

On any path with n nodes, the total expected time of IR with generic RLS to incrementally build a proper 2-coloring is Θ(n³) when edges are added in an order given by a graph traversal.


Proof. Consider an n-vertex path which is built incrementally, starting from one of its leaf nodes. After adding the i-th edge, 1 ≤ i ≤ n − 1, with probability 1/2 no conflict is introduced, namely if the two endpoints happen to have different colors. With the converse probability 1/2, a random walk with states {0, …, i} is started, where both states 0 and i are goal states and the random walk starts in state 1. By Lemma 2 this random walk runs for at least i − 1 relevant steps in expectation, and a relevant step happens with probability at most 2/n, i.e., has waiting time at least n/2. In total we add n − 1 edges incrementally and all events of a random walk taking place are independent. Restricting attention to the edges with i > (n − 1)/2, by a Chernoff bound, with probability 1 − 2^{−Ω(n)} at least (n − 1)/8 of these edges start a random walk, and each such walk runs for at least (n − 1)/2 − 1 relevant steps in expectation. The expected time to incrementally reoptimize a path is therefore bounded from below by

(1 − 2^{−Ω(n)}) · ((n − 1)/8) · ((n − 1)/2 − 1) · (n/2) = Ω(n³).

Here, restricting to the second half of the edges accounts for the fact that the lengths of the random walks are monotonically increasing in i. Note that this matches the upper bound of Theorem 1 on paths, since m = n − 1 and ℓ = n − 1 give O(m + n²ℓ) = O(n³). ∎

Figure 2: Example of a depth-d star with n nodes and depth d.

Paths are examples where the upper bound from Theorem 1 is tight for the maximum value of ℓ, namely ℓ = n − 1. We also show that there is a family of graphs for all (even) values of ℓ for which the upper bound from Theorem 1 is tight. Consider a generalization of the star graph, termed the depth-d star, where we have one center node and (n − 1)/d paths of d edges each originating in the center node (see Fig. 2 for an example), for some value d. (For simplicity we assume that (n − 1)/d is an integer.) Note that only the center node can have a degree greater than 2 and may serve as a reflecting node in the course of incremental optimization. Hence, the behavior of RLS is similar to its behavior on a path. In the following we show that the runtime bound from Theorem 1 is tight on depth-d stars for any reasonable choice of d.

Theorem 4.

On any depth-d star with n nodes and (n − 1)/d ≥ 3, the expected time of IR with generic RLS to incrementally build a proper 2-coloring is Θ(n²d) when edges are added in an order given by a graph traversal.


Proof. Note that ℓ = 2d for any depth-d star (as each path from one leaf to another is a longest path). Note further that the center node reflects random walks once it reaches degree 3 in the course of incremental optimization. This must happen after adding at most 2d + 1 edges, since edges are added according to a graph traversal and the center node is the only link between paths. At the time a third edge at the center is added, there can only be two paths that have been fully or partially built. We consider the expected remaining time for adding the remaining paths. Note that for all these paths the addition of edges must start from the center vertex, and now the center node acts as a reflecting node for these random walks.

After adding the i-th edge of a path, 1 ≤ i ≤ d, with probability 1/2 a random walk with states {0, …, i} starts. By Lemma 2 this random walk runs for at least i − 1 steps in expectation, and relevant steps take place with probability at most 2/n, i.e., have waiting time at least n/2. For a fixed path, in total d edges are added until a leaf node is connected to the growing connected component. Let T_j be the number of generations spent fixing conflicts on the j-th path; then E(T_j) ≥ Σ_{i=1}^{d} (1/2) · (i − 1) · (n/2) = Ω(nd²).

By construction of the depth-d star there are (n − 1)/d paths, and two of these were covered in the first phase. Adding up the times spent on the remaining paths, the expected number of steps until the depth-d star is properly colored with two colors is at least

((n − 1)/d − 2) · Ω(nd²) = Ω(n²d).

Recall that ℓ = 2d. As a consequence, the runtime bound of Theorem 1, O(m + n²ℓ) = O(n²d), is tight for IR with generic RLS under any graph traversal on any depth-d star, for any valid choice of the graph parameter d. ∎

We finish this section by noting that, similarly to [19], the expected runtime can be reduced by using tailored RLS, which reduces the waiting time for re-coloring the right vertex from Θ(n) to Θ(1).

Corollary 5.

On any bipartite graph, the total expected time of IR with tailored RLS to incrementally build a proper 2-coloring is O(m + nℓ) when edges are added in an order given by a graph traversal.

4 Graph Traversals are Important

The following result emphasizes that the order of edge insertions is of utmost importance; an unfavorable order may lead to infinite runtimes for RLS with overwhelming probability. Furthermore, even if the order is uniformly random, it may still lead to infinite runtimes for RLS. Given a graph G = (V, E) and an edge sequence σ over E, we say σ is a random order of E (or of the graph) if σ is chosen uniformly at random from the set of all possible permutations of E.

Theorem 6.

For every n there exists a tree T on n vertices and a worst-case edge insertion strategy such that RLS has infinite runtime with probability 1 − 2^{−Ω(n)}. Furthermore, for a random order of the edges of T, RLS has infinite runtime with probability 1 − 2^{−Ω(n)}.


Proof. We consider a tree T where the root r has k children and each child of the root has two children of its own. This means that on level 1 of T, we have k binary trees of height 1. Now consider the following worst-case edge insertion strategy: first add edges such that all k binary trees are formed (phase 1) and afterwards connect the root to its children (phase 2). Note that once two binary trees are colored inversely, RLS gets stuck forever, since there is no possibility to recolor without conflicts after connecting both binary trees to the root. This is because the root's children – once connected to the root – have degree greater than 2 and thus act as reflecting states for the random walk of the introduced conflict. Since in the first phase of the edge insertion all binary trees are unconnected and hence colored independently, the probability that they are all colored the same is 2^{−Ω(k)}. Hence, the unfavorable situation occurs with probability 1 − 2^{−Ω(k)} = 1 − 2^{−Ω(n)}.

Finally, if the edges are inserted in random order, i.e., the edge sequence is chosen uniformly at random from the set of all edge permutations of E(T), then for each height-1 binary tree with vertex c being a child of root r, the probability that both edges inside the small tree appear before edge {r, c} is 1/3. We call a height-1 binary tree bad if both of its edges appear before the edge connecting it to the root. Therefore, the expected number of bad binary trees is k/3. Further note that all the bad binary trees occur independently due to the random order assumption. By a Chernoff bound, with probability at least 1 − 2^{−Ω(k)}, the number of bad binary trees is at least k/6. Finally, for all these bad binary trees, since they are unconnected to the rest of the tree when they are formed, and hence colored independently, the probability that they are all colored the same is 2^{−Ω(k)}. Hence, the unfavorable situation occurs with probability 1 − 2^{−Ω(n)}. ∎
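For concreteness, the Theorem 6 construction and its worst-case insertion order can be written down directly (vertex numbering is our own, purely illustrative):

```python
def worst_case_order(k):
    """Worst-case insertion order for the Theorem 6 tree: root 0 with
    children 1..k, each child owning two leaves. Phase 1 inserts the edges
    of the k height-1 binary trees; phase 2 attaches them to the root."""
    root = 0
    children = list(range(1, k + 1))
    order, leaf = [], k + 1
    for c in children:                 # phase 1: build the small trees
        order.append((c, leaf)); leaf += 1
        order.append((c, leaf)); leaf += 1
    for c in children:                 # phase 2: connect to the root
        order.append((root, c))
    return order
```

Because phase 1 colors the k subtrees independently of one another, the probability that they all agree is exponentially small, which is what drives the failure probability bound.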

5 On the choice of graph traversal

Theorem 1 states that (generic) RLS is efficient with any connectivity-preserving graph traversal. In the following we study the effect of using DFS- versus BFS-traversals and point out major differences on special cases of bipartite graphs. To motivate this, consider a complete bipartite graph with partitions V_1 and V_2 of size n/2 each. Note that given an arbitrary starting node there is a DFS-traversal that adds edges in an order such that after adding the first n − 1 edges, the partial graph is an n-vertex path. Such a DFS-traversal can easily be constructed by always following an edge to a node that was not yet connected to the growing connected component. This path has n − 1 edges and is a longest path, i.e., ℓ = n − 1. Now consider a BFS-traversal and assume w.l.o.g. that we start in an arbitrary node u ∈ V_1. According to the working principles of BFS, all edges to the neighbors of u are added first, producing random walks of length at most 2 in the optimization steps of IR. Subsequently, for each vertex in V_2, all edges to the remaining nodes in V_1 are added. Again, each IR step deals with random walks of length at most 2. Hence, the length of the paths introduced by BFS is O(1) vs. Θ(n) for DFS (see Fig. 3 for an illustration).

Figure 3: Example of possible first edge insertions following a DFS traversal (left) and a BFS traversal (right) on a complete bipartite graph. Nodes are numbered with the iteration in which they are linked to the growing connected component. For visual clarity all other edges are not shown.

Since BFS visits the nodes in level order, level by level, we can substitute ℓ with the diameter diam(G) in the expected runtime bound. This observation is made mathematically rigorous in the following theorem.

Theorem 7.

On any bipartite graph, the total expected time of IR with generic RLS to incrementally build a proper 2-coloring is O(m + n²·diam(G)) when edges are added in order of a breadth-first-search traversal.


Proof. We focus on the maximum length of random walks that may occur during the optimization. First note that BFS traverses a graph in level order, visiting all adjacent nodes first, nodes at distance two second, and so on. Put differently, BFS solves the unweighted single-source shortest path problem. That is, given a starting node u, the length of each path built up by a BFS-traversal until a previously seen node is visited again is bounded by the eccentricity of u, i.e., the length of the longest shortest path from u to any other vertex, in terms of the number of edges on the path. Since the eccentricity depends on the starting node, the length of the longest possible path produced by incrementally adding edges by any BFS-traversal is upper bounded by the diameter diam(G), i.e., the length of the longest shortest path in G. Adopting the waiting-time arguments of Theorem 1, we obtain a runtime bound of O(m + n²·diam(G)) for any BFS-traversal. ∎

This bound is tight on paths and depth-d stars, as for both graph classes diam(G) = Θ(ℓ).

Even though the asymptotic runtime bounds of DFS and BFS coincide on some classes, e.g. on paths, the choice of traversal makes a huge difference on other sub-classes of bipartite graphs. As pointed out at the beginning of this section, on complete bipartite graphs ℓ = Θ(n) whereas diam(G) = 2, yielding a performance advantage of a factor of Θ(n) for BFS traversals. Similarly, on toroids ℓ = Θ(n) and diam(G) = Θ(√n). As diam(G) ≤ ℓ on any graph, there is no advantage of using DFS and the usage of BFS shows similar or superior performance. Table 2 gives an overview of the expected runtimes of RLS with DFS and BFS on sub-classes of bipartite graphs as well as further results obtained in the following sections.
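The diameter bound that BFS benefits from is cheap to compute for the small examples discussed here. A simple sketch (our own code) computes diam(G) as the maximum BFS eccentricity over all sources:

```python
from collections import deque

def diameter(adj):
    """Diameter of a connected graph (adjacency dict) via one BFS per
    source vertex, i.e., in O(n * m) time overall."""
    def eccentricity(s):
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return max(dist.values())
    return max(eccentricity(s) for s in adj)
```

For a complete bipartite graph this returns 2, while the longest simple path has length n − 1, matching the gap discussed above.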

Graph class | RLS with DFS | RLS with BFS | Tailored RLS with BFS | Tailored (1+λ) RLS with BFS | Island model
Complete k-ary tree | O(n² log_k n) | O(n² log_k n) | O(n log_k n) | O(n) | Θ(n)
d-dim. hypercube | O(n³) | O(n² log n) | O(n log n) | O(n log n) | Θ(n log n)
Star graph | O(n²) | O(n²) | O(n) | O(n) | Θ(n)
Complete bipartite | O(n³) | O(n²) | O(n²) | O(n²) | Θ(n²)
Depth-d star | Θ(n²d) | Θ(n²d) | O(nd) | O(n) | Θ(n)
Table 2: Obtained runtime results for sub-classes of bipartite graphs. For complete k-ary trees we assume k ≥ 2. The toroid is assumed to have side lengths Θ(√n). For the tailored (1+λ) RLS we use the choice λ = log₂ n. The island model uses the optimal number of 3 islands. We use Θ(·) where we have an explicit lower bound or the trivial one of Ω(m). Note that the island model has optimal performance Θ(m) on all bipartite graph classes.

For the sake of completeness, we close this section with a corollary on the runtime of IR with tailored RLS and BFS.

Corollary 8.

On any complete bipartite graph, the total expected time of IR with tailored RLS to incrementally build a proper 2-coloring is O(m) when edges are added in the order of a breadth-first-search traversal.

6 Offspring Populations

We now consider the use of offspring populations in RLS. The (1+λ) RLS creates λ offspring through independent mutations from the current search point, and then picks a best offspring that is compared against the parent as in RLS. Ties between offspring are broken uniformly at random. For simplicity, we only consider tailored RLS in the following, but it is easy to derive bounds on generic RLS with offspring populations. The following theorem quantifies the improved time bounds when using BFS and DFS.
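As an illustration, one generation of the tailored (1+λ) RLS just described might look as follows. This is a hedged sketch: the dict-based colouring, the helper `conflicts`, and the function names are our own choices, not the paper's notation.

```python
import random

def conflicts(adj, col):
    """Number of monochromatic (conflicting) edges, each counted once."""
    return sum(1 for u in adj for v in adj[u] if u < v and col[u] == col[v])

def tailored_generation(adj, col, lam, rng):
    """One generation of tailored (1+lam) RLS: each offspring flips one
    uniformly chosen endpoint of a uniformly chosen conflicting edge;
    a best offspring (ties broken uniformly) replaces the parent unless
    it is strictly worse."""
    conflict_edges = [(u, v) for u in adj for v in adj[u]
                      if u < v and col[u] == col[v]]
    if not conflict_edges:
        return col  # proper 2-colouring already found
    offspring = []
    for _ in range(lam):
        child = dict(col)
        u, v = rng.choice(conflict_edges)
        w = rng.choice((u, v))
        child[w] = 1 - child[w]  # flip the colour of one endpoint
        offspring.append(child)
    rng.shuffle(offspring)  # uniform tie-breaking among equally fit offspring
    best = min(offspring, key=lambda c: conflicts(adj, c))
    return best if conflicts(adj, best) <= conflicts(adj, col) else col
```

Repeating `tailored_generation` until `conflicts` reaches zero simulates the reoptimization phase after a single edge insertion.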

Theorem 9.

For a given connected graph G, let ℓ denote an upper bound on the length of any random walk; more specifically, ℓ = diam(G) when using BFS and ℓ equals the length of the longest simple path in G for any other graph traversal. Then the expected time of tailored (1+λ) RLS is O(λm + λℓm · 2^{−λ}).

For λ = ⌈log₂ ℓ⌉ this is O(m log ℓ).


Consider the situation after adding one edge, which leads to a conflict. The conflict is resolved in one generation if there is an offspring that flips the leaf node. This happens with probability 1 − 2^{−λ}. With the converse probability 2^{−λ}, all offspring flipped the leaf's neighbor and the conflict moved away from the added edge.

We argue that, while both end points of the conflicting edge have degree at least 2, the (1+λ) RLS behaves like RLS. Assume both end points have degree 2. Since there is no way of resolving the conflict in one step, all offspring will have the same fitness. Since all offspring are generated independently and with identical distributions, we may assume w. l. o. g. that the first offspring is selected for survival. This means that the remaining offspring are irrelevant and the (1+λ) RLS simulates a step of RLS. If one end point of the conflicting edge has degree larger than 2, flipping this end point leads to an offspring with a worse fitness. Hence the only accepted step is to flip the edge's other end point. Having multiple offspring can only decrease the time until this step happens.

Using our upper bound on RLS (Theorem 1), the (1+λ) RLS resolves the conflict after any edge insertion within an expected number of O(1 + ℓ · 2^{−λ}) generations. Since one generation creates λ evaluations, the number of evaluations is O(λ + λℓ · 2^{−λ}). Since we only have at most m random walks, the total time for solving all random walks is O(λm + λℓm · 2^{−λ}). Iterations where no random walks are necessary make λ evaluations each. Together, this yields an upper bound of O(λm + λℓm · 2^{−λ}).

For λ ≥ log₂ ℓ, the last term simplifies to O(λm) as ℓ · 2^{−λ} ≤ 1 if 2^λ ≥ ℓ, or equivalently, λ ≥ log₂ ℓ. Otherwise, the bound is dominated by the last term λℓm · 2^{−λ}. ∎

For paths the upper bound from Theorem 9 is tight.

Theorem 10.

The expected reoptimization time of tailored (1+λ) RLS on a path with any graph traversal is Θ(λm + λm² · 2^{−λ}).


The proof is similar to the lower bound for RLS on paths (Theorem 3). Consider a random walk started after inserting the k-th edge. Recall that the random walk has states 0, …, k and both states 0 and k are goal states. Whenever the state of the random walk is 1 or k − 1, there is a probability of 1 − 2^{−λ} that one of the offspring finds a goal state. As argued in the proof of Theorem 9, on the remaining states the (1+λ) RLS behaves like RLS. Hence, with probability 2^{−λ}, state 2 is reached after the first generation and then the (1+λ) RLS needs at least k − 3 relevant steps in expectation to reach either state 1 or state k − 1. If this happens, we assume pessimistically that a proper coloring is found. Summing up expected times as in the proof of Theorem 3 implies the claim. ∎

7 Island Models

We now consider island models that evolve several populations in parallel and communicate to exchange good solutions. More specifically, at each step of the IR process there are λ islands that each run a tailored RLS. All islands are started on the same graph after inserting a new edge, with the same initial coloring. The islands run independently until the first island has found a proper coloring; then the proper coloring is shared with all islands (ties broken arbitrarily, but ensuring that all islands store the same proper coloring). Note that we implicitly use a complete graph as migration topology (though our main result applies to all topologies containing a triangle). Algorithm 4 shows the respective pseudocode.

1:  Let G be a graph on n isolated vertices (no edges).
2:  Let c be a coloring of all vertices chosen uniformly at random.
3:  for i = 1 to m do
4:      Add edge e_i to G.
5:      Run λ tailored RLSs on G with c as the initial search point. In every generation, check whether an island has obtained a desired coloring. If so, store the final search point in c.
Algorithm 4 Incremental Reoptimization (IR) using an island model
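A compact executable sketch of Algorithm 4 follows. This is our own Python rendering under simplifying assumptions: the helper names are ours, the parallel islands are simulated round-robin, and the tailored RLS step is implemented as "flip one endpoint of a random conflicting edge, reverting worsening moves".

```python
import random

def proper(adj, col):
    """True iff col is a proper 2-colouring of the current graph."""
    return all(col[u] != col[v] for u in adj for v in adj[u])

def rls_step(adj, col, rng):
    """One step of tailored RLS: flip one endpoint of a random conflicting
    edge; revert the flip if it increases the number of conflicts."""
    conf = [(u, v) for u in adj for v in adj[u] if u < v and col[u] == col[v]]
    if not conf:
        return
    u, v = rng.choice(conf)
    w = rng.choice((u, v))
    col[w] = 1 - col[w]
    now = sum(1 for x in adj for y in adj[x] if x < y and col[x] == col[y])
    if now > len(conf):
        col[w] = 1 - col[w]  # reject a worsening move

def island_ir(edges, n, num_islands=3, seed=0):
    """Incremental Reoptimization with an island model (Algorithm 4 sketch):
    insert edges one by one; after each insertion run num_islands independent
    tailored RLS runs until the first island finds a proper colouring, which
    is then copied to all islands (migration)."""
    rng = random.Random(seed)
    adj = {v: [] for v in range(n)}
    col = [rng.randrange(2) for _ in range(n)]  # uniform random initial colouring
    islands = [list(col) for _ in range(num_islands)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
        while not any(proper(adj, c) for c in islands):
            for c in islands:  # one generation on every island
                rls_step(adj, c, rng)
        winner = next(c for c in islands if proper(adj, c))
        islands = [list(winner) for _ in range(num_islands)]  # migration
    return islands[0]
```

On bipartite instances whose edge order preserves connectivity, this process terminates with a proper 2-colouring almost surely.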

We will show that independent evolution steps are more efficient than offspring populations. Our main result in this section is:

Theorem 11.

For any graph traversal order, the expected reoptimization time of the island model is O(λm) for λ ≥ 3 islands. For λ = 3 we get an optimal time of Θ(m).

The surprising finding is that 3 islands are sufficient to obtain an asymptotically optimal reoptimization time. This is one of very few examples where island models perform better than offspring populations. The only other examples we are aware of in the context of rigorous runtime analysis are an artificially constructed function [22] and a particular instance of the Eulerian Cycle problem [23]. In the latter case, the speedup is exponential in λ. To our knowledge, Theorem 11 gives the first example where the speedup is not bounded by a function of λ.

To prove Theorem 11, we first study λ independent fair random walks and analyze the time until the first random walk reaches the target state. The following lemma may be of independent interest.

Lemma 12.

Consider λ independent random walks as defined in Lemma 2. Let T be the first point in time any of the λ random walks reaches state 0, assuming that all random walks start in state 1. Then

  1. If λ ≥ 3, there is a constant c > 0 such that E[T] ≤ c.


We first consider a single random walk, that is, λ = 1. Here the claim on the expectation follows from a folklore argument, formalised in the first statement of Lemma 2.

It is known that P(T ≥ t) = Θ(t^{−1/2}) for a single walk. This can be derived as follows. By [24, III.7, Theorem 2], the probability that a single walk first reaches state 0 at time t is

(1/t) · binom(t, (t+1)/2) · 2^{−t},

where the binomial coefficient is 0 in case the second argument is non-integral. For odd t the above is Θ(t^{−3/2}). Summing over all odd values of at least t yields P(T ≥ t) = Θ(t^{−1/2}).

Let c be the implicit constant in the upper bound of this expression. In order for T ≥ t, all λ random walks must not have reached the target within the first t steps. Since all random walks are independent, P(T ≥ t) ≤ (c · t^{−1/2})^λ = c^λ · t^{−λ/2}.

For λ ≥ 4 it suffices to consider λ = 3, as the hitting time for three walks stochastically dominates T for any larger number of walks. The expectation can then be derived as

E[T] = Σ_{t ≥ 0} P(T > t).

For λ = 3 we use the second statement of Lemma 2 to infer that, for all t ≥ 1, P(T > t) ≤ c³ · t^{−3/2}; thus the series converges. Splitting the sum at a sufficiently large constant t₀, the tail Σ_{t ≥ t₀} c³ · t^{−3/2} is at most a constant and we get E[T] = O(1).

A matching lower bound follows from the fact that at least as many steps as the distance to the reflecting state are needed to reach it, and until then the process behaves as on an unbounded state space; the claim then follows with the implicit constant in the lower bound of P(T ≥ t). ∎
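The constant-expectation phenomenon behind Lemma 12 is easy to observe empirically. The following Monte-Carlo sketch is our own construction: it estimates the expected minimum hitting time of λ = 3 fair random walks started in state 1, with an artificial step cap and, for simplicity, without the reflecting barrier (which could only reduce hitting times).

```python
import random

def min_hitting_time(lam, cap, rng):
    """First time any of lam fair random walks (all started in state 1)
    hits state 0; the simulation is truncated at `cap` steps."""
    pos = [1] * lam
    for t in range(1, cap + 1):
        for i in range(lam):
            pos[i] += rng.choice((-1, 1))
            if pos[i] == 0:
                return t
    return cap  # truncation (rare for lam >= 3)

rng = random.Random(0)
samples = [min_hitting_time(3, 10_000, rng) for _ in range(2_000)]
mean = sum(samples) / len(samples)  # empirically a small constant for lam = 3
```

For λ = 3 the estimated mean settles at a small constant, in line with the O(1) expectation of the lemma, whereas a single walk exhibits the heavy Θ(t^{−1/2}) tail and an unbounded mean.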

Now we are prepared to prove Theorem 11.

Proof of Theorem 11.

We show that the expected number of generations for finding a proper coloring after each edge insertion is O(1) if a random walk is necessary. If an added edge leads to a conflict, the λ islands perform independent random walks as described in Lemma 12. Applying said lemma with λ ≥ 3 yields the claimed bound of O(1) generations. Multiplying by λ for the number of evaluations per generation and summing over all m edge insertions yields the claim. ∎

8 Conclusions

Evolutionary algorithms have been applied to a wide range of dynamic optimization problems. We have shown that dynamic evolutionary optimization approaches can also be useful to solve a given static problem if the problem instance is fed to the algorithm in an incremental fashion.

For 2-coloring bipartite graphs, the simple RLS is effective on all graph instances if the order of the edges is given by popular graph traversals. This includes graphs where RLS fails with an overwhelming probability in the static case. The order in which edges are provided is essential: for a worst-case order or a random order, RLS fails on trees with an overwhelming probability, whereas every graph traversal leads to polynomial expected times. Comparing popular graph traversals like depth-first search and breadth-first search shows that the latter is more effective, as its performance guarantees only depend on the diameter of the graph, whereas for the former they depend on the length of the longest simple path.

Furthermore, we have shown that offspring populations in the (1+λ) RLS lead to an exponential speedup for appropriate choices of λ, since the probability of immediately making the right decision for resolving a new conflict is amplified. Surprisingly, island models using parallel evolution to rediscover proper colorings are even more effective. With only 3 islands, the island model achieves the best possible runtime of Θ(m) for all graphs with m edges. This is the first example of a proven speedup with islands that is not bounded in the number of islands. Island models are also more robust with respect to the choice of graph traversal and the graph instance, as the expected time of the island model only depends on the number of edges, for every graph traversal and every graph.

Future work could consider whether the incremental approach would also work on graphs with a larger number of colors and whether it proves useful for other combinatorial problems.


This research has been supported by the Australian Research Council (ARC) through grant DP160102401.


  • [1] Hendrik Richter and Shengxiang Yang. Dynamic optimization using analytic and evolutionary approaches: A comparative review. In Handbook of Optimization - From Classical to Modern Approach, pages 1–28. 2013.
  • [2] Jürgen Branke. Evolutionary optimization in dynamic environments, volume 3. Springer Science & Business Media, 2012.
  • [3] Trung Thanh Nguyen, Shengxiang Yang, and Juergen Branke. Evolutionary dynamic optimization: A survey of the state of the art. Swarm and Evolutionary Computation, 6:1–24, 2012.
  • [4] Vahid Roostapour, Aneta Neumann, and Frank Neumann. On the performance of baseline evolutionary algorithms on the dynamic knapsack problem. In Parallel Problem Solving from Nature - PPSN XV - 15th International Conference, Coimbra, Portugal, September 8-12, 2018, Proceedings, Part I, pages 158–169, 2018.
  • [5] Vahid Roostapour, Aneta Neumann, Frank Neumann, and Tobias Friedrich. Pareto optimization for subset selection with dynamic cost constraints. In AAAI Conference on Artificial Intelligence, AAAI 2019, Honolulu, Hawaii, USA, 2019.
  • [6] Benjamin Doerr and Frank Neumann (Eds.). Theory of Evolutionary Computation – Recent Developments in Discrete Optimization. Natural Computing Series. Springer, 2020.
  • [7] Frank Neumann and Carsten Witt. Bioinspired Computation in Combinatorial Optimization. Natural Computing Series. Springer, 2010.
  • [8] Anne Auger and Benjamin Doerr (Eds.). Theory of Randomized Search Heuristics: Foundations and Recent Developments. World Scientific Publishing Co., Inc., 2011.
  • [9] Thomas Jansen. Analyzing Evolutionary Algorithms - The Computer Science Perspective. Natural Computing Series. Springer, 2013.
  • [10] Frank Neumann and Carsten Witt. On the runtime of randomized local search and simple evolutionary algorithms for dynamic makespan scheduling. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015, Buenos Aires, Argentina, July 25-31, 2015, pages 3742–3748. AAAI Press, 2015.
  • [11] Mojgan Pourhassan, Wanru Gao, and Frank Neumann. Maintaining 2-approximations for the dynamic vertex cover problem using evolutionary algorithms. In Proceedings of the Genetic and Evolutionary Computation Conference, GECCO 2015, Madrid, Spain, July 11-15, 2015, pages 903–910. ACM, 2015.
  • [12] Mojgan Pourhassan, Vahid Roostapour, and Frank Neumann. Improved runtime analysis of RLS and (1+1) EA for the dynamic vertex cover problem. In 2017 IEEE Symposium Series on Computational Intelligence, SSCI 2017, Honolulu, HI, USA, November 27 - Dec. 1, 2017, pages 1–6, 2017.
  • [13] Feng Shi, Frank Neumann, and Jianxin Wang. Runtime analysis of randomized search heuristics for the dynamic weighted vertex cover problem. In Proceedings of the Genetic and Evolutionary Computation Conference, GECCO 2018, Kyoto, Japan, July 15-19, 2018, pages 1515–1522, 2018.
  • [14] Simon Fischer and Ingo Wegener. The one-dimensional Ising model: Mutation versus recombination. Theoretical Computer Science, 344(2–3):208–225, 2005.
  • [15] Dirk Sudholt. Crossover is provably essential for the Ising model on trees. In Proc. of GECCO ’05, pages 1161–1167. ACM Press, 2005.
  • [16] Andrew M. Sutton. Superpolynomial lower bounds for the (1+1) EA on some easy combinatorial problems. Algorithmica, 75:507–528, 2016.
  • [17] Dirk Sudholt and Christine Zarges. Analysis of an iterated local search algorithm for vertex coloring. In 21st International Symposium on Algorithms and Computation (ISAAC 2010), volume 6506 of LNCS, pages 340–352. Springer, 2010.
  • [18] Jakob Bossek and Dirk Sudholt. Time complexity analysis of RLS and (1+1) EA for the edge coloring problem. In Proceedings of the 15th ACM/SIGEVO Conference on Foundations of Genetic Algorithms, FOGA ’19, page 102–115, New York, NY, USA, 2019. Association for Computing Machinery.
  • [19] Jakob Bossek, Frank Neumann, Pan Peng, and Dirk Sudholt. Runtime analysis of randomized search heuristics for dynamic graph coloring. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO ’19), pages 1443–1451, New York, New York, USA, 2019. ACM Press.
  • [20] Michael R. Garey, David S. Johnson, and Larry Stockmeyer. Some simplified NP-complete problems. In Proceedings of the Sixth Annual ACM Symposium on Theory of Computing, pages 47–63, 1974.
  • [21] Venkatesan Guruswami and Sanjeev Khanna. On the hardness of 4-coloring a 3-colorable graph. In Proceedings of the 15th Annual IEEE Conference on Computational Complexity, page 188, 2000.
  • [22] Jörg Lässig and Dirk Sudholt. Design and analysis of migration in parallel evolutionary algorithms. Soft Computing, 17(7):1121–1144, 2013.
  • [23] Jörg Lässig and Dirk Sudholt. Analysis of speedups in parallel evolutionary algorithms and (1+λ) EAs for combinatorial optimization. Theoretical Computer Science, 551:66–83, 2014.
  • [24] W. Feller. An Introduction to Probability Theory and Its Applications, volume 1. Wiley, 3rd edition, 1968.