# Runtime Analysis of RLS and (1+1) EA for the Dynamic Weighted Vertex Cover Problem

In this paper, we perform theoretical analyses of the behaviour of an evolutionary algorithm and a randomised local search algorithm for the dynamic vertex cover problem based on its dual formulation. The dynamic vertex cover problem has already been theoretically investigated to some extent, and it has been shown that using its dual formulation to represent possible solutions can lead to better approximation behaviour. We improve some of the existing results: we prove a linear expected re-optimization time for a (1+1) EA to re-discover a 2-approximation when edges are dynamically deleted from the graph. Furthermore, we investigate a different setting for applying the dynamism to the problem, in which a dynamic change happens at each step with a probability P_D. We also extend these analyses to the weighted vertex cover problem, in which weights are assigned to vertices and the goal is to find a cover set of minimum total weight. As in the classical case, the dynamic changes that we consider on the weighted vertex cover problem are adding and removing edges to and from the graph. We aim at finding a maximal solution for the dual problem, which gives a 2-approximate solution for the vertex cover problem. This is equivalent to finding a maximal matching in the case of the classical vertex cover problem.

## 1 Introduction

Evolutionary algorithms [5] and other bio-inspired algorithms have been widely applied to combinatorial optimization problems. They are easy to implement and have the ability to adapt to changing environments. Because of this, evolutionary algorithms have also been widely applied to dynamic optimization problems [1, 12]. Most studies in this area consider dynamically changing fitness functions [11]. However, often resources such as the number of trucks available in vehicle routing problems may change over time, while the overall goal function, e.g. maximize profit or minimize cost, stays the same.

Evolutionary algorithms for solving dynamic combinatorial optimization problems have previously been theoretically analysed in a number of articles [4, 8, 10, 16, 20, 21]. Different analyses consider the impact of different parameters such as diversity, frequency or magnitude of the changes on the performance of evolutionary algorithms [14, 19]. Some of the classical problems that have been investigated in the dynamic context are the OneMax problem, the makespan scheduling problem and the vertex cover problem [4, 8, 10, 16]. In a recent work [20], the behaviour of evolutionary algorithms on linear functions under dynamically changing constraints is investigated.

We contribute to this area of research by investigating the (weighted) vertex cover problem in terms of its dual formulation, which becomes a maximal matching problem. The vertex cover problem has the constraint that all edges have to be covered by a feasible solution. We investigate the behaviour of evolutionary algorithms when this constraint changes through the addition and removal of edges. In [16] the vertex cover problem is considered in a simple dynamic setting where the rate of dynamic changes is small enough that the studied algorithms can re-optimize the problem after a dynamic change, before the following change happens. We call this dynamic setting the One-time Dynamic Setting. The article by Droste [4] on the OneMax problem presents another setting for dynamically changing problems, where a dynamic change happens at each step with probability p'. We call this dynamic setting the Probabilistic Dynamic Setting. In that article, the maximum rate of dynamic changes is found such that the expected optimization time of the (1+1) EA remains polynomial for the studied problem. In his analysis, the goal is to find a solution which has the minimum Hamming distance to an objective bit-string, and one bit of the objective bit-string changes at each time step with probability p', which results in dynamic changes of the fitness function over time. The author of that article proved that the (1+1) EA has a polynomial expected runtime if p' = O(log(n)/n), while for every substantially larger probability the runtime becomes superpolynomial. The results of that article hold even if the expected re-optimization time of the problem is larger than the expected time until the next dynamic change happens. Kötzing et al. [8] reproved some of the results of [4] using the technique of drift analysis, and extended the work to search spaces with more than two values for each dimension. Furthermore, they analysed how closely their investigated algorithm can track the dynamically moving target over time.

In this paper, we consider both dynamic settings and analyse two simple randomised algorithms on the vertex cover problem. This paper is an extension of a conference paper [17], in which the classical vertex cover problem was investigated. Here, we expand those analyses to the weighted vertex cover problem, where integer weights are assigned to vertices, and the goal is to find a set of vertices with minimum weight that covers all the edges.

Different variants of the classical randomised local search algorithm (RLS) and (1+1) EA have previously been investigated for the static vertex cover problem in the context of approximations. This includes a node-based representation examined in [6, 9, 13, 18] as well as a different edge-based representation analysed in [7] and a generalization of that for the weighted vertex cover problem analysed in [15].

For the dynamic version of the problem, three variants of those randomised search heuristics have been investigated in [16]. The investigated variants include an approach with the classical node-based representation in addition to two approaches with the edge-based representation introduced in [7]. One of the edge-based approaches uses a standard fitness function, while the other one uses a fitness function that imposes a large penalty on adjacent edges. The latter approach finds a 2-approximation from scratch in expected time O(m log m) [7], where m is the number of edges. The large penalty on adjacent edges in that approach results in finding a maximal matching, which induces a 2-approximate vertex cover. Considering the dynamic version of the problem where a solution that is a maximal matching is given before the dynamic change, Pourhassan et al. [16] proved that RLS re-optimises the solution in expected time O(m), i.e. after a dynamic change, it takes expected time O(m) to recompute a 2-approximate solution. They also proved that the (1+1) EA maintains the quality of 2-approximation in expected time O(m) when the dynamic change is adding an edge. For edge deletion, however, only the bound O(m log m) was obtained, which is the same as the expected time of finding a 2-approximate solution from scratch.

In this paper we improve the upper bound on the expected time that the (1+1) EA with the third approach requires to re-optimise the 2-approximation when edges are dynamically deleted from the graph. We improve this bound to O(m), m being the number of edges, which can be shown to be tight for this problem. Moreover, we investigate probabilistic dynamic changes for applying dynamism to the problem, in which a dynamic change happens with a certain probability, P_D, at each step. For the classical vertex cover problem, we prove that when P_D is small enough, the (1+1) EA with the third approach finds a 2-approximate solution from an arbitrary initial solution in expected polynomial time, and rediscovers a solution with the same quality in expected linear time after a dynamic change happens.

Using similar arguments, we also find pseudo-polynomial upper bounds on the expected time that RLS and the (1+1) EA require to re-optimise the 2-approximation for the dynamic weighted vertex cover problem in both dynamic settings. In the setting with probabilistic dynamic changes, we also obtain a pseudo-polynomial upper bound on the expected time that these two algorithms need to find a 2-approximate solution when starting from the solution that assigns a weight of 0 to all edges.

Similar to the classical vertex cover problem, in the weighted vertex cover problem, when we start with a 2-approximate solution and the input graph faces a dynamic change, the solution may become infeasible; but using the edge-based representation, it is not far from a new 2-approximate solution. Similar situations occur in other dynamic optimization problems, where the re-optimization time is usually smaller than the time required to optimize the problem from an arbitrary initial solution. Nevertheless, in the weighted version of the problem, the presence of weights makes our final bounds pseudo-polynomial rather than polynomial.

A number of strategies have been proposed and studied for selecting the mutation strength or step size adaptation for multi-value decision variables [2, 3]. Step size adaptation is a promising approach for finding a polynomial bound in such situations. This technique is studied in [15] on the static version of the weighted vertex cover problem with a simple fitness function that aims at finding a maximal solution for the dual problem; step size adaptation was proved to improve the efficiency of a randomised local search algorithm in that paper. Using this technique for the dynamic version of the problem, with a fitness function that prioritises minimizing the number of uncovered edges, is left for future work on this topic. The focus of this paper is dealing with the damage that can be caused by a dynamic change, and also with the situation where the distance to a maximal dual solution increases while the number of uncovered edges decreases. Moreover, since in the dual setting we are looking for a maximal solution rather than a maximum solution, the goal that the algorithm is moving towards can change when multiple mutations happen at the same step. This makes the analysis much harder for the (1+1) EA, for which our resulting upper bound is presented in terms of the weight of the optimal solution of the vertex cover problem in addition to the number of edges.

The rest of the paper is structured as follows. The problem definition and the investigated algorithm are given in Section 2. All analyses for the classical vertex cover problem are presented in Section 3, where Section 3.1 includes the analysis for improving the expected re-optimization time of (1+1) EA with the third approach for one-time dynamic deletion of an edge, and Section 3.2 includes the investigations on the probabilistic dynamic setting for the problem. The dynamic weighted vertex cover problem is analysed in Section 4, with the one-time and the probabilistic dynamic settings being investigated in Section 4.1 and Section 4.2, respectively. Finally, we conclude in Section 5. The analyses of Section 3 are based on the conference version of this work [17].

## 2 Preliminaries

In this section we present the definitions of the problems and the algorithms that are investigated in this paper. We divide the section into two parts. In the first part (Section 2.1), we give the formal definition of the vertex cover problem and the dynamic version of that problem. Moreover, we explain the edge-based approach for solving this problem and present the algorithm that we investigate for it: the (1+1) EA. In the second part (Section 2.2), we introduce the weighted vertex cover problem, its dynamic version, the investigated approach, and the algorithms that we analyse for it: RLS and the (1+1) EA.

### 2.1 The Dynamic Vertex Cover Problem and the Investigated Algorithms

For a given graph G = (V, E) with set of vertices V and set of edges E, the vertex cover problem is to find a subset of nodes V_C ⊆ V with minimum cardinality that covers all edges in E, i.e. ∀e ∈ E, e ∩ V_C ≠ ∅.

In the dynamic version of the problem, an arbitrary edge can be added to or deleted from the graph. We investigate two different settings for applying the dynamism to the problem. In the one-time dynamic setting, which has previously been analysed in [16], the changes on the instance of the problem take place every τ iterations, where τ is a polynomial function of the input size. We improve some results that were obtained in [16] for this setting. In the probabilistic dynamic setting, a dynamic change happens at each step with a probability P_D; therefore, in expectation, a dynamic change happens on the graph every 1/P_D steps.
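The probabilistic dynamic setting can be simulated as follows. This is a hypothetical sketch, not code from the paper: the function name, the set-based edge representation, and drawing the changed edge uniformly at random from a candidate list are our assumptions.

```python
import random

def maybe_dynamic_change(edges, candidate_edges, p_d, rng):
    """With probability p_d, toggle one uniformly random candidate edge:
    delete it if it is currently in the graph, add it otherwise."""
    if rng.random() < p_d:
        e = rng.choice(candidate_edges)
        if e in edges:
            edges.remove(e)
        else:
            edges.add(e)
    return edges
```

Over T steps, the expected number of changes is T · P_D, i.e. one change every 1/P_D steps in expectation, matching the setting described above.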

For solving the vertex cover problem by means of evolutionary algorithms, two kinds of representation have been suggested: the node-based representation and the edge-based representation. While the node-based representation is the natural one for this problem, and is used in most of the relevant works [6, 13, 9], the edge-based representation, introduced by Jansen et al. [7], has been suggested to speed up the approximation process. In their work [7], they proved that an evolutionary algorithm using the edge-based representation and a specific fitness function can find a 2-approximate solution in expected time O(m log m), where m is the number of edges in the graph.

In this representation, each solution is a bit string s ∈ {0,1}^m, describing a selection of edges E(s) ⊆ E. The cover set of s, denoted by VC(s), is the set of nodes on both sides of each edge in E(s). It should be noticed that the size of the solution may change according to the dynamic changes of the graph. In our analysis, m is the maximum number of edges in the graph.

The specific fitness function that Jansen et al. [7] have suggested for this representation is:

 f(s) = |VC(s)| + (|V|+1) · |{e ∈ E ∣ e ∩ VC(s) = ∅}| + (|V|+1) · (m+1) · |{(e, e′) ∈ E(s) × E(s) ∣ e ≠ e′, e ∩ e′ ≠ ∅}|.   (1)

The goal of the studied evolutionary algorithm is to minimize f(s), which consists of three parts. The first part is the size of the cover set, which we want to minimize. The second part is a penalty for edges that the solution does not cover, and the third part is an extra penalty on adjacent selected edges, inspired by the fact that a maximal matching induces a 2-approximate solution for the vertex cover problem.
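As a concrete illustration, the fitness function of Equation (1) can be sketched in a few lines of Python. This is a minimal sketch under our own conventions, not the authors' implementation: edges are vertex pairs and a solution carries one bit per edge.

```python
def cover_set(edges, s):
    """VC(s): the nodes on both sides of every selected edge."""
    return {v for e, bit in zip(edges, s) if bit for v in e}

def fitness(edges, n_vertices, s):
    """Eq. (1), to be minimised."""
    m = len(edges)
    vc = cover_set(edges, s)
    uncovered = sum(1 for (u, v) in edges if u not in vc and v not in vc)
    selected = [e for e, bit in zip(edges, s) if bit]
    # Eq. (1) counts ordered pairs of distinct adjacent selected edges.
    adjacent = sum(1 for a in selected for b in selected
                   if a != b and set(a) & set(b))
    return (len(vc) + (n_vertices + 1) * uncovered
            + (n_vertices + 1) * (m + 1) * adjacent)
```

On a triangle with edges (0,1), (1,2), (0,2), selecting only (0,1) is a maximal matching covering all edges, while selecting the two adjacent edges (0,1) and (1,2) triggers the matching penalty of the third part.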

Pourhassan et al. [16] proved that RLS with the edge-based representation and the fitness function given in Equation (1) re-discovers a 2-approximate solution in expected time O(m) if the initial solution is a maximal matching, and that this result also holds for the (1+1) EA if changes are limited to adding edges. For the (1+1) EA and dynamic deletion of an edge, the bound O(m log m) was obtained there, which is not tight. This bound is improved in this paper to the tight bound of O(m). The (1+1) EA of [16] for the edge-based representation is presented in Algorithm 1. In the dynamic setting that was studied in that paper, a large gap of iterations was assumed in which no dynamic changes happen. In addition to analysing this dynamic setting, in this paper we consider a second setting for applying the dynamism to the problem, where a dynamic change happens at each step with a certain probability.
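Algorithm 1 itself is not reproduced in this excerpt. The following Python sketch shows a standard (1+1) EA loop on the edge-based representation under our assumptions (bit-flip rate 1/m, offspring accepted when not worse under Eq. (1)); it is an illustration, not the paper's exact pseudocode.

```python
import random

def fitness(edges, n, s):
    # Fitness of Eq. (1), to be minimised.
    m = len(edges)
    sel = [e for e, b in zip(edges, s) if b]
    vc = {v for e in sel for v in e}
    unc = sum(1 for (u, v) in edges if u not in vc and v not in vc)
    adj = sum(1 for a in sel for b in sel if a != b and set(a) & set(b))
    return len(vc) + (n + 1) * unc + (n + 1) * (m + 1) * adj

def one_plus_one_ea(edges, n, steps, seed=0):
    rng = random.Random(seed)
    m = len(edges)
    s = [0] * m                                        # arbitrary initial solution
    for _ in range(steps):
        off = [b ^ (rng.random() < 1 / m) for b in s]  # flip each bit w.p. 1/m
        if fitness(edges, n, off) <= fitness(edges, n, s):
            s = off                                    # accept if not worse
    return s
```

Because acceptance is monotone in f, the fitness of the maintained solution never increases over the run.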

### 2.2 The Dynamic Weighted Vertex Cover Problem and the Investigated Algorithms

In the weighted vertex cover problem, the input is a graph G = (V, E) with vertex set V and edge set E, in addition to a positive weight function w on the vertices. In this version of the problem, the goal is to find a subset of nodes, V_C ⊆ V, that covers all edges and has minimum weight, i.e. the problem is to minimize ∑_{v ∈ V_C} w(v), s.t. ∀e ∈ E, e ∩ V_C ≠ ∅. For the dynamic weighted vertex cover problem, similar to the classical case, we consider dynamic changes of adding and removing edges to and from the graph.

A generalization of the edge-based approach for the classical vertex cover problem to the weighted vertex cover problem has been studied in [15], where the relaxed Linear Programming (LP) formulation of the problem is considered as the primal LP problem, and the dual form (which is also an LP problem) is solved by an evolutionary algorithm. Using the standard node-based representation, in which a solution is denoted by a bit-string x = (x_1, …, x_n) and each node v_i is chosen iff x_i = 1, the Integer Linear Programming formulation of the weighted vertex cover problem is:

 min ∑_{i=1}^{n} w(v_i) · x_i
 s.t. x_i + x_j ≥ 1   ∀(i, j) ∈ E
      x_i ∈ {0, 1}   ∀i ∈ {1, …, n}.

By relaxing the constraint x_i ∈ {0, 1} to 0 ≤ x_i ≤ 1, an LP problem is obtained, and the dual form of that problem is formulated as follows, where s_j denotes a weight on edge e_j:

 max ∑_{j=1}^{m} s_j
 s.t. ∑_{j ∈ {1,…,m} ∣ e_j ∩ {v} ≠ ∅} s_j ≤ w(v)   ∀v ∈ V.

The dual problem is to maximize the weights on the edges, under the constraint that, for each node, the sum of weights on the edges connected to that node does not exceed the weight of the node. We say that a node is tight when the weight of the node is equal to the sum of weights on the edges connected to it. Observe that in a maximal solution for the dual problem, at least one node of each edge is tight. Therefore, the set of tight nodes in a maximal dual solution, i.e.

 VC(s) = { v ∈ V ∣ w(v) = ∑_{j ∈ {1,…,m} ∣ e_j ∩ {v} ≠ ∅} s_j },

is a vertex cover for the primal problem, and the total weight of this cover is at most twice the sum of the weights of the edges.

It is already known that when the primal problem is a minimization problem, any feasible solution of the dual problem gives a lower bound on the optimal value of the primal problem (see [22] for the Weak Duality Theorem). Therefore, the total edge weight ∑_{j=1}^{m} s_j of a maximal solution s of the dual problem is less than or equal to the weight of the optimal solution of the weighted vertex cover problem. Consequently, the vertex cover VC(s) that is induced by a maximal dual solution, which has a weight of at most 2 · ∑_{j=1}^{m} s_j, has an approximation ratio of at most 2.
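The duality argument above can be checked mechanically. The sketch below is our own illustration (function name, list-of-pairs edge format, and dictionary-based node weights are assumptions): it extracts the tight nodes of a feasible dual solution.

```python
def tight_nodes(edges, s, w):
    """Given edge weights s (one per edge) and node weights w (a dict),
    return the nodes whose dual constraint is tight; for a maximal dual
    solution this set is a vertex cover of weight at most 2 * sum(s)."""
    load = {v: 0 for v in w}            # sum of edge weights at each node
    for (u, v), sj in zip(edges, s):
        load[u] += sj
        load[v] += sj
    assert all(load[v] <= w[v] for v in w), "dual solution is infeasible"
    return {v for v in w if load[v] == w[v]}
```

On the path 0–1–2 with node weights 1, 2, 1, the dual solution s = (1, 1) is maximal and makes every node tight; the induced cover has weight 4 ≤ 2 · (1 + 1).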

In this paper, we investigate the behaviour of the (1+1) EA and RLS with the edge-based approach in achieving a 2-approximate solution for the weighted vertex cover problem by finding a maximal solution for the dual problem. The solution representation and the mutation operator in these algorithms, which are presented in Algorithms 2 and 3, are different from what we have for the classical vertex cover problem. A solution s is an integer array that represents the weights on the edges, and a mutation on an edge increases or decreases this weight. The RLS algorithm chooses a uniformly random position of the solution in each iteration and decreases its value (which is the weight of the corresponding edge) with probability 1/2, increasing it otherwise. The (1+1) EA uses the same mutation operator but mutates each edge with probability 1/m. Finally, both algorithms accept the mutated solution only if it has a strictly greater fitness value than the current solution.
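A minimal sketch of the mutation just described follows; the unit step size and the clamping of weights at zero are our assumptions, and the paper's Algorithms 2 and 3 are not reproduced in this excerpt.

```python
import random

def rls_mutate(s, rng):
    """One RLS mutation on the dual solution s (a list of nonnegative
    integer edge weights): pick one uniformly random edge and decrease
    its weight with probability 1/2, otherwise increase it."""
    off = list(s)
    i = rng.randrange(len(off))
    off[i] += -1 if rng.random() < 0.5 else 1
    off[i] = max(off[i], 0)          # edge weights stay nonnegative
    return off
```

The (1+1) EA variant applies the same ±1 change independently to each edge with probability 1/m instead of to a single uniformly chosen position.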

Similar to what we had for the classical vertex cover problem, the fitness function for the weighted version of the vertex cover problem consists of three parts:

 f(s) = ∑_{i=1}^{m} s_i − (∑_{i=1}^{n} w(v_i) + 1) · |{e ∈ E ∣ e ∩ VC(s) = ∅}| − (m+1) · (∑_{i=1}^{n} w(v_i) + 1) · |{v ∣ ∑_{j ∈ {1,…,m} ∣ e_j ∩ {v} ≠ ∅} s_j > w(v)}|.   (2)

The first part is the sum of weights of the edges, which should be maximized. Next, there is a penalty for each uncovered edge. This part gives priority to decreasing the number of uncovered edges and lets the algorithm accept a move that decreases the number of uncovered edges, even if the total weight decreases at the same step. Finally, we have a huge penalty for each vertex that violates its constraint. With this penalty, a solution with a smaller number of violations is always better than one with more violations.
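The three parts of Equation (2) translate directly into code. In this sketch (ours, not the authors'), VC(s) is taken to be the set of tight nodes, following the duality discussion in Section 2.2.

```python
def weighted_fitness(edges, w, s):
    """Eq. (2), to be maximised: total edge weight, minus a penalty per
    uncovered edge, minus a larger penalty per violated node constraint."""
    m, W = len(edges), sum(w.values())
    load = {v: 0 for v in w}            # sum of edge weights at each node
    for (u, v), sj in zip(edges, s):
        load[u] += sj
        load[v] += sj
    tight = {v for v in w if load[v] == w[v]}        # VC(s)
    uncovered = sum(1 for (u, v) in edges
                    if u not in tight and v not in tight)
    violated = sum(1 for v in w if load[v] > w[v])
    return sum(s) - (W + 1) * uncovered - (m + 1) * (W + 1) * violated
```

On the path 0–1–2 with node weights 1, 2, 1, the maximal dual solution s = (1, 1) gets fitness 2, while the all-zero solution is penalised for its two uncovered edges.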

In this paper, both dynamic settings that we consider for the classical vertex cover problem are investigated for the weighted vertex cover problem as well. Section 3 and Section 4 present the analysis for the classical vertex cover problem and the weighted vertex cover problem, respectively. We perform the runtime analysis with respect to the number of fitness evaluations of the algorithms.

## 3 Analysis of the Classical Vertex Cover Problem

In this section, the performance of the (1+1) EA is studied on the dynamic version of the classical vertex cover problem. In Section 3.1, we improve the existing results on the re-optimisation time of this algorithm for the situation where a 2-approximate solution is given and an edge is dynamically removed from the graph. In the second part of this section, Section 3.2, we analyse the probabilistic dynamic setting for this problem, in which an edge is added to or deleted from the graph at each step of the algorithm with probability P_D. In the dynamic setting of Section 3.1, we assume that only one change happens and we are then given a large gap in which to re-discover a 2-approximate solution. This assumption is relaxed in the dynamic setting of Section 3.2, where multiple changes may happen before the algorithm finds a new 2-approximate solution.

### 3.1 Improving Re-optimisation Time of the (1+1) EA for Dynamic Vertex Cover Problem

In [16], using the (1+1) EA with the edge-based representation (Algorithm 1) and the fitness function given in Equation (1), it was shown that if a 2-approximate solution is given as the initial solution, then after a dynamic deletion happens on the graph, a large number of edges can be uncovered, and the re-optimization process was only shown to find a 2-approximate solution in expected time O(m log m). However, this upper bound is not tight, and it is the same as the expected time of finding a 2-approximation from an arbitrary solution. In this section, we improve the upper bound on the expected time of re-optimising the 2-approximation with this algorithm.

Figure 1 depicts the main challenge in the analysis of the (1+1) EA when an edge is dynamically deleted from the graph. The set of all nodes of the edges in Figure 1(a) is a 2-approximate solution for the minimum vertex cover problem. Let the dynamic change delete one of the selected edges. This move uncovers all edges that are connected to the two endpoints of the deleted edge (Figure 1(b)). In order to cover all these uncovered edges, the algorithm needs to pick only two new edges that bring the two endpoints back into the cover set. However, the (1+1) EA can perform a multiple bit-flip that makes the situation more complicated. For example, at the same time that one such edge is added to the solution, other selected edges can be removed from it (Figure 1(c)). Although this solution has fewer uncovered edges and will be accepted by the algorithm, it is more difficult to reach a maximal matching from this solution, and the algorithm now needs at least 4 bit-flips to re-optimize the problem.

We divide the analysis into two phases, each of which takes expected time O(m). In the first phase, we prove that the number of uncovered edges decreases to a constant, and in the second phase, we show that all edges become covered.

Consider a solution s that is a matching but not a maximal matching. The cover set, VC(s), derived from this solution is not a complete cover. Let C be the minimal set of vertices that need to be added to VC(s) to make it a complete cover (see Figure 1(b)). Initially, this set consists of at most two nodes (the two endpoints of the deleted edge), but during the run of the (1+1) EA, when some nodes are removed from this set, new nodes can be added to it, since more than one mutation can happen at the same step.

We define the set of nodes C1 as follows. Initially, let C1 consist of all nodes of C that are connected to more than 5 uncovered edges (e.g. in Figure 1(b)). Since the number of nodes in C is at most 2 at the beginning of the process, the initial number of nodes in C1 is also bounded by 2. During the process of the algorithm, more nodes with this property can be added to C1, but we only add them to C1 if, at the same step, at least one other node of C1 is included in the new solution and removed from C1.

In the analysis of the first phase, using the drift on the number of nodes in C1, we show that this set becomes empty in expected time O(m). After this point, no nodes can be added to C1, due to the definition of C1. Let U1 be the subset of uncovered edges that do not have a node in C1. We prove that at the end of the first phase, U1 consists of a constant number of edges in expectation, and using drift analysis on |U1|, we show that all edges become covered in expected time O(m).

Let C1^t and Δ_t^{C1} denote the set C1 at step t of the run of the algorithm and the drift on the size of this set at that step, respectively. In order to find E(Δ_t^{C1}), we first introduce a partitioning of the selected edges and prove a property (Lemma 1) of this partitioning and the number of uncovered edges.

Let E_i(s), i ≥ 1, be the set of selected edges in solution s such that deselecting any one of them uncovers i covered edges. Moreover, each covered edge of the graph is covered by either one node or two nodes of the induced node set of s. Let the set of edges that are covered from both ends be E''(s), i.e. E''(s) = {(u, v) ∈ E ∣ u ∈ VC(s) ∧ v ∈ VC(s)}. Based on the definitions of E_i(s) and E''(s) and the total number of covered edges, the following lemma gives us a relation that helps us in the proof of Lemma 2.

###### Lemma 1.

For any solution s, ∑_{i≥1} i · |E_i(s)| + |E''(s)| ≤ m − k, where k is the number of uncovered edges of solution s and m is the total number of edges.

###### Proof.

Let us first consider all covered edges except those that are in E''(s). By definition of E_i(s), i ≥ 1, deselecting each edge of E_i(s) uncovers i edges. This implies that each of these i edges is covered only by the deselected edge, and none of them is uncovered by deselecting another edge. Therefore, each covered edge that is not in E''(s) is counted at most once in ∑_{i≥1} i · |E_i(s)|.

On the other hand, by definition of E''(s), none of the edges of E''(s) is uncovered when a selected edge is deselected. Therefore, the edges of E''(s) are not counted in ∑_{i≥1} i · |E_i(s)|. Moreover, the number of covered edges is m − k, which completes the proof. ∎

Using Lemma 1, in the following lemma we find a lower bound on the value of E(Δ_t^{C1}).

###### Lemma 2.

At each step t of the (1+1) EA, E(Δ_t^{C1}) ≥ ((6 − 2e)/(em)) · |C1^t|.

###### Proof.

By definition of C1, changes to this set can only happen at steps where at least one node of this set is included in the solution. Moreover, a node of C1 is included in the solution if exactly one of its adjacent uncovered edges is selected and no other mutation happens at the same step; since each node of C1 is adjacent to at least 6 uncovered edges, this happens with probability at least 6/(em) at each step. In the proof of this lemma, we filter the steps and only consider the steps in which at least one node of C1 is included, and show that the expected change on |C1| in those steps is at least 1 − 2e/6; therefore, E(Δ_t^{C1}) ≥ (6 · |C1^t|/(em)) · (1 − 2e/6) = ((6 − 2e)/(em)) · |C1^t|.

From this point of the proof, we filter the steps and only consider the steps in which at least one node of C1 is included. Let the drift on |C1| in these steps be denoted by E_f(Δ_t^{C1}). We aim to find a lower bound on E_f(Δ_t^{C1}).

Let P_Acc denote the probability that the whole move of a step is accepted. When one node of C1 is included and no other mutation happens at the same step, the move is accepted by the algorithm. Therefore,

 P_Acc ≥ (1 − 1/m)^{m−1} ≥ 1/e.   (3)

Moreover, let Acc denote the event that the whole move of a step is accepted. Also, let P(bit_j ∣ Acc) denote the probability of mutating an edge e_j under the condition that the event Acc has occurred. By the definition of conditional probability, we know that

 P(bit_j ∣ Acc) = P(bit_j ∩ Acc)/P_Acc ≤ P(bit_j)/P_Acc.

Using Equation (3), we get

 P(bit_j ∣ Acc) ≤ e · P(bit_j),

where P(bit_j) is the unconditional probability of flipping bit j, which is 1/m. This implies that

 P(bit_j ∣ Acc) ≤ e/m.   (4)

The drift on |C1| in the filtered steps can be presented as

 E_f(Δ_t^{C1}) = E_f^+(Δ_t^{C1}) − E_f^−(Δ_t^{C1}),

where E_f^+(Δ_t^{C1}) is the expected number of nodes that are removed from C1 at each filtered step, and E_f^−(Δ_t^{C1}) is the expected number of nodes that are added to C1 at each filtered step. Since we only consider the steps in which at least one node of C1 is included, we have

 E_f^+(Δ_t^{C1}) ≥ 1.

We now find an upper bound on E_f^−(Δ_t^{C1}). Nodes can only be added to C1 when a selected edge whose deselection uncovers more than 5 edges is deselected, i.e. an edge of E_i(s) with i ≥ 6. We therefore need the expected number of mutating edges of E_i(s), i ≥ 6. Since deselecting each of these edges can add at most 2 nodes to C1, E_f^−(Δ_t^{C1}) is upper bounded by:

 E_f^−(Δ_t^{C1}) ≤ 2 ∑_{i=6}^{∞} |E_i(s)| · P(bit_j ∣ Acc).

From Equation (4), we get:

 E_f^−(Δ_t^{C1}) ≤ 2 ∑_{i=6}^{∞} |E_i(s)| · e/m = (1/m) ∑_{i=6}^{∞} (2e/i) · i · |E_i(s)| ≤ (2e/(6m)) ∑_{i=6}^{∞} i · |E_i(s)|.

On the other hand, Lemma 1 implies that ∑_{i=6}^{∞} i · |E_i(s)| ≤ m − k, which gives us:

 E_f^−(Δ_t^{C1}) ≤ 2e(m − k)/(6m) ≤ 2e/6.   (5)

Therefore, the drift on |C1| in the filtered steps is

 E_f(Δ_t^{C1}) = E_f^+(Δ_t^{C1}) − E_f^−(Δ_t^{C1}) ≥ 1 − 2e/6,   (6)

which completes the proof. ∎

In the following lemma, we prove that the set C1 becomes empty in expected time O(m). Moreover, in Lemmata 4 to 6, we prove that the expected total number of uncovered edges at the beginning of the second phase is a constant. Then, in Lemma 7, we find the drift on |U1| during the second phase, which helps us with the proof of Theorem 8.

###### Lemma 3.

Starting with a situation where |C1| = c, c ≥ 1 a constant, the expected time until the algorithm reaches a situation where C1 = ∅ is at most em(1 + ln(c))/(6 − 2e).

###### Proof.

According to Lemma 2, the drift on |C1| is at least ((6 − 2e)/(em)) · |C1^t|. Therefore, since the algorithm starts with |C1| = c and the minimum value of |C1| before reaching C1 = ∅ is 1, by multiplicative drift analysis we find an expected time of at most

 (1 + ln(c)) / ((6 − 2e)/(em)) = em(1 + ln(c))/(6 − 2e)

to reach a solution where C1 = ∅. ∎

###### Lemma 4.

Starting with |C1| = c, c a constant integer, the expected total number of steps at which a node can be removed from C1 is upper bounded by c/(1 − 2e/6).

###### Proof.

Similar to the proof of Lemma 2, for the proof of this lemma we filter the steps and only consider the steps at which at least one node of C1 is included, because no change on C1 can happen at any other step.

In the proof of Lemma 2, we showed in Equation (6) that the drift on |C1| in the filtered steps is at least 1 − 2e/6. Using additive drift analysis and the assumption that |C1| = c at the start of the process, we can conclude that we reach C1 = ∅ within an expected number of c/(1 − 2e/6) filtered steps. ∎

###### Lemma 5.

Starting with |C1| = c, c a constant integer, the expected number of edges that can be added to U1 by the end of the first phase is upper bounded by ce/(1 − 2e/6).

###### Proof.

During the process of the algorithm, at the steps where does not face a change, a change on the total number of uncovered edges can only happen through . Therefore, at these steps, to have an accepted move, the number of edges of can not increase. The reason is that the fitness function is defined in such a way that increasing the total number of uncovered edges is not accepted. Hence, in order to find the increments on the number of edges of , we only need to consider the steps in which at least one node from is included. We apply the same filtering on the steps that we had for the proof of Lemma 2 and find the expected number of edges that are added to in those steps.

In Equation (4) of the proof of Lemma 2, we found the minimum probability of flipping an edge, under the condition that the move is accepted. Based on this probability, for each the expected number of edges that are deselected from at each filtered step is . Since each of them uncover edges, the expected number of uncovered edges will be . On the other hand, Lemma 1 implies that . Therefore, we conclude that the expected number of uncovered edges that are added to at each filtered step is at most .

Moreover, according to Lemma 4, the expected number of steps at which a node from can be included in the solution is upper bounded by . This implies that the expected increase on by the end of the first phase is upper bounded by

$$\left(\frac{c}{1-\frac{2e}{6}}\right)\cdot e=\frac{ce}{1-\frac{2e}{6}}.$$ ∎

###### Lemma 6.

Starting with $|C_1|\leq c$, $c$ a constant integer, the expected number of edges in $E_u$ by the end of the first phase is upper bounded by $10+\frac{ce}{1-\frac{2e}{6}}$.

###### Proof.

By definition of $C_1$ and $E_u$, at the beginning of the process, all uncovered edges that do not have a node in $C_1$ must have a node among the endpoints of the dynamically deleted edge that are not in $C_1$. The number of these nodes is upper bounded by 2, because only the two endpoints of the deleted edge can be adjacent to uncovered edges. Also, the number of uncovered edges that are adjacent to each of them is at most 5, as otherwise the node would belong to $C_1$. Therefore, at the start of the process, $|E_u|\leq 10$.

Moreover, according to Lemma 5, starting with $|C_1|\leq c$, $c$ a constant integer, the expected number of edges that can be added to $E_u$ by the end of the first phase is upper bounded by $\frac{ce}{1-\frac{2e}{6}}$. Together with the initial number of uncovered edges in $E_u$, the total number of edges in $E_u$ is in expectation upper bounded by $10+\frac{ce}{1-\frac{2e}{6}}$. ∎
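To see that this bound is a constant, one can evaluate it for the case $c=2$ that arises after a single dynamic deletion (a quick check of our own):

```python
import math

e = math.e
c = 2  # |C_1| <= 2 just after one dynamic edge deletion
expected_Eu = 10 + c * e / (1 - 2 * e / 6)  # the Lemma 6 bound on E[|E_u|]
```

This evaluates to roughly $67.9$, so the logarithm of this quantity, used later in Theorem 8, is a constant.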

Denoting $|E_u|$ at step $t$ of the algorithm by $|E_u^t|$ and the drift on $|E_u|$ at that step by $\Delta_t$, the following lemma proves a lower bound on $\Delta_t$ during the second phase of the analysis.

###### Lemma 7.

At a step $t$ of (1+1) EA after reaching $|C_1|=0$, the drift on $|E_u|$ is $E[\Delta_t]\geq\frac{|E_u^t|}{em}$.

###### Proof.

After reaching $|C_1|=0$, the edges of $E_u$ are the only uncovered edges of the solution. Therefore, due to the definition of the fitness function, $|E_u|$ never increases during the run of the algorithm.

Moreover, selecting one edge of $E_u$ reduces the number of uncovered edges by at least one, and the move is accepted if no other mutations happen at the same step, which happens with probability $\frac{1}{m}\left(1-\frac{1}{m}\right)^{m-1}\geq\frac{1}{em}$. There are $|E_u^t|$ edges in this set, resulting in $|E_u^t|$ mutually exclusive events of improving single-mutation moves at each step. Therefore we can conclude that $E[\Delta_t]\geq\frac{|E_u^t|}{em}$. ∎
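The probability bound used here, $\frac{1}{m}\left(1-\frac{1}{m}\right)^{m-1}\geq\frac{1}{em}$, holds for every $m\geq 2$; a small script of our own confirms it:

```python
import math

def single_flip_prob(m):
    # Probability that standard bit mutation flips exactly one fixed bit
    # out of m and leaves the other m - 1 bits unchanged.
    return (1 / m) * (1 - 1 / m) ** (m - 1)

# (1 - 1/m)^(m-1) >= 1/e for all m >= 2, which yields the 1/(em) lower bound.
ok = all(single_flip_prob(m) >= 1 / (math.e * m) for m in range(2, 1000))
```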

We now prove the main theorem of this section. A dynamic change affects the graph by either deleting an edge or adding one. However, it has already been shown that (1+1) EA restores the quality of $2$-approximation in expected time $O(m)$ when a new edge is added dynamically [16]. In Theorem 8 we prove that the expected re-optimisation time of (1+1) EA after a dynamic deletion is also $O(m)$.

###### Theorem 8.

Starting with a $2$-approximate solution $s$, which is a maximal matching, (1+1) EA rediscovers a $2$-approximation when one edge is dynamically deleted from the graph in expected time $O(m)$.

###### Proof.

Let $e'$ be the edge that is deleted from the graph. If $e'\notin s$, then $s$ is still a maximal matching and corresponds to a $2$-approximate vertex cover. If $e'\in s$, then it is deleted from the solution as well. The new $s$ is still a matching but may not be a maximal matching. By Lemma 22 of [16], we know that a non-matching solution is never accepted by the algorithm; therefore, we only need to find the expected time to reach a solution with no uncovered edges, as such a solution is a maximal matching and induces a $2$-approximate solution.

The number of uncovered edges of $s$ after the dynamic deletion can be linear in the number of vertices, but all of them can be covered by including the two nodes of the deleted edge; therefore, $|C_1|\leq 2$ holds just after the dynamic change. Moreover, according to Lemma 3, in expected time $O(m)$ the first phase ends, as the algorithm reaches a situation where $|C_1|=0$.

At the beginning of the second phase, the set $E_u$, which now includes all the uncovered edges of the current solution, can a priori have a size between $0$ and $m$, but due to Lemma 6, we know that the expected size of this set is constant. If we denote by $T$ the time required until reaching $|E_u|=0$, then by the law of total expectation we have

$$E[T]=E\big[E[T\mid |E_u|]\big].$$

Furthermore, according to Lemma 7, during the second phase of the analysis we have $E[\Delta_t]\geq\frac{|E_u^t|}{em}$. Moreover, the minimum value of $|E_u|$ before reaching $|E_u|=0$ is 1. Therefore, using multiplicative drift analysis, we have

$$E[T\mid |E_u|]\leq\frac{1+\ln(|E_u|)}{\frac{1}{em}}=em+em\ln(|E_u|),$$

which implies

$$E[T]\leq E\big[em+em\ln(|E_u|)\big]=em+em\cdot E[\ln(|E_u|)].$$

Now, by applying Jensen’s Inequality to the concave function $\ln$, we find that $E[\ln(|E_u|)]\leq\ln(E[|E_u|])$, which together with the above inequality implies

$$E[T]\leq em+em\cdot\ln(E[|E_u|])\leq em+em\cdot\ln\left(10+\frac{2e}{1-\frac{2e}{6}}\right)=O(m).$$

The last inequality holds due to Lemma 6.
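As a side check of our own (not part of the proof), the constant hidden in this $O(m)$ bound is modest:

```python
import math

e = math.e

def phase_two_bound(m):
    # E[T] <= em + em * ln(10 + 2e/(1 - 2e/6)), the bound derived above.
    return e * m + e * m * math.log(10 + 2 * e / (1 - 2 * e / 6))

ratio = phase_two_bound(1000) / 1000  # constant factor in front of m
```

The constant is below 15, so the second phase contributes $O(m)$ just like the first.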

Altogether, since both phases of our analysis until finding a 2-approximate solution take expected time $O(m)$, the theorem is proved. ∎
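To illustrate the process analysed in Theorem 8, the following self-contained sketch (our own illustration, not the paper's code; all names are ours) runs a (1+1) EA on the edge-based representation: it starts from a maximal matching, dynamically deletes the matched edge, and counts the steps until all edges are covered again. The fitness comparison rejects non-matchings and any increase in uncovered edges, mirroring the assumptions used in the proof.

```python
import random

def uncovered_count(edges, sel):
    # Edges with no endpoint covered by a selected edge are uncovered.
    covered = {v for i in sel for v in edges[i]}
    return sum(1 for (u, v) in edges if u not in covered and v not in covered)

def is_matching(edges, sel):
    ends = [v for i in sel for v in edges[i]]
    return len(ends) == len(set(ends))

def reoptimise(edges, sel, rng, max_steps=200_000):
    # (1+1) EA with standard bit mutation over the edge set.  The fitness
    # ranks matchings first, then fewer uncovered edges, so a non-matching
    # or an increase in uncovered edges is never accepted.
    m = len(edges)
    fit = lambda s: (0 if is_matching(edges, s) else 1, uncovered_count(edges, s))
    cur = set(sel)
    for step in range(1, max_steps + 1):
        child = {i for i in range(m) if (i in cur) != (rng.random() < 1 / m)}
        if fit(child) <= fit(cur):
            cur = child
        if fit(cur) == (0, 0):  # maximal matching => 2-approximate cover
            return step, cur
    return max_steps, cur

# Double star: the single edge (0, 1) is a maximal matching on its own.
edges = [(0, 1), (1, 2), (1, 3), (1, 4), (0, 5), (0, 6)]
sel = {0}
# Dynamic change: delete the matched edge (0, 1); all edges become uncovered.
edges, sel = edges[1:], set()
steps, final = reoptimise(edges, sel, random.Random(1))
```

On this small instance the EA quickly re-selects one edge at each of the two star centres, reaching a maximal matching whose endpoints form a 2-approximate vertex cover.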

### 3.2 Complexity Analysis for the Dynamic Vertex Cover Problem With Probabilistic Dynamic Changes

In this section, we consider the probabilistic setting for the dynamic vertex cover problem, in which a dynamic change happens on the graph at each step of the algorithm with probability $P_D$. Similar to the previous section, we assume that the maximum number of edges in the graph is $m$. The analysis of this section shows that when $P_D$ is sufficiently small, (1+1) EA can find a 2-approximate solution in expected polynomial time. Moreover, we show that if a maximal matching solution is provided before the first dynamic change, (1+1) EA can re-discover a 2-approximate solution in expected linear time. Assuming that $P_D\leq\frac{1}{2000em}$, in the first theorem of this section we show that this upper bound holds.

We use the same definition of $C_1$ and $E_u$ that we had in Section 3.1, except that when a dynamic change happens, new nodes that can cover more than 5 edges are added to $C_1$ if they are adjacent to the edge that has been dynamically deleted. If we did not have dynamic changes, the expected changes of $C_1$ and $E_u$ would be the same as in the previous section. Observe that with this definition, each dynamic change can add at most 2 nodes to $C_1$ and 10 edges to $E_u$. We start with a couple of lemmata that give the drift on $|C_1|$ and an upper bound on the expected time until $|C_1|=0$. Then in Lemmata 11 and 12 we find the expected increase that happens on $|E_u|$ during the process of the algorithm until reaching $|C_1|=0$. Moreover, in Lemmata 13 and 14, we find the expected change that happens on $|E_u|$ during a phase in which $|C_1|=0$ holds. Finally, using all these lemmata, we prove the main results of this section in Theorems 15 and 16.

###### Lemma 9.

If $P_D\leq\frac{1}{2000em}$, then unless we reach a situation where $|C_1|=0$, at each step of (1+1) EA we have $E(\Delta_t C_1)\geq\frac{1996|C_1|}{4000em}$.

###### Proof.

The drift on the size of $C_1$ in the dynamic setting that we are analysing in this section consists of the expected change that the (1+1) EA makes on $|C_1|$ in addition to the expected change that is caused by the dynamic changes of the graph. We denote the latter by $E(\Delta_D C_1)$. Lemma 2 gives us the drift on $|C_1|$ obtained by the (1+1) EA, with a value of at least $\frac{6-2e}{em}\cdot|C_1|$. Therefore, for the total drift on $|C_1|$ we have

$$E(\Delta_t C_1)\geq\frac{6-2e}{em}\cdot|C_1|+E(\Delta_D C_1).$$

Each dynamic change adds at most two new nodes to $C_1$. Moreover, a dynamic change takes place at each step with probability $P_D\leq\frac{1}{2000em}$. Thus, the expected increase on $|C_1|$ caused by a dynamic change at each step is at most $\frac{2}{2000em}$, i.e. $E(\Delta_D C_1)\geq-\frac{2}{2000em}$, and we have

$$E(\Delta_t C_1)\geq\frac{6-2e}{em}\cdot|C_1|-\frac{2}{2000em}.$$

Knowing that $|C_1|\geq 1$, we find

$$E(\Delta_t C_1)\geq\frac{2000(6-2e)|C_1|-2}{2000em}\geq\frac{1996|C_1|}{4000em}.$$ ∎
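The last inequality can be checked mechanically: the common factor $\frac{1}{em}$ cancels, leaving an inequality between linear functions of $|C_1|$ (our own check):

```python
import math

e = math.e
# Scaled form of the final inequality in Lemma 9, after cancelling 1/(em):
# (2000(6-2e)x - 2) / 2000 >= 1996 x / 4000  for every integer x >= 1.
ok = all(2000 * (6 - 2 * e) * x - 2 >= 1996 * x / 2 for x in range(1, 10_001))
```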

###### Lemma 10.

Starting with a situation where $|C_1|\leq c$, $c$ a constant integer, the expected time until the algorithm reaches a situation where $|C_1|=0$ is upper bounded by $\frac{4000em(1+\ln(c))}{1996}$. Furthermore, with probability at least $1-e^{-r}$, for any $r>0$, the required time until reaching $|C_1|=0$ is upper bounded by $\frac{4000em(r+\ln(c))}{1996}$.

###### Proof.

With a similar argument to the one in Lemma 3, by means of Lemma 9 and multiplicative drift analysis, we find an expected time of at most

$$\frac{1+\ln(c)}{\frac{1996}{4000em}}=\frac{4000em(1+\ln(c))}{1996}$$

to reach a solution where $|C_1|=0$. Moreover, by multiplicative drift tail bounds, we can conclude that with probability at least $1-e^{-r}$ the number of required steps until reaching the desired solution is at most

$$\frac{r+\ln(c)}{\frac{1996}{4000em}}=\frac{4000em(r+\ln(c))}{1996}.$$ ∎
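The two bounds can be packaged as simple formulas (the helper names are ours); for $c=2$ the expected-time bound is about $9.3m$:

```python
import math

e = math.e

def expected_time(m, c):
    # Multiplicative drift: E[T] <= (1 + ln(X_0)) / delta
    # with delta = 1996 / (4000 e m) and X_0 <= c.
    return 4000 * e * m * (1 + math.log(c)) / 1996

def tail_time(m, c, r):
    # Tail bound: P(T > (r + ln(X_0)) / delta) <= e^{-r}.
    return 4000 * e * m * (r + math.log(c)) / 1996

coeff = expected_time(1, 2)  # constant in front of m for c = 2
```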

###### Lemma 11.

Starting with a situation where $|C_1|\leq 2$, until the algorithm reaches $|C_1|=0$, the expected number of steps in which $|C_1|$ changes is at most $22+\frac{77}{1996}$.

###### Proof.

By definition of $C_1$, changes on this set can only happen at the steps where either a dynamic change happens or at least one node of this set is included in the solution. From Equation (6), we know that the expected change that happens on $|C_1|$ at the steps where at least one node of this set is included in the solution is at least $1-\frac{2e}{6}$. Here, we first find the expected increase that dynamic changes can cause on $|C_1|$, which together with the initial value of $|C_1|$ gives us the expected total decrease that should happen on $|C_1|$ to reach 0. Then we use the constant drift of Equation (6) to find the expected number of required steps of that kind.

According to Lemma 10, in expectation it takes at most $\frac{4000em(1+\ln(2))}{1996}$ steps to reach $|C_1|=0$, during which, in expectation, $P_D\cdot\frac{4000em(1+\ln(2))}{1996}$ dynamic changes happen. Since $P_D\leq\frac{1}{2000em}$, the expected number of dynamic changes in this phase is at most $\frac{2(1+\ln(2))}{1996}$. Each dynamic change increases $|C_1|$ by at most 2. Therefore, together with the initial value of at most 2, the expected total decrease that needs to happen on $|C_1|$ to reach $|C_1|=0$ is at most $2+\frac{4(1+\ln(2))}{1996}$.

Now we only need to count the number of steps in which at least one node of $C_1$ is added to the solution. Since at each of these steps $|C_1|$ is reduced by an expected value of at least $1-\frac{2e}{6}$, denoting the expected number of these steps until reaching $|C_1|=0$ by $T_{C_1}$ and the total required decrease by $C_1^D$, we have

$$E[T_{C_1}]=E\left[E\left[T_{C_1}\mid C_1^D\right]\right]=E\left[\frac{C_1^D}{1-\frac{2e}{6}}\right].$$

By linearity of expectation, we have

$$E[T_{C_1}]=\frac{E\left[C_1^D\right]}{1-\frac{2e}{6}}\leq\frac{2+\frac{4(1+\ln(2))}{1996}}{1-\frac{2e}{6}}<22+\frac{73}{1996}.$$

Together with the expected number of steps at which a dynamic change happens, the expected number of steps in which $|C_1|$ changes is upper bounded by

$$22+\frac{73}{1996}+\frac{2(1+\ln(2))}{1996}<22+\frac{77}{1996}.$$ ∎
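The numeric steps above are easy to verify directly (our own check):

```python
import math

e = math.e
exp_dyn_changes = 2 * (1 + math.log(2)) / 1996     # dynamic changes before |C_1| = 0
total_decrease = 2 + 2 * exp_dyn_changes           # E[C_1^D]: start <= 2, +2 per change
filtered_steps = total_decrease / (1 - 2 * e / 6)  # steps including a node of C_1
total_steps = filtered_steps + exp_dyn_changes     # all steps where |C_1| changes
```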

Let $E_u^t$ and $C_1^t$ denote $E_u$ and $C_1$ at step $t$ of the run of the algorithm, respectively. Moreover, let $\Delta_{t_1,t_2}$ denote the expected change that happens on the size of $E_u$ from step $t_1$ to step $t_2$, when $|C_1^{t_1}|\leq 2$ and $|C_1^{t_2}|=0$. In the following lemma, we find an upper bound on $\Delta_{t_1,t_2}$.

###### Lemma 12.

Consider steps $t_1$ and $t_2$ where $|C_1^{t_1}|\leq 2$ and $|C_1^{t_2}|=0$. We have $\Delta_{t_1,t_2}<600+\frac{2130}{1996}$.

###### Proof.

During the process of the algorithm, at the steps where neither a dynamic change happens nor $|C_1|$ changes, to have an accepted move the number of edges of $E_u$ cannot increase. Therefore, we only need to find the expected increase on $|E_u|$ during the dynamic changes and during the steps where $|C_1|$ changes.

Let $D$ be a random variable denoting the number of dynamic changes that happen before reaching $|C_1|=0$. The expected increase of $|E_u|$ as a result of these changes, denoted by $\Delta_D$, is at most $10D$, because each dynamic change increases $|E_u|$ by at most 10. Similar to the proof of Lemma 11, we can show that the expected value of $D$ is at most $\frac{2(1+\ln(2))}{1996}$. By the law of total expectation and by linearity of expectation, we find the expected value of $\Delta_D$ to be at most $10\cdot E[D]\leq\frac{20(1+\ln(2))}{1996}<\frac{34}{1996}$.

Let $T'$ be the number of steps at which $|C_1|$ changes before reaching $|C_1|=0$. In Equation (4) of the proof of Lemma 2, we found an upper bound on the probability of flipping an edge, under the condition that the move is accepted. This probability bounds the expected number of edges that are deselected from the solution at each such step. Moreover, by Lemma 11 we have $E[T']\leq 22+\frac{77}{1996}$.

Since each of these edges, when deselected, uncovers only a bounded number of edges, at most $10e$ edges in expectation are added to $E_u$ at each of these steps. Denoting the increase of $|E_u|$ as a result of these changes by $\Delta_{T'}$, by the law of total expectation we find the expected value of $\Delta_{T'}$ to be at most $10e\cdot E[T']\leq 10e\left(22+\frac{77}{1996}\right)=220e+\frac{770e}{1996}$.

Together with the increase that happens at the steps of dynamic changes, the expected increase on $|E_u|$ until reaching $|C_1|=0$ is upper bounded by

$$220e+\frac{770e}{1996}+\frac{34}{1996}<600+\frac{2130}{1996}.$$ ∎
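Again the constants can be verified numerically (our own check):

```python
import math

e = math.e
from_c1_steps = 10 * e * (22 + 77 / 1996)     # = 220e + 770e/1996
from_dynamic = 20 * (1 + math.log(2)) / 1996  # bounded above by 34/1996
total = from_c1_steps + 34 / 1996
```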

###### Lemma 13.

Consider a phase of consecutive steps in which dynamic changes do not happen and $|C_1|=0$ holds. Let $|E_u^{\mathrm{str}}|$ and $|E_u^{\mathrm{end}}|$ be the value of $|E_u|$ at the start and at the end of this phase, respectively. We have

 E[|Estru|−|Eendu|∣|E