Evolutionary Algorithms (EAs) are stochastic search algorithms that have been used to solve many real-world optimization problems [34, 47, 51]. As one of the primary operators in the framework of EAs, the mutation operator has a significant influence on the performance of an EA. Over the past decades, various strategies for controlling the parameters of the mutation operator have been developed to improve the performance of EAs. Some of them were concerned with mutation operators for binary search spaces (e.g., [3, 20, 21, 29, 43, 45, 48]), while others were dedicated to continuous search spaces [1, 23, 42, 50]. In this paper, we restrict our investigation to the former.
As the most commonly used mutation operator for binary search spaces, the so-called bitwise mutation operator flips each bit of an individual (solution) independently with a uniform probability, which is called the mutation rate. Early investigations often employed a fixed mutation rate over the whole optimization process, but further studies have revealed that a time-variable mutation rate scheme might be better than a fixed mutation rate. According to Bäck, Hinterding et al. and Thierens, there are three classes of time-variable mutation rate schemes: dynamic mutation rate schemes, adaptive mutation rate schemes and self-adaptive mutation schemes. Over the past decades, a large number of empirical investigations have been dedicated to showing the advantages of various time-variable mutation rate schemes. Holland
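The bitwise mutation operator described above can be sketched in a few lines of Python. The dynamic schedule shown alongside it is a hypothetical illustration (loosely inspired by doubling schemes from the literature), not a scheme defined in this paper, and all names are our own:

```python
import random

def bitwise_mutation(parent, mutation_rate, rng=random):
    """Flip each bit of a binary string (list of 0/1 values)
    independently with probability `mutation_rate`; a new list is
    returned and the parent is left unchanged."""
    return [1 - bit if rng.random() < mutation_rate else bit
            for bit in parent]

def doubling_schedule(t, n):
    """Hypothetical dynamic mutation rate scheme: the rate doubles
    every generation, restarting from 1/n once it would exceed 1/2."""
    cycle_len = max(1, (n // 2).bit_length())
    return min(0.5, (2 ** (t % cycle_len)) / n)
```

A time-variable scheme is thus simply a function of the generation index that an EA consults before each mutation step.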
proposed a time-variable mutation rate scheme for the Genetic Algorithm (GA). Fourteen years later, Fogarty designed a number of dynamic mutation rate schemes with which the performance of GAs was significantly improved on some static optimization problems. Inspired by Evolution Strategies (ES), Bäck proposed a self-adaptive mutation scheme for GAs in the continuous search space. Later, he carried out both theoretical analysis and empirical study to show that for multimodal static optimization problems there might exist some “Optimal Mutation Rate Schedule” that can accelerate the search of EAs. Srinivas and Patnaik proposed a GA with an adaptive mutation rate scheme and an adaptive crossover scheme (the Adaptive Genetic Algorithm, AGA). They empirically showed that AGA outperformed the Simple GA (SGA) on a number of static benchmark optimization problems. Smith and Fogarty carried out empirical comparisons between an EA with a self-adaptive mutation rate scheme and a number of EAs with different fixed mutation rates, and found that the former outperformed the latter on a number of static optimization problems. Thierens designed two adaptive mutation rate schemes, whose advantages over a fixed mutation rate were demonstrated by experimental studies.
In comparison with empirical studies, theoretical investigations on time-variable mutation rate schemes are much rarer. Droste, Jansen and Wegener [18, 29] analyzed a (1+1) EA with a dynamic mutation rate scheme, and proved that the expected number of generations spent by this EA on the so-called PathToJump problem is polynomial. In contrast, with a probability that converges to 1 (with respect to the problem size n), the (1+1) EA with the fixed mutation rate 1/n will spend a super-polynomial number of generations to optimize the same PathToJump problem. On the basis of these results, Jansen and Wegener carried out further theoretical investigations of the above EA (with a dynamic mutation rate), and presented a number of time complexity results on some well-known benchmark problems, such as OneMax [3, 12, 19] and LeadingOnes [12, 41]. These investigations show that the specific dynamic mutation rate scheme outperforms the fixed mutation rate significantly (in terms of the runtime of the EA) on PathToJump, while it is slightly inferior on OneMax and LeadingOnes. As one might expect, there have also been a few studies demonstrating the potential weakness of some specific time-variable mutation rate schemes, but they are a drop in the bucket compared with the positive results. Besides, these negative results only showed that no time-variable mutation rate scheme is universally good for an EA over a large variety of problems, which does not exclude the usefulness of time-variable mutation rate schemes. In other words, no general result is available so far demonstrating the complete failure of time-variable mutation rate schemes under certain conditions.
In this paper, we prove for both individual-based and population-based EAs that time-variable mutation rate schemes (including any dynamic, adaptive and self-adaptive mutation schemes) cannot significantly outperform a well-chosen fixed mutation rate on some Dynamic Optimization Problems (DOPs) [9, 33]. Unlike static optimization problems, which maintain their problem instances, objective functions and constraints consistently, for DOPs “the objective function, the problem instance, or constraints may change over time”. In most investigated DOPs, such changes directly result in movements of the global optima [7, 37, 49]. However, these movements can follow distinctive manners in different DOPs. To characterize moving global optima representatively, we employ the so-called BDOP class in our theoretical investigations, which models the movements of global optima as random walks of binary strings in the solution space. The BDOP class is a representative DOP model for theoretical investigations, which consists of DOPs with different dynamic degrees. Concretely, the dynamic degree of a DOP belonging to the BDOP class (named a BDOP in this paper) mainly depends on the value of the so-called shifting rate, a parameter controlling the random walks of BDOPs’ global optima. A larger shifting rate tends to lead to a larger frequency and size of change for the moving global optimum.
Intuitively, the simplest way to cope with a DOP is to regard it as a new static optimization problem after every change. In this way, a DOP can be solved by traditional EAs or other problem-specific algorithms designed for static optimization problems. However, this kind of approach requires vast computational resources (e.g., computation time), which is not practical for many real-world DOPs (e.g., the dynamic vehicle routing problem and the online scheduling problem). Under these circumstances, intensive investigations have been carried out to understand the behaviors of EAs on DOPs or to design better EAs for DOPs, among which the theoretical investigations are often restricted to the (1+1) EA. From a theoretical perspective, Rohlfshagen et al. studied how the frequency and magnitude of the optimum’s movement influence the performance of the (1+1) EA for dynamic optimization. Surprisingly, on two specific problems called Magnitude and Balance, they showed that a larger frequency and magnitude do not necessarily lead to a worse performance of the (1+1) EA. Stanhope and Daida studied the fitness dynamics of the (1+1) EA on the so-called dynamic BitMatching problem that was frequently used in previous empirical investigations [17, 49, 52]. By a similar approach, Branke and Wang compared two simple EAs on the dynamic BitMatching problem. Droste [16, 17] studied the time complexity of the (1+1) EA with both one-bit mutation and bitwise mutation (with mutation rate 1/n) on the dynamic BitMatching problem. These results established the theoretical foundations of the field. However, theoretical investigation aiming to reveal the relationship between the adaptation of the mutation rate and the performance of EAs on DOPs is still lacking, although there have been some related empirical studies [38, 45].
In this paper, we theoretically demonstrate the relationship between time-variable mutation schemes and the performances of both individual-based and population-based EAs on the BDOP class. In our study, we adopt the classical time complexity criterion, the first hitting time [24, 25, 26], to measure the performances of EAs theoretically. Based on this measure, an EA is said to be efficient on a DOP if and only if the corresponding first hitting time is polynomial in the problem size with a probability that is at least the reciprocal of a polynomial function of the problem size. Otherwise, the EA is said to be inefficient (on the DOP). According to the popular perspective that adaptation of the mutation rate might be helpful, one might expect that there is some time-variable mutation scheme that can improve the performances of EAs so that they can cope with BDOPs efficiently. However, in this paper, we find that once the asymptotic order of the shifting rate exceeds a certain threshold, the BDOPs become essentially hard for the (1+1) EA with any time-variable mutation scheme in which the mutation rate at every generation is bounded from above; when the asymptotic order of the shifting rate exceeds a larger threshold, the BDOPs become essentially hard for the (1+λ) EA (the (1+λ) EA is a population-based EA which maintains a unique parent individual and generates λ offspring individuals at each generation; λ can be any polynomial function of the problem size n) with any time-variable mutation scheme in which the mutation rate at every generation is bounded from above. Case studies on an instance of the BDOP class called the BitMatching problem further demonstrate the limitations of any time-variable mutation scheme via concrete time complexity results for both the (1+1) and (1+λ) EAs, and also show the positive impact of the population on the performances of EAs.
The main contributions of this paper and their significance are summarized as follows: First, our investigation provides theoretical evidence for the view that, compared with a fixed mutation rate, adopting a time-variable mutation rate scheme is not always a significantly better choice. It is the first time that general results covering any time-variable mutation rate scheme are given and proven rigorously. Second, this paper substantially increases the understanding of the role of mutation in the context of DOPs, and the relationship between time-variable mutation rate schemes and the time complexities of EAs is investigated theoretically in depth. Third, by comparing individual-based and population-based EAs on BDOPs, our investigation reveals theoretically for the first time that a population may have a positive impact on the performances of EAs for solving DOPs.
The rest of the paper is organized as follows: Section 2 introduces the DOPs and algorithms analyzed in this paper. Section 3 studies the impact of time-variable mutation rate schemes on the performance of the (1+1) EA over the BDOP class. Section 4 further studies the impact of time-variable mutation rate schemes on the performance of the (1+λ) EA over the BDOP class. Section 5 offers some discussion, interpretation and generalization of the theoretical results obtained in this paper. Section 6 concludes the paper.
2 Problem and Algorithm
In this section, we introduce some preliminaries for this paper, including the notations of asymptotic orders, the BDOP class and EAs discussed in this paper.
To facilitate our analysis, we first introduce some notations that are used in comparing the asymptotic growth order of functions. Let $f(n)$ and $g(n)$ be two positive functions of $n$:
$f(n) = O(g(n))$, iff $\exists c > 0, n_0 > 0$: $\forall n \ge n_0$, $f(n) \le c \cdot g(n)$ ($f(n)$ is asymptotically bounded above by $g(n)$ up to a constant factor, and the asymptotic order of $f(n)$ is no larger than that of $g(n)$);
$f(n) = \Omega(g(n))$, iff $g(n) = O(f(n))$ (the asymptotic order of $f(n)$ is no smaller than that of $g(n)$);
$f(n) = \Theta(g(n))$, iff $f(n) = O(g(n))$ and $f(n) = \Omega(g(n))$ both hold (the asymptotic order of $f(n)$ is the same as that of $g(n)$);
$f(n) = o(g(n))$, iff $\lim_{n \to \infty} f(n)/g(n) = 0$ ($f(n)/g(n)$ approaches $0$ when $n \to \infty$, and the asymptotic order of $f(n)$ is smaller than that of $g(n)$);
$f(n) = \omega(g(n))$, iff $\lim_{n \to \infty} f(n)/g(n) = \infty$ (the asymptotic order of $f(n)$ is larger than that of $g(n)$).
Moreover, to distinguish the polynomial functions from the super-polynomial functions of $n$, we utilize the following notations:
$f(n) = \mathrm{Poly}(n)$, iff $f(n) = \Omega(1)$ and $f(n) = O(n^{c})$ both hold for some constant $c > 0$;
$f(n) = \mathrm{SuperPoly}(n)$, iff $f(n) = \Omega(1)$ and $f(n) = \omega(n^{c})$ both hold for every constant $c > 0$.
In this paper, the above notations are further utilized in representing the asymptotic orders related to probabilities:
Definition 1 (Overwhelming Probability and Super-polynomially Small Probability).
A probability $P(n)$ is regarded as “an overwhelming probability” if and only if there exist some super-polynomial function $s(n)$ of the problem size $n$ and a positive integer $n_0$ such that $P(n) \ge 1 - 1/s(n)$ holds for all $n \ge n_0$. A probability $P(n)$ is regarded as “a super-polynomially small probability” (or “a probability that is super-polynomially close to $0$”) if and only if there exist some super-polynomial function $s(n)$ of the problem size $n$ and a positive integer $n_0$ such that $P(n) \le 1/s(n)$ holds for all $n \ge n_0$.
In the remainder of this section, we first describe the general DOP model for our theoretical analysis. After that, we present the concrete DOP analyzed in this paper. Finally, we introduce the EAs and time-variable mutation rate schemes.
2.2 A Theoretical DOP Model
In this subsection, we define a theoretical model for dynamic optimization problems on binary search spaces. Briefly, our DOP model characterizes a common feature of most DOPs investigated by the evolutionary computation community [33, 39, 45, 49, 52], namely that the global optimum of a DOP changes probabilistically over time. Like the DOPs intensively studied by the community, the DOP model allows uncertain events to occur at discrete time points only and to complete without delay. In this way, a shortest duration of a stationary objective function can be guaranteed, which greatly facilitates optimization algorithms and, in the meantime, reflects reasonable simplifications that are widely employed when solving sophisticated DOPs in practice. Taking the dynamic vehicle routing problem as an example, it is difficult to always take every real-time factor into account when searching for the optimal routing. Instead, such factors are often temporarily collected, and contribute to the modification of the objective function only at some discrete time points.
Concretely, we define a DOP phase to be a time interval in which the objective function of a DOP remains stationary. For the sake of simplicity, we assume the duration of each DOP phase is fixed, such that the time point index $t$ ($t \in \{0, 1, 2, \dots\}$, where $0$ is the starting time point) can be utilized to distinguish one DOP phase from another. At the beginning of the $t$-th DOP phase, the change with respect to the objective function occurs and finishes at time point $t$. Within the $t$-th DOP phase, the objective function, denoted by $f_t$, remains stationary and has a unique global optimum in the search space. The overall goal of solving the DOP is to maximize the objective function in the presence of movements of the global optimum. Besides, we do not consider constraint-based stationary objective functions in our investigations; models involving such settings are left as future work. Summarizing the above descriptions, we present the following definition:
Definition 2 (DOP Model).
A DOP is a maximization problem whose stationary objective function may change at any time point $t$ ($t \in \{1, 2, \dots\}$). At the $t$-th DOP phase, the fitness (value of the objective function) of a given solution $x$ is given by $f_t(x)$, where $f_t$ is the stationary objective function at the $t$-th DOP phase.
As stated above, a notable characteristic of most DOPs is that their global optima may change over time. In this paper, we characterize the movements of global optima as pure random walks in the binary solution space, which is a simple and natural way of modeling stochastic movements in theory:
Definition 3 (Bitwise Shifting Global Optimum (BSGO)).
The global optimum of a DOP is called a BSGO if it shifts following the rule $x^*_{t+1} = S(x^*_t)$, where $x^*_t$ is the global optimum at the $t$-th DOP phase, $S$ is a random operator that flips every bit of the input binary string independently with probability $p_s$, and $p_s$ is called the shifting rate.
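Under assumptions consistent with Definition 3, one step of a BSGO's random walk can be simulated as follows (an illustrative sketch; the function name is ours):

```python
import random

def shift_optimum(optimum, shifting_rate, rng=random):
    """One DOP phase transition for a BSGO: every bit of the current
    optimum (a list of 0/1 values) flips independently with
    probability `shifting_rate`.  The expected Hamming movement per
    phase is therefore n * shifting_rate."""
    return [1 - bit if rng.random() < shifting_rate else bit
            for bit in optimum]
```

Larger shifting rates make both the frequency and the size of the optimum's movement larger, matching the intuition stated for the BDOP class.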
The DOPs whose unique global optima are BSGOs form the BDOP class:
Definition 4 (BDOP Class).
For a DOP following the model defined in Definition 2, if it has a unique global optimum and that optimum is a BSGO, then the DOP is a BDOP, and we say that the DOP belongs to the BDOP class.
In the rest of the paper, theoretical investigations will be carried out on the BDOP class, where the algorithms for solving such optimization problems will be introduced in the next subsection.
2.3 Time-variable Mutation Rate Schemes and Evolutionary Algorithms
In this paper, both individual-based and population-based EAs are employed in our theoretical analysis, with the aim of demonstrating the impact of time-variable mutation rate schemes on different EAs when solving DOPs. Concretely, the individual-based EA studied in this paper is the (1+1) EA. At each generation, the (1+1) EA maintains a unique parent individual, and the parent individual generates a unique offspring individual via mutation; the selection operator preserves the one with better fitness between the parent and the offspring (i.e., 1 parent + 1 offspring). A concrete description of the (1+1) EA studied in this paper is given below:
Algorithm 1 ((1+1) EA).
Choose the initial individual $x_0$ randomly by the uniform distribution over the whole search space. Set the initial generation index $t = 0$. The $t$-th generation of the (1+1) EA consists of the following steps:
Mutation: Each bit of the parent individual $x_t$ is flipped with the mutation rate of the $t$-th generation, where the length of the binary string is the problem size $n$. After that, an offspring individual $x'_t$ is obtained.
Fitness Evaluation: Evaluate the fitness of $x_t$ and $x'_t$ based on the stationary fitness function $f_t$ at the $t$-th generation (the $t$-th DOP phase).
Selection: If $f_t(x'_t) \ge f_t(x_t)$, then set $x_{t+1} = x'_t$; otherwise, set $x_{t+1} = x_t$.
If the given stopping criterion is met after the $t$-th generation, then the (1+1) EA stops; otherwise, set $t \leftarrow t + 1$ and a new generation begins.
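Algorithm 1 can be sketched as follows; `fitness`, `shift`, and `scheme` are caller-supplied stand-ins for the phase objective, the optimum movement, and the time-variable mutation rate scheme (all names are illustrative, and the stopping criterion is simplified to a generation budget):

```python
import random

def one_plus_one_ea(n, fitness, shift, scheme, max_gens, rng=random):
    """Sketch of Algorithm 1 on a DOP.

    fitness(x, t): stationary objective of the t-th phase
    shift(t):      advances the problem to the next phase
    scheme(t):     mutation rate of the t-th generation
    """
    parent = [rng.randint(0, 1) for _ in range(n)]
    for t in range(max_gens):
        rate = scheme(t)
        child = [1 - b if rng.random() < rate else b for b in parent]
        # both parent and offspring are evaluated, since the objective
        # of phase t may differ from that of phase t - 1
        if fitness(child, t) >= fitness(parent, t):
            parent = child
        shift(t)  # the global optimum may move before the next phase
    return parent
```

On a static objective the selection step guarantees that the kept individual's fitness never decreases; on a DOP this guarantee disappears, since the phase objective itself moves.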
A significant difference between the above algorithm and the (1+1) EA for static problems is that at every generation the former evaluates not only the fitness of the offspring but also that of the parent, while the latter only evaluates the fitness of the offspring. The reason for employing two fitness evaluations in one generation of our (1+1) EA is that the fitness of the parent individual may change in response to a change of the objective function. Another difference between the above (1+1) EA and the traditional (1+1) EA is that the former allows the mutation rate to vary over generations (i.e., it adopts a time-variable mutation rate scheme), while the latter only adopts the fixed mutation rate $1/n$, where $n$ is the problem size. To make our description formal, the concrete definition of the time-variable mutation rate scheme for our (1+1) EA is given below:
Definition 5 (Time-variable mutation rate scheme for the (1+1) EA).
The time-variable mutation rate scheme of the (1+1) EA is a mapping $m: \{0, 1, 2, \dots\} \to [0, 1]$ from the generation index to a mutation rate. Such a scheme sets the mutation rate at the $t$-th generation to be $m(t)$, where the individuals are binary strings of length $n$, the problem size.
In addition to studying the time-variable mutation scheme used in the above (1+1) EA, we also study time-variable mutation schemes in the context of the following (1+λ) EA (λ is polynomial in the problem size $n$):
Algorithm 2 ((1+λ) EA).
Choose the initial individual $x_0$ randomly by the uniform distribution over the whole search space. Set the initial generation index $t = 0$. The $t$-th generation of the (1+λ) EA consists of the following steps:
Mutation: The parent individual $x_t$ generates λ offspring individuals independently. When generating the $i$-th offspring individual $x^{(i)}_t$ ($i \in \{1, \dots, \lambda\}$), each bit of the parent individual is flipped with the mutation rate assigned to the $i$-th offspring at the $t$-th generation.
Fitness Evaluation: Evaluate the fitness of $x_t$, $x^{(1)}_t$, $\dots$, $x^{(\lambda)}_t$ based on the stationary fitness function $f_t$ at the $t$-th generation (the $t$-th DOP phase).
Selection: Set $x_{t+1}$ to be the individual with the best fitness among $x_t$, $x^{(1)}_t$, $\dots$, $x^{(\lambda)}_t$.
If the given stopping criterion is met after the $t$-th generation, then the (1+λ) EA stops; otherwise, set $t \leftarrow t + 1$ and a new generation begins.
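One generation of Algorithm 2 can be sketched as below; note that, per the definition above, each offspring may use its own mutation rate (function and variable names are our own):

```python
import random

def one_plus_lambda_step(parent, rates, fitness, t, rng=random):
    """One generation of a (1+lambda) EA on the t-th DOP phase.

    `rates` supplies one mutation rate per offspring, so distinct
    offspring of the same generation may be produced with distinct
    rates, as Definition 6 allows."""
    pool = [parent]
    for rate in rates:
        pool.append([1 - b if rng.random() < rate else b
                     for b in parent])
    # extremely high selection pressure: keep the single best of the
    # 1 + lambda individuals as the next parent
    return max(pool, key=lambda x: fitness(x, t))
```

Setting `rates` to a single-element list recovers a (1+1)-style generation, which mirrors the special-case relationship discussed below.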
The (1+λ) EA is a population-based EA adopting the offspring-population strategy. To be specific, unlike those population-based EAs which maintain multiple parents and offspring at each generation, the (1+λ) EA maintains a unique parent individual and a population of λ offspring individuals generated from the same parent. To guarantee that there is a unique parent at each generation, the selection operator imposes extremely high selection pressure, and preserves the one with the best fitness among the total 1+λ individuals (i.e., 1 parent + λ offspring). Also, the (1+λ) EA can be considered as a special case of a more general population-based EA in which the selection operator always copies the selected individual to construct the parent population of the next generation. It is worth noting that the (1+λ) EA introduced above allows different offspring at the same generation to be generated via distinct mutation rates, which offers greater freedom for the adaptation of mutation rates than using the same mutation rate when generating all offspring individuals. Formally, the time-variable mutation rate scheme utilized by our (1+λ) EA is defined as follows:
Definition 6 (Time-variable mutation rate scheme for the (1+λ) EA).
The time-variable mutation rate scheme of the (1+λ) EA is a mapping $m: \{0, 1, 2, \dots\} \times \{1, \dots, \lambda\} \to [0, 1]$. Such a scheme sets the mutation rate for obtaining the $i$-th offspring individual at the $t$-th generation to be $m(t, i)$, where the individuals are binary strings of length $n$, the problem size.
In our theoretical investigations, we assume that there are always enough parallel computational resources available to support a polynomial number of simultaneous fitness evaluations, which is critical to make the (1+λ) EA valid for solving DOPs. Moreover, we also assume that the $t$-th generation of every EA starts at the beginning of the $t$-th DOP phase and finishes at the end of that phase, which is important for theoretical analysis since it avoids the degenerate cases where the objective function changes while an EA is carrying out fitness evaluations.
2.4 Measure of Time Complexity
So far we have introduced the problems and algorithms investigated in this paper. In this subsection, we present the measure of performance of EAs, which is indispensable to our theoretical studies. Traditionally, the performance of an EA on a static optimization problem can be measured by the first hitting time [24, 25, 26, 54, 53]. This concept measures the number of generations needed by an EA to find the optimum of a static optimization problem, and can be generalized to facilitate the theoretical analysis of evolutionary dynamic optimization. For the (1+1) and (1+λ) EAs on DOPs, we formally define the first hitting time as follows:
Definition 7 (First hitting time).
On a DOP, the first hitting time of a (1+λ) EA (λ is polynomial in the problem size $n$), denoted by $T$, is defined as follows:
$T = \min\{\, t \ge 0 \mid x^*_t \in \{x_t, x^{(1)}_t, \dots, x^{(\lambda)}_t\} \,\},$
where $x_t$ is the parent individual at the $t$-th generation, $x^{(1)}_t, \dots, x^{(\lambda)}_t$ are the offspring individuals generated by $x_t$, and $x^*_t$ is the global optimum at the $t$-th DOP phase. Setting $\lambda = 1$ in the above definition and replacing the corresponding notation yields the first hitting time of the (1+1) EA.
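The first hitting time can also be estimated empirically by simulation. The sketch below does so for a (1+1) EA on a BDOP whose phase objective counts matching bits (all names are our own, and a finite horizon censors runs that never hit the optimum):

```python
import random

def estimate_first_hitting_time(n, shifting_rate, scheme, horizon, seed=0):
    """Return the first generation t at which the parent or offspring
    equals the current optimum, or `horizon` if it is never hit
    within the budget (a censored observation)."""
    rng = random.Random(seed)
    optimum = [rng.randint(0, 1) for _ in range(n)]
    parent = [rng.randint(0, 1) for _ in range(n)]
    for t in range(horizon):
        rate = scheme(t)
        child = [1 - b if rng.random() < rate else b for b in parent]
        if parent == optimum or child == optimum:
            return t
        match = lambda x: sum(a == b for a, b in zip(x, optimum))
        if match(child) >= match(parent):
            parent = child
        # the optimum performs its bitwise random walk between phases
        optimum = [1 - b if rng.random() < shifting_rate else b
                   for b in optimum]
    return horizon
```

Averaging such estimates over many seeds gives an empirical counterpart to the probabilistic statements about first hitting times made in the following sections.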
Based on the problem and algorithms introduced in this section, in the rest of the paper we study time-variable mutation rate schemes in terms of first hitting times of EAs.
3 The (1+1) EA with Time-Variable Mutation Schemes
During the past decade, a number of studies have been dedicated to proving or validating that some specific time-variable mutation rate schemes help to improve the performances of EAs, although it is unclear whether this is generally true. In this section, we present several theoretical results concerning the performance of the (1+1) EA with different time-variable mutation rate schemes on BDOPs. In the first subsection, we offer a general result, demonstrating that BDOPs whose shifting rate exceeds a certain threshold cannot be solved efficiently by the (1+1) EA with any time-variable mutation rate scheme whose mutation rates are bounded from above. In the second subsection, we extend the above result to a specific BDOP called the BitMatching problem, and show that the (1+1) EA with any time-variable mutation rate scheme fails to optimize the BitMatching problem efficiently when the shifting rate is sufficiently large.
3.1 A General Result
The BDOPs studied in this subsection have shifting rates above a certain asymptotic threshold. The average movement of the global optimum at each DOP phase (which is equivalent to a generation of the (1+1) EA, as mentioned), measured by the Hamming distance, can still be small; this includes the case that at each DOP phase the global optimum changes by less than one bit on average. Intuitively, such a small movement speed of the global optimum seems unlikely to seriously affect the optimization process, and the (1+1) EA might cope with such situations by switching to appropriate mutation rates. In this subsection, however, Theorem 1 shows that even such small movements have a significant influence on the first hitting time of the (1+1) EA under any time-variable mutation scheme. The main result of this subsection is formally presented as follows:
Theorem 1. Given any BDOP whose shifting rate exceeds the aforementioned threshold and any time-variable mutation rate scheme whose mutation rates are bounded from above, the first hitting time of the (1+1) EA is super-polynomial with an overwhelming probability.
The above theorem holds for any time-variable mutation rate scheme whose mutation rates are bounded from above, which means that the largest mutation rate the (1+1) EA can switch to is limited. Within the allowed interval, the (1+1) EA can adjust its time-variable mutation rate freely following an oracle, i.e., at each generation the (1+1) EA can even choose the mutation rate best suiting the current situation. The theorem can be proven given the following lemma first:
Lemma 1. Given any BDOP with a sufficiently large shifting rate and any time-variable mutation rate scheme, the first hitting time of the (1+1) EA is super-polynomial with an overwhelming probability.
Proof Idea of Lemma 1. Generally speaking, when proving Lemma 1, we must keep in mind that the (1+1) EA can adjust its time-variable mutation rate following an oracle. To be specific, we must carry out a best-case analysis so as to bound all potential behaviors for catching the global optimum of a BDOP. Given a solution $x$, define the number of matching bits of the solution (to the global optimum $x^*_t$) to be the problem size minus the Hamming distance between the solution and the current global optimum (i.e., $n - H(x, x^*_t)$). A formal definition of the Hamming distance is given below:
Definition 8 (Hamming distance).
The Hamming distance between two solutions $x = (x_1, \dots, x_n)$ and $y = (y_1, \dots, y_n)$ ($x, y \in \{0, 1\}^n$) is given by $H(x, y) = \sum_{i=1}^{n} |x_i - y_i|$.
Let $x_t$ and $x'_t$ be the parent and offspring individuals of generation $t$ ($t \in \{0, 1, 2, \dots\}$) of the (1+1) EA, respectively. Let $y_t$ be the one with higher fitness between $x_t$ and $x'_t$ (at the $t$-th generation), and let $G_t = n - H(y_t, x^*_t)$ be the number of matching bits of $y_t$ to the global optimum $x^*_t$ of the $t$-th DOP phase. It follows from the above definitions that $G_t \in \{0, 1, \dots, n\}$.
The above definitions also imply that the overall mapping from the better individual of one generation to that of the next is determined not only by the (1+1) EA, but also by the optimum-shifting in the BDOP. The reason is that the (1+1) EA only maps one solution to another, while the optimum-shifting in the BDOP can be considered as a mapping that describes the movements of the global optimum. These two factors together determine the Hamming distance between a solution and the current global optimum at the $t$-th generation.
To explain the proof idea, we define a number axis (real number line) with respect to the number of matching bits (to the current global optimum), ranging from $0$ to $n$. As illustrated in Fig. 1, we further define several intervals on this axis:
The First Forbidden Interval and First LongJump Interval are defined as follows:
First Forbidden Interval: The First Forbidden Interval is the interval , where is the problem size.
First LongJump Interval: The First LongJump Interval is the interval .
In addition, the remaining middle interval is defined such that the three intervals partition the axis. The above decomposition is illustrated in Fig. 1. The purpose of defining the three intervals is to quantitatively characterize three general states of the (1+1) EA, respectively. Intuitively speaking, the First Forbidden Interval demonstrates the situation that the better one between the parent and offspring at the $t$-th generation is very close to the moving global optimum of a BDOP. Similarly, the First LongJump Interval indicates the situation that the solution found by the (1+1) EA is extremely far away from the global optimum, but could reach the First Forbidden Interval by an extremely long jump resulting from a very large mutation rate. Finally, the middle interval represents the situation that the (1+1) EA is still far from finding the optimum, no matter what concrete mutation rate it adopts.
In our best-case analysis for proving Lemma 1, as long as the (1+1) EA has found solutions belonging to the First Forbidden Interval or the First LongJump Interval, we optimistically consider that it has reached the global optimum. Noting that, with an overwhelming probability (in the sense of Definition 1), the (1+1) EA starts in the middle interval (i.e., the number of matching bits of the initial solution belongs to it), to prove Lemma 1 we only need to prove that the transition from the middle interval to the First Forbidden Interval or the First LongJump Interval is very unlikely to happen.
Formally, we need to prove the following propositions when proving Lemma 1:
Proposition A1.1: .
Proposition A1.2: .
Proposition A1.3: .
For the detailed proof following the above sketch, interested readers can refer to the appendix of the paper.
From Lemma 1, when proving Theorem 1 we only need to cope with smaller shifting rates satisfying the conditions and . Given the condition , we let in the proof of Lemma 1, where is an arbitrary positive constant. Having identified the above conditions, we define as follows:
Further, let be defined by
The purpose of introducing the notations and is to further define subintervals for the interval with respect to the number of matching bits of a solution to the current optimum, in addition to , and . Concretely, we consider the following new intervals:
Second Forbidden Interval: The Second Forbidden Interval is the interval , where is the problem size.
Primary Adjacent Intermediate Interval: The Primary Adjacent Intermediate Interval is the interval .
Secondary Adjacent Intermediate Interval: The Secondary Adjacent Intermediate Interval is the interval .
Second LongJump Interval: The Second LongJump Interval is the interval , where is the problem size.
Primary Remote Intermediate Interval: The Primary Remote Intermediate Interval is the interval .
Secondary Remote Intermediate Interval: The Secondary Remote Intermediate Interval is the interval .
The above intervals, together with the First Forbidden and First LongJump Intervals, are illustrated in Fig. 2. Generally speaking, this interval decomposition inherits and generalizes the intuitive idea utilized in the proof of Lemma 1. Similar to its counterpart in the proof of Lemma 1, the Second Forbidden Interval is used to characterize the state that a solution found by the (1+1) EA is very close to the current global optimum; the Second LongJump Interval is used to characterize the state that a solution found by the (1+1) EA is extremely far away from the current global optimum, which is very likely to further reach the optimum by employing an extremely large mutation rate. However, due to the different ranges of shifting rates considered in Lemma 1 and Theorem 1, the concrete sizes of the Second Forbidden and Second LongJump Intervals differ from those of their first counterparts. In addition to the configurations of the “forbidden” and “long-jump” intervals, here we employ some extra intervals which serve as intermediate intervals for reaching the “forbidden” and “long-jump” intervals. Concretely, at the very beginning of optimization, the solution found by the (1+1) EA belongs to the middle interval defined in the last subsection. To find a solution in the Second Forbidden Interval by employing a small mutation rate, the (1+1) EA must first find some solution in the Secondary Adjacent Intermediate Interval. Afterwards, the (1+1) EA needs to travel through the Primary Adjacent Intermediate Interval, i.e., try to find solutions that exceed it and finally reach the Second Forbidden Interval. On the other hand, to find a solution in the Second Forbidden Interval by employing an extremely large mutation rate, the (1+1) EA has to find some solution in the Second LongJump Interval first. Nevertheless, to reach the Second LongJump Interval, the (1+1) EA has to find some solution in the Secondary Remote Intermediate Interval first, and afterwards travel through the Primary Remote Intermediate Interval. In our best-case analysis, once the (1+1) EA has found solutions belonging to the Second Forbidden or Second LongJump Interval, we optimistically consider that it has reached the global optimum.
Proposition B1.1: .
Proposition B1.2: that satisfies and ,
Proposition B1.5: To travel through the corresponding intermediate interval, the (1+1) EA will spend a super-polynomial number of generations with an overwhelming probability.
One may note that Propositions B1.1, B1.3 and B1.5 correspond to our discussion in the previous paragraph. In addition, Propositions B1.2 and B1.4 are formal propositions concerning the gain of the better solution (between the parent and offspring at the $t$-th generation) found by the (1+1) EA, compared with that of the previous generation. However, the two propositions tackle different conditions concerning the shifting rate of the BDOP, the mutation rate and so on, which are crucial for the formal proofs of Propositions B1.3 and B1.5. A detailed proof of Theorem 1 can be found in the appendix of the paper.
Theorem 1 presents a general result showing that any BDOP with the stated shifting rate is essentially hard for the EA with any time-variable mutation rate scheme satisfying the stated bound. These essentially hard BDOPs, which characterize the movements of the global optimum as random walks in the binary solution space, demonstrate a potential weakness of time-variable mutation rate schemes on DOPs with a moving optimum. However, it is worth noting that the results obtained in this section concern only those time-variable mutation rate schemes whose mutation rates respect the stated upper bound. It is unclear whether an even larger mutation rate exceeding this bound would help, though knowing when to apply such an extreme mutation rate relies on some ideal oracle (i.e., we would need to know the consequences of such decisions in advance). To further validate the potential failure of all time-variable mutation rate schemes on a specific BDOP, we will extend Theorem 1 from the BDOP class to a concrete example, named the BitMatching problem.
3.2 A More Precise Result on BitMatching
In this subsection, we turn our attention to a specific example of the BDOP class called BitMatching. Concretely, the BitMatching problem is a BDOP using the following stationary function at each DOP phase:
where is the global optimum of BitMatching at the DOP phase.
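As the problem name and the "matching bits" notation used later in this section suggest, the stationary function counts the bit positions at which a solution agrees with the current (moving) optimum. The following sketch makes this concrete; the function names are our own, and modelling the random-walk movement of the optimum as independent per-bit flips with probability equal to the shifting rate is an illustrative assumption, not the paper's exact definition.

```python
import random

def bitmatching_fitness(x, target):
    """Sketch of the BitMatching objective: the number of bit positions
    where solution x agrees with the current global optimum `target`.
    The moving target itself is the unique optimum of each phase."""
    return sum(int(xi == ti) for xi, ti in zip(x, target))

def shift_target(target, shifting_rate, rng=random):
    """Move the optimum as a random walk in the binary space: flip each
    target bit independently with probability `shifting_rate`.
    (One simple realization of the movement; an assumption here.)"""
    return [b ^ (rng.random() < shifting_rate) for b in target]
```

With `shifting_rate = 0` the target is static and BitMatching reduces to the classical OneMax-style bit-matching problem.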
To gain a more comprehensive understanding of the performance of time-variable mutation rate schemes on BitMatching, here we consider the time complexity of finding approximate solutions of a certain quality, instead of only the time complexity of finding the moving global optimum. Specifically, the following characteristic for approximate solutions of DOPs, named the best-case DOP approximation ratio, is taken into account:
Definition 11 (Best-case DOP approximation ratio).
Suppose we have a maximization DOP and an optimization algorithm. Let the optimal value of the objective function of the DOP at each generation, the best solution found by the algorithm at that generation, and the value of the objective function with respect to that solution all be given. For the algorithm, the ratio of the latter objective value to the optimal value is called the best-case DOP approximation ratio at that generation.
Given the above definition, the definition of first hitting time in Eq. 1 can be generalized to the so-called -first hitting time, concerning the time needed to find solutions that reach a certain best-case DOP approximation ratio:
Definition 12 (-first hitting time for the EA).
On a DOP , the -first hitting time of an EA, denoted by , is defined as follows:
where and .
The BitMatching problem is a special case of the BDOP class. By generalizing the proof ideas mentioned in the last subsection, we are able to obtain a number of new theoretical results. The following lemma can be derived from Lemma 1:
Given the BitMatching problem with shifting rate and any time-variable mutation rate scheme of the EA, the -first hitting time of the EA is super-polynomial with an overwhelming probability.
Given the BitMatching problem with shifting rate , and any time-variable mutation rate scheme of the EA, the -first hitting time of the EA is super-polynomial with an overwhelming probability, where is defined by
The proofs of Lemma 2 and Theorem 2 are similar to those of Lemma 1 and Theorem 1, respectively; interested readers can refer to the appendix for details. Lemma 2 and Theorem 2 directly lead to the following corollary about the most commonly used fixed mutation rate, which was proven by Droste:
The first hitting time of the EA with the fixed mutation rate on the BitMatching problem with is super-polynomial with an overwhelming probability.
Combining the above corollary with Theorem 2, we obtain an interesting result:
Given the BitMatching problem with , both the EA with any time-variable mutation scheme and the EA with the most commonly used fixed mutation rate perform inefficiently.
Clearly, the corollary indicates that no time-variable mutation rate scheme significantly outperforms the most commonly used fixed mutation rate when the EA is employed as the optimizer of the BitMatching problem with shifting rate . Moreover, Droste proved that the EA with the fixed mutation rate can reach the global optimum of BitMatching with shifting rate with a polynomial average first hitting time:
Theorem 3 (Droste ).
The mean first hitting time of the EA with the fixed mutation rate on BitMatching with is polynomial in the problem size .
Given that fixed mutation rates are only special cases of time-variable mutation schemes, the above result can also be interpreted as follows: there exists some time-variable mutation scheme with which the EA can solve BitMatching with a polynomial mean first hitting time. It follows from Theorem 2 that the BitMatching problem with the above shifting rate is the hardest BitMatching on which an EA can guarantee efficient performance. In the next section, we show that by adopting a population, an EA with some time-variable mutation schemes can break the above limitation. However, when the shifting rate grows further, an EA with different time-variable mutation schemes will still encounter a bottleneck when optimizing BDOPs.
4 EA with Time-Variable Mutation Schemes
So far we have analyzed the effectiveness of time-variable mutation schemes in the context of the EA. In this section, our analysis is carried out in the context of a population-based EA. A case study on the BitMatching problem is given to show the overall impact of the population and time-variable mutation schemes.
4.1 A General Result
The EA studied in this paper follows the framework presented in Algorithm 2. The time-variable mutation schemes for this EA, defined in Definition 6, allow the EA to utilize distinct mutation rates when generating different offspring in the same generation. However, when solving BDOPs, such an EA may still be inefficient when the shifting rate of a BDOP exceeds a certain threshold:
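One generation of this offspring-population framework can be sketched as follows (Algorithm 2 itself is not reproduced in this excerpt, so the structure below is our reading of the text: several offspring per generation, each possibly produced with its own mutation rate, with the best of parent and offspring surviving). The function names and the matching-bits fitness are illustrative assumptions.

```python
import random

def one_generation(parent, target, mutation_rates, rng=random):
    """Sketch of one generation of an offspring-population EA with
    per-offspring mutation rates: offspring i is produced with
    mutation_rates[i], and the fittest of {parent} + offspring survives.
    `target` and the matching-bits fitness are illustrative assumptions."""
    def fitness(x):
        # Number of bits matching the current optimum (BitMatching-style).
        return sum(int(a == b) for a, b in zip(x, target))

    def mutate(x, rate):
        # Bitwise mutation: flip each bit independently with probability `rate`.
        return [b ^ (rng.random() < rate) for b in x]

    offspring = [mutate(parent, r) for r in mutation_rates]
    return max(offspring + [parent], key=fitness)
```

Passing a list of distinct rates (e.g., one small rate per offspring plus one extreme rate) is exactly what lets the EA explore different subsets of the search space with distinct step sizes within a single generation.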
Given any BDOP with shifting rate and any time-variable mutation rate scheme , the first hitting time of the EA is super-polynomial with an overwhelming probability, where the offspring size is a polynomial function of .
Over the BDOP class, Theorem 4 thus establishes a theoretical limitation for time-variable mutation schemes associated with the EA. Nevertheless, the theorem sheds some light on the potentially efficient performance of the EA with time-variable mutation schemes on those BDOPs whose shifting rates lie between the two bounds (note that Theorem 1 tells us that the EA performs inefficiently on these BDOPs). In fact, compared with the EA, the EA is indeed strengthened by two factors. First, the offspring-population strategy, a specific way of utilizing a population, imposes larger selection pressure on the EA; when optimizing DOPs, this feature is very helpful for tracking the movement of the global optimum. Second, in one generation the EA is capable of exploring different subsets of the search space via distinct step sizes. Owing to these two factors, it can be expected that the EA will significantly outperform the EA on some BDOPs. In the next subsection, we present such a theoretical example.
4.2 Case Studies on BitMatching
As in Section 3.2, we still employ the BitMatching problem as an example of the BDOP class. We show that the EA with time-variable mutation schemes can improve upon the performance of the EA. Meanwhile, the same theoretical result demonstrates that the general limitation of the EA over the BDOP class, predicted by Theorem 4, is almost tight. The main result of this subsection is as follows:
Given the BitMatching problem with shifting rate , if
Cond. 1 the time-variable mutation rate scheme satisfies
Cond. 2 the polynomial offspring size of the EA satisfies
then the mean first hitting time of the EA with the above time-variable mutation rate scheme is bounded from above by .
Theorem 5 demonstrates that, when a time-variable mutation rate scheme satisfies certain conditions, the EA adopting such a mutation scheme performs efficiently on any BitMatching problem whose shifting rate is no larger than the stated bound. The proof of the theorem utilizes the drift analysis technique proposed by He and Yao:
Lemma 3 (Drift Analysis ).
Let () be the population at each generation of the EA, let be the distance metric measuring the distance between the population and the global optimum at that generation, and suppose the resulting distance sequence is a super-martingale describing the EA. If, for any generation at which the optimum has not yet been found, the distance satisfies and
then the mean first hitting time satisfies
where is the initial population of the EA, is the set of all populations.
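Since several symbols in the lemma statement above were lost in extraction, it may help to recall He and Yao's additive drift theorem in its standard form. The notation below is the generic one from the drift-analysis literature and may differ slightly from the paper's own symbols:

```latex
% Additive drift theorem (He & Yao), standard form; generic notation.
% If the expected one-step decrease of the distance is bounded below by
% a positive constant whenever the optimum has not been found,
\[
  \mathbb{E}\bigl[d(X_t) - d(X_{t+1}) \,\big|\, d(X_t) > 0\bigr] \;\ge\; c_{\mathrm{low}} \;>\; 0
  \quad \text{for all } t \ge 0,
\]
% then the mean first hitting time is bounded by the initial distance
% divided by that constant:
\[
  \mathbb{E}\bigl[\tau \,\big|\, X_0\bigr] \;\le\; \frac{d(X_0)}{c_{\mathrm{low}}},
\]
```

where $X_t$ denotes the population at generation $t$, $d(\cdot)$ is a distance function that equals $0$ exactly on populations containing the global optimum, and $\tau$ is the first hitting time.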
Before providing the formal proof of Theorem 5, we introduce some notations. Let be the parent individual at generation () of the EA, and be the offspring individuals generated from it. Let be the one with the highest fitness among them:
Let be the one with higher fitness between and :
Let , , and . Specifically, for the BitMatching problem, it is clear that is the larger one between and , i.e., . Furthermore, define () and the corresponding generalized notation as follows:
where the “” on both sides can be replaced simultaneously by “”, “”, “” or “”. This quantity is independent of the generation index, since at each DOP phase the global optimum moves with the same shifting rate, as defined in Definition 3. Define () and the corresponding generalized notation as follows:
where the “” on both sides can be replaced simultaneously by “”, “”, “” or “”. Based upon the above lemma and notations, we provide the proof of Theorem 5.
To apply Lemma 3, we need to define a suitable distance function and estimate the corresponding one-step mean drift at each generation of the EA. Taking the union of the parent and its offspring as a whole population , the distance function (), which measures the distance between and the moving global optimum at each generation, is formally defined by
where is the global optimum at the corresponding generation (). We then denote by the one-step mean drift at that generation, conditional on the event that the largest number of matching bits found at the generation is :