Estimating Approximation Errors of Elitist Evolutionary Algorithms

09/03/2019 · by Cong Wang, et al. · Wuhan University of Technology

When EAs are unlikely to locate precise global optimal solutions with satisfactory performance, it is important to substitute hitting-time/running-time analysis with another available theoretical routine. In order to bring theory and applications closer, this paper is dedicated to the analysis of the approximation error of EAs. First, we propose a general result on the upper and lower bounds of approximation errors. Then, several case studies are performed to present the routine of error analysis and, consequently, to validate its applicability to cases generating transition matrices of various shapes. Meanwhile, the theoretical results also show the close connection between approximation errors and the eigenvalues of transition matrices. The analysis validates the applicability of error analysis, demonstrates the significance of the estimation results, and exhibits its potential for use in theoretical studies.


1 Introduction

For theoretical analysis, the convergence performance of evolutionary algorithms (EAs) is widely evaluated by the expected first hitting time (FHT) and the expected running time (RT) [19], which quantify the respective numbers of iterations and function evaluations (FEs) needed to hit the global optimal solutions. General methods for estimating the FHT/RT have been proposed based on the theory of Markov chains [8, 4], drift analysis [12, 5], switch analysis [23], and their application via the partition of fitness levels [6], etc.

Although popularly employed in theoretical analysis, straightforward application of FHT/RT is not practical when the optimal solutions are difficult to hit. One of these "difficult" cases is the optimization of continuous problems. The optimal sets of continuous optimization problems are usually zero-measure sets, which cannot be hit by generally designed EAs in finite time, and so the FHT/RT could be infinite in most cases. A remedy to this difficulty is to take a positive-measure set as the destination of the population iteration. Thus, it is natural to take an approximation set for a given precision as the hitting set in FHT/RT estimation [3, 14, 25, 2]. Another "difficult" case is the optimization of NP-complete (NPC) problems, which cannot be solved by EAs in polynomial running time. For this case, the exponential FHT/RT is not interesting to algorithm developers, and it is much more important to investigate the expected FHT/RT needed to obtain approximate solutions. By investigating various simplified EAs and NPC combinatorial optimization problems, researchers have estimated the approximation ratios these EAs can achieve in polynomial expected FHT/RT [24, 17, 26, 27, 22, 20].

However, the aforementioned methods can become impractical once we have little information about the global optima of the investigated problems, because it is then difficult to "guess" what threshold can result in a polynomial FHT/RT. Since the approximation error after a given number of iterations is usually employed to numerically compare the performance of EAs, some researchers have tried to analyze EAs by theoretically estimating the expected approximation error. Rudolph [21] proved that under the condition $\mathbb{E}[e_{t+1}\mid e_t]\le c\,e_t$ with $c\in(0,1)$, the error sequence $\{e_t\}$ converges in mean geometrically to $0$, that is, $\mathbb{E}[e_t]\le c^{t}e_0$. He and Lin [7] studied the geometric average convergence rate of the error sequence $\{e_t\}$, defined by $R_t = 1 - \left(\mathbb{E}[e_t]/e_0\right)^{1/t}$. Starting from this definition, it is straightforward to claim that $\mathbb{E}[e_t] = (1-R_t)^{t} e_0$.

A work closely related to the analysis of approximation error is the fixed-budget analysis proposed by Jansen and Zarges [15, 16], who aimed to bound the fitness value within a fixed time budget. However, Jansen and Zarges did not present general results for an arbitrary time budget: in fixed-budget analysis, a bound on the approximation error may hold for some small budget but be invalid for a large one. He [9] made a first attempt to obtain an analytic expression of the approximation error for a class of elitist EAs. He proved that if the transition matrix associated with an EA is an upper triangular matrix with unique diagonal entries, then for any $t$, the relative error can be expressed as a linear combination $\sum_k c_k \lambda_k^{t}$, where the $\lambda_k$ are eigenvalues of the transition matrix (except the largest eigenvalue) and the $c_k$ are coefficients. He et al. [11] also demonstrated the possibility of error estimation by estimating the one-step convergence rate; however, this was not sufficient to validate its applicability to other problems, because only two cases with trivial convergence rates were investigated.

This paper is dedicated to estimating the approximation error for an arbitrary iteration number $t$. We make a first attempt to perform error analysis by a general method and demonstrate its feasibility by case studies. The rest of this paper is organized as follows. Section 2 presents some preliminaries. In Section 3, a general result on the upper and lower bounds of the approximation error is proposed, and some case studies are performed in Section 4. Finally, Section 5 concludes this paper.

2 Preliminaries

In this paper, we consider a combinatorial optimization problem

(1)

where the objective function takes only finitely many values. Denote its optimal solution as $x^*$ and the corresponding objective value as $f^* = f(x^*)$. The quality of a feasible solution $x$ is quantified by its approximation error $e(x) = |f(x) - f^*|$. Since (1) has only finitely many feasible solutions, there exist finitely many feasible values of $e(x)$, denoted as $e_0 < e_1 < \dots < e_L$. Obviously, the minimum value $e_0$ is the approximation error of the optimal solution $x^*$, and so $e_0$ takes the value $0$. If $e(x) = e_i$ for a feasible solution $x$, we say that $x$ is located at status $i$. Then, there are $L+1$ statuses in total for all feasible solutions. Status $0$ consists of all optimal solutions and is called the optimal status; the other statuses are the non-optimal statuses.

An elitist EA, described in Algorithm 1, is employed to solve problem (1). When one-bit mutation is employed, it is called a random local search (RLS); if bitwise mutation is used, it is named a (1+1) evolutionary algorithm ((1+1) EA). Then, the error sequence $\{e_t,\, t = 0, 1, \dots\}$ is a Markov chain. Assisted by the initial probability distribution of the individual status, the evolution process of the (1+1) elitist EA can be depicted by the transition probability matrix

(2) $P = \big(p_{ij}\big)_{i,j=0}^{L},$

where $p_{ij}$ is the probability of transferring from status $j$ to status $i$.

1:counter $t \leftarrow 0$;
2:randomly initialize a solution $x_0$;
3:while the stopping criterion is not satisfied do
4:   generate a new candidate solution $y_t$ from $x_t$ by mutation;
5:   set $x_{t+1} = y_t$ if $y_t$ is better than $x_t$; otherwise, let $x_{t+1} = x_t$;
6:   $t \leftarrow t + 1$;
7:end while
Algorithm 1 A Framework of the Elitist EA
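For readers who prefer runnable pseudocode, the following Python sketch instantiates Algorithm 1 for bit-string problems. The function names (one_bit_mutation, bitwise_mutation, elitist_ea), the maximization setting, and the strict-improvement acceptance rule are our own reading of the framework rather than part of the original description.

import random

def one_bit_mutation(x):
    """RLS operator: flip exactly one uniformly chosen bit."""
    y = x[:]
    i = random.randrange(len(y))
    y[i] = 1 - y[i]
    return y

def bitwise_mutation(x):
    """(1+1) EA operator: flip each bit independently with probability 1/n."""
    n = len(x)
    return [1 - b if random.random() < 1.0 / n else b for b in x]

def elitist_ea(f, n, mutate, max_iter=1000):
    """A minimal sketch of Algorithm 1 (maximization, strict-improvement acceptance)."""
    x = [random.randint(0, 1) for _ in range(n)]   # line 2: random initial solution
    for _ in range(max_iter):                      # lines 3-7: main loop
        y = mutate(x)                              # line 4: mutation
        if f(y) > f(x):                            # line 5: keep the better individual
            x = y
    return x

if __name__ == "__main__":
    onemax = lambda x: sum(x)
    best = elitist_ea(onemax, n=20, mutate=one_bit_mutation, max_iter=2000)
    print(sum(best), "ones out of 20")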

Partition the transition probability matrix as

(3) $P = \begin{pmatrix} 1 & \mathbf{A} \\ \mathbf{0} & R \end{pmatrix},$

where $\mathbf{A} = (p_{01}, \dots, p_{0L})$, $\mathbf{0} = (0, \dots, 0)^{\top}$,

(4) $R = \begin{pmatrix} p_{11} & \cdots & p_{1L} \\ \vdots & \ddots & \vdots \\ p_{L1} & \cdots & p_{LL} \end{pmatrix}.$

Thus, the expected approximation error at iteration $t$ [10] is

(5) $e^{(t)} = \mathbf{e}^{\top} R^{t}\, \mathbf{p}^{(0)},$

where $\mathbf{e} = (e_1, \dots, e_L)^{\top}$, $\mathbf{p}^{(0)} = \big(p_1^{(0)}, \dots, p_L^{(0)}\big)^{\top}$ is the initial distribution over the non-optimal statuses, and $R$ is the sub-matrix of transition probabilities between the non-optimal statuses. Then, in the following, we only consider the transition submatrix $R$ for the estimation of the approximation error.
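To make the role of the submatrix concrete, the following sketch evaluates the expected approximation error numerically from a submatrix, an error vector and an initial distribution over the non-optimal statuses. It assumes the expected error takes the form $\mathbf{e}^{\top} R^{t}\, \mathbf{p}^{(0)}$ (our reading of (5)), with the columns of the submatrix indexed by the source status; the toy matrix is purely illustrative.

import numpy as np

def expected_error(R, e, p0, t):
    """Expected approximation error after t iterations: e^T R^t p^(0)."""
    Rt = np.linalg.matrix_power(R, t)
    return float(e @ Rt @ p0)

if __name__ == "__main__":
    # Toy 3-status example: an upper-triangular submatrix R (columns = source status),
    # error values e_1 < e_2 < e_3, and a uniform initial distribution.
    R = np.array([[0.5, 0.2, 0.1],
                  [0.0, 0.6, 0.2],
                  [0.0, 0.0, 0.7]])
    e = np.array([1.0, 2.0, 3.0])
    p0 = np.ones(3) / 3
    for t in (0, 1, 10, 100):
        print(t, expected_error(R, e, p0, t))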

Since an elitist strategy is employed, the transition matrix $P$ is upper triangular. Then, $R$ is upper triangular as well. According to the shape of $R$, we can further divide the searching process of the elitist EA into two categories.

  1. Step-by-step Search: If the transition probabilities satisfy

    (6)

    it is called a step-by-step search. For this case, the elitist EA cannot transfer between non-optimal statuses that are not adjacent to each other, and the transition submatrix is

    (7)
  2. Multi-step Search: If the elitist EA can transfer between some pair of non-adjacent statuses with positive probability, we call it a multi-step search. A multi-step search can transfer between inconsecutive statuses, which endows it with better global exploration ability and, sometimes, a better convergence rate.

Note that this classification is problem-dependent because the statuses are defined via the problem to be optimized. So, the RLS could be either a step-by-step search or a multi-step search. However, the (1+1) EA is necessarily a multi-step search, because bitwise mutation can jump between any two statuses. When $\mathbf{A}$ in (3) is non-zero, some column sums of the submatrix $R$ are less than 1, which means the search can jump from at least one non-optimal status directly to the optimal status. Hence, a step-by-step search represented by (7) must satisfy the condition that only the status adjacent to the optimal status can transfer to it directly.
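The classification can be checked mechanically from the nonzero pattern of the submatrix. The sketch below is our own illustration, following the reading above: a step-by-step search carries probability mass only on the diagonal and the first superdiagonal, and only the status adjacent to the optimum may leak probability out of the submatrix.

import numpy as np

def is_step_by_step(R, tol=1e-12):
    """Check whether an upper-triangular submatrix R describes a step-by-step search:
    probability mass only on the diagonal and the first superdiagonal, and no direct
    jump to the optimal status except from the status adjacent to it."""
    L = R.shape[0]
    for i in range(L):
        for j in range(L):
            if j not in (i, i + 1) and abs(R[i, j]) > tol:
                return False          # a transition skips at least one status
    # columns 2..L must sum to one, i.e. they cannot leak to the optimal status
    return bool(np.all(np.abs(R[:, 1:].sum(axis=0) - 1.0) <= tol))

if __name__ == "__main__":
    step = np.array([[0.5, 0.4, 0.0],
                     [0.0, 0.6, 0.3],
                     [0.0, 0.0, 0.7]])
    multi = np.array([[0.5, 0.2, 0.1],
                      [0.0, 0.6, 0.2],
                      [0.0, 0.0, 0.7]])
    print(is_step_by_step(step), is_step_by_step(multi))   # True False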

3 Estimation of General Approximation Bounds

3.1 General Bounds of the Step-by-step Search

Let $R$ be the submatrix of a step-by-step search. Its eigenvalues are its diagonal entries $p_{jj}$, $j = 1, \dots, L$, each of which represents the probability of remaining at the present status after one iteration. Then, it is rational to declare that the greater the eigenvalues are, the slower the step-by-step search converges. Inspired by this idea, we can estimate general bounds for a step-by-step search by enlarging and reducing the eigenvalues. The general bounds are obtained with the help of the following lemma.

Lemma 1

Denote

(8)

Then, is monotonically increasing with , .

Proof

This lemma could be proved by mathematical induction.

  1. When , we have

Note that is not greater than 1 because it is an element of the probability matrix. Then, from the fact that , we conclude that is monotonically increasing with , . Meanwhile,

    (9)
  2. Suppose that the result holds for , that is,

    (10)

    and is monotonically increasing with for all . First, the monotonicity implies that

    (11)

    Meanwhile, definition (8) implies , that is,

    So, ,

    (12)

    Combining (10), (11) and (12), we know that

    which means is monotonically increasing with for all .

In conclusion, is monotonically increasing with , .

If we enlarge or shrink all eigenvalues to the maximum value and the minimum value, respectively, we can get two transition submatrices, where

(13)

Then, the former depicts a searching process that converges more slowly than the one $R$ represents, and the latter is the transition matrix of a process that converges faster than the one $R$ represents.

Theorem 3.1

The expected approximation error is bounded by

(14)
Proof

Since , Lemma 1 implies that is also monotonically increasing with , . So, we get the conclusion that

Theorem 3.1 provides a general result on the upper and lower bounds of the approximation error. From the above arguments we can see that the lower bound and the upper bound are attained once the transition submatrix degenerates to the faster and the slower bounding submatrix, respectively. That is to say, they are indeed the "best" possible general bounds. Recall that each diagonal entry is the probability that the elitist EA stays at the corresponding status after one iteration. Then, the greater this probability is, the harder it is for the step-by-step search to transfer to the next lower status. So, the performance of a step-by-step search would, in the worst case, not be worse than that of the slower bounding process; meanwhile, it would not be better than that of the faster one, which is a bottleneck for improving the performance of the step-by-step search.
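Theorem 3.1 can be checked numerically on small instances. The sketch below assumes, as the discussion above suggests, that the two bounding submatrices keep the bidiagonal step-by-step structure and simply replace every diagonal entry by the largest (respectively smallest) diagonal entry of the original submatrix; the helper names bidiagonal, R_max and R_min are ours.

import numpy as np

def bidiagonal(diag):
    """Step-by-step submatrix with the given holding probabilities on the diagonal
    and the complementary mass on the first superdiagonal (columns = source status)."""
    L = len(diag)
    R = np.diag(diag).astype(float)
    for j in range(1, L):
        R[j - 1, j] = 1.0 - diag[j]
    return R

def expected_error(R, e, p0, t):
    return float(e @ np.linalg.matrix_power(R, t) @ p0)

if __name__ == "__main__":
    diag = np.array([0.3, 0.5, 0.8, 0.6])              # holding probabilities of a toy R
    R = bidiagonal(diag)
    R_max = bidiagonal(np.full_like(diag, diag.max())) # slower process (upper bound)
    R_min = bidiagonal(np.full_like(diag, diag.min())) # faster process (lower bound)
    e = np.array([1.0, 2.0, 3.0, 4.0])                 # sorted error values of the statuses
    p0 = np.full(4, 0.25)                              # uniform initial distribution
    for t in (1, 5, 20, 80):
        lower, actual, upper = (expected_error(M, e, p0, t) for M in (R_min, R, R_max))
        print(t, round(lower, 6), round(actual, 6), round(upper, 6), lower <= actual <= upper)

On this toy instance the printed flag confirms that the actual error stays between the two bounds for every tested t, in line with Theorem 3.1.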

3.2 General Upper Bounds of the Multi-step Search

Denoting the transition submatrix of a multi-step search as

(15)

we can bound its approximation error by defining two transition matrices

(16)

and

(17)
Lemma 2

Let the transition matrices be defined by (15), (16) and (17), respectively. Given any sorted vector $\mathbf{e}$ satisfying $e_1 \le e_2 \le \dots \le e_L$ and the corresponding initial distribution $\mathbf{p}^{(0)}$, it holds that

Proof

It is trivial to prove the first inequality: because the matrix defined by (16) retains only part of the non-zero elements of the one defined by (15), the corresponding quantity is a partial sum of the original one, and since all items involved are nonnegative, the inequality holds. The second inequality can be proved by mathematical induction.

  1. When , denote

    (18)
    (19)

    where . Combining with the fact that , we conclude that . Then, it holds that

  2. Suppose that the result holds when , that is, . Because , it holds that

    (20)

    Meanwhile, because , we know . Then, the assumption implies that

    Combining it with (20), we can conclude that

    So, the result also holds for the case .

In conclusion, it holds that .

Theorem 3.2

The approximation error of the multi-step search defined by (15) is bounded by

(21)

where .

Proof

From Lemma 2 we know that

(22)

Moreover, by Theorem 3.1 we know that

(23)

Combining (22) and (23), the theorem is proved.

3.3 Analytic Expressions for Computation of General Bounds

Theorems 3.1 and 3.2 show that computation of general bounds for approximation errors is based on the computability of and , where and are defined by (13) and (17), respectively.

  1. Analytic Expression of : The submatrix can be split as , where

    Because multiplication of and is commutative, the binomial theorem [1] holds and we have

    (24)

    where

    (25)

    Note that it is a nilpotent matrix of index $L$ (in linear algebra, a nilpotent matrix is a square matrix $N$ such that $N^{k} = 0$ for some positive integer $k$; the smallest such $k$ is called the index of $N$ [13]), and

    (26)

    Then, (25), (26) and (24) imply that

    1. if ,

      (27)
    2. if ,

      (28)
  2. Analytic Expression of the Diagonal Matrix Power: For the diagonal matrix, it holds that (a numerical check of both expressions follows this list)

    (29)
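The first expression can be checked numerically. The sketch below assumes the matrix in item 1 is bidiagonal with a constant diagonal $\lambda$ and mass $1-\lambda$ on the first superdiagonal (our reading of (13)); it expands the $t$-th power with the binomial theorem, truncating the sum where the nilpotent part vanishes, and compares the result with a direct matrix power.

import numpy as np
from math import comb

def power_via_binomial(lam, L, t):
    """Compute R^t for R = lam*I + N, where N has (1-lam) on the first superdiagonal.
    Since I and N commute and N is nilpotent of index L, the binomial sum is finite."""
    I = np.eye(L)
    N = np.zeros((L, L))
    for j in range(1, L):
        N[j - 1, j] = 1.0 - lam
    total = np.zeros((L, L))
    Nk = I.copy()                          # N^0
    for k in range(min(t, L - 1) + 1):
        total += comb(t, k) * lam ** (t - k) * Nk
        Nk = Nk @ N                        # next power of the nilpotent part
    return total

if __name__ == "__main__":
    lam, L, t = 0.7, 5, 12
    R = lam * np.eye(L) + np.diag([1.0 - lam] * (L - 1), k=1)
    direct = np.linalg.matrix_power(R, t)
    print(np.allclose(direct, power_via_binomial(lam, L, t)))   # True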

4 Case-by-case Estimation of Approximation Error

In Section 3, general bounds on the approximation error are obtained by ignoring most of the elements in the sub-matrix $R$. Thus, these bounds could be very general but not tight. In this section, we perform several case-by-case studies to demonstrate a possible universal method for error analysis, where the RLS and the (1+1) EA are employed to solve the popular OneMax problem and the Needle-in-Haystack problem.

Problem 1

(OneMax) $\max f(x) = \sum_{i=1}^{n} x_i, \quad x = (x_1, \dots, x_n) \in \{0, 1\}^{n}.$

Problem 2

(Needle-in-Haystack) $\max f(x)$, $x \in \{0, 1\}^{n}$, where $f(x^*) = c > 0$ for a unique global optimum $x^*$ and $f(x) = 0$ for all other solutions.

4.1 Error Estimation for the OneMax Problem

Application of the RLS to the unimodal OneMax problem generates a step-by-step search (with status $i$ corresponding to solutions with $i$ zero-bits), the transition submatrix of which is

(30) $R = \begin{pmatrix} 1-\frac{1}{n} & \frac{2}{n} & & \\ & 1-\frac{2}{n} & \ddots & \\ & & \ddots & \frac{n}{n} \\ & & & 1-\frac{n}{n} \end{pmatrix}.$

The eigenvalues and corresponding eigenvectors of $R$ are

(31)

Note that $R$ has distinct eigenvalues and so can be diagonalized [18]. Then, we estimate the approximation error by diagonalizing the transition submatrix $R$.
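Before diagonalizing, it is easy to confirm the eigenvalue structure numerically. The sketch below builds the RLS-on-OneMax submatrix for a small $n$ under our conventions (status $i$ equals the number of 0-bits, columns indexed by the source status) and checks that the computed eigenvalues are exactly the diagonal entries $1 - i/n$.

import numpy as np

def rls_onemax_submatrix(n):
    """Transition submatrix of RLS on OneMax over the non-optimal statuses 1..n,
    where status i is the number of 0-bits: the status drops to i-1 with
    probability i/n (a 0-bit is flipped) and stays at i otherwise."""
    R = np.zeros((n, n))
    for i in range(1, n + 1):
        R[i - 1, i - 1] = 1.0 - i / n          # stay at status i
        if i >= 2:
            R[i - 2, i - 1] = i / n            # move from status i to status i-1
    return R

if __name__ == "__main__":
    n = 6
    R = rls_onemax_submatrix(n)
    eigvals = np.sort(np.linalg.eigvals(R).real)
    expected = np.sort([1.0 - i / n for i in range(1, n + 1)])
    print(np.allclose(eigvals, expected))      # True: eigenvalues are 1 - i/n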

Theorem 4.1

The expected approximation error of RLS for the OneMax problem is

(32) $e^{(t)} = \left(1 - \frac{1}{n}\right)^{t} e^{(0)}.$
Proof

Denote . Then we know that

(33)

It has distinct eigenvalues, and so it can be diagonalized [18]. Then, we have

(34)

where , ,

(35)

Substituting (35) into (34), we get the claimed result.
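The closed form can also be verified without diagonalization: under our reading of Theorem 4.1, the expected error of the RLS on OneMax contracts by exactly a factor of $1-1/n$ per iteration, for any initial distribution. The sketch below compares the matrix-power value of the expected error with this geometric decay; the random initial distribution is arbitrary.

import numpy as np

def rls_onemax_submatrix(n):
    """RLS-on-OneMax submatrix over statuses 1..n (status = number of 0-bits)."""
    R = np.zeros((n, n))
    for i in range(1, n + 1):
        R[i - 1, i - 1] = 1.0 - i / n
        if i >= 2:
            R[i - 2, i - 1] = i / n
    return R

if __name__ == "__main__":
    n = 8
    R = rls_onemax_submatrix(n)
    e = np.arange(1, n + 1, dtype=float)       # error of status i is i (i zero-bits)
    rng = np.random.default_rng(0)
    p0 = rng.random(n)
    p0 /= p0.sum()                             # an arbitrary initial distribution
    e0 = float(e @ p0)
    for t in (1, 5, 25, 100):
        via_matrix = float(e @ np.linalg.matrix_power(R, t) @ p0)
        closed_form = (1.0 - 1.0 / n) ** t * e0
        print(t, round(via_matrix, 10), round(closed_form, 10))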

Theorem 4.2

The expected approximation error of (1+1)EA for the OneMax problem is bounded from above by

(36)
Proof

According to the definition of the population status, the status index $i$ is the number of 0-bits in the solution. Once one of the 0-bits is flipped to a 1-bit and all 1-bits remain unchanged, the generated solution will be accepted, and the status transfers from $i$ to $i-1$. Recalling that the probability of this event is $\frac{i}{n}\left(1-\frac{1}{n}\right)^{n-1}$, we know that

Denote

and we know that

(37)

With distinct eigenvalues, it can be diagonalized:

(38)

where and are the eigenvalues and the corresponding eigenvectors:

(39)

It is obvious that is invertible, and its inverse is

(40)

Similar to the result illustrated in (35), we know that

(41)

Combining (37), (38), (39), (40) and (41), we obtain the claimed bound.
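As a sanity check of the argument above, the sketch below estimates by simulation the expected error of the (1+1) EA on OneMax and compares it with the geometric bound $\big(1-\tfrac{1}{n}(1-\tfrac{1}{n})^{n-1}\big)^{t}$ times the initial expected error, which follows from the one-step improvement probability used in the proof (this is our derived bound, not necessarily the exact form of (36)); the simulation routine and all parameter choices are ours.

import random

def one_plus_one_ea_error_trace(n, steps, rng):
    """Run the (1+1) EA (bitwise mutation, elitist acceptance) on OneMax once and
    record the approximation error (number of 0-bits) after 0, 1, ..., steps iterations."""
    x = [rng.randint(0, 1) for _ in range(n)]
    errors = [n - sum(x)]
    for _ in range(steps):
        y = [1 - b if rng.random() < 1.0 / n else b for b in x]
        if sum(y) > sum(x):            # strict-improvement acceptance, as in Algorithm 1
            x = y
        errors.append(n - sum(x))
    return errors

if __name__ == "__main__":
    n, steps, runs = 20, 100, 1000
    rng = random.Random(1)
    mean_err = [0.0] * (steps + 1)
    for _ in range(runs):
        for t, err in enumerate(one_plus_one_ea_error_trace(n, steps, rng)):
            mean_err[t] += err / runs
    rho = 1.0 - (1.0 / n) * (1.0 - 1.0 / n) ** (n - 1)   # one-step contraction factor
    for t in (10, 30, 60, 100):
        print(f"t={t:3d}  simulated={mean_err[t]:6.3f}  bound={rho ** t * mean_err[0]:6.3f}")

The simulated error stays below the bound because the bound ignores improvements by more than one bit per iteration.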

4.2 Error Estimation for the Needle-in-Haystack Problem

The landscape of the Needle-in-Haystack problem is a plateau on which all non-optimal solutions have the same function value, and only the global optimum has a non-zero function value. For this problem, the status is defined as the total number of 1-bits in a solution.

Theorem 4.3

The expected approximation error of RLS for the Needle-in-Haystack problem is bounded by

(42)
Proof

When the RLS is employed to solve the Needle-in-Haystack problem, the transition submatrix is

(43)

Then,

(44)

Since

we can conclude that

Both the upper bound and the lower bound converge to a positive limit. Because the RLS only searches adjacent statuses, it cannot converge to the optimal status once the initial solution is not located at a status adjacent to it.
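This stagnation is easy to reproduce empirically. The sketch below is our own illustration: it runs the RLS with strict-improvement acceptance on a Needle-in-Haystack instance whose needle is assumed to be the all-zeros string, so that the status (the number of 1-bits) equals the Hamming distance to the optimum.

import random

def rls_needle(n, steps, rng):
    """RLS with strict-improvement acceptance on Needle-in-Haystack
    (needle = all-zeros string). Returns (initial status, final status),
    where the status is the number of 1-bits, i.e. the distance to the needle."""
    needle_fitness = lambda x: 1 if sum(x) == 0 else 0
    x = [rng.randint(0, 1) for _ in range(n)]
    start = sum(x)
    for _ in range(steps):
        y = x[:]
        i = rng.randrange(n)
        y[i] = 1 - y[i]
        if needle_fitness(y) > needle_fitness(x):   # plateau moves are rejected
            x = y
    return start, sum(x)

if __name__ == "__main__":
    n, steps, runs = 10, 500, 2000
    rng = random.Random(3)
    solved_from = {0: 0, 1: 0, "2+": 0}
    seen_from = {0: 0, 1: 0, "2+": 0}
    for _ in range(runs):
        start, final = rls_needle(n, steps, rng)
        key = start if start <= 1 else "2+"
        seen_from[key] += 1
        solved_from[key] += (final == 0)
    for key in (0, 1, "2+"):
        if seen_from[key]:
            print(f"initial status {key}: solved {solved_from[key]}/{seen_from[key]}")

Runs starting two or more bits away from the needle are never solved within the budget, while runs starting one bit away almost always are.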

Theorem 4.4

The expected approximation error of (1+1)EA for the Needle-in-Haystack problem is bounded by

(45)
Proof

When the (1+1)EA is employed to solve the Needle-in-Haystack problem, the transition probability matrix is

(46)

Then,

(47)

Since