Global Convergence Analysis of the Flower Pollination Algorithm: A Discrete-Time Markov Chain Approach

04/21/2018 ∙ by Xingshi He, et al.

Flower pollination algorithm is a recent metaheuristic algorithm for solving nonlinear global optimization problems. The algorithm has also been extended to solve multiobjective optimization with promising results. In this work, we analyze this algorithm mathematically and prove its convergence properties by using Markov chain theory. By constructing the appropriate transition probability for a population of flower pollen and using the homogeneity property, it can be shown that the constructed stochastic sequences can converge to the optimal set. Under the two proper conditions for convergence, it is proved that the simplified flower pollination algorithm can indeed satisfy these convergence conditions and thus the global convergence of this algorithm can be guaranteed. Numerical experiments are used to demonstrate that the flower pollination algorithm can converge quickly in practice and can thus achieve global optimality efficiently.


1 Introduction

Computational intelligence and optimization have become increasingly important in many applications, partly due to the explosion of data volumes driven by the Internet and social media, and partly due to more stringent design requirements. In recent years, bio-inspired optimization algorithms have gained some popularity [1, 2]. Many new optimization algorithms are based on so-called swarm intelligence, with diverse characteristics in mimicking natural systems. Consequently, different algorithms may have different features, and thus may behave differently with different efficiencies. However, there is still a lack of in-depth understanding of why these algorithms work well and under exactly what conditions.

In fact, there is a significant gap between theory and practice. Most metaheuristic algorithms have successful applications in practice, but their mathematical analysis lags far behind. Apart from a few limited results about the convergence and stability of particle swarm optimization, genetic algorithms, simulated annealing and others [3, 4, 5, 6], many algorithms do not have any theoretical analysis. Therefore, we may know that they can work well in practice, but we rarely understand why they work, or how to improve them based on a good understanding of their working mechanisms.

Among the most recent bio-inspired algorithms, the flower pollination algorithm (FPA), or flower algorithm (FA) for simplicity, has demonstrated very good efficiency in solving both single-objective and multi-objective optimization problems [7, 9]. This algorithm mimics the main characteristics of the pollination process of flowering plants, which leads to both local and global search capabilities. As this algorithm is very new, there has been no mathematical analysis of it yet.

The main purpose of this paper is to analyze the flower algorithm mathematically and try to prove its convergence properties. Therefore, this paper is organized as follows. In Section 2, the flower algorithm will be outlined briefly, followed by some simplifications so as to be used for the detailed mathematical analysis in Section 3 and Section 4. Then, in Section 5, some numerical benchmarks will be used to demonstrate the main characteristics of the convergence behaviour of the flower algorithm. Finally, conclusions will be drawn briefly in Section 6.

2 Flower Pollination Algorithm and Applications

2.1 Flower Algorithm

Flower pollination algorithm (FPA), or flower algorithm, was developed by Xin-She Yang in 2012 [7], inspired by the flower pollination process of flowering plants. The flower pollination algorithm has since been extended to deal with multiobjective optimization [8, 9]. The diversity of flowering plants is amazing: it is estimated that there are over a quarter of a million species of flowering plants in Nature, and that about 80% of all plant species are flowering species. Flower pollination is typically associated with the transfer of pollen, and such transfer is often linked with pollinators such as insects, birds, bats and other animals. Pollination can take two major forms: abiotic and biotic. About 90% of flowering plants rely on biotic pollination; that is, pollen is transferred by a pollinator such as an insect, bat or other animal. In fact, some flowers and insects have co-evolved into a very specialized flower-pollinator partnership called flower constancy [10]. Hummingbirds, for example, show strong flower constancy in pollination. Such flower constancy may have evolutionary advantages because it maximizes the transfer of flower pollen to the same or conspecific plants, and thus the reproduction of the same flower species. The pollinators in such a flower-constancy partnership, in turn, minimize their effort in searching for new flower patches, and thus have a higher probability of nectar rewards from the same flower species.

Pollination can be achieved by self-pollination or cross-pollination. Self-pollination tends to be local and often occurs when there is no reliable pollinator available. On the other hand, biotic cross-pollination may occur over long distances, and the pollinators, such as bees, bats, birds and flies, can fly a long distance; such pollination can thus be considered global pollination.

For simplicity in describing the flower algorithm, the following four rules are used [7, 9]:

  1. Biotic and cross-pollination can be considered as a process of global pollination, and pollen-carrying pollinators move in a way which obeys Lévy flights (Rule 1).

  2. For local pollination, abiotic and self-pollination can be used (Rule 2).

  3. Pollinators such as insects can develop flower constancy, which is equivalent to a reproduction probability that is proportional to the similarity of two flowers involved (Rule 3).

  4. The interaction or switching between local pollination and global pollination can be controlled by a switch probability p ∈ [0, 1], with a slight bias towards local pollination (Rule 4).

In order to formulate the updating formulae of the FPA, we have to convert the above rules into updating equations. For example, in the global pollination step, flower pollen gametes are carried by pollinators such as insects, and pollen can travel over a long distance because insects can often fly and travel over a much longer range. Therefore, Rule 1 and flower constancy can be represented mathematically as

x_i^{t+1} = x_i^t + γ L(λ) (g_* − x_i^t),   (1)

where x_i^t is the pollen i, or solution vector x_i, at iteration t, and g_* is the current best solution found among all solutions at the current generation/iteration. Here γ is a parameter that corresponds to the strength of the pollination, which essentially is also a step-size scaling factor. Since insects may move over a long distance with various distance steps, we can use a Lévy flight to mimic this characteristic efficiently. That is, we draw the step size L > 0 from a Lévy distribution [9, 11]

L ∼ [λ Γ(λ) sin(πλ/2) / π] · 1/s^{1+λ},   (s ≫ s_0 > 0).   (2)

Here Γ(λ) is the standard gamma function, and this distribution is valid for large steps s > 0. Though in theory the critical step size s_0 should be sufficiently large, a small s_0 can be used in practice. Here, the notation '∼' means to draw random numbers that obey the distribution on the right-hand side.
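In practice, Lévy-distributed step lengths such as those in Eq. (2) are often generated with Mantegna's algorithm, which approximates draws from a heavy-tailed stable distribution. The following is a minimal sketch, not from the paper itself; the exponent λ = 1.5 and the function name are our own illustrative choices:

```python
import math
import random

def levy_step(lam=1.5):
    """Draw one step length from an approximate Levy-stable distribution
    with tail exponent lam, using Mantegna's algorithm."""
    # Standard deviation of the numerator Gaussian (Mantegna's formula)
    sigma = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2)
             / (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = random.gauss(0.0, sigma)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / lam)

# The heavy tail produces occasional long jumps among many short steps,
# which is exactly the mix of local moves and long-range exploration
# that Rule 1 is meant to capture.
steps = [levy_step() for _ in range(10000)]
```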

For the local pollination, both Rule 2 and Rule 3 can be represented as

x_i^{t+1} = x_i^t + ε (x_j^t − x_k^t),   (3)

where x_j^t and x_k^t are pollen from different flowers of the same plant species. This essentially mimics the flower constancy in a limited neighborhood. Mathematically, if x_j^t and x_k^t come from the same species or are selected from the same population, this equivalently becomes a local random walk if we draw ε from a uniform distribution in [0, 1].

In principle, flower pollination activities can occur at all scales, both local and global. But in reality, adjacent flower patches or flowers in the not-so-far-away neighborhood are more likely to be pollinated by local flower pollen than those far away. In order to mimic this feature, we can effectively use a switch probability p (Rule 4), or proximity probability, to switch between common global pollination and intensive local pollination. To start with, we can use a naive value of p = 0.5 as an initial value. A parametric study showed that p = 0.8 may work better for most applications. Preliminary studies suggest that the flower algorithm is very efficient, and it has been extended to multi-objective optimization [8, 9].

It is worth pointing out that parameter tuning may be needed in all algorithms, and ideally a self-tuning framework can be used [12]. However, in our analysis of convergence, we assume that the parameter values are fixed, though such parameter values can be within a range. In addition, the representations of the solution vectors in the algorithm are simply vectors, not in any complicated forms such as quaternion representations [13].
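Putting Eqs. (1)-(3) and the switch probability p together, the standard FPA loop can be sketched as follows. This is an illustrative sketch only, not the authors' reference implementation: the Lévy sampler, the greedy acceptance, the bound clipping, and the parameter values (γ = 0.1, p = 0.8, λ = 1.5) are our own assumptions for concreteness.

```python
import math
import random

def levy(lam=1.5):
    # Mantegna's algorithm for Levy-distributed steps (an assumption;
    # any heavy-tailed sampler consistent with Eq. (2) would do).
    sigma = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2)
             / (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    return random.gauss(0.0, sigma) / abs(random.gauss(0.0, 1.0)) ** (1 / lam)

def fpa(f, dim, n=25, p=0.8, gamma=0.1, lo=-5.0, hi=5.0, iters=500):
    """Illustrative sketch of the standard flower pollination algorithm."""
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    best = min(pop, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            x = pop[i]
            if random.random() < p:
                # Global pollination, Eq. (1): Levy flight towards g_*
                step = levy()
                cand = [xj + gamma * step * (bj - xj) for xj, bj in zip(x, best)]
            else:
                # Local pollination, Eq. (3): random walk between two flowers
                eps = random.random()
                a, b = random.sample(pop, 2)
                cand = [xj + eps * (aj - bj) for xj, aj, bj in zip(x, a, b)]
            cand = [min(hi, max(lo, c)) for c in cand]  # keep within bounds
            if f(cand) < f(x):          # greedy acceptance (our choice)
                pop[i] = cand
                if f(cand) < f(best):
                    best = cand[:]
    return best, f(best)
```

For example, `fpa(lambda v: sum(t * t for t in v), dim=3)` would minimize a simple sphere function under these assumptions.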

2.2 Applications

Since the development of the basic flower pollination algorithm (FPA), there has been a wide range of diverse applications of this algorithm, with more than 500 research papers published so far in the literature. For example, a brief review by Chiroma et al. identified some of the earlier applications [20]. Therefore, it is not possible to review even a small fraction of the latest developments, and we only highlight a few recent papers here. For example, Dubey et al. presented a hybrid FPA variant for solving multi-objective economic dispatch problems [21, 22], while Alam et al. carried out photovoltaic parameter estimation using FPA [24]. Structure optimization has also been investigated using FPA [23], and feature selection has been done using a clonal FPA by Sayed et al. [25]. A modified FPA for global optimization has been proposed by Nabil [26].

In addition, Velamuri et al. used FPA to optimize economic load dispatch [27], while Rodrigues et al. developed a binary flower pollination algorithm for EEG-based person identification [28]. Furthermore, Zhou et al. introduced an elite opposition-based FPA [29] and Mahdad et al. presented an adaptive FPA to solve optimal power flow problems [30], while Abdelaziz et al. solved capacitor sizing and placement problems in distribution systems using FPA [31]. New variants of FPA are still emerging [32].

Obviously, there are other important applications, but here we will focus on the mathematical analysis of the basic FPA. Therefore, we will start with the simplified version of FPA.

2.3 Simplified Flower Algorithm

Of the two branches in the updating formulae, the local search step contributes mainly to local refinement, while the main mobility or exploration is carried out by the global search step. In order to simplify the analysis and also to emphasize the global search capability, we now use a simplified version of the flower algorithm. That is, we use only the global branch, together with a random number r compared with a discovery/switching probability p. Now we have

x_i^{t+1} = x_i^t + γ L(λ) (g_* − x_i^t),   (4)

where L obeys the Lévy distribution given in Eq. (2).

As the flower pollination algorithm is a stochastic search algorithm, we can summarize the simplified version as the following key steps:

  • Randomly generate an initial population of n pollen agents at the positions x_1, x_2, …, x_n, then evaluate their objective values so as to find the initial best g_*.

  • Update the new solutions/positions by

    x_i^{t+1} = x_i^t + γ L(λ) (g_* − x_i^t).   (5)

  • Draw a random number r from a uniform distribution U(0, 1). Accept the updated x_i^{t+1} if r < p. Then, evaluate the new solutions so as to find the new global best g_*^{t+1} at pseudo time/iteration t + 1.

  • If the stopping criterion is met, then g_*^{t+1} is the best global solution found so far. Otherwise, return to Step 2 and continue.
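The four key steps above can be sketched directly in code. This is a hedged illustration of the simplified (global-branch-only) variant: the Lévy sampler, the bound clipping and the parameter values (n = 20, p = 0.8, γ = 0.1) are our own assumptions, and the stopping criterion here is simply a fixed iteration budget.

```python
import math
import random

def levy(lam=1.5):
    # Mantegna's algorithm (an assumed sampler consistent with Eq. (2))
    sigma = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2)
             / (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    return random.gauss(0.0, sigma) / abs(random.gauss(0.0, 1.0)) ** (1 / lam)

def simplified_fpa(f, dim, n=20, p=0.8, gamma=0.1, lo=-5.0, hi=5.0, iters=300):
    """Illustrative sketch of the simplified flower algorithm (global branch only)."""
    # Step 1: random initial population and initial best g_*
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    best = min(pop, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            # Step 2: global update, Eq. (5)
            step = levy()
            cand = [x + gamma * step * (b - x) for x, b in zip(pop[i], best)]
            cand = [min(hi, max(lo, c)) for c in cand]  # keep within bounds
            # Step 3: accept the updated solution only if r < p
            if random.random() < p:
                pop[i] = cand
        # Step 3 (cont.): find the new global best g_*^{t+1}
        cur = min(pop, key=f)
        if f(cur) < f(best):
            best = cur[:]
    # Step 4: the stopping criterion here is the fixed iteration budget
    return best, f(best)
```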

3 Convergence Analysis

3.1 Gap Between Theory and Practice

There is a significant gap between theory and practice in bio-inspired computing. Nature-inspired metaheuristic algorithms work almost magically in practice, but it is not well understood why these algorithms work. For example, except for a few cases such as genetic algorithms, simulated annealing and particle swarm optimization, there are not many good results concerning the convergence analysis and stability of metaheuristic algorithms. The lack of theoretical understanding may lead to slow progress or even resistance to the wider applications of metaheuristics.

There are three main methods for the theoretical analysis of algorithms: complexity theory, dynamical systems and Markov chains. On the one hand, metaheuristic algorithms tend to have low algorithmic complexity, but they can solve highly complex problems. On the other hand, convergence analysis typically uses dynamical systems and statistical methods as well as Markov chains. For example, particle swarm optimization was analysed by Clerc and Kennedy [3] using simple dynamical systems, while genetic algorithms were analysed intensively in a few theoretical studies [14, 15, 16, 17].

For a genetic algorithm with a given mutation rate μ, string length λ and population size n, the number of iterations t in the genetic algorithm can be estimated by

t ≥ ⌈ ln(1 − α) / ln(1 − min{(1 − μ)^{λn}, μ^{λn}}) ⌉,   (6)

where ⌈x⌉ means taking the smallest integer not less than x, α ∈ (0, 1) is the required probability of convergence, and the denominator is a function of μ, λ and n [16, 17]. However, for other bio-inspired algorithms, especially new algorithms, theoretical understanding lags behind, and thus there is a strong need for further studies in this area. There is no doubt that any new understanding will provide greater insight into the working mechanisms of metaheuristic algorithms.

3.2 Convergence Criteria in Stochastic Search

For an optimization problem ⟨Ω, f⟩ and a stochastic search algorithm A, the k-th iteration will produce a new solution

x_{k+1} = A(x_k, ξ),   (7)

where Ω is the feasible solution space, and f is the objective function. Here, ξ represents the solutions visited by algorithm A during the iterative process.

In the Lebesgue measure space, the infimum of the search can be defined as

ψ = inf{ t : v({x ∈ Ω : f(x) < t}) > 0 },   (8)

where v(·) denotes the Lebesgue measure on the set {x ∈ Ω : f(x) < t}. Here Eq. (8) requires that this set be non-empty in the search space, and the region of optimal solutions can be defined as

R_{ε,M} = {x ∈ Ω : f(x) < ψ + ε}  if ψ is finite,  and  R_{ε,M} = {x ∈ Ω : f(x) < −M}  if ψ = −∞,   (9)

where ε > 0 and M is a sufficiently large positive number. If any point in R_{ε,M} is found during the iterations, we can say the algorithm has found the globally optimal solution or its best approximation.

In order to analyze the convergence of an algorithm, let us first state the conditions for convergence [4, 18]:

  • Condition 1. f(A(x, ξ)) ≤ f(x), and if ξ ∈ Ω, then f(A(x, ξ)) ≤ f(ξ).

  • Condition 2. For any subset B ⊂ Ω subject to v(B) > 0,

    ∏_{k=0}^{∞} (1 − μ_k(B)) = 0,

    where μ_k(B) is the probability measure on B at the k-th iteration of the algorithm A.

    It is worth pointing out that we focus on the minimization problems in our discussions.

Lemma 1. (The global convergence of an algorithm.) If f is measurable and the feasible solution space Ω is a measurable subset of ℝ^n, and algorithm A satisfies the above two conditions with the search sequence {x_k}_{k=0}^{∞}, then

lim_{k→∞} P(x_k ∈ R_{ε,M}) = 1.   (10)

That is, algorithm A will converge globally [4, 18]. Here P(x_k ∈ R_{ε,M}) is the probability measure of the k-th solution on R_{ε,M} at the k-th iteration.
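Condition 2 can be made concrete numerically: the probability of never visiting a positive-measure set B after k iterations is the partial product ∏_{j<k}(1 − μ_j(B)), and the condition demands that this product vanish. A small illustrative check, with made-up measures μ_k for the example:

```python
def prob_never_visited(mus):
    """Probability of never sampling the set B in any iteration,
    given per-iteration probabilities mus = [mu_0(B), mu_1(B), ...]."""
    prod = 1.0
    for mu in mus:
        prod *= (1.0 - mu)
    return prod

# If mu_k(B) is bounded below (here by 0.01), the product decays
# geometrically, so Condition 2 holds for such a set:
p_miss = prob_never_visited([0.01] * 1000)
assert p_miss < 5e-5  # (1 - 0.01)^1000 is roughly 4.3e-5

# If mu_k(B) decays too fast (a summable sequence), the product stays
# bounded away from zero, and Condition 2 fails:
p_miss_fast = prob_never_visited([2.0 ** -(k + 2) for k in range(1000)])
assert p_miss_fast > 0.5
```

The contrast shows why the condition is a statement about the algorithm continuing to explore: exploration that dies off too quickly can miss the optimal region forever with positive probability.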

4 Markov Chains and Convergence Analysis

4.1 Definitions

Definition 1. The state and state space. The position x of a flower pollen particle and its global best solution g in the search history form the state of the flower pollen: y = (x, g), where x, g ∈ Ω and f(g) ≤ f(x) (for minimization problems). The set of all the possible states forms the state space, denoted by

Y = { y = (x, g) : x, g ∈ Ω, f(g) ≤ f(x) }.   (11)

Definition 2. The states and state space of the pollen group/population. The states of all n solutions form the state of the group, denoted by q = (y_1, y_2, …, y_n). All the possible states of all the pollen form a state space for the group, denoted by

Q = { q = (y_1, y_2, …, y_n) : y_i ∈ Y, 1 ≤ i ≤ n }.   (12)

Obviously, Q contains the historical global best solution g_* for the whole population and all individual best solutions g_i (1 ≤ i ≤ n) in history. In addition, the global best solution g_* of the whole population is the best among all the g_i, so that f(g_*) = min{ f(g_i) }, 1 ≤ i ≤ n.
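To make Definitions 1 and 2 concrete, the state y = (x, g) and the group state q can be represented directly in code. This is an illustrative sketch; the class and function names are our own:

```python
from dataclasses import dataclass
from typing import List, Tuple

Vector = Tuple[float, ...]

@dataclass
class PollenState:
    """A state y = (x, g) from Definition 1: the current position x and
    the best-so-far solution g, with the invariant f(g) <= f(x)."""
    x: Vector
    g: Vector

def make_state(x: Vector, g: Vector, f) -> PollenState:
    # Enforce the state-space membership condition of Eq. (11)
    assert f(g) <= f(x), "state invariant from Definition 1 violated"
    return PollenState(x, g)

# A group state q = (y_1, ..., y_n) from Definition 2 is simply a tuple
# or list of pollen states; g_* is the best among all individual bests.
def group_best(q: List[PollenState], f) -> Vector:
    return min((s.g for s in q), key=f)
```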

4.2 Markov Chain Model for Flower Algorithm

Definition 3. The state transition for pollen positions. For ∀y = (x, g) ∈ Y, the state transition from y to y′ = (x′, g′) can be denoted by

T_y(y) = y′.   (13)

Theorem 1

The transition probability from state y to y′ in the flower algorithm is

P(T_y(y) = y′) = P(x → x_1) P(g → g_1) P(x_1 → x′) P(g_1 → g′),   (14)

where P(x → x_1) is the transition probability at Step 2 in the flower algorithm, and P(g → g_1) is the transition probability for the historical global best at this step. P(x_1 → x′) is the transition probability at Step 3, while P(g_1 → g′) is the transition probability of the historical global best at that step.

Proof: In the simplified flower algorithm, the state transition from y = (x, g) to y′ = (x′, g′) has only one middle transition state y_1 = (x_1, g_1), which means that x → x_1 → x′ and g → g_1 → g′ hold simultaneously. Then, the probability for T_y(y) = y′ is

P(T_y(y) = y′) = P(x → x_1) P(g → g_1) P(x_1 → x′) P(g_1 → g′).   (15)

From Eq. (5), the transition probability for x → x_1 is

P(x → x_1) = 1 / |γ L(λ)(g_* − x)|  if x_1 ∈ [x, x + γ L(λ)(g_* − x)],  and 0 otherwise.   (16)

Since x and g are higher-dimensional vectors, the mathematical operations here should be interpreted as vector operations, while |γ L(λ)(g_* − x)| means the volume of the corresponding hypercube.

The transition probability of the historical best solution is

P(g → g_1) = 1  if f(g_1) = min{f(x_1), f(g)},  and 0 otherwise.   (17)

From Step 3 in the simplified flower algorithm, we know that a random number r is compared with the discovery probability p. If r < p, then the position/solution of the pollen is changed randomly; otherwise, it remains unchanged. Therefore, the transition probability for x_1 → x′ is

P(x_1 → x′) = p  if x′ ≠ x_1,  and 1 − p  if x′ = x_1.   (18)

The transition probability for the historical best solution is

P(g_1 → g′) = 1  if f(g′) = min{f(x′), f(g_1)},  and 0 otherwise.   (19)
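The switching transition of Eq. (18) is easy to check empirically: changing a solution when r < p and keeping it otherwise makes the observed change frequency match p. A small, purely illustrative simulation (the function name and the resampling rule are our own choices):

```python
import random

def step3_switch(x, p, resample):
    """One application of the Step-3 switching rule: with probability p
    the position is changed (here by calling `resample`); otherwise it
    is kept unchanged, matching the transition probability in Eq. (18)."""
    if random.random() < p:
        return resample()
    return x

random.seed(0)
p = 0.8
trials = 100000
changed = 0
for _ in range(trials):
    x_new = step3_switch(0.0, p, resample=lambda: random.uniform(-1.0, 1.0))
    if x_new != 0.0:
        changed += 1
frequency = changed / trials
# The empirical change frequency should be close to p = 0.8.
```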

Definition 4. The group transition probability in the flower algorithm. The group transition probability can be defined as P(T_Q(q) = q′) for ∀q = (y_1, y_2, …, y_n) ∈ Q and ∀q′ = (y′_1, y′_2, …, y′_n) ∈ Q.

Theorem 2

In the simplified flower algorithm, the group transition probability from q to q′ in one step is

P(T_Q(q) = q′) = ∏_{i=1}^{n} P(T_y(y_i) = y′_i).   (20)

Proof: If the group states can be transferred from q to q′ in one step, then all the states will be transferred simultaneously. That is, T_y(y_1) = y′_1, …, T_y(y_n) = y′_n, and the group transition probability can be written as the joint probability

P(T_Q(q) = q′) = P(T_y(y_1) = y′_1) ⋯ P(T_y(y_n) = y′_n) = ∏_{i=1}^{n} P(T_y(y_i) = y′_i).   (21)
Theorem 3

The group state sequence {q(t) : t ≥ 0} in the flower algorithm is a finite homogeneous Markov chain.

Proof: First, let us assume that all search spaces for a stochastic algorithm are finite. Then, x and g in any pollen state y = (x, g) are also finite, so that the state space Y for flower pollen is finite. Since a group state q consists of n pollen states where n is positive and finite, the group state space Q is also finite.

From the previous theorems, we know that the group transition probability

P(T_Q(q(t)) = q(t + 1)) = ∏_{i=1}^{n} P(T_y(y_i(t)) = y_i(t + 1))   (22)

for ∀q(t), q(t + 1) ∈ Q is the group transition probability at iteration t. From Eq. (15), the transition probability for any pollen is

P(T_y(y(t)) = y(t + 1)) = P(x(t) → x_1) P(g(t) → g_1) P(x_1 → x(t + 1)) P(g_1 → g(t + 1)),   (23)

where P(x(t) → x_1), P(g(t) → g_1), P(x_1 → x(t + 1)) and P(g_1 → g(t + 1)) all depend only on x and g at time t. Therefore, P(T_Q(q(t)) = q(t + 1)) also depends only on the states at time t. Consequently, the group state sequence {q(t) : t ≥ 0} has the Markov chain property.

Finally, P(x → x_1), P(g → g_1), P(x_1 → x′) and P(g_1 → g′) are all independent of the iteration counter t, and so is P(T_y(y)). Thus, P(T_Q(q)) is also independent of t, which implies that this state sequence is also homogeneous. In summary, the group state sequence {q(t) : t ≥ 0} is a finite, homogeneous Markov chain.

4.3 Global Convergence of the Flower Algorithm

Definition 5. For the globally optimal solution g_b of an optimization (or minimization) problem ⟨Ω, f⟩, the optimal state set is defined as R = { y = (x, g) ∈ Y : f(g) = f(g_b) }.

Theorem 4

Given the position state sequence {y(t) : t ≥ 0} in the flower algorithm, the state set R corresponding to the optimal solutions forms a closed set on Y.

Proof: For ∀y_i ∈ R and ∀y_j ∉ R, the probability for T_y(y_i) = y_j is

P(T_y(y_i) = y_j) = P(x_i → x_1) P(g_i → g_1) P(x_1 → x_j) P(g_1 → g_j).

Since f(g_i) = f(g_b) for y_i ∈ R and y_j ∉ R, it holds that f(g_j) > f(g_i).

From Eqs. (17) and (19), we have P(g_i → g_1) P(g_1 → g_j) = 0, which leads to P(T_y(y_i) = y_j) = 0. This condition implies that R is closed on Y.

Definition 6. For the globally optimal solution g_b to an optimization problem ⟨Ω, f⟩, the optimal group state set can be defined as

H = { q = (y_1, y_2, …, y_n) ∈ Q : ∃ y_i ∈ R, 1 ≤ i ≤ n }.   (24)
Theorem 5

Given the group state sequence {q(t) : t ≥ 0} in the flower algorithm, the optimal group state set H is closed on the group state space Q.

Proof: From Eq. (20), the probability

P(T_Q(q_i) = q_j) = ∏_{k=1}^{n} P(T_y(y_{ik}) = y_{jk})   (25)

for ∀q_i ∈ H and ∀q_j ∉ H. Since q_i ∈ H and q_j ∉ H, in order to complete the transition T_Q(q_i) = q_j, there must exist at least one pollen state that transfers from the inside of R to the outside of R. That is, T_y(y_{ik}) = y_{jk} with y_{ik} ∈ R and y_{jk} ∉ R. From the previous theorem, we know that R is closed on Y, which means that P(T_y(y_{ik}) = y_{jk}) = 0. Therefore, P(T_Q(q_i) = q_j) = 0.

From the definition of a closed set, we can conclude that the optimal group state set H is also closed on Q.

Theorem 6

In the group state space Q for flower pollen, there does not exist a non-empty closed set B outside H, i.e. with B ∩ H = ∅.

Proof: Reductio ad absurdum. Assume that there exists a closed set B ⊂ Q so that B ∩ H = ∅, and consider ∀q_j ∈ B and ∀q_i ∈ H. Then Eq. (20) implies that

P(T_Q(q_j) = q_i) = ∏_{k=1}^{n} P(T_y(y_{jk}) = y_{ik}).   (26)

For each factor, it holds that P(T_y(y_{jk}) = y_{ik}) > 0. Since P(T_Q(q_j) = q_i) > 0, a transition from B into H is possible, implying that B is not closed, which contradicts the assumption. Therefore, there exists no non-empty closed set outside H in Q.

With the above definitions and results, it is straightforward to prove the following lemma:

Lemma 2. Assume that a Markov chain has a non-empty closed set E and that there does not exist another non-empty closed set S so that E ∩ S = ∅. Then lim_{t→∞} P(x_t = j) > 0 only if j ∈ E, and lim_{t→∞} P(x_t = j) = 0 only if j ∉ E.

In addition, we also have the following theorem:

Theorem 7

When the number of iterations approaches infinity, the group state sequence {q(t) : t ≥ 0} will converge to the optimal state/solution set H.

Proof: Using the previous two theorems and Lemma 2, it is straightforward to prove this theorem.

Now we are ready to state the global convergence theorem.

Theorem 8

The flower algorithm has guaranteed global convergence.

Proof: First, the iteration process in the flower algorithm always keeps and updates the current global best solution for the whole population, which ensures that the algorithm satisfies the first convergence condition outlined in the earlier section. From the previous theorem, the group state sequence will converge towards the optimal set H as the number of iterations becomes sufficiently large or approaches infinity. Therefore, the probability of not finding the globally optimal solution tends to 0, which satisfies the second convergence condition. Consequently, the flower algorithm has guaranteed global convergence towards its global optimality.

5 Global Convergence and Numerical Experiments

Many optimization algorithms are local search algorithms, though most metaheuristic algorithms tend to be suitable for global optimization. For multimodal objectives with many local modes, many algorithms may be trapped in a local optimum. As we have shown that the flower algorithm has good global convergence properties, it can be particularly suitable for global optimization. In order to show that the flower algorithm indeed converges well for various functions, we have chosen five different functions with diverse modes and properties.

The Ackley function [19] can be written as

f(x) = −20 exp[ −0.2 √( (1/d) ∑_{i=1}^{d} x_i² ) ] − exp[ (1/d) ∑_{i=1}^{d} cos(2π x_i) ] + 20 + e,   (27)

which has the global minimum f_min = 0 at x_* = (0, 0, …, 0).

The simplest of De Jong’s functions is the so-called sphere function

f(x) = ∑_{i=1}^{d} x_i²,   (28)

whose global minimum f_min = 0 is obviously at x_* = (0, 0, …, 0). This function is unimodal and convex.

Rosenbrock’s function

f(x) = ∑_{i=1}^{d−1} [ (x_i − 1)² + 100 (x_{i+1} − x_i²)² ]   (29)

has its global minimum f_min = 0 at x_* = (1, 1, …, 1). In the 2D case, it is often written as

f(x, y) = (x − 1)² + 100 (y − x²)²,   (30)

which is often referred to as the banana function.

Yang’s forest-like function

f(x) = ( ∑_{i=1}^{d} |x_i| ) exp( − ∑_{i=1}^{d} sin(x_i²) )   (31)

has the global minimum f_min = 0 at x_* = (0, 0, …, 0), though the objective at this point is non-smooth.

Zakharov’s function

f(x) = ∑_{i=1}^{d} x_i² + ( (1/2) ∑_{i=1}^{d} i x_i )² + ( (1/2) ∑_{i=1}^{d} i x_i )⁴   (32)

has its global minimum f_min = 0 at x_* = (0, 0, …, 0).
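The five benchmark functions above are straightforward to implement. The following sketch uses the common parameter choices for Ackley (a = 20, b = 0.2, c = 2π) and the usual forms of the other functions, which we assume match the paper's versions:

```python
import math

def ackley(x, a=20.0, b=0.2, c=2 * math.pi):   # Eq. (27), min 0 at the origin
    d = len(x)
    s1 = sum(v * v for v in x) / d
    s2 = sum(math.cos(c * v) for v in x) / d
    return -a * math.exp(-b * math.sqrt(s1)) - math.exp(s2) + a + math.e

def sphere(x):                                  # Eq. (28), min 0 at the origin
    return sum(v * v for v in x)

def rosenbrock(x):                              # Eq. (29), min 0 at (1, ..., 1)
    return sum((x[i] - 1) ** 2 + 100 * (x[i + 1] - x[i] ** 2) ** 2
               for i in range(len(x) - 1))

def yang_forest(x):                             # Eq. (31), min 0, non-smooth there
    return sum(abs(v) for v in x) * math.exp(-sum(math.sin(v * v) for v in x))

def zakharov(x):                                # Eq. (32), min 0 at the origin
    s = sum(0.5 * (i + 1) * v for i, v in enumerate(x))
    return sum(v * v for v in x) + s ** 2 + s ** 4
```

Any of these can be passed directly as the objective to an FPA implementation, since each simply maps a coordinate list to a scalar.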

Figure 1: Convergence of five test functions using the flower algorithm.

By using the flower algorithm with a fixed population size, switch probability p and a fixed number of iterations, we can find the global minima for all the above five functions. The convergence graphs for all these functions are summarized and shown in Fig. 1. As we can see, they all converge quickly in an almost exponential manner, except for Rosenbrock’s function, which has a narrow valley. Once the search has gone through part of the valley during the iterations, its convergence becomes exponential with a steeper slope, though the rate of convergence is still lower compared with those for the other functions.

Though the theoretical analysis proves that FPA will converge, it is worth pointing out that the rate of convergence is still influenced by both the algorithmic structure and its parameter settings. The convergence analysis does not provide much information about how quickly the algorithm may converge for a given problem; consequently, parameter tuning may be needed in practice to find the parameter settings that give a higher convergence rate.

6 Conclusions

The flower pollination algorithm is an efficient optimization algorithm with a wide range of applications. We have provided the first results on the convergence analysis of this algorithm. By using Markov chain models, we have proved that the flower pollination algorithm has guaranteed global convergence, which lays the theoretical foundation for this algorithm and shows why it is efficient in applications. Then, we have used a set of five different functions with diverse properties to show that FPA can indeed converge very quickly in practice.

It is worth pointing out that the current results are mainly for the standard flower pollination algorithm. It will be useful if further research can focus on the extension of the proposed methodology to analyze the convergence of the full flower pollination algorithm and its variants. Ultimately, it can be expected that the proposed method can be used to analyze other metaheuristic algorithms as well.

References

  • [1] Kennedy J. and Eberhart R.C., (1995). Particle swarm optimization, in: Proc. of IEEE International Conference on Neural Networks, Piscataway, NJ, pp. 1942–1948.
  • [2] Yang X.S., (2014). Cuckoo Search and Firefly Algorithm: Theory and Applications, Studies in Computational Intelligence, vol. 516, Heidelberg: Springer.
  • [3] Clerc M. and Kennedy J., (2002). The particle swarm - explosion, stability, and convergence in a multidimensional complex space, IEEE Trans. Evolutionary Computation, 6(1), 58–73.
  • [4] Jiang M., Luo Y.P., and Yang S.Y., (2007). Stochastic convergence analysis and parameter selection of the standard particle swarm optimization algorithm, Information Processing Letters, 102(1), 8-16.
  • [5] Ren Z.H., Wang J., and Gao Y.L., (2011). The global convergence analysis of particle swarm optimization algorithm based on Markov chain, Control Theory and Applications (in Chinese), 28(4), pp. 462–466.
  • [6] Yang X.S., (2011). Review of meta-heuristics and generalised evolutionary walk algorithm, Int. J. Bio-Inspired Computation, 3(2), 77-84.
  • [7] Yang X.S., (2012). Flower pollination algorithm for global optimization, in: Unconventional Computation and Natural Computation, Lecture Notes in Computer Science, Vol. 7445, pp. 240-249.
  • [8] Yang X.S., Karamanoglu M. and He X.S., (2013). Multi-objective flower algorithm for optimization, Procedia Computer Science, 18(1), pp. 861-868.
  • [9] Yang X.S., Karamanoglu M. and He X.S., (2014). Flower pollination algorithm: A novel approach for multiobjective optimization, Engineering Optimization, 46(9), 1222-1237.
  • [10] Waser N.W., (1986). Flower constancy: definition, cause and measurement, The American Naturalist, 127(5), 596-603.
  • [11] Pavlyukevich I., (2007). Lévy flights, non-local search and simulated annealing, J. Computational Physics, 226(9), 1830-1844.
  • [12] Yang X.S., Deb S., Loomes M. and Karamanoglu M., (2013). A framework for self-tuning optimization algorithm, Neural Computing and Applications, 23(7-8), 2051-2057.
  • [13] Fister I., Yang X.S., Brest J., Fister Jr. I., (2013). Modified firefly algorithm using quaternion representation, Expert Systems with Applications, 40(18), 7220-7230.
  • [14] Aytug H., Bhattacharrya S., Koehler G.J., (1996). A Markov chain analysis of genetic algorithms with power of 2 cardinality alphabets, Euro. J. Operational Research, 96(1), 195–201.
  • [15] Greenhalgh D. and Marshal S., (2000). Convergence criteria for genetic algorithms, SIAM J. Computing, 30(2), 269-282.
  • [16] Gutjahr W.J., (2010). Convergence Analysis of Metaheuristics, Annals of Information Systems, 10(1), 159-187.
  • [17] Villalobos-Arias M., Coello Coello C.A. and Hernández-Lerma O., (2005). Asymptotic convergence of metaheuristics for multiobjective optimization problems, Soft Computing, 10(8) 1001-1005.
  • [18] Wang F., He X.S., Wang Y., Yang S.M., (2012). Markov model and convergence analysis of cuckoo search algorithm, Computer Engineering, 38(11), 180-185.
  • [19] Ackley D.H., (1987). A Connectionist Machine for Genetic Hillclimbing, Kluwer Academic Publishers.
  • [20] Chiroma H., Shuib N.L.M., Muaz S.A., Abubakar A.I., Ila L.B., Maitama J.Z., (2015). A review of the application of bio-inspired flower pollination algorithm, Procedia Computer Science, 62, 435-441.
  • [21] Dubey H.M., Pandit M., Panigrahi B.K., (2015). Hybrid flower pollination algorithm with time-varying fuzzy selection mechanism for wind integrated multi-objective dynamic economic dispatch, Renewable Energy, 83, 188-202.
  • [22] Dubey H.M., Pandit M., Panigrahi B.K., (2015). A biologically inspired modified flower pollination algorithm for solving economic dispatch problems in modern power systems, Cognitive Computation, 7(5), 594-608.
  • [23] Bekdas G., Nigdeli S.M., Yang X.S., (2015). Sizing optimization of truss structures using flower pollination algorithm, Applied Soft Computing, 37, 322-331.
  • [24] Alam D.F., Yousri D.A., Eteiba M.B., (2015). Flower pollination algorithm based solar PV parameter estimation, Energy Conversion and Management, 101, 410-420.
  • [25] Sayed S.A., Nabil E., Badr A., (2016). A binary clonal flower pollination algorithm for feature selection, Pattern Recognition Letters, 77(1), 21-27.
  • [26] Nabil E., (2016). A modified flower pollination algorithm for global optimization, Expert Systems with Applications, 57(1), 192-203.
  • [27] Velamuri S., Sreejith S., Ponnambalam P., (2016). Static economic dispatch incorporating wind farm using flower pollination algorithm, Perspectives in Science, 8, 260-262.
  • [28] Rodrigues D., Silva G.F.A., Papa J.P., Marana A.N., Yang X.S., (2016). EEG-based person identification through binary flower pollination algorithm, Expert Systems with Applications, 62(1), 81-90.
  • [29] Zhou Y.Q., Wang R., Luo Q.F., (2016). Elite opposition-based flower pollination algorithm, Neurocomputing, 188, 294-310.
  • [30] Mahdad B. and Srairi K., (2016). Security constrained optimal power flow solution using new adaptive partitioning flower pollination algorithm, Applied Soft Computing, 46, 501-522.
  • [31] Abdelaziz A.Y., Ali E.S., Abd Elazim S.M., (2016). Flower pollination algorithm and loss sensitivity factors for optimal sizing and placement of capacitors in radial distribution systems, Int. J. Electrical Power and Energy Systems, 78(1), 207-214.
  • [32] Salgotra R. and Singh U., (2017). Application of mutation operators to flower pollination algorithm, Expert Systems with Applications, 79(1), 112-129.