I Introduction
Some evolutionary heuristics can be interpreted as discrete dynamical systems governing the movements of a set of agents (or particles) in the search space. This is well known for Particle Swarm Optimization (PSO), where the variation of the velocity of each particle in the swarm is defined by a control term made of a social component and an individual (or cognitive) component
[1, 2, 3, 4]. The social component can be interpreted as a behavior dictated by the knowledge acquired by the whole swarm of particles, while the cognitive component can be interpreted as a behavior dictated by the knowledge acquired by each individual particle. The same principle can be generalized and extended to other evolutionary heuristics such as Differential Evolution (DE) [5]. The analysis of a discrete dynamical system governed by the heuristics generating the behavior of individuals in DE, and in Evolutionary Computation in general, would allow for a number of considerations on the evolution of the search process and therefore on the convergence properties of a global optimization algorithm. In particular, four outcomes of the search process are possible:

Divergence to infinity. In this case the discrete dynamical system is unstable, and the global optimization algorithm is not convergent.

Convergence to a fixed point in the search space. In this case the global optimization algorithm is simply convergent and we can define a stopping criterion. Once the search is stopped we can define a restart procedure. Depending on the convergence profile, the use of a restart procedure can be more or less efficient.

Convergence to a limit cycle, in which the same points in the search space are resampled periodically. Even in this case we can define a stopping criterion and a restart procedure.

Convergence to a strange (chaotic) attractor. In this case a stopping criterion cannot be clearly defined because different points are sampled at different iterations.
All outcomes are generally interesting to understand the evolution process. Identifying under which conditions divergence occurs is important to properly design an algorithm, or to define the appropriate settings, in particular in the case of automatic adaptation of some key parameters. Divergence can be seen as the opposite of the intensification process and can increase diversity and exploration. The convergence to a chaotic attractor can represent an interesting case of dense random sampling of an extended region. When this happens, the algorithm repeatedly samples the same region but never resamples the same points. This mechanism could be useful to reconstruct an extended set with a specific property (e.g., the set of points whose objective function value is below a given threshold).
The main interest, in this paper, is in the most commonly desired outcome: the convergence to a fixed point (see for example [6], in the context of global root finding, for methods to eliminate undesirable behaviors). The convergence to a fixed point, even if different from an optimum, can be used to induce a restart of the search process with minimum waste of resources (as will be demonstrated in this paper).
The analysis of the dynamical properties of the dynamical system associated with a particular heuristic can give some insights into the balance between global and local exploration and the volume of the search space that is covered during the search. Extensive work on the dynamics of Genetic Algorithms and general Evolutionary Algorithms can be found in the studies of Prügel-Bennett et al.
[7, 8] and Beyer [9]. Non-evolutionary examples of the use of dynamical system theory to derive effective global optimization schemes can be found in the works of Sertl and Dellnitz [10]. In this paper we propose a discrete dynamical system, or discrete map, governing the evolution of a population of individuals, each individual being a point in a multidimensional search domain. From the discrete dynamical system we derive a variant of Differential Evolution and we study its convergence properties. In particular, it is proven that, under some assumptions, the dynamics of the proposed variant of DE converges to a fixed point or to a level set.
Note that the proofs proposed in this paper consider the whole discrete dynamical system associated with the evolution of the entire population.
The theoretical results are then used to derive a novel algorithm that performs better than the DE strategies DE/rand/1/bin and DE/best/1/bin [11] on some difficult space trajectory design problems. The novel algorithm is based on a hybridization of the proposed DE variant and the logic behind Monotonic Basin Hopping (MBH) [12].
The paper is organized as follows: after introducing the dynamics of a population of agents, three convergence theorems are demonstrated. Then an inflationary DE algorithm based on a hybridization between a variant of DE and MBH is derived. A description of the test cases and testing procedure follows. After presenting the results of the tests, the search space is analyzed to derive some considerations on the general applicability of the results.
II Agent Dynamics
In this section we start by defining a generic discrete dynamical system governing the motion of an agent in a generic search space. From this general description we derive a variant of what Storn and Price defined as the basic DE strategy [11].
If we consider that a candidate solution vector in a generic multidimensional box,
(1) 
is associated to an agent, then the heuristic governing its motion in the box can be written as the following discrete dynamical system:
(2) 
with
(3) 
The function is a selection operator that can be either 1, if the candidate point is accepted, or 0 if it is not accepted. Different evolutionary algorithms have different ways to define it.
The control defines the next point that will be sampled for each one of the existing points in the solution space; the state vectors define the state of the agent in the solution space at the current stage and at the next stage of the search process; and a viscosity, or dissipative, coefficient damps the process. Eq. (3) represents a limit sphere around the agent at the current stage of the search process. Different evolutionary algorithms define these terms in different ways (see for example the dynamics of PSO [1]). In this paper we will focus on the way they are defined in Differential Evolution.
Now consider the control defined by:
(4) 
where the three indices are integer numbers randomly chosen in the interval of indices of the population, and e is a mask containing a random number of 0's and 1's according to:
(5) 
with the number of nonzero components randomly chosen between 1 and the dimension of the solution vector, so that e contains at least one nonzero component, a random number taken from a uniform distribution, and a constant parameter. The product in Eq. (4) has to be intended componentwise. The index can be chosen at random (this option will be called exploration strategy in the remainder of this paper) or can be the index of the best solution vector (this option will be called convergence strategy in the remainder of this paper). Selecting the best solution vector or a random one changes significantly the convergence speed of the algorithm. The selection function can be either 1 or 0 depending on the relative value of the objective function of the new candidate individual generated with Eq. (4) with respect to the one of the current individual. In other words, the selection function can be expressed as:
(6) 
Note that in this paper we consider minimization problems in which the lowest value of the objective function is sought.
For a particular choice of the coefficients, map (2) reduces to:
(7) 
In the general case the indices can assume any value. However, if the three indices are restricted to be mutually different, map (7), with the control defined in (4), the mask e defined in (5) and the selection function defined in (6), is the basic DE strategy DE/rand/1/bin defined by Storn and Price in [11]. If the first index is taken as the index of the best individual, and the three indices are mutually different, then one obtains the DE strategy DE/best/1/bin. In fact, if (4) is substituted in (7) one gets:
(8) 
Now, if the selection function does not accept the candidate point, then (8) reduces to the identity (i.e., the state of the agent remains unchanged). If the candidate point is accepted instead, then the new location of the agent is:
(9) 
The quantity in square brackets is the basic DE strategy DE/rand/1/bin or the variant DE/best/1/bin, respectively for a random base index or for the index of the best individual. The mask e, together with its complement, represents the crossover operator in the basic DE strategy [11], where the complement is obtained by subtracting e from a vector of 1's, and both products are intended componentwise. In fact, the candidate point is made of the components of the current individual that correspond to the zero elements of e, and of the components of the mutated vector that correspond to the nonzero elements of e. If the assumption of mutually different indices is dropped, then map (7) can be seen as a further variant of the basic DE strategy.
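As an illustration, one generation of the DE/rand/1/bin strategy described above can be sketched as follows. This is a minimal sketch: the parameter values, the population size, and the function names are illustrative rather than taken from the paper, and the mask is built with the common binomial rule instead of the exact construction of Eq. (5).

```python
import numpy as np

rng = np.random.default_rng(42)

def de_rand_1_bin_step(pop, f, F=0.75, CR=0.8):
    """One generation of DE/rand/1/bin (a sketch with illustrative settings)."""
    n_pop, dim = pop.shape
    vals = np.array([f(x) for x in pop])
    new_pop = pop.copy()
    for i in range(n_pop):
        # three mutually different indices, also different from i
        i1, i2, i3 = rng.choice([j for j in range(n_pop) if j != i],
                                size=3, replace=False)
        mutant = pop[i1] + F * (pop[i2] - pop[i3])
        # binomial crossover mask e with at least one nonzero component
        e = rng.random(dim) < CR
        e[rng.integers(dim)] = True
        trial = np.where(e, mutant, pop[i])
        # selection function S: keep the trial only if it does not worsen f
        if f(trial) <= vals[i]:
            new_pop[i] = trial
    return new_pop
```

Replacing `i1` with the index of the current best individual would yield the DE/best/1/bin variant.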
In compact matrix form for the entire population, map (7) can be written as:
(10) 
with each line of the matrix being the point associated with the corresponding individual of the population.
To be more precise, if a candidate point falls outside the box, then every component violating the boundaries is projected back inside by picking a new value from a random uniform distribution over the corresponding interval. The interest is now in the properties of map (10). We start by observing that the global minimizer is a fixed point for map (7), since no other point has a lower function value.
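The boundary handling just described can be sketched as follows (a minimal sketch; the function and array names are assumptions, and only the violating components are re-sampled):

```python
import numpy as np

rng = np.random.default_rng(1)

def project_back(x, lower, upper):
    """Re-sample uniformly, inside its own interval, every component of x
    that violates the box bounds (the other components are left untouched)."""
    x = np.asarray(x, dtype=float).copy()
    out = (x < lower) | (x > upper)
    x[out] = lower[out] + rng.random(int(out.sum())) * (upper[out] - lower[out])
    return x
```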
Then, let us assume that at every iteration we can find two connected subsets of the search space whose union contains the population, and let us also assume that the population remains within that union at consecutive iterations. If a point is the lowest local minimum in the union of the two subsets, then it is a fixed point for map (7). In fact, every point generated by map (7) must remain in the union of the two subsets.
Moreover, under the above assumptions the reciprocal distance of the individuals cannot grow indefinitely under map (7); therefore the map cannot be divergent.
Finally, a matrix whose lines are replications of a unique point is always a fixed point for map (10).
Now one can consider two variants of Differential Evolution, in which one pair of the indices in (4) is allowed to coincide while both remain different from the third. In these cases two interesting results can be proven. First of all, we prove that if two of the indices can be equal, then the population can collapse to a single point.
Theorem II.1
If, at every iteration, two of the indices in (4) are equal with strictly positive probability, and the set of best points within the population is made up of a single point (multiple copies of which possibly exist), then the population collapses to a single point with probability 1 as the number of iterations goes to infinity, under the effect of the discrete dynamical system (7).
Proof.
If two of the indices are equal with strictly positive probability, then map (7) at each iteration can generate, with strictly positive probability, a displacement for each member of the population. This happens if, for each member, the following event, whose probability is strictly positive, occurs
(11) 
Then, at each iteration we have a strictly positive probability that two or more individuals collapse into the single best point, and as the number of iterations goes to infinity the whole population collapses to a single point with probability one. ∎
After a collapse, the population cannot progress further and needs to be restarted. It is important to evaluate the probability of a total collapse: if the collapse is progressive the population can keep on exploring, but if the collapse is instantaneous the evolution ceases. Let us analyze the worst case, in which all the individuals collapse onto the best one at the same iteration, so that the collapse is total and instantaneous. The probability of this event is the product of the probabilities of the individual collapse events, and is therefore strictly positive but small. From the complementary probability one obtains the probability of having at least one total collapse after a given number of generations, which tends to one as the number of generations grows. Therefore, allowing the indices to coincide introduces the following interesting property: if the population is stagnating, and the assumptions of Theorem II.1 hold true, eventually there will be a total collapse and the population can be restarted with no risk of interrupting the evolutionary process.
If two of the indices can be equal with strictly positive probability but both are always different from the third, then convergence to a fixed point can be guaranteed if the function is strictly quasiconvex [13] over a compact and convex search space. In other words, under the given assumptions, the population will converge to a single point. We immediately remark that such a point is not necessarily a local minimizer of the problem.
Lemma II.2
Let the objective function be continuous and strictly quasiconvex on a compact and convex set which is not a singleton. Then, the following minimization problem has a strictly positive minimum value for a small enough radius:
(12) 
Proof.
Since the function is strictly quasiconvex, the objective of problem (12) is strictly positive at every feasible point; furthermore, the feasible region is nonempty (if the radius is small enough and the set is not a singleton) and compact. Therefore, according to Weierstrass' theorem, the objective attains its minimum value over the feasible region. Denoting a global minimum point of the problem, we have
(13) 
∎
Theorem II.3
Assume that two of the indices in (4) can be equal. Given a function that is strictly quasiconvex over the compact and convex search space, and an initial population, the population converges to a single point with probability one as the number of iterations goes to infinity.
Proof.
By contradiction let us assume that we do not have convergence to a fixed point. Then, it must hold that:
(14) 
At every generation the map can generate, with a strictly positive probability, a displacement of the worse of the two individuals with the maximal reciprocal distance, such that the candidate point lies strictly between them. Since the function is strictly quasiconvex, the candidate point is certainly better than the worse of the two and, therefore, is accepted by the selection function. Now, in view of Eq. (14) and of Lemma II.2, we must have that
(15) 
Such a reduction will occur with probability one infinitely often, and consequently the function value of at least one individual will be, with probability one, infinitely often reduced by a fixed positive quantity. But in this way the value of the objective function of such an individual would diverge to −∞, which is a contradiction because the function is bounded from below over the compact set. ∎
If a local minimum satisfies some regularity assumptions (e.g., the Hessian at the local minimum is positive definite), then we can always define a neighborhood such that: (i) map (7) will be unable to accept points outside the neighborhood; (ii) the function is strictly convex within the neighborhood. Therefore, if at some iteration the population belongs to such a neighborhood, we can guarantee that map (10) will certainly converge to a fixed point made up of replications of a single point belonging to the neighborhood. For general functions, we cannot always guarantee that the population will converge to a fixed point, but we can show that the maximum difference between the objective function values in the population converges to 0, i.e., the points in the population tend to belong to the same level set. The proof is closely related to that of Theorem II.1 and we still need to assume that two of the indices can be equal.
Theorem II.4
Assume that two of the indices in (4) can be equal. Given a function bounded from below over the search space, and an initial population, the following holds
(16) 
as the number of iterations goes to infinity, with probability one.
Proof.
Let us denote the set of best points in the population, i.e.,
(17) 
At each iteration there is a strictly positive probability that the whole population will be reduced to the set of best points at the next iteration. To show this, it is enough to substitute the condition stated in (11) with an analogous one (basically, with respect to Theorem II.1, we only drop the requirement that at each iteration the best value of the population is attained at a single point).
In other words, there is a strictly positive probability for the event that the population at a given iteration will be made up of points all with the same objective function value. Therefore, such an event will occur infinitely often with probability one. Let us denote the infinite subsequence of iterations at which the event is true, and let
(18) 
be the difference between the objective function values at two consecutive iterations of the subsequence (note that, since at those iterations the objective function values are all equal, any individual can be employed in the above definition). It holds that, for all iterations of the subsequence,
(19) 
Therefore, if we are able to prove that the difference converges to zero along the subsequence, then we can also prove the result of our theorem. Let us assume by contradiction that it does not. Then, there will exist a strictly positive quantity by which the objective function value is reduced infinitely many times. But this would lead to function values diverging to −∞ and, consequently, to a contradiction. ∎
As a consequence of these results, for the choice of non-mutually different indices, a possible stopping criterion for the dynamics in Eq. (7) would be to stop when the difference between the function values in the population drops below a given threshold. However, this could cause a premature halt of the evolutionary process. Indeed, even if at some iteration the function value at all points of the population is equal, this does not necessarily mean that the algorithm will be unable to make further progress (unless all points in the population are multiple copies of a unique point). Therefore, since the evolution definitely ceases when the population contracts to a single point, we can alternatively use as a stopping criterion the fact that the maximum distance between points in the population drops below a given threshold.
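The two contraction-based stopping criteria discussed above can be sketched as follows (the threshold values and function names are illustrative assumptions):

```python
import numpy as np

def max_pairwise_distance(pop):
    """Maximum Euclidean distance between any two individuals."""
    diff = pop[:, None, :] - pop[None, :, :]
    return float(np.sqrt((diff ** 2).sum(axis=-1)).max())

def should_stop(pop, vals, tol_f=1e-9, tol_x=1e-6):
    """Stop when either the spread of objective values or the maximum
    distance between individuals drops below a threshold."""
    return (max(vals) - min(vals) < tol_f) or (max_pairwise_distance(pop) < tol_x)
```

The distance-based test is the safer of the two, since equal function values alone do not imply that the population has collapsed.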
It is important to observe that the contraction of the population does not depend on whether the function is minimized or maximized, but depends only on the definition of the selection function and on whether the function is bounded or unbounded.
Fig. 1: a) max and min distance of the individuals in the population from the origin; b) eigenvalues versus the number of evolutionary iterations.
To further verify the contraction properties of the dynamics in Eq. (7) one can look at the eigenvalues of the matrix .
If the population cannot diverge, the eigenvalues cannot always have a norm greater than 1. Furthermore, according to Theorem II.3, if the function is strictly quasiconvex, the population converges to a single point, which implies that map (7) is a contraction and therefore the eigenvalues should have a norm on average lower than 1. This can be illustrated with the following test: consider a population of 8 individuals in a box enclosing the minimum of a paraboloid with the minimum at the origin. For parameter values 1.0 and 0.8, we compute, at each step, the distances of the closest and farthest individuals from the local minimum and the eigenvalues of the transition matrix. Fig. 1 shows the behavior of the eigenvalues and of the distance from the origin. From the figure we can see that, for all iterations, the norm of every eigenvalue stays at or below 1, except for one eigenvalue at iteration 12. However, since every expansion is not accepted by the selection function, and at each iteration a number of eigenvalues have modulus lower than 1, the population contracts, as represented in Fig. 1(a).
If multiple minima are contained in , then it can be experimentally verified that the population contracts to a number of clusters initially converging to a number of local minima and eventually to the lowest among all the identified local minima.
The local convergence properties of map (7) suggest its hybridization with the heuristic implemented in Monotonic Basin Hopping (MBH).
MBH, first introduced in [14] in the context of molecular conformation problems, is a simple but effective global optimization method. At each iteration MBH: (i) generates a sample point within a neighborhood of given size of the current local minimum (e.g., by a random displacement of each coordinate of the current local minimum); (ii) starts a local search from the newly generated sample point; (iii) moves to the newly detected local minimum only if its function value is better than the function value at the current local minimum. The initial local minimum is usually randomly generated within the feasible region. Moreover, if no improvement is observed for a predetermined number of sample points, a restart mechanism might be activated. A proper definition of the neighborhood size is essential for the performance of MBH: too small a size would not allow MBH to escape from the current local minimum, while too large a value would make the search degenerate into a completely random one. The local search performed at each iteration can be viewed as a dynamical system where the evolution of the system at each iteration is controlled by some map. Under suitable assumptions, the system converges to a fixed point. For instance, if the function is convex and sufficiently smooth in a small enough domain containing a local minimum which satisfies some regularity conditions, Newton's map converges quadratically to a single fixed point (the local minimum) within the domain. This observation leads to the above mentioned hybridization of DE with MBH: the dynamical system corresponding to a local search is replaced by the one corresponding to DE. More precisely, if map (7), either for a cluster or for the entire population, contracts, we can define a bubble around the best point within the cluster and reinitialize a subpopulation in it.
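The MBH loop just described can be sketched as follows (the neighborhood size, the iteration budget, and the use of SciPy's local optimizer are illustrative assumptions, not the paper's exact settings):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

def mbh(f, x0, lower, upper, delta=0.5, max_iter=100):
    """Monotonic Basin Hopping sketch: perturb the current local minimum
    inside a box of half-width delta, run a local search from the sample,
    and move only if the new local minimum improves on the current one."""
    bounds = list(zip(lower, upper))
    best = minimize(f, x0, bounds=bounds).x
    best_val = f(best)
    for _ in range(max_iter):
        sample = np.clip(best + rng.uniform(-delta, delta, best.size),
                         lower, upper)
        cand = minimize(f, sample, bounds=bounds).x
        if f(cand) < best_val:          # monotonic acceptance rule
            best, best_val = cand, f(cand)
    return best, best_val
```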
Such an operation is performed as soon as the maximum distance among the elements in the cluster collapses below a given fraction of the maximum distance recorded during the convergence of the map (10).
In order to speed up convergence, the best solution of the cluster is refined through a local search started at it, leading to a local minimum, which is saved in an archive. The bubble is defined, similarly to the neighborhood of MBH, as a neighborhood of given size surrounding the local minimum. The overall process leads to Algorithm 1. Note that the contraction of the population, measured for example by the maximum distance among its elements, is a stopping criterion that does not depend explicitly on the value of the objective function but on the contractive properties of the map in Eq. (7). Some remarks follow.

Convergence of DE to a single point cannot always be guaranteed, and consequently we cannot always guarantee that the contraction condition will be satisfied at some iteration. Therefore, in order to take into account this possibility, we need to introduce a further stopping criterion for DE, such as a maximum number of iterations. We point out, however, that such an alternative stopping criterion has never become active in our experiments.

Even when DE converges to a single point, this is not guaranteed to be a local minimum. For this reason we always refine the best observed solution through a local search.

If the search space is characterized by a single funnel structure [15], the restart of the population in the bubble allows the algorithm to move towards the global optimum by progressively jumping from one minimum to a better one. On the other hand, if multiple funnels or multiple isolated minima exist, a simple restart of the population inside a bubble might not be sufficient to avoid stagnation. A way to overcome this problem is to use global resampling: when the value of the best solution does not change for a predefined number of iterations, the population is restarted. The restart procedure collects the solutions in the archive into clusters, each with its barycenter; then each agent of the new population is generated sufficiently far from every barycenter. Note that a restart mechanism for DE was previously proposed also by Peng et al. [16] in a variant of the adaptive Differential Evolution JADE. However, in the work of Peng et al. the restart criterion and restart strategy are substantially different from the ones proposed here. For example, although we record the local minima in an external archive, we do not prevent the algorithm from searching in the surroundings of the recorded minima. On the contrary, we combine a local restart in a bubble surrounding the final point returned by DE, according to the heuristics of MBH, with a more global restart sampling outside the bubbles. A complete review of the existing variants of Differential Evolution can be found in the work of Neri et al. [17]. Other restart mechanisms have recently been adopted to improve other evolutionary algorithms such as G-CMA-ES [18] or hybrid methods [19], but also in these cases the restart criterion and restart strategy are substantially different from the ones proposed here.
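The local restart in a bubble and the more global restart sampling outside the bubbles, described in this remark, can be sketched as follows (the function names, the rejection-sampling scheme, and the radius parameter are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)

def restart_in_bubble(x_best, delta, lower, upper, n_pop):
    """Local restart: new population sampled in a box of half-width delta
    around the best point returned by DE, clipped to the search box."""
    pop = x_best + rng.uniform(-delta, delta, size=(n_pop, x_best.size))
    return np.clip(pop, lower, upper)

def restart_outside_bubbles(archive, delta, lower, upper, n_pop):
    """Global restart: rejection-sample points farther than delta from
    every archived local minimum, so already-explored basins are avoided."""
    pop = []
    while len(pop) < n_pop:
        x = lower + rng.random(lower.size) * (upper - lower)
        if all(np.linalg.norm(x - a) > delta for a in archive):
            pop.append(x)
    return np.array(pop)
```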

A question, mainly of theoretical interest, is about convergence. General results stating conditions under which convergence to the global minimum (with probability 1) is guaranteed can be found, e.g., in [20] and [21]. Such conditions are not fulfilled by standard DE, while IDEA fulfills them if the parameter employed in the restart mechanism is small enough (if too large, some portions of the feasible region, possibly including the global minimum, might remain unexplored). We point out that, although in practice we are not interested in the behavior of an algorithm over an infinite time horizon, global convergence justifies rerunning the search process with an increased number of function evaluations should the results be unsatisfactory. As we will show in the remainder of this paper, IDEA produces performance steadily increasing with the number of function evaluations without the need to change its settings.
III Trajectory Model and Problem Formulation
The modified differential evolution algorithm derived in Section II is applied to the solution of four real world cases. The four cases are all multigravity assist (MGA) trajectory design problems, three of them with deep space manoeuvres (DSM) and one with no DSM’s. In this section we describe the trajectory model and we formulate the global optimization problem that will be tackled through Algorithm 1.
III-A Trajectory Model with no DSM's
Multigravity assist transfers with no deep space maneuvers can be modeled with a sequence of conic arcs connecting a number of planets. The first one is the departure planet, the last one is the destination planet, and at all the intermediate ones the spacecraft performs a gravity assist maneuver. Given the sequence of planets, each conic arc is computed as the solution of a Lambert's problem [22], given the departure time from one planet and the arrival time at the next one. The solution of the Lambert's problems yields the required incoming and outgoing velocities at each swingby planet (see Fig. 2). The swingby is modeled through a linked-conic approximation with powered maneuvers [23], i.e., the mismatch between the required outgoing velocity and the achievable outgoing velocity is compensated through a maneuver at the pericenter of the gravity assist hyperbola. The whole trajectory is completely defined by the departure time and the transfer time of each leg.
The normalized radius of the pericenter of each swingby hyperbola is derived a posteriori once each powered swingby manoeuvre is computed. Thus, a constraint on each pericenter radius has to be introduced during the search for an optimal solution. In order to take into account this constraint, the objective function is augmented with the weighted violation of the constraints:
(20) 
for a solution vector:
(21) 
III-B Trajectory Model with DSM's
A general MGA-DSM trajectory can be modeled through a sequence of legs connecting celestial bodies (Fig. 3) [24]. In particular, if all celestial bodies are planets, each leg begins and ends with an encounter with a planet. Each leg is made of two conic arcs: the first, propagated analytically forward in time, ends where the second, the solution of a Lambert's problem, begins. The two arcs have a discontinuity in the absolute heliocentric velocity at their matching point. Each DSM is computed as the vector difference between the velocities along the two conic arcs at the matching point. Given the transfer time of each leg and a dimensionless parameter, the matching point is located at the corresponding fraction of the leg duration. The relative velocity vector at the departure planet can be a design parameter and is expressed as:
(22) 
with the two angles respectively representing the declination and the right ascension with respect to a local reference frame, with one axis aligned with the velocity vector of the planet, another axis normal to the orbital plane of the planet, and the third axis completing the coordinate frame. This choice allows for an easy constraint on the escape velocity and asymptote direction, while adding the possibility of having a deep space maneuver in the first arc after launch. This is often the case when the escape velocity must be fixed due to the launcher capability or to the requirement of a resonant swingby of the Earth (Earth-Earth transfers). In order to have a uniform distribution of random points on the surface of the sphere defining all the possible launch directions, the following transformation has been applied:
(23) 
It results that the sphere surface is uniformly sampled when a uniform distribution of points for the transformed variables is chosen. Once the heliocentric velocity at the beginning of the leg, which can be the result of a swingby maneuver or the asymptotic velocity after launch, is computed, the trajectory is analytically propagated until the time of the deep space maneuver. The second arc of the leg is then solved through a Lambert's algorithm, from the Cartesian position of the deep space maneuver to the position of the target planet of the phase, for the remaining time of flight. Two subsequent legs are then joined together with a gravity assist manoeuvre. The effect of the gravity of a planet is to instantaneously change the velocity vector of the spacecraft.
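A standard way to obtain such a uniform distribution of launch directions is to draw the right ascension uniformly and the sine of the declination uniformly; the sketch below assumes that construction (the paper's exact transformation is Eq. (23), and the function and variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_launch_velocity(v_inf):
    """Draw a launch asymptote direction uniformly distributed on the
    sphere and scale it by the escape-velocity modulus v_inf."""
    alpha = 2.0 * np.pi * rng.random()           # right ascension, uniform
    delta = np.arcsin(2.0 * rng.random() - 1.0)  # declination, sin(delta) uniform
    return v_inf * np.array([np.cos(delta) * np.cos(alpha),
                             np.cos(delta) * np.sin(alpha),
                             np.sin(delta)])
```

Sampling the declination itself uniformly would instead cluster points near the poles, which is what the transformation avoids.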
The relative incoming velocity vector and the outgoing velocity vector at the swingby planet have the same modulus but different directions; therefore the heliocentric outgoing velocity turns out to be different from the heliocentric incoming one. In the linked-conic model, the spacecraft is assumed to follow a hyperbolic trajectory with respect to the swingby planet. The angular difference between the incoming relative velocity and the outgoing one depends on the modulus of the incoming velocity and on the pericenter radius. Both the relative incoming and outgoing velocities belong to the plane of the hyperbola. However, in the linked-conic approximation, the maneuver is assumed to occur at the planet, where the planet is a point mass coinciding with its center of mass. Therefore, given the incoming velocity vector, one angle is required to define the attitude of the plane of the hyperbola. Although there are different possible choices for this attitude angle, the one proposed in Ref. 7 has been adopted (see Fig. 4): the angle between the vector normal to the hyperbola plane and the reference vector normal to the plane containing the incoming relative velocity and the velocity of the planet.
Given the number of legs of the trajectory , the complete solution vector for this model is:
(24)  
where the first component is the departure date. Now, the design of a multigravity assist transfer can be transcribed into a general nonlinear programming problem, with simple box constraints, of the form:
(25) 
One of the appealing aspects of this formulation is its solvability through a general global search method for box constrained problems. Depending on the kind of problem under study, the objective function can be defined in different ways. Here we choose to focus on minimizing the total Δv of the mission, therefore the objective function is:
(26) 
where the per-leg terms are the velocity changes due to the DSMs, and the final term is the maneuver needed to inject the spacecraft into the final orbit.
IV Test Problems
We consider a benchmark made of four different test cases: two versions of the MGA transfer from the Earth to Saturn of the Cassini-Huygens mission, a multigravity assist transfer to the comet 67P/Churyumov-Gerasimenko (similar to the Rosetta mission), and a multigravity assist rendezvous transfer with midcourse manoeuvres to Mercury (similar to the Messenger mission). Algorithm 1, called IDEA, is applied to the solution of the four cases and compared to standard Differential Evolution [5] and Monotonic Basin Hopping [14, 25].
IV.A Cassini with no DSMs
The first test case is a multi-gravity assist trajectory from the Earth to Saturn following the sequence Earth–Venus–Venus–Earth–Jupiter–Saturn (EVVEJS). There are six planets, and the transfer is modeled as in Section IIIA; thus the solution vector is:
(27)  x = [t0, T1, T2, T3, T4, T5]^T
The final Δv is the manoeuvre needed to inject the spacecraft into an ideal operative orbit around Saturn with a pericenter radius of 108950 km and an eccentricity of 0.98. The weighting functions are defined as follows:
(28)  
with prescribed minimum normalized pericenter radii at the swingby planets. For this case the dimensionality of the problem is six, with the search space defined by an interval on the departure date (in MJD2000) and intervals on the five transfer times (in days). The best known solution, with objective value expressed in km/s, is attained at x = [–789.75443770458, 158.301628961437, 449.385882183958, 54.7050296906556, 1024.5997453164, 4552.72068790619].
IV.B Cassini with DSMs
The second test case is again a multi-gravity assist trajectory from the Earth to Saturn following the sequence Earth–Venus–Venus–Earth–Jupiter–Saturn (EVVEJS), but a deep space manoeuvre is allowed along the transfer arc from one planet to the other, according to the model presented in Section IIIB. Although this problem is similar to the first test case from a trajectory design point of view, the model is substantially different and therefore represents a different problem from a global optimization point of view. Since the transcription of the same problem into different mathematical models can affect the search for the global optimum, it is interesting to analyze the behavior of the same set of global optimization algorithms applied to two different transcriptions of the same trajectory design problem.
Here, Δvf is defined as the modulus of the vector difference between the velocity of Saturn at arrival and the velocity of the spacecraft at the same time. For this case the dimensionality of the problem is 22, with the search space defined by intervals on the departure date (in MJD2000), the modulus of the launch excess velocity (in km/s), its two direction parameters, the transfer times (in days), and the remaining dimensionless model parameters. The best known solution, with objective value expressed in km/s, is attained at x = [–780.917853635368, 3.27536879103551, 0.782513100225235, 0.378682006044345, 169.131920055057, 424.13242396494, 53.296452710059, 2199.98648654574, 0.795774035295027, 0.530055267286, 0.126002760289258, 0.0105947672634, 0.0381505843013, 1.35556902792788, 1.05001740672886, 1.30699201995999, 71.3749247783128, 3.15842153037544, 3.53046280721895, 3.12561791754698, 3.08422162979462].
IV.C Rosetta Mission
The third test case is a multi-gravity assist trajectory from the Earth to the comet 67P/Churyumov–Gerasimenko following the gravity assist sequence that was planned for the spacecraft Rosetta: Earth–Earth–Mars–Earth–Earth–Comet. The trajectory model is the one described in Section IIIB, but the objective function does not include the final manoeuvre Δvf.
For this case the dimensionality of the problem is 22, with the search space defined by intervals on the departure date (in MJD2000), the modulus of the launch excess velocity (in km/s), its two direction parameters, the transfer times (in days), and the remaining dimensionless model parameters.
The best known solution is f = 1.34229 km/s, with solution vector x = [1542.65536672006, 4.48068107888312, 0.935220667497966, 0.9909562486258, 365.24235847396, 707.540858648698, 257.417859715383, 730.483434305258, 1850, 0.310501108489873, 0.809061227121068, 0.0124756484551758, 0.0466967002704, 0.43701236871638, 1.8286351998512, 1.05, 2.80973511169638, 1.18798981835459, 2.61660601734377, –0.215250274241349, 3.57950314115394, 3.46467471264343].
IV.D Messenger Mission
The fourth problem is a multi-gravity assist trajectory from the Earth to the planet Mercury following the sequence of planetary encounters of the first part of the Messenger mission. As for the previous test case, the trajectory model is the one described in Section IIIB.
For this case the dimensionality of the problem is 18, with the search space defined by intervals on the departure date (in MJD2000), the modulus of the launch excess velocity (in km/s), its two direction parameters, the transfer times (in days), and the remaining dimensionless model parameters.
The best known solution is f = 8.631 km/s, with solution vector x = [1171.14619813253, 1.41951376601752, 0.628043728560056, 0.500000255697689, 399.999999999969, 178.921469111868, 299.279691870106, 180.689114497891, 0.236414009949924, 0.0674215615945254, 0.832992171208578, 0.312514378885353, 1.7435422021558, 3.03087330660319, 1.10000000000119, 0.219820823285448, 0.477475660779879, 0.225898117795826].
Note that the search space for each of the trajectory models is normalized so that it is a unit hypercube, with each component of the solution vector belonging to the interval [0, 1]. Furthermore, for all cases, the solution algorithms were run for a progressively increasing number of function evaluations, from 100,000 to 1.25 million.
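The normalization above can be sketched as follows: the solver works in the unit hypercube and each candidate point is mapped back to the physical search box before the trajectory model is evaluated. The bounds below are illustrative placeholders, NOT the actual intervals of the benchmark.

```python
def denormalize(x_unit, lower, upper):
    """Map a point of the unit hypercube [0, 1]^d to the physical box."""
    return [lo + xi * (hi - lo) for xi, lo, hi in zip(x_unit, lower, upper)]

# Illustrative bounds only: a departure date (MJD2000) and five leg times (days).
lower = [-1000.0, 30.0, 100.0, 30.0, 400.0, 1000.0]
upper = [0.0, 400.0, 470.0, 400.0, 2000.0, 6000.0]

# The center of the unit hypercube maps to the midpoint of every interval.
x_phys = denormalize([0.5] * 6, lower, upper)
```

Working in normalized coordinates lets the same solver settings be reused across problems of very different scales, which matters when the components mix dates, velocities, and angles.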
V Testing Procedure
The modified DE algorithm will be compared against standard DE and MBH following a rigorous testing procedure. A detailed description of the testing procedure can be found in [26]; it is summarized here in Algorithm 2 for a generic solution algorithm applied to a generic problem, where the best point observed during each run of the algorithm is recorded.
The index of performance is the number of successes of the algorithm. In the following, the best known values reported above are used in place of the global optimum, and a success is judged on the objective value only, rather than on the distance from the known solution vector.
The success rate will be used for the comparative assessment of algorithm performance instead of the commonly used best value, mean, and variance. Indeed, the distribution of the function values is not Gaussian; therefore, the average value can be far from the results returned most frequently by a given algorithm. Likewise, the variance is not a good indicator of the quality of the algorithm, because a high variance together with a high mean value can correspond to the case in which 50% of the results are close to the global optimum and the other 50% far from it. Finally, statistical tests that assume a Gaussian distribution of the sample, such as the t-test, cannot be applied to correctly predict the behavior of an algorithm. The success rate, instead, gives an immediate and unambiguous indication of the algorithm's effectiveness, and it can always be represented with a binomial probability density function (pdf), independent of the number of function evaluations, the problem, and the type of optimization algorithm.
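The success-rate metric can be sketched as follows: run the algorithm a number of times and count a success whenever the best value found is within a tolerance of the best known value, giving a binomial proportion. Here `run_algorithm` is a stand-in (a dummy returning random objective values), not IDEA, DE, or MBH; the tolerance and reference value are illustrative.

```python
import random

def success_rate(run_algorithm, f_best, tol_f, n_runs):
    """Fraction of runs whose best objective is within tol_f of f_best."""
    successes = sum(1 for _ in range(n_runs)
                    if run_algorithm() <= f_best + tol_f)
    return successes / n_runs          # sample proportion of a binomial variable

random.seed(0)
# Dummy stand-in: pretend each run returns a best value in [4.93, 5.93] km/s.
dummy_run = lambda: 4.93 + random.random()
p_hat = success_rate(dummy_run, f_best=4.93, tol_f=0.1, n_runs=200)
```

Because each run contributes a 0/1 outcome, `p_hat` is a binomial proportion, which is exactly why the success rate can always be modeled with a binomial pdf as argued above.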
A key point is properly setting the number of runs in order to obtain a reliable estimate of the success probability, or success rate, of an algorithm. Since success is a binomial variable (it takes the value 0 or 1), the number of runs can be set a priori to obtain the required confidence in the estimated success rate. A commonly adopted starting point for sizing the sample of a binomial distribution is to assume a normal approximation for the sample proportion of successes (i.e., for the estimate of the unknown true proportion of successes) and the requirement that