1 Introduction
Many-objective Optimization Problems (MaOPs) are among the biggest open problems in applied soft computing. The complexity of real-world problems gives rise to a class of population-based heuristic algorithms, often recognized as Many-Objective Evolutionary Algorithms (MaOEAs) Louafi et al. (2017); Li et al. (2017); Ferreira et al. (2017); Du et al. (2018). The difficulty of MaOPs increases dramatically with the dimensionality of the objective space, i.e. the number of objectives Purshouse and Fleming (2007); Wang et al. (2017). This deterioration in performance inspires new heuristics. Recent literature reveals a trend of combining different strategies to achieve synergistic performance, since sticking to a single direction alone leads to overfitting on problems of certain types and reduced robustness.
In this paper, we propose a reference-vector-based algorithm that also hybridizes ideas from Pareto dominance relations. The proposed algorithm, TEEA, focuses on the scenario where the objective space contains infeasible parts that change the distribution of the PF as it evolves, which is particularly meaningful for difficult problems where proximity of the population to the true PF should not be expected. TEEA is equipped with a selection engine and an adaptation engine, capable of adapting in time to the various characteristics of the Feasible Objective Space (FOS); it can be seen as a generalization of the adaptive reference-vector-based algorithms that aim to adapt the reference vectors to the true PF only.
The rest of this paper is organized as follows: Section 2 gives the preliminaries of research on MaOPs, including the basics of reference-vector-based MaOEAs, the focus and goal of this paper, and the related works, and then presents the ideas that yield the contribution of this work. Section 3 gives the details of the proposed algorithm TEEA. Section 4 presents the experimental studies, which include investigations of its characteristics and comparative results with other state-of-the-art algorithms on a standard benchmark suite.
2 Preliminaries
Reference-vector-based approaches are one of the most popular families of MaOEAs for their many attractive properties on problems with a modest number of objectives: applicability to complicated scenarios Chugh et al. (2018), human preference articulation Cheng et al. (2016), high efficiency, etc. In this section, we first give the basic knowledge about the problem that we want to address in this paper, then discuss the related works, as well as the motivation and inspiration behind our ideas.
2.1 Basics & Challenge of Infeasibility
The goal of MaOEAs is to find a solution set whose objective vectors constitute a "good" representative set of the true PF in the objective space. To be a good representative set, the finite individuals in the set (we overload the term "individual" to refer both to a solution in the solution space and to its corresponding objective vector in the objective space) should well present the spatial properties of the infinite true PF hypersurface. With this recognition, researchers summarize the goal of MaOEAs as obtaining proximity and diversity simultaneously, where proximity ensures that the individuals are close to the true PF and diversity ensures that the limited individuals represent the distribution of the true PF well.
Without loss of generality, assuming that the goal of each objective is minimization and the objectives are strictly positive, we here introduce some basic knowledge about reference-vector-based MaOEAs, which will be used in later sections.
Definition 2.1 (Feasibility)
Given a many-objective function f: Ω → R^M, where Ω ⊆ R^D is the solution space defined for the problem (the domain of f), D is the dimensionality of the search space and M is the dimensionality of the objective space, the feasible objective space is defined as Z = f(Ω) = {f(x) | x ∈ Ω}.
Also, an objective vector z ∈ R^M is said to be infeasible if z ∉ Z.
Definition 2.2 (Paretodominate)
Given an MaOP and two individuals x1, x2 ∈ Ω, x1 is said to Pareto-dominate (dominate) x2 if and only if f(x1) is less than or equal to f(x2) element-wise with at least one element strictly less, i.e. f_i(x1) ≤ f_i(x2) for all i ∈ {1, …, M} and f_j(x1) < f_j(x2) for at least one j.
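The element-wise condition above can be checked mechanically; the following is a minimal sketch (the helper name `dominates` is ours, not from the paper):

```python
import numpy as np

def dominates(a, b):
    """True iff objective vector a Pareto-dominates b under minimization:
    a <= b element-wise, with strict inequality in at least one element."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))
```

For example, (1, 2) dominates (2, 2), while no vector dominates itself.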
Definition 2.3 (Pareto Front)
Given the many-objective function f to be minimized and the set S of individuals ever found in the optimization process, the Pareto Front (PF, current PF) is the subset of S whose elements are not dominated by any other element of S, i.e. PF(S) = {x ∈ S | there is no y ∈ S that dominates x}.
People also overload the term "current PF" to mean the imaginary hypersurface on which the current PF lies.
Definition 2.4 (True Pareto Front)
Given the many-objective function f to be minimized, the true Pareto Front (true PF) is the subset of Ω whose elements are not dominated by any other element of Ω, i.e. PF* = {x ∈ Ω | there is no y ∈ Ω that dominates x}.
Definition 2.5 (Infeasibility of FOS, Full FOS & Partial FOS)
Given an MaOP and its FOS Z, the trivial infeasible part of the objective space is defined as the set of infeasible points that dominate at least one point of the image of the true PF, i.e. Z̃ = {z ∈ R^M \ Z | z dominates f(x) for some x ∈ PF*}.
Additionally, if the infeasible part of the objective space consists only of the trivial infeasible part, the problem is said to have a full feasible objective space (full FOS); otherwise, the problem is said to have non-trivial infeasible parts in the objective space, or simply a partial feasible objective space (partial FOS).
Problems with full FOS are also recognized as simplex-like problems Cheng et al. (2016); Tian et al. (2017). A full FOS occupies all of the objective space except for the trivial infeasible part, where points dominate the true PF. A partial FOS has additional infeasible parts, where points are dominated by the true PF.
Definition 2.6 (Reference Vectors & Reference Points)
Given the many-objective function f, a set of reference vectors V is a finite subset of the non-negative orthant of the objective space.
For any reference vector v ∈ V, the normalized vector v/‖v‖₁ is said to be the corresponding reference point.
Reference vectors and reference points have injective correspondence. We call the scaled reference vectors reference points because they lie on the subspace of the unit hyperplane with L1 norm 1.
Definition 2.7 (Activation of Reference Vector, Association & Adjacent Space)
Given a set of reference vectors V and an individual s, a reference vector v ∈ V is said to be activated by s if v minimizes, over V, the angle θ(f(s), v) between the objective vector f(s) and v. Also, s is said to be attached or associated to v if v is activated by s. The set of all possible points that can activate v given V is recognized as the adjacent space of v given V.
If a reference vector is activated, there must be individuals near the direction the reference vector points towards. An MaOP with full FOS is expected to activate all reference vectors if they are roughly evenly distributed over every direction in the non-negative orthant.
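The activation rule above amounts to a nearest-reference-vector assignment by angle; a hedged NumPy sketch (`associate` and its return convention are illustrative, not the paper's implementation):

```python
import numpy as np

def associate(F, V):
    """Attach each individual (row of F, its objective vector) to the reference
    vector (row of V) of smallest angle; a reference vector is 'activated' if at
    least one individual is attached to it."""
    Fn = F / np.linalg.norm(F, axis=1, keepdims=True)
    Vn = V / np.linalg.norm(V, axis=1, keepdims=True)
    cos = np.clip(Fn @ Vn.T, -1.0, 1.0)   # pairwise cosines; max cosine = min angle
    attach = np.argmax(cos, axis=1)       # index of the activated reference vector
    active = np.unique(attach)            # indices of all activated vectors
    return attach, active
```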
Definition 2.8 (Infeasibility of PF, Full PF & Partial PF)
Given an MaOP and its true PF, the MaOP is said to have a fully-covering PF (or full PF) if the scaled projection of the true PF onto the unit hyperplane covers the whole hyperplane subspace; otherwise, the problem is said to have a partially-covering PF (or partial PF).
Theorem 2.1
MaOPs with partial true PFs must have partial FOS.
Proof 2.1
A partial PF cannot dominate all points in an FOS that has only the trivial infeasible part; thus the FOS must have a non-trivial infeasible part.
An MaOP with full FOS must have a full PF, but the converse is not true.
Reference vectors are often used as guidelines for evolutionary selection. The individuals associated to a reference vector are selected using some criterion that penalizes deviation from the reference vector's direction (for diversity) and simultaneously rewards individuals with shorter projection lengths along the reference vector's direction (for proximity), provided that the reference vectors are correctly set. However, for problems with partial FOS, fixed reference vectors are no longer good guidelines, as indicated in Fig. 1.
Reference-vector-based algorithms with a set of fixed reference vectors are designed for full FOS problems. Previous works show that fixed reference vectors, if not set with prior knowledge, mislead the selection of individuals towards diversity for problems with partial true PFs Zhao et al. (2018); Ge et al. (2018). But what do we do if the distribution of the current PF changes due to encountered non-trivial infeasible parts of the objective space? Can we adapt the reference vectors in time so that they approximate the ideal guidelines at the intersection points of the guidelines with the current PF?
2.2 Related Works & Motivations
There are many existing works on adjusting the reference vectors with the goal of handling irregular PFs.
Some works focus on dealing with the curvature of the true PF: MOEA/D-AWA deletes overcrowded reference vectors and inserts new reference vectors into the sparser spaces Qi et al. (2014); RVEA Cheng et al. (2016) tilts the unit hyperplane to match the distribution of the individuals, addressing the problems caused by the different magnitudes of objectives; Wu et al. (2018) propose using a Gaussian process to fit the hypersurface of the current PF to set the distribution of reference vectors.
There are also works focusing on the infeasibility of the true PF, i.e. partial true PFs Ge et al. (2018): approaches like A-NSGA-III Jain and Deb (2014) and RVEA* Cheng et al. (2016) employ the strategy of inserting extra reference lines into crowded areas, at the risk of disturbing the uniformity of the distribution of reference vectors. To deal with this disturbance, Zhao and Ge et al. proposed a series of reference-vector algorithms with interactive components capable of learning the complex effective areas (the scaled projection of the true PF onto the unit hyperplane with L1 norm 1), provided the problems are easy enough to converge early.
These works all share one underlying assumption: the MaOP is easy enough for the selection engine to bring the population into the vicinity of the true PF. But for problems with infeasible parts in the objective space and high difficulty, e.g. problems with a high-dimensional solution space and partial FOS, this assumption does not hold. The inadaptability of the existing algorithms on these problems motivates us to address them. In 2013, Wang et al. contributed a prototype algorithm, PICEA-w Wang et al. (2013), which co-evolves the set of reference vectors alongside the population for a similar purpose, though it was not tested on such problems and still suffers from the disturbance of uniformity, among other issues. Can we extend the adaptation of reference vectors to an in-time manner so that they can appropriately serve as the right guidelines of selection for the evolving population? The answer is yes: we found a viable solution in the interacting-component framework.
2.3 Inspiration
Here we propose a general idea for in-time adaptability to the population in the FOS: if we can track the current PF of the evolution reasonably precisely, we can use the tracked PF to adapt the reference vectors, thereby approximating the ideal selection guidelines, indicated in Fig. 1, with the reference vectors!
The problems left in implementing this idea are mainly two:

1. How to track the current PF in an effective and efficient way?

2. How to adjust the reference vectors using the tracked PF?
Our solution to the first problem, i.e. tracking the current PF, is not a parametric model, since such a model may accidentally interpolate or extrapolate into the infeasible spaces. Thus we use something simpler: since individuals found during the evolution process never violate the feasibility of the FOS, we just buffer them in an archive! This idea has been exploited for similar purposes in Praditwong and Yao (2006); Wang et al. (2015).

The second problem is far trickier. Our solution, the main novelty of this paper, is a delicate mechanism consisting of two subroutines that propagate the distribution of the current PF onto layers of reference vectors corresponding to different densities. This mechanism can effectively adapt to shrinking or expanding cross-sections of the current PF and the FOS with only marginal disturbance of uniformity.
In this paper, we provide an algorithm, TEEA, that contains two assisting archives and two interacting engines, aiming to provide versatility against feasibility changes of the FOS, as demonstrated in Fig. 2.
3 TEEA
3.1 Individual Archive
The current PF is constituted by the objective vectors of the individuals that have never been dominated by any individual found in the optimization process. The information of the current PF is necessary to identify and adapt to the changes in the FOS. However, since typical selection methods only keep a fixed-size population, parts of the information about the current PF will potentially be lost. The non-dominated individuals should therefore be archived and maintained as another population (apart from the "population" we use to generate offspring) to keep this information, but not in an ever-increasing manner: we should archive a reasonable number of non-dominated individuals that well reflect the spatial distribution of the current PF. This calls for a proper elimination method for the most undesirable archived individuals.

Thus we employ cascade clustering Zhao et al. (2018); Ge et al. (2018), a population selection strategy that keeps the distribution of the population as complete as possible, for the update of this archive, which we name the Individual Archive (IA). Whenever new individuals are found, we use cascade clustering to find and archive the cluster centers (the best individuals for the adjacent spaces of the reference vectors w.r.t. the criterion of cascade clustering) among the population constituted of the newly evolved individuals and the archived individuals. Unlike employing cascade clustering for population selection, since IA maintenance only seeks to track the distribution of the PF, we only have to buffer the cluster centers that reflect the distribution of the PF and discard the rest. This means that the number of individuals IA keeps is tied to the number of participating reference vectors.
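As a rough sketch of this maintenance (a heavily simplified stand-in for cascade clustering: a non-dominated filter followed by keeping, per activated reference vector, the member with the shortest projection length; all names are ours):

```python
import numpy as np

def update_archive(F, V):
    """Simplified IA update: drop dominated objective vectors, then keep one
    representative ('cluster center') per activated reference vector."""
    n = len(F)
    nd = [i for i in range(n)
          if not any(np.all(F[j] <= F[i]) and np.any(F[j] < F[i]) for j in range(n))]
    F = F[nd]                                         # non-dominated subset
    Fn = F / np.linalg.norm(F, axis=1, keepdims=True)
    Vn = V / np.linalg.norm(V, axis=1, keepdims=True)
    attach = np.argmax(np.clip(Fn @ Vn.T, -1.0, 1.0), axis=1)
    keep = []
    for v in np.unique(attach):                       # one center per active vector
        members = np.where(attach == v)[0]
        proj = F[members] @ Vn[v]                     # projection length along v
        keep.append(members[np.argmin(proj)])
    return F[keep]
```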
Assuming the number of reference vectors is a moderate function of the population size, the maintenance of IA runs at a complexity level determined by that of cascade clustering.
3.2 Reference Archive
The distribution of the reference vectors that participate in the selection should change in time to match the shape of the current PF. We propose another archive for the reference vectors, which we name the Reference Archive (RA). Since the shape of the current PF changes, the local densities of the reference vectors should also change. Thus we have designed RA to be hierarchical: each layer inside RA only contains reference vectors generated with the same density; layers with different densities can be combined to achieve higher density. The details of updating RA will be introduced along with the adaptation engine of the reference vectors.
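A layer of a given density can be generated with the standard simplex-lattice (Das and Dennis) construction, which places all points with coordinates that are multiples of 1/H on the unit hyperplane with L1 norm 1; a minimal sketch (the density parameter H and the function name are ours):

```python
from itertools import combinations

import numpy as np

def simplex_lattice(M, H):
    """All points on the M-dimensional unit simplex whose coordinates are
    multiples of 1/H; there are C(H + M - 1, M - 1) such points."""
    pts = []
    for c in combinations(range(H + M - 1), M - 1):   # stars-and-bars dividers
        s = [0] + [c[i] - i for i in range(M - 1)] + [H]
        pts.append([(s[i + 1] - s[i]) / H for i in range(M)])
    return np.array(pts)
```

Increasing H yields a denser layer; combining layers of different H approximates intermediate densities.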
3.3 Selection Engine
This paper provides no novelty in either the generation of offspring or the selection of the population. We employ the selection method of cascade clustering, proposed and ameliorated in Zhao et al. (2018); Ge et al. (2018). Briefly speaking, cascade clustering selects a population that is evenly spread w.r.t. the given reference vectors in a cascade style to achieve proximity and diversity. It has shown state-of-the-art effectiveness and efficiency when compared to other reference-vector-based selection methods, and it has flexible interfaces for the adaptation of reference vectors, which suits the needs of this paper. We used it to maintain the IA; now we also use it to select the population. Using cascade clustering both to maintain the IA and to select the population, we ensure that every reference vector that can be activated always keeps at least one associated individual, without fear of losing it in the population selection. Surprisingly, we can also prove that using cascade clustering for the two tasks sequentially is equivalent to doing both at the same time. The combined selection and update of the IA is demonstrated in Fig. 3.
The overall runtime complexity of the process is that of cascade clustering, under the same assumption on the number of reference vectors as above. We provide the pseudo-code of the selection based on cascade clustering in Alg. 2, without diving into its details since we made no amelioration; for more details, refer to Ge et al. (2018).
Cascade clustering additionally returns the indices of the active reference vectors, i.e. the reference vectors attached with non-dominated individuals, telling the adaptation engine where the reference vectors should be, i.e. the distribution of the current PF.
3.4 Adaptation Engine
Reference vectors serve as guidelines for the selection of individuals towards proximity and diversity in the objective space. For problems with full FOS, simply generated reference vectors can serve as the ideal guidelines. However, for objective spaces with infeasible parts, the ideal guidelines become much more complicated and unadjusted reference vectors are far from them, as demonstrated in Fig. 1 (a). For these objective spaces, sticking to fixed reference vectors hurts diversity and potentially proximity as well. The reference vectors need to be adjusted in time so that they are always good approximations of vectors towards the intersection points of the guidelines and the current PF, as illustrated in Fig. 4.
One possible solution to this approximation of the guidelines for an arbitrarily but continuously shrinking and expanding FOS is to evenly distribute a sufficient number of reference vectors that intersect the current PF. (This statement is metaphoric: the current PF is treated as a hypersurface but is actually just a set of points in the objective space; the intersection points of reference vectors with the current PF mean the orthogonal projections of the objective vectors of the individuals onto the associated reference vectors.) We can easily come up with a brute-force method: buffer sets of evenly distributed reference vectors with different generation densities. After each update of the IA, starting from the set of reference vectors corresponding to the lowest density (any generation density yielding too few reference vectors should be excluded from the possible selections), count how many reference vectors intersect the current PF, and continue until the number of active reference vectors is around the desired value.
The problem with this brute-force method is that it is too costly. We instead contribute a smooth and gradual approximation to it with significantly less computational cost per adaptation. The idea is simple. At the moment of adjustment, if the number of intersection points of reference vectors with the current PF is below the desired value, perhaps due to shrinkage or insufficient covering of the current PF, we initialize the "shrink" subroutine: generate a set of reference points with a higher density, save this set in the top layer of RA, but enable only those within the adjacent spaces of the currently active reference points (disabled points do not participate in the selection); then gather all the enabled reference vectors stored in every layer of RA as the participating set of reference vectors. This set addition of evenly distributed reference points yields an approximately evenly distributed set of reference vectors that will potentially intersect the current PF. If instead the number of intersection points is greater than the desired value, perhaps due to expansion of the distribution of the current PF, we initialize the "expand" subroutine: enable the reference points in the lower layers that are within the adjacent spaces of the active reference points in the densest layer, and then delete the densest layer (in implementation, simply disable it). Pre-calculating the layers of reference vectors and the angles between them reduces the complexity of this process to significantly below that of the brute-force method, which is exactly what we aim for.
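The "enable only the points within the adjacent spaces of the active reference points" step of both subroutines can be approximated with an angle threshold; a hedged sketch (the fixed threshold theta is our simplification of the adjacent-space membership test):

```python
import numpy as np

def enable_near_active(candidates, active_pts, theta):
    """Enable the candidate reference points whose angle to the closest active
    reference point is at most theta; the rest stay disabled."""
    C = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    A = active_pts / np.linalg.norm(active_pts, axis=1, keepdims=True)
    cos = np.clip(C @ A.T, -1.0, 1.0)
    return candidates[np.max(cos, axis=1) >= np.cos(theta)]
```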
In the RA, the reference vectors are increased and reduced uniformly, so the disturbance of the uniformity of the reference vectors is relatively low. A visualization of simulated initializations of adaptation is presented in Fig. 5, with the pseudo-code given in Alg. 3.
3.5 Combining All Together
We now give the proposed algorithm TEEA, which combines the proposed components. At initialization, individual solutions are randomly generated and the RA is initialized with reference points uniformly generated on the unit hyperplane as the base layer. Then the main cycle loops:

1. Evolve the old population to get offspring by employing a certain heuristic;

2. Pick a population of the prescribed size and the new individual archive out of the union of the last population, the newly evolved offspring and the individuals in the individual archive, using cascade clustering. Additionally, record the activity of the reference vectors;

3. Check the stability of the activity of the reference vectors and report whether it is appropriate to adjust the distribution of the reference vectors in the RA according to the IA. If not, continue;

4. Initialize the adaptation, which may trigger one of the two subroutines. The set of participating reference vectors is then provided to the selection engine for selection.
When the termination criteria are satisfied, we pick the final population from the individuals buffered in the IA using cascade clustering. Note that in order to make TEEA more stable, i.e. to reduce the number of unnecessary adaptations, we introduce a mechanism for checking the stability of the activity: if the activity of each reference vector does not change within the window of recent generations, the mechanism reports stable, else unstable; the adaptation is only triggered when stable.
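Schematically, the main cycle has the following shape (a toy single-variable bi-objective problem and a naive dominance-count truncation stand in for the real benchmark and for cascade clustering; nothing below is the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def f(X):
    """Toy bi-objective problem (ours, for illustration only)."""
    return np.stack([X[:, 0] ** 2, (X[:, 0] - 1) ** 2], axis=1)

def select(P, N):
    """Stand-in for cascade clustering: sort by how many individuals
    dominate each one, then truncate to N."""
    F = f(P)
    counts = [sum(bool(np.all(F[j] <= F[i]) and np.any(F[j] < F[i]))
                  for j in range(len(F))) for i in range(len(F))]
    order = np.argsort(counts, kind="stable")
    return P[order[:N]]

N = 20
P = rng.random((N, 1))                     # random initialization
for gen in range(30):                      # main cycle
    offspring = np.clip(P + rng.normal(0.0, 0.1, P.shape), 0.0, 1.0)  # evolve
    P = select(np.vstack([P, offspring]), N)                          # select
```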
4 Experimental Studies
This section gives analyses of the effectiveness of TEEA on benchmark problems, including the validation of the proposed components, comparison with state-of-the-art algorithms, etc.
4.1 Settings
The settings of the experiments are identical to those of the CEC'2018 MaOP competition Cheng et al. (2017), which uses the MaF benchmark suite of scalable many-objective benchmark functions. Among them, eight cases have infeasible parts in the objective space (MaF1, MaF2, MaF4, MaF6–MaF9, MaF15) and seven cases have full FOS (MaF3, MaF5, MaF10–MaF14). Specifically, MaF8 and MaF9 have narrow FOS, MaF6 has an extremely narrow (degenerate) FOS, MaF3 and MaF13 are hard to converge on, MaF7 has a disconnected FOS and PF, and MaF14 and MaF15 have large-scale solution spaces. Each benchmark function is scaled to three separate test cases, in which M = 5, 10 or 15. The number of decision variables is determined by M for each function, and the maximum number of evaluations for each algorithm is set accordingly. All algorithms are restricted to the same population size, as well as the classical simulated binary crossover and polynomial mutation operators.
Results given in this section are averaged over independent runs on identical platforms with MATLAB R2019a. The source code of TEEA (the MATLAB source code will be published in Mingde Zhao's GitHub: https://github.com/PwnerHarry/) is implemented with PlatEMO 2.0 Tian et al. (2017).
We evaluate the quality of the obtained populations using the Inverted Generational Distance (IGD), a problem-dependent evaluation criterion of both proximity and diversity Czyżak and Jaszkiewicz (1998). IGD calculates the average minimum distance from the sample points on the true PF to the points of the population: the smaller the IGD, the better the proximity and diversity.
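The IGD computation described above is straightforward; a minimal sketch (assuming Euclidean distance and that `ref_set` holds the samples of the true PF):

```python
import numpy as np

def igd(ref_set, pop):
    """Inverted Generational Distance: average, over the true-PF samples, of
    the distance to the nearest population member (smaller is better)."""
    R = np.asarray(ref_set, dtype=float)
    P = np.asarray(pop, dtype=float)
    d = np.linalg.norm(R[:, None, :] - P[None, :, :], axis=2)  # pairwise distances
    return float(d.min(axis=1).mean())
```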
4.2 Hyperparameters
Here we want to determine a fixed set of hyperparameters for all the following experiments, instead of overfitting them to each problem, for fairness of comparison. We also want to check whether the proposed algorithm TEEA is sensitive to these hyperparameters. There are two hyperparameters in TEEA: the window size to suggest adaptation moments and the tolerance ratio for the sensitivity of adaptation w.r.t. the reference vector activities. Note that when the number of objectives is large, the tolerance ratio becomes sensitive due to the difficulty of generating uniform reference vectors. When the number of evaluations given is low, the window size becomes sensitive in a problem-specific way, for it controls the trade-off between adaptation accuracy (the accuracy of the moments at which adaptation is initialized) and the number of adaptations before finishing. We select the test case DTLZ7 Deb et al. (2005) specifically since it has a fractal FOS and an intermediate number of evaluations, which is likely an expected scenario in the experiments to come.
For the sake of fair comparison, the set of problems we use for hyperparameter analysis does not intersect with that of the later experiments. The results are given in Tab. 1. From the results, we can find an approximate interval of the hyperparameters that achieves similarly good performance. We choose the setting around which the change of performance is relatively modest and which also performs well.
\  0.05  0.1  0.15  0.2  0.25 

10  6.63e1  4.85e1  3.44e1  3.00e1  3.25e1 
15  6.52e1  4.65e1  3.02e1  2.93e1  3.19e1 
20  4.75e1  4.23e1  2.80e1  2.85e1  3.13e1 
25  3.18e1  2.99e1  2.78e1  2.78e1  3.02e1 
50  5.66e1  4.85e1  4.21e1  4.33e1  4.65e1 
Color indicators are added to assist the reading of the results: the greener, the better the performance; the redder, the worse.  
Averaged over independent runs on DTLZ7 
4.3 Comparative Tests on MaF Benchmark Suite
We analyze the characteristics of TEEA by comparing it with state-of-the-art algorithms including CVEA3 Yuan et al. (2018), BCE-IBEA Li et al. (2016), fastCAR Zhao et al. (2018), CLIA Ge et al. (2018), AR-MOEA Tian et al. (2017), A-NSGA-III Jain and Deb (2014) and RVEA* Cheng et al. (2016). Among them, fastCAR, CLIA, A-NSGA-III and RVEA* are reference-vector based with adaptation methods. The characteristics of the compared algorithms are presented in Tab. 2.
Algorithms  Comments 

TEEA  The algorithm proposed in this paper. 
CVEA3  Cost value based MaOEA 3, the best-performing (1st place) participant of the CEC'2018 MaOP competition Yuan et al. (2018). The evolution operator has been set to default for fair comparison. 
BCE-IBEA  Bi-criterion variant of IBEA Zitzler and Künzli (2004), one of the best-performing (3rd place) participants of the CEC'2018 MaOP competition Li et al. (2016). 
fastCAR  Reference-vector-based MaOEA with periodic adaptation based on margin learning, one of the best-performing (4th place) participants of the CEC'2018 MaOP competition Zhao et al. (2018). 
CLIA  Improvement upon fastCAR: reference-vector-based MaOEA with incremental learning of the PF via component interactions Ge et al. (2018). 
AR-MOEA  An indicator-based MaOEA based on adaptive reference points Tian et al. (2017). 
A-NSGA-III  NSGA-III with reference vector adaptation Jain and Deb (2014). 
RVEA*  RVEA with reference vector adaptation Cheng et al. (2016). 
The averaged IGD results of the algorithms are presented in Tab. 3, with additional statistical information including Friedman tests and paired tests. The Friedman tests suggest with high confidence that the rankings of the algorithms are meaningful, and therefore that TEEA achieves the best overall performance. The paired tests suggest that TEEA beats several of the compared state-of-the-art algorithms and achieves performance similar to the others. The Friedman tests, the paired tests, as well as direct observations of the result table, all suggest that TEEA achieves undeniably competitive performance on this set of complex benchmark problems.
Problem  M  TEEA  CVEA3  BCE-IBEA  fastCAR  CLIA  AR-MOEA  A-NSGA-III  RVEA*  

Mean  Std  Mean  Std  Mean  Std  Mean  Std  Mean  Std  Mean  Std  Mean  Std  Mean  Std  
F1  5  1.08e1  7.25e4  1.13e1  1.26e3  1.08e1  2.70e4  1.12e1  1.20e2  1.10e1  7.55e4  1.41e1  1.89e3  1.58e1  1.45e2  1.46e1  8.94e3 
10  2.33e1  7.45e3  2.52e1  4.42e3  2.37e1  8.84e3  2.96e1  4.67e7  2.39e1  3.09e3  2.52e1  1.29e3  2.73e1  1.11e2  3.47e1  2.86e2  
15  2.67e1  8.85e3  3.28e1  4.51e3  2.95e1  3.09e3  3.24e1  8.07e12  2.63e1  2.81e3  2.88e1  1.25e2  3.15e1  6.29e3  4.32e1  3.34e2  
F2  5  9.65e2  2.85e3  9.35e2  1.93e3  8.72e2  1.16e3  1.08e1  1.86e1  9.60e2  1.67e3  1.11e1  1.03e3  1.06e1  1.80e3  1.09e1  1.95e3 
10  1.53e1  2.99e3  1.76e1  4.34e3  1.73e1  5.04e3  2.65e1  2.08e1  1.60e1  1.62e3  2.06e1  8.73e3  2.32e1  2.45e2  2.71e1  1.19e2  
15  1.64e1  4.35e3  1.98e1  2.26e2  2.86e1  8.86e3  2.57e1  1.18e1  1.66e1  2.52e3  2.26e1  1.02e2  2.20e1  1.28e2  2.84e1  2.12e2  
F3  5  6.82e2  1.90e3  5.93e2  7.58e4  1.64e1  4.48e2  6.43e2  9.99e1  6.36e2  1.12e3  9.71e2  1.74e3  8.30e2  2.45e2  7.68e2  2.65e3 
10  8.36e2  2.20e3  1.69e1  1.19e2  6.50e1  3.02e1  8.31e2  1.00e0  7.45e2  2.60e3  4.08e0  1.18e1  5.61e6  2.01e7  7.73e0  8.66e0  
15  9.11e2  1.27e3  1.88e1  3.38e2  5.45e1  1.27e1  8.97e2  1.00e0  8.56e2  6.07e4  5.48e1  1.51e2  2.35e2  6.16e2  2.66e1  6.95e1  
F4  5  1.77e0  3.25e2  1.70e0  2.24e2  3.07e0  1.03e0  2.01e0  9.79e2  1.90e0  6.47e2  2.45e0  9.25e2  2.35e0  1.49e1  2.09e0  9.62e2 
10  7.50e1  2.25e0  5.10e1  2.12e0  9.34e1  2.53e0  7.81e1  3.13e6  1.00e2  2.36e0  9.68e1  6.42e0  9.74e1  6.05e0  7.97e1  7.62e0  
15  2.80e3  1.23e2  1.49e3  1.58e1  2.74e3  1.02e3  3.24e3  1.10e11  3.62e3  1.62e1  4.01e3  5.26e2  3.74e3  3.39e2  2.77e3  2.46e2  
F5  5  1.96e0  9.55e3  2.28e0  1.10e0  1.75e0  3.35e2  1.97e0  8.12e1  1.93e0  8.36e3  2.39e0  8.01e1  1.97e0  4.90e3  2.26e0  8.41e1 
10  8.82e1  1.19e0  6.04e1  1.99e1  4.88e1  1.70e0  8.82e1  9.68e1  7.80e1  1.03e0  9.87e1  4.11e0  8.08e1  4.10e0  7.03e1  1.39e1  
15  2.35e3  3.07e2  1.38e3  1.15e3  1.37e3  5.75e1  2.43e3  9.90e1  2.29e3  2.05e2  3.31e3  4.44e2  2.32e3  6.54e2  2.54e3  7.97e2  
F6  5  1.82e3  1.43e4  2.20e3  6.87e5  2.10e3  7.29e6  6.51e3  1.29e1  2.40e3  3.32e4  4.22e3  5.63e5  1.82e2  1.14e2  2.37e2  4.98e3 
10  2.75e3  6.25e4  1.99e3  4.35e5  4.57e1  3.57e1  2.49e2  9.91e2  3.40e2  1.28e4  1.08e1  1.40e1  3.02e0  4.22e0  4.39e2  4.22e2  
15  1.03e2  9.25e3  2.43e2  8.09e2  7.46e1  7.25e3  6.56e2  9.24e2  4.24e2  1.50e2  2.38e1  1.25e1  1.55e1  1.37e1  1.68e1  1.91e1  
F7  5  2.78e1  3.25e3  2.89e1  1.51e1  2.27e1  4.63e3  2.91e1  2.57e1  2.70e1  6.11e3  3.29e1  7.55e3  2.81e1  1.68e2  2.11e1  3.92e3 
10  8.35e1  1.03e1  8.49e1  3.44e3  8.28e1  5.58e2  1.60e0  1.61e1  8.53e1  3.69e2  1.57e0  9.55e2  1.18e0  9.98e2  9.09e1  1.36e1  
15  1.80e0  3.40e1  1.64e0  4.62e2  2.16e0  1.77e1  1.48e1  1.91e2  2.16e0  1.67e1  4.06e0  6.97e1  3.17e0  4.49e1  1.77e0  4.43e1  
F8  5  9.17e2  4.32e3  8.11e2  7.52e3  7.35e2  6.34e4  9.86e2  1.22e1  7.88e2  2.65e3  1.29e1  4.45e3  1.50e1  1.74e2  2.67e1  5.42e2 
10  1.35e1  4.99e3  1.25e1  8.02e3  1.09e1  5.52e4  2.98e1  6.80e3  1.46e1  4.02e3  1.37e1  4.17e3  3.28e1  6.06e2  9.74e1  1.68e1  
15  1.93e1  1.87e2  1.60e1  1.06e2  1.32e1  7.36e4  6.05e1  5.66e5  1.90e1  9.46e3  1.73e1  6.24e3  3.61e1  6.02e2  1.45e0  3.20e1  
F9  5  9.17e2  6.35e3  9.62e2  1.53e2  2.76e1  9.57e2  9.34e2  3.16e1  7.87e2  5.90e3  1.27e1  8.24e3  2.43e1  1.13e1  1.87e1  2.98e2 
10  1.85e1  1.27e2  1.98e1  7.86e2  2.49e0  3.42e2  2.32e1  1.35e2  3.38e1  6.58e2  1.77e1  8.20e3  5.98e1  2.14e1  9.31e1  2.16e1  
15  2.25e1  6.25e2  1.76e1  1.21e1  3.34e0  5.50e0  6.40e1  2.67e4  3.48e1  1.45e1  1.51e1  5.40e3  2.65e0  4.63e0  1.14e0  2.02e1  
F10  5  3.87e1  1.45e2  4.43e1  1.82e2  3.67e1  1.87e3  5.02e1  9.44e1  3.73e1  8.63e3  4.71e1  1.09e2  4.59e1  3.35e2  6.30e1  7.66e2 
10  1.02e0  2.98e2  1.40e0  5.19e2  9.66e1  1.94e2  1.02e0  9.98e1  1.05e0  3.73e2  1.20e0  5.94e2  1.07e0  4.19e2  1.35e0  9.24e2  
15  1.38e0  3.91e2  2.12e0  1.05e1  1.50e0  4.59e2  1.63e0  9.98e1  1.52e0  6.54e2  1.96e0  8.53e2  1.76e0  5.59e1  2.00e0  6.39e2  
F11  5  3.89e1  1.93e3  4.46e0  9.22e3  4.78e1  9.35e3  6.58e1  9.95e1  6.35e1  1.26e2  8.25e1  1.72e2  1.04e0  5.00e1  2.42e0  8.17e1 
10  1.15e0  2.26e2  1.47e0  4.80e2  1.18e0  2.21e2  4.76e0  9.96e1  3.00e0  1.28e0  2.56e0  4.39e1  5.70e0  6.26e1  1.06e1  2.20e0  
15  1.43e0  3.71e2  2.17e0  4.47e2  1.65e0  4.79e2  6.21e0  9.95e1  8.52e0  2.79e0  4.68e1  6.22e1  1.29e1  1.60e0  1.50e1  2.36e0  
F12  5  9.36e1  3.43e3  9.63e1  9.78e3  9.37e1  8.21e3  9.35e1  7.76e1  9.33e1  2.51e3  1.12e0  7.95e3  9.40e1  8.33e3  9.68e1  9.86e3 
10  4.60e0  1.81e2  4.17e0  2.26e2  4.15e0  8.14e3  4.60e0  8.98e1  4.35e0  2.06e2  4.69e0  1.44e2  4.55e0  2.34e1  4.49e0  4.47e2  
15  7.73e0  8.12e2  7.19e0  6.65e2  7.24e0  1.15e1  7.80e0  8.26e1  7.56e0  1.58e1  7.62e0  1.66e1  8.48e0  3.55e1  8.40e0  1.47e1  
F13  5  8.83e2  1.25e2  7.79e2  1.27e2  1.25e1  4.12e2  1.30e1  2.61e1  1.20e1  1.19e2  1.30e1  6.74e3  1.63e1  1.72e2  4.51e1  9.00e2 
10  1.12e1  5.25e2  9.98e2  6.79e3  1.22e1  7.88e3  3.23e1  1.32e1  2.17e1  1.15e2  1.22e1  7.21e3  2.30e1  2.00e2  4.52e1  9.87e2  
15  1.43e1  6.35e2  1.25e1  1.18e2  1.34e1  7.74e3  4.43e1  5.24e2  2.49e1  3.90e2  1.50e1  9.25e3  2.51e1  3.13e2  6.38e1  7.86e2  
F14  5  3.42e1  2.47e2  2.26e1  4.65e2  5.30e1  7.32e2  3.52e1  6.01e1  3.50e1  2.28e2  3.85e1  4.24e2  7.14e1  2.36e1  5.26e1  8.67e2 
10  5.49e1  7.73e2  8.49e1  1.69e1  2.45e1  3.53e1  6.95e1  5.24e1  5.54e1  5.09e2  6.76e1  8.35e2  2.34e0  1.31e0  7.22e1  5.71e2  
15  6.22e1  1.43e1  1.12e0  1.45e1  1.80e1  1.60e1  7.28e1  4.39e1  6.19e1  1.33e1  6.04e1  6.46e2  1.32e0  2.61e1  8.92e1  1.30e1  
F15  5  3.06e1  4.33e2  2.33e1  5.41e2  9.52e1  4.41e2  4.21e1  4.28e2  3.68e1  4.02e2  6.13e1  2.57e2  9.94e1  9.40e2  6.17e1  4.06e2 
10  8.99e1  9.81e2  1.03e0  2.05e1  9.19e0  1.35e1  9.97e1  4.43e7  9.84e1  4.72e2  8.52e1  5.04e2  1.53e0  2.67e1  9.81e1  6.80e2  
15  1.02e0  1.35e1  1.93e0  4.95e1  1.42e0  9.17e2  1.14e0  5.14e11  1.04e0  4.51e2  1.35e0  7.91e2  3.89e0  2.03e0  1.27e0  5.40e2  
Friedman  5  2.57  3.53  3.83  4.60  2.53  6.30  6.30  6.33  
10  2.50  3.37  3.77  5.17  3.93  4.87  6.40  6.00  
15  2.80  3.40  4.30  5.20  3.30  4.60  6.40  6.00  
partial  2.31  2.87  4.28  5.39  3.57  5.02  6.52  6.04  
full  2.97  4.39  3.72  4.19  2.97  5.61  5.97  6.17  
overall  2.62  3.43  3.97  4.99  3.26  5.26  6.37  6.11  
t-test  5  5/3/7  8/2/5  3/12/0  7/2/6  15/0/0  13/2/0  13/1/1  
10  7/3/5  8/3/4  9/5/1  11/1/3  10/5/0  11/3/1  11/2/2  
15  8/1/6  10/2/3  10/4/1  8/5/2  8/3/4  14/1/0  11/3/1  
partial  8/11/8  16/6/5  20/7/0  18/5/4  20/4/3  24/3/0  23/2/2  
full  11/1/6  11/0/7  4/14/0  0/18/0  13/3/2  12/5/1  14/2/2  
overall  19/9/17  26/7/12  22/23/0  23/10/12  34/6/5  40/3/2  38/3/4  
The t-test results are reported as “w/t/l”, where w is the number of test cases on which the mean value of the proposed algorithm TEEA is significantly smaller than that of the compared algorithm, l is the number of cases on which it is significantly greater, and t is the number of cases for which it is uncertain whether the mean value of TEEA is greater or smaller than that of the compared algorithm. 
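As an illustration, the per-test-case classification behind such a “w/t/l” summary can be sketched as follows. The function name, the use of Welch's t-test, and the 0.05 significance level are assumptions for illustration; the paper's exact test settings are not stated in this section.

```python
import numpy as np
from scipy import stats

def wtl_tally(our_igd_runs, their_igd_runs, alpha=0.05):
    """Classify one test case: 'w' if our mean IGD is significantly
    smaller, 'l' if it is significantly larger, and 't' if the
    two-sided Welch t-test cannot establish a significant difference."""
    _, p = stats.ttest_ind(our_igd_runs, their_igd_runs, equal_var=False)
    if p >= alpha:
        return "t"
    return "w" if np.mean(our_igd_runs) < np.mean(their_igd_runs) else "l"

# Counting the 'w', 't' and 'l' labels over all test cases of a
# partition yields an entry such as "8/11/8" in the table above.
```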
To observe the performance of TEEA more intuitively, we provide comparison graphs between the PF obtained with median IGD among the independent runs and the ground-truth PF on the test cases with partial FOS. These figures show that, apart from the noise caused by the evolutionary process, TEEA obtains intuitively satisfying performance.
To examine the performance on each type of test case, we offer two dichotomies for the test cases: the first is based on the number of objectives, and the second on the feasibility of the FOS. We conduct Friedman tests and t-tests on the partitioned sets of test cases and present the statistics in Tab. 3. To compare the performance intuitively, we also convert the results of the Friedman test into a radar diagram presented in Fig. 7. From the categorical observations we find that:

TEEA has competitive performance within all partitions of the test cases, ranking at least second among the state-of-the-art algorithms;

TEEA has leading performance on the test cases with partial FOS, which demonstrates the effectiveness of its adaptation to partially feasible objective spaces;

TEEA and CLIA achieve the highest performance on the test cases with full FOS, which shows the effectiveness of the cascade-clustering selection operator;

TEEA has arguably similar performance (from the t-test) when compared with CVEA3, which further confirms the competitiveness of its performance.
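The Friedman mean ranks reported above can be reproduced from a matrix of mean IGD values. The sketch below assumes no ties between algorithms on a problem; with ties, average ranks (as in scipy.stats.rankdata) would be needed.

```python
import numpy as np

def friedman_mean_ranks(mean_igd):
    """mean_igd: (n_problems, n_algorithms) array of mean IGD values.
    Within each problem, rank the algorithms (1 = smallest IGD),
    then average the ranks per algorithm, as in the Friedman rows."""
    # argsort twice turns values into 0-based within-row ranks;
    # adding 1 makes them 1-based
    ranks = mean_igd.argsort(axis=1).argsort(axis=1) + 1
    return ranks.mean(axis=0)
```

A lower mean rank indicates better overall performance across the corresponding partition of test cases.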
4.4 Comparative Tests on Difficult Problems with Infeasible Parts in the Objective Space
To validate the effectiveness of the main contribution of this work, we extend the dimensionality of the solution space of the MaF15 problem to D ∈ {100, 200, 300, 400, 500} for M ∈ {5, 10}, constituting extremely difficult test cases. These test cases simulate scenarios in which the population hardly reaches the vicinity of the true PF and the objective space contains infeasible parts. We compare TEEA with the reference vector based opponents with adaptation, including fastCAR Zhao et al. (2018), CLIA Ge et al. (2018), ARMOEA Tian et al. (2017), ANSGAIII Jain and Deb (2014) and RVEA* Cheng et al. (2016). The results are given in Tab. 4.
MaF15  TEEA  fastCAR  CLIA  ANSGAIII  RVEA*  

M  D  Mean  Std  Mean  Std  Mean  Std  Mean  Std  Mean  Std 
5  100  3.06e1  3.25e3  4.21e1  4.28e2  3.68e1  4.02e2  9.94e1  9.40e2  6.17e1  4.06e2 
200  3.08e1  2.05e2  4.65e1  7.68e2  3.69e1  4.03e2  9.22e1  8.51e3  6.54e1  3.05e2  
300  3.08e1  1.85e2  4.77e1  8.25e2  3.69e1  5.79e2  8.75e1  5.44e2  6.60e1  2.48e2  
400  3.20e1  1.43e2  4.78e1  6.35e2  3.72e1  2.25e2  8.88e1  6.45e2  6.71e1  2.94e2  
500  3.28e1  7.65e2  4.90e1  4.52e2  3.76e1  5.85e2  8.93e1  5.33e2  6.40e1  3.45e2  
10  100  8.85e1  3.25e2  9.98e1  3.85e7  9.67e1  3.25e2  1.43e0  2.13e1  9.32e1  7.25e2 
200  9.02e1  4.67e2  9.97e1  4.43e7  9.84e1  4.72e2  1.53e0  2.67e1  9.81e1  6.80e2  
300  9.06e1  4.32e2  1.03e0  1.33e2  9.95e1  6.85e2  1.65e0  3.57e1  9.49e1  1.21e1  
400  9.10e1  5.25e2  1.25e0  2.25e1  1.13e0  8.25e2  1.43e0  3.65e1  1.03e0  1.33e1  
500  9.18e1  8.33e2  1.44e0  3.45e1  1.25e0  1.37e1  1.55e0  4.45e1  1.24e0  2.37e1  
Results are averaged over the independent runs on each test case. 
It can be observed that TEEA validates its effectiveness by achieving the best performance by a clear margin.
4.5 Stability Comparison using Confidence Interval Integration
Stability is of great importance for optimization algorithms. We analyze the stability of TEEA by comparing it with that of the state-of-the-art algorithms. The evaluation criterion for stability is the scaled approximate area of the logarithmically scaled confidence intervals. Let $\{(m_t, l_t, u_t)\}_{t=1}^{T}$ be the statistical information of the IGD of a certain algorithm on a certain problem sampled at time $t$, where $m_t$ is the mean IGD of the independent runs at the sample times, and $l_t$ and $u_t$ are the lower and upper bounds of the confidence interval calculated from the IGD values of the independent runs at the sample times. The stability criterion is computed as
$$ C = \frac{1}{T}\sum_{t=1}^{T}\left(\log u_t - \log l_t\right). $$
Note that the criterion requires the sample points to be the same across algorithms, so we use a fixed number of equally spaced sample points per test case. The smaller the criterion, the better the stability (less variance). The results of this criterion on all test cases are provided in Tab. 5.
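Under the definitions above, a minimal sketch of this criterion could look as follows; the uniform averaging over the T sample points is an assumption about the exact scaling, which the text leaves open.

```python
import numpy as np

def stability_criterion(lower, upper):
    """Approximate the area of the logarithmically scaled confidence
    band from T sampled lower/upper bounds of the IGD confidence
    interval; a smaller value indicates a more stable algorithm."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    return float(np.mean(np.log(upper) - np.log(lower)))
```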
Problem  TEEA  CVEA3  BCEIBEA  fastCAR  CLIA  ARMOEA  ANSGAIII  RVEA*  

F1  5  7.66e+1  8.48e0  3.54e+1  6.61e0  7.65e+1  2.36e+1  1.48e+1  9.06e+1 
10  8.51e+1  5.76e+1  8.02e+1  6.17e+1  3.98e+1  9.97e0  4.25e0  3.73e+1  
15  8.58e+1  5.37e+1  4.56e+1  8.65e+1  3.92e+1  9.31e0  7.05e+1  4.62e+1  
F2  5  4.20e+1  2.61e+1  5.85e0  2.41e+1  2.10e+1  1.06e0  2.37e+1  2.64e+1 
10  4.45e+1  5.72e+1  5.83e+1  2.75e+1  1.87e+1  7.71e+1  3.38e+1  6.65e0  
15  5.47e+1  3.01e+1  7.72e+1  6.79e0  3.28e+1  6.95e1  1.05e+1  3.20e+1  
F3  5  2.44e+1  7.18e+1  5.41e+1  9.99e+1  4.58e0  2.05e+1  3.40e+1  4.19e+1 
10  3.28e+1  6.34e0  4.30e+1  9.98e+1  3.88e+1  6.84e0  2.62e+1  8.45e+1  
15  8.71e0  5.34e+1  9.12e0  9.47e+1  7.35e+1  1.76e+1  7.07e+1  7.91e+1  
F4  5  4.90e+1  2.84e+1  1.26e0  9.31e+1  7.24e+1  9.01e+1  1.40e+1  9.14e+1 
10  3.08e+1  2.74e+1  3.66e+1  4.63e+1  3.10e+1  6.85e+1  6.69e+1  7.26e+1  
15  7.42e0  1.83e+1  7.31e1  4.01e0  1.99e+1  6.37e+1  4.90e+1  3.82e+1  
F5  5  8.52e+1  4.02e0  5.19e+1  8.92e+1  7.60e+1  9.12e+1  6.62e+1  8.93e+1 
10  7.61e0  3.00e+1  2.32e+1  9.05e+1  1.10e0  5.29e+1  5.80e+1  1.40e+1  
15  3.97e+1  5.21e0  6.87e+1  9.41e+1  3.04e+1  5.91e+1  7.63e+1  7.87e+1  
F6  5  1.42e+1  7.22e+1  7.19e+1  1.04e+1  5.13e+1  7.12e+1  4.92e0  5.76e+1 
10  7.82e+1  6.13e+1  5.35e+1  1.00e2  9.62e0  1.47e+1  6.20e+1  5.79e+1  
15  7.83e+1  8.18e+1  8.30e+1  8.90e+1  1.73e+1  9.30e0  1.23e+1  2.41e+1  
F7  5  4.63e+1  1.54e+1  6.54e+1  4.04e+1  7.49e+1  7.75e+1  2.16e+1  5.15e+1 
10  1.23e0  4.44e+1  6.19e+1  2.04e+1  5.43e+1  7.95e+1  9.23e+1  1.31e+1  
15  5.34e+1  6.39e+1  2.42e+1  2.31e+1  1.93e+1  7.98e+1  5.43e+1  5.65e+1  
F8  5  5.29e+1  7.62e+1  7.12e+1  7.84e0  4.07e+1  5.94e+1  2.41e+1  7.23e+1 
10  6.51e+1  8.74e+1  7.43e+1  7.57e+1  5.57e+1  5.67e+1  6.84e+1  2.13e+1  
15  2.28e+1  2.14e0  7.96e+1  6.27e+1  9.02e+1  7.18e+1  7.58e+1  4.45e+1  
F9  5  7.72e+1  1.75e+1  9.07e+1  4.41e+1  7.15e+1  7.80e+1  4.29e0  4.57e+1 
10  9.62e0  7.49e+1  5.32e+1  1.25e+1  7.56e+1  8.10e+1  3.33e+1  3.16e+1  
15  6.77e+1  7.34e0  7.32e+1  4.22e+1  1.56e+1  6.95e+1  5.69e+1  2.66e+1  
F10  5  1.48e+1  2.10e+1  2.23e+1  7.93e+1  7.66e+1  3.40e0  4.94e+1  7.28e+1 
10  4.39e+1  6.24e+1  2.53e+1  9.00e+1  5.39e+1  7.50e+1  6.06e+1  8.35e+1  
15  5.32e+1  1.80e0  6.00e+1  8.44e+1  7.52e+1  7.70e+1  7.33e+1  7.90e+1  
F11  5  2.58e+1  7.88e+1  8.80e+1  8.19e+1  1.01e+1  2.04e+1  5.74e+1  8.38e+1 
10  3.32e+1  6.84e+1  2.81e+1  9.06e+1  9.57e0  5.79e+1  7.81e+1  3.01e+1  
15  5.18e+1  5.81e+1  5.87e+1  9.04e+1  3.83e+1  7.89e+1  1.73e+1  3.30e+1  
F12  5  5.18e+1  9.13e+1  8.10e+1  8.30e+1  3.31e+1  7.71e+1  7.84e+1  9.43e+1 
10  2.49e+1  3.15e+1  8.68e+1  8.62e+1  3.04e+1  1.50e+1  3.29e+1  5.70e+1  
15  9.06e+1  7.20e+1  5.79e0  7.80e+1  1.69e+1  2.10e+1  4.74e+1  1.58e+1  
F13  5  9.41e0  9.46e0  5.91e+1  3.51e+1  6.57e0  7.79e+1  2.20e+1  9.25e+1 
10  6.34e+1  7.22e+1  8.76e+1  1.06e+1  5.89e0  6.97e+1  2.79e+1  8.51e+1  
15  7.31e+1  6.98e0  7.69e+1  7.05e+1  4.81e+1  8.30e+1  4.21e+1  7.60e+1  
F14  5  3.79e+1  6.72e+1  8.25e+1  6.34e+1  3.04e+1  5.58e+1  3.60e+1  8.34e+1 
10  8.89e+1  2.15e+1  4.91e+1  6.71e+1  6.94e+1  8.53e+1  9.88e0  6.74e+1  
15  1.51e+1  1.47e+1  2.00e+1  6.38e+1  1.08e+1  7.57e+1  3.37e+1  1.09e+1  
F15  5  5.78e+1  7.15e+1  5.93e+1  6.37e+1  5.52e+1  3.36e+1  8.84e+1  6.11e+1 
10  9.75e+1  2.78e+1  1.09e+1  5.57e+1  6.19e+1  6.54e+1  4.29e+1  7.47e+1  
15  1.14e+1  5.97e+1  8.79e+1  6.98e+1  5.66e+1  8.63e+1  2.55e+1  5.92e+1  
Friedman  4.07  4.20  4.93  5.38  3.40  4.84  4.02  5.16  
The Friedman test shows that TEEA attains moderate stability among the compared algorithms, indicating that the competitive performance achieved in the experiments is reasonably reliable.
4.6 Component Validation
4.6.1 Effectiveness of Individual Archive
We demonstrate the effectiveness of IA by comparing the performance of TEEA with that of a variant of TEEA without IA (which uses the population instead of IA to adjust the reference vectors). The results are given in Tab. 6.
Problem  TEEA  TEEA w/o IA  

Mean  Std  Mean  Std  
F1  5  1.08e1  7.25e4  1.16e1  7.84e4 
10  2.33e1  7.45e3  2.50e1  7.94e3  
15  2.67e1  8.85e3  2.85e1  8.96e3  
F2  5  9.65e2  2.85e3  1.00e1  2.89e3 
10  1.53e1  2.99e3  1.50e1  3.01e3  
15  1.64e1  4.35e3  1.67e1  4.53e3  
F3  5  6.82e2  1.90e3  6.76e2  1.94e3 
10  8.36e2  2.20e3  8.68e2  2.35e3  
15  9.11e2  1.27e3  8.93e2  1.38e3  
F4  5  1.77e0  3.25e2  1.78e0  3.49e2 
10  7.50e1  2.25e0  7.60e1  2.39e0  
15  2.80e3  1.23e2  2.95e3  1.28e2  
F5  5  1.96e0  9.55e3  1.98e0  9.93e3 
10  8.82e1  1.19e0  8.82e1  1.21e0  
15  2.35e3  3.07e2  2.51e3  3.21e2  
F6  5  1.82e3  1.43e4  1.95e3  1.53e4 
10  2.75e3  6.25e4  2.92e3  6.35e4  
15  1.03e2  9.25e3  1.08e2  1.01e2  
F7  5  2.78e1  3.25e3  2.85e1  3.50e3 
10  8.35e1  1.03e1  8.16e1  1.06e1  
15  1.80e0  3.40e1  1.81e0  3.67e1  
F8  5  9.17e2  4.32e3  9.15e2  4.34e3 
10  1.35e1  4.99e3  1.39e1  5.23e3  
15  1.93e1  1.87e2  2.07e1  1.87e2  
F9  5  9.17e2  6.35e3  9.71e2  6.87e3 
10  1.85e1  1.27e2  1.81e1  1.37e2  
15  2.25e1  6.25e2  2.40e1  6.56e2  
F10  5  3.87e1  1.45e2  3.96e1  1.48e2 
10  1.02e0  2.98e2  1.07e0  3.04e2  
15  1.38e0  3.91e2  1.35e0  4.06e2  
F11  5  3.89e1  1.93e3  3.96e1  1.96e3 
10  1.15e0  2.26e2  1.16e0  2.29e2  
15  1.43e0  3.71e2  1.52e0  3.75e2  
F12  5  9.36e1  3.43e3  9.69e1  3.42e3 
10  4.60e0  1.81e2  4.70e0  1.81e2  
15  7.73e0  8.12e2  7.77e0  8.34e2  
F13  5  8.83e2  1.25e2  9.39e2  1.27e2 
10  1.12e1  5.25e2  1.12e1  5.33e2  
15  1.43e1  6.35e2  1.43e1  6.39e2  
F14  5  3.42e1  2.47e2  3.55e1  2.54e2 
10  5.49e1  7.73e2  5.79e1  7.96e2  
15  6.22e1  1.43e1  6.52e1  1.54e1  
F15  5  3.06e1  4.33e2  3.26e1  4.48e2 
10  8.99e1  9.81e2  8.98e1  9.82e2  
15  1.02e0  1.35e1  1.04e0  1.46e1  
t-test  16/22/7 
It can be observed that TEEA with IA generally performs better than TEEA without IA. IA improves performance on different types of problems for two reasons. On problems with full FOS, IA prevents the adaptation engine from being triggered mistakenly by the stochastic perturbations of the evolutionary process. On problems with partial FOS, in addition to the first reason, IA keeps all the individuals that could activate reference vectors, whereas in the population, due to its limited size, useful cluster centers are often lost.
4.6.2 Effectiveness of Adaptation
To validate the effectiveness of RA, we feed the adaptation engine with different types of simulated tracked PFs to check whether the adaptation engine can effectively adapt the reference vectors to the distribution of the current PF. To verify the effectiveness visually, we craft several simulated test cases in which the current PF is segmented into different numbers of parts by imaginary infeasible regions of the corresponding objective spaces. The results are given in Fig. 8.
Moreover, to test the Markov property (the adaptation is expected to be Markovian, since past adaptations should not influence future ones), we shuffle the simulated test cases into all possible permutations and report the similarity of the adaptation results (the percentage of identically enabled points in the RA) on these sequences in Tab. 7.
Similarity (%)  
a  b  c  d 
100%  100%  100%  100% 
Adaptation is executed on the permuted sequences one by one, each run continuing from the previous state of the RA. 
The test demonstrates perfect Markovian behavior of the adaptation mechanism. Although these simulated test cases cannot represent general test cases, the results demonstrate a degree of reliable stability of the adaptation engine.
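The similarity measure used in Tab. 7 can be sketched as the percentage of identically enabled reference points between two RA states; the boolean-vector representation of an RA state is an assumption for illustration.

```python
def enabled_similarity(state_a, state_b):
    """Percentage of positions at which two boolean enable-vectors
    of the RA agree (100% means identical adaptation results)."""
    agree = sum(x == y for x, y in zip(state_a, state_b))
    return 100.0 * agree / len(state_a)

# Comparing the final RA state after each permuted sequence against
# a reference sequence would yield the entries of Tab. 7.
```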
5 Conclusions
Aiming to address the problems caused by infeasible parts of the objective spaces of MaOPs, this paper proposed a reference vector based MaOEA with interacting components. The algorithm TEEA tracks the current PF and uses this information to adapt the reference vectors that guide the evolution. The novelty of this paper mostly lies in the designed interaction scheme among the components and the method of adapting the reference vectors. The effectiveness of the work has been validated thoroughly in the comparative experimental studies. It can be concluded that the proposed algorithm TEEA is effective in dealing with MaOPs and is particularly good at solving problems with partially feasible objective spaces.
Acknowledgments
This work is supported by Mila (L’Institut québécois d’intelligence artificielle), the Scientific Computing Laboratory and Reasoning and Learning Laboratory, McGill University, the National Natural Science Foundation of China (61572104, 61103146, 61425002, 61751203), the Fundamental Research Funds for the Central Universities (DUT17JC04), and the Project of the Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University (93K172017K03).
References
 Louafi et al. (2017) H. Louafi, S. Coulombe, M. Cheriet, Multi-objective optimization in dynamic content adaptation of slide documents, IEEE Transactions on Services Computing 10 (2017) 231–243.

 Li et al. (2017) L. Li, L. Jiao, J. Zhao, R. Shang, M. Gong, Quantum-behaved discrete multi-objective particle swarm optimization for complex network clustering, Pattern Recognition 63 (2017) 1–14.
 Ferreira et al. (2017) A. C. Ferreira, M. L. Nunes, J. C. Teixeira, L. A. Martins, S. F. Teixeira, S. A. Nebra, Design of a solar dish Stirling cogeneration system: Application of a multi-objective optimization approach, Applied Thermal Engineering 123 (2017) 646–657.
 Du et al. (2018) W. Du, Y. Tang, S. Y. S. Leung, L. Tong, A. V. Vasilakos, F. Qian, Robust Order Scheduling in the Discrete Manufacturing Industry: A Multiobjective Optimization Approach, IEEE Transactions on Industrial Informatics 14 (2018) 253–264.

 Purshouse and Fleming (2007) R. C. Purshouse, P. J. Fleming, On the evolutionary optimization of many conflicting objectives, IEEE Transactions on Evolutionary Computation 11 (2007) 770–784.
 Wang et al. (2017) H. Wang, Y. Jin, X. Yao, Diversity assessment in many-objective optimization, IEEE Transactions on Cybernetics 47 (2017) 1510–1522.
 Chugh et al. (2018) T. Chugh, Y. Jin, K. Miettinen, J. Hakanen, K. Sindhya, A surrogate-assisted reference vector guided evolutionary algorithm for computationally expensive many-objective optimization, IEEE Transactions on Evolutionary Computation 22 (2018) 129–142.
 Cheng et al. (2016) R. Cheng, Y. Jin, M. Olhofer, B. Sendhoff, A reference vector guided evolutionary algorithm for many-objective optimization, IEEE Transactions on Evolutionary Computation 20 (2016) 773–791.
 Tian et al. (2017) Y. Tian, R. Cheng, X. Zhang, F. Cheng, Y. Jin, An indicator-based multiobjective evolutionary algorithm with reference point adaptation for better versatility, IEEE Transactions on Evolutionary Computation PP (2017) 1–1.
 Zhao et al. (2018) M. Zhao, H. Ge, H. Han, L. Sun, A many-objective evolutionary algorithm with fast clustering and reference point redistribution, in: IEEE Congress on Evolutionary Computation (CEC), 2018, pp. 1–6.
 Ge et al. (2018) H. Ge, M. Zhao, L. Sun, Z. Wang, G. Tan, Q. Zhang, C. L. P. Chen, A many-objective evolutionary algorithm with two interacting processes: Cascade clustering and reference point incremental learning, IEEE Transactions on Evolutionary Computation (2018).
 Qi et al. (2014) Y. Qi, X. Ma, F. Liu, L. Jiao, J. Sun, J. Wu, MOEA/D with adaptive weight adjustment, Evolutionary Computation 22 (2014) 231–264.
 Wu et al. (2018) M. Wu, K. Li, S. Kwong, Q. Zhang, J. Zhang, Learning to decompose: A paradigm for decomposition-based multiobjective optimization, IEEE Transactions on Evolutionary Computation (2018) 1–1.
 Jain and Deb (2014) H. Jain, K. Deb, An evolutionary many-objective optimization algorithm using reference-point based nondominated sorting approach, part II: Handling constraints and extending to an adaptive approach, IEEE Transactions on Evolutionary Computation 18 (2014) 602–622.
 Wang et al. (2013) R. Wang, R. C. Purshouse, P. J. Fleming, Preference-inspired coevolutionary algorithms for many-objective optimization, IEEE Transactions on Evolutionary Computation 17 (2013) 474–494.
 Praditwong and Yao (2006) K. Praditwong, X. Yao, A new multiobjective evolutionary optimisation algorithm: The two-archive algorithm, in: International Conference on Computational Intelligence and Security, volume 1, 2006, pp. 286–291.
 Wang et al. (2015) H. Wang, L. Jiao, X. Yao, Two_Arch2: An improved two-archive algorithm for many-objective optimization, IEEE Transactions on Evolutionary Computation 19 (2015) 524–541.
 Cheng et al. (2017) R. Cheng, M. Li, Y. Tian, X. Xiang, X. Zhang, S. Yang, Y. Jin, X. Yao, Benchmark Functions for CEC'2018 Competition on Many-Objective Optimization, Technical Report, CERCIA, School of Computer Science, University of Birmingham, U.K., 2017.
 Tian et al. (2017) Y. Tian, R. Cheng, X. Zhang, Y. Jin, PlatEMO: A MATLAB platform for evolutionary multiobjective optimization [educational forum], IEEE Computational Intelligence Magazine 12 (2017) 73–87.

 Czyżak and Jaszkiewicz (1998) P. Czyżak, A. Jaszkiewicz, Pareto simulated annealing: A metaheuristic technique for multiple-objective combinatorial optimization, Journal of Multi-Criteria Decision Analysis 7 (1998) 34–47.
 Deb et al. (2005) K. Deb, L. Thiele, M. Laumanns, E. Zitzler, Scalable test problems for evolutionary multiobjective optimization, in: Evolutionary Multiobjective Optimization: Theoretical Advances and Applications, 2005, pp. 105–145.
 Yuan et al. (2018) J. Yuan, H. Liu, F. Gu, A cost value based evolutionary many-objective optimization algorithm with neighbor selection strategy, in: IEEE Congress on Evolutionary Computation (CEC), 2018, pp. 1–8.
 Li et al. (2016) M. Li, S. Yang, X. Liu, Pareto or non-Pareto: Bi-criterion evolution in multiobjective optimization, IEEE Transactions on Evolutionary Computation 20 (2016) 645–665.
 Zitzler and Künzli (2004) E. Zitzler, S. Künzli, Indicator-based selection in multiobjective search, in: Parallel Problem Solving from Nature – PPSN VIII: 8th International Conference, Birmingham, UK, 2004, pp. 832–842.