Solving Large-Scale Multi-Objective Optimization via Probabilistic Prediction Model

07/16/2021 · by Haokai Hong et al. · Xiamen University

The main feature of large-scale multi-objective optimization problems (LSMOPs) is the need to optimize multiple conflicting objectives while simultaneously considering thousands of decision variables. An efficient LSMOP algorithm should be able to escape local optima in the huge search space and find the global optimum. Most current research focuses on how to handle the decision variables; however, because of their large number, such approaches easily incur a high computational cost. Maintaining the diversity of the population is one of the effective ways to improve search efficiency. In this paper, we propose a probabilistic prediction model based on a trend prediction model and a generating-filtering strategy, called LT-PPM, to tackle LSMOPs. The proposed method enhances the diversity of the population through importance sampling. At the same time, because it adopts an individual-based evolution mechanism, its computational cost is independent of the number of decision variables, thus avoiding the problem of exponential growth of the search space. We compared the proposed algorithm with several state-of-the-art algorithms on different benchmark functions. The experimental results and complexity analysis demonstrate that the proposed algorithm achieves significant improvements in performance and computational efficiency for large-scale multi-objective optimization.


1 Introduction

Many optimization problems in the real world involve hundreds or even thousands of decision variables and multiple conflicting objectives. These problems are defined as large-scale multi-objective optimization problems (LSMOPs) Ponsich et al. (2013); Stanko et al. (2015), and they have emerged from many real-world applications Hong et al. (2021). For example, the problem of designing a protein molecular structure involves thousands of decision variables. Likewise, in a routing system with a large number of network nodes, the distribution of transmission capacity of each node is regarded as a decision variable, and the energy consumption, robustness, and stability of the network system together constitute a large-scale multi-objective optimization problem Tang et al. (2017). Since the size of the search space grows exponentially with the number of decision variables, leading to the problem of dimensionality explosion in LSMOPs Wang et al. (2015), finding the optimal solution of an LSMOP is more challenging than for an ordinary multi-objective optimization problem (MOP) Durillo et al. (2010). Therefore, an excellent large-scale multi-objective optimization algorithm (LSMOA) should overcome this problem by searching the decision space effectively, which is essential for solving LSMOPs.

The fact that LSMOPs are more difficult to solve than MOPs with a small number of decision variables makes it challenging for multi-objective evolutionary algorithms (MOEAs) to effectively explore the search space. Generally speaking, an MOEA tends to converge to a local optimum prematurely, or it may only converge to a relatively large area Oldewage et al. (2017). Therefore, it has been shown empirically that most existing MOEAs cannot solve LSMOPs Deb et al. (2002); Zhang et al. (2018). In fact, most existing MOEAs improve the effectiveness of solving MOPs by enhancing environmental selection (that is, the strategy of selecting solutions for the next generation from the current population) Deb et al. (2002); Tian et al. (2018); Zhang and Li (2007). However, such search efficiency is not sufficient for finding the optimal solution in a large search space. Thus, it is important to find an LSMOA that can search for the optimal solution in the space of LSMOPs.

In order to solve LSMOPs, various LSMOAs have been presented in recent years Yang et al. (2008); Tian et al. (2020, 2021); Qian (2020). These LSMOAs can be roughly divided into three categories. The first category benefits from decision variable analysis. A large-scale evolutionary algorithm based on decision variable clustering (LMEA) Zhang et al. (2018) is representative of this category; it divides the decision variables into two categories via a clustering method and then, focusing on convergence and diversity respectively, applies two different strategies to optimize the convergence-related and diversity-related variables. The second category of LSMOAs is based on decision variable grouping. This type of LSMOA divides the decision variables into several groups and then optimizes each group of decision variables. For example, CCGDE3 Antonio and Coello (2013) maintains several subsets of equal-length decision variables obtained by variable grouping as subpopulations. The third category is based on problem transformation. A representative framework of this category is the Large-scale Multi-Objective Optimization Framework (LSMOF) He et al. (2019). In LSMOF, the original LSMOP is converted into a low-dimensional single-objective optimization problem with some direction vectors and weight variables, aiming to guide the population towards the optimal solution.

In the methods based on decision variable analysis and decision variable grouping, the algorithmic complexity is relatively high because the algorithm relies on analyzing the decision variables. However, solving LSMOPs requires considering not only how to escape from local optima but also how to reduce the impact of the number of decision variables on the complexity of the algorithm Qian and Yu (2017). Inspired by the advantages of the Trend Prediction Model (TPM), we found that the TPM can be used to predict the optimal solution. The TPM estimates the probability that a potential particle appears at each position in the next generation, thereby increasing the uncertainty of particle movement and achieving better prediction results. With the advantages of the TPM, the method not only maintains the diversity of the population Jiang et al. (2021), but also keeps the computational complexity independent of the number of decision variables. In the specific implementation, the TPM is applied to sample individuals from the parent population and expand them into the offspring population.

In this paper, we integrate the Trend Prediction Model into a probabilistic prediction model to obtain a novel LSMOA: a probabilistic prediction model based on a trend prediction model for solving large-scale multi-objective optimization (LT-PPM). Our proposed algorithm selects individuals from the previous generation and expands them via the TPM to generate a new offspring population, which ensures the diversity of the offspring. The contributions of this work are summarized as follows.

  1. The proposed algorithm framework is an effective tool to solve the problem of large decision variable space and overcome the tendency to fall into a local optimum in LSMOP. Our method uses the trend prediction model to maintain the quality and diversity of the population simultaneously to escape from the local optimum, which provides a novel idea to solve LSMOP.

  2. Taking into account the high dimensionality of the decision variables in LSMOPs, the complexity of our proposed method is not directly related to the scale of the decision variables, but only to the number of objectives and the population size. While escaping from local optima, our method also eliminates the influence of large-scale decision variables on the computational complexity.

2 Preliminary Studies and Related Works

In this section, the definition of large-scale multi-objective optimization problems is presented, and then existing methods for solving LSMOPs are introduced.

2.1 Large-scale Multi-objective Optimization

The mathematical form of an LSMOP is as follows:

    minimize F(x) = (f_1(x), f_2(x), …, f_m(x)), subject to x ∈ Ω ⊆ R^n    (1)

where x = (x_1, x_2, …, x_n) is the n-dimensional decision vector and
F(x) is the m-dimensional objective vector.

It should be noticed that the dimension n of the decision vector is more than 100 in LSMOPs Tian et al. (2019). When dealing with multi-objective optimization problems, evolutionary algorithms (EAs) are usually based on individual evolution strategies: genetic operators are used to evolve individuals, so that EAs have good global convergence and versatility. The evolutionary framework we use in this paper is an emerging branch of this field called the Probabilistic Prediction Model (PPM) Jiang et al. (2018, 2017). A PPM promotes the evolution of the entire population by building a probability model on the Pareto-optimal front.
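To make the definition concrete, the following toy example (our own illustration, not one of the LSMOP1-LSMOP9 benchmarks) evaluates a bi-objective function over an n = 1000 dimensional decision vector; the two objectives conflict because one is minimized at x = 0 and the other at x = 1:

```python
import numpy as np

def evaluate(x):
    """Toy bi-objective function with an n-dimensional decision vector.

    Illustrative stand-in for an LSMOP objective vector F(x): f1 rewards
    values near 0, f2 rewards values near 1, so they conflict.
    """
    f1 = np.mean(x ** 2)            # minimized at x = 0
    f2 = np.mean((x - 1.0) ** 2)    # minimized at x = 1
    return np.array([f1, f2])

x = np.zeros(1000)                  # n = 1000 decision variables
print(evaluate(x))                  # f1 = 0, f2 = 1
```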

2.2 Related Works

In this part, we introduce some state-of-the-art large-scale multi-objective optimization algorithms. Much progress has been made in solving LSMOPs, and most existing methods can be divided into three categories: those based on decision variable analysis, those based on decision variable grouping, and those based on problem transformation.

The first category is based on the analysis of decision variables. Zhang et al. Zhang et al. (2018) proposed a large-scale evolutionary algorithm (LMEA) based on clustering of decision variables. Another representative of this type of algorithm is MOEA/DVA Ma et al. (2016), in which the original LSMOP is decomposed into many simpler sub-MOPs; the decision variables in each sub-problem are then optimized as an independent sub-component. The disadvantage of LSMOAs based on decision variable analysis is that the connection between decision variables could be incorrectly determined when updating solutions, which may lead to convergence to local optima Zhang et al. (2018). To solve large-scale MOPs, which pose a great challenge to the balance between diversification and convergence, by considering the correlation between position and distance functions, a novel indicator-based algorithm with an enhanced diversification mechanism was developed by Hong et al. Hong et al. (2019).

The second type of LSMOA is the decision variable grouping-based framework Yang et al. (2014). The method proposed by Antonio et al. Antonio and Coello (2013) maintains several independent subgroups, each a subset of same-length decision variables obtained by variable grouping (for example, random grouping, linear grouping, ordered grouping, or differential grouping). All sub-populations work together to optimize the LSMOP in a divide-and-conquer manner. Zille et al. Zille et al. (2018) proposed a weighted optimization framework (WOF) for solving LSMOPs: decision variables are divided into many groups, each group is assigned a weight variable, and optimizing the weight variables of a group is treated as a sub-problem over a subspace of the original decision space. Liu et al. Liu et al. (2020) proposed an algorithm in which random dynamic grouping is used in place of ordered grouping to improve the performance of the WOF framework.

Within this category, there is also a novel idea based on a competitive optimizer. Tian et al. Tian et al. (2019) proposed LMOCSO, based on a competitive swarm optimizer, to solve LSMOPs; the method uses a two-stage strategy to update the positions of the particles. Another idea in this category is based on collaboration. Bergh et al. Bergh and Engelbrecht (2004) proposed a method that transforms particle swarm optimization into a collaborative framework: it divides the search space into low-dimensional subspaces to overcome the exponential increase in difficulty and ensures that every possible area of the search space can be searched. Shang et al. Shang et al. (2016) proposed a cooperative co-evolution approach for large-scale multi-objective routing problems. Another cooperative coevolutionary algorithm was proposed by Li et al. Li and Wei (2018), in which a fast interdependency identification grouping method is utilized for large-scale multi-objective problems.

However, for LSMOAs based on the grouping of decision variables, two related variables may be assigned to different groups, which can lead to convergence to local optima Yang et al. (2014).

The third category is based on problem transformation. In order to enhance the convergence pressure, one intuitive solution is to directly modify the definition of traditional Pareto dominance, as in θ-dominance Yuan et al. (2016), L-optimality Zou et al. (2008), ε-dominance Hadka (2013), preference order ranking Pierro et al. (2007), and fuzzy dominance Wang and Jiang (2007). To enhance the convergence of evolutionary algorithms on large-scale multi-objective problems, Qin et al. Qin et al. (2021) proposed a method for generating guiding solutions along search directions defined by a few chosen individuals in the current parent population. Another idea belonging to this category is to combine traditional dominance with other convergence-related metrics. MOEAs that fall into this category include the Grid-based Evolutionary Algorithm (GrEA) Yang et al. (2013), NSGA-II based on substitute distance assignment Köppen and Yoshida (2007), the Many-Objective Evolutionary Algorithm based on Directional Diversity and Favorable Convergence (MaOEA-DDFC) Cheng et al. (2015), the Preference-Inspired Co-Evolutionary Algorithm (PICEA-g) Wang et al. (2013), and the Knee point driven Evolutionary Algorithm (KnEA) Zhang et al. (2015).

In addition to the above convergence-related problem transformations, for LSMOPs, He et al. He et al. (2019) introduced a general framework called the large-scale multi-objective optimization framework (LSMOF). LSMOF reconstructs the original LSMOP as a low-dimensional single-objective optimization problem with some direction vectors and weight variables, aiming to lead the population to the POS. Recently, He et al. He et al. (2020) designed a GAN-driven evolutionary multi-objective optimization algorithm (GMOEA), which produces offspring with generative adversarial networks.

As the number of decision variables increases linearly, the complexity (as well as volume) of the search space will increase exponentially, and the methods of analysis or grouping of decision variables presented above cannot tackle the curse of dimensionality. Therefore, in this work, we focus on how to use TPM to maintain population diversity to avoid falling into local optimum while avoiding the curse of dimensionality caused by large-scale decision variables to better solve LSMOP.

3 A Probabilistic Prediction Model Based on Trend Prediction Model

3.1 Overview

Based on the Generating-Filtering strategy, LT-PPM combines the importance sampling method with a non-parametric density estimation function to obtain a different selection method for candidate solutions. Specifically, LT-PPM uses kernel density estimation, a non-parametric probability density estimation method, to assign a sparseness value to each individual, and then uses importance sampling to select subsequent solutions. This method of selecting candidate solutions aims to balance exploitation and exploration in both time and space: on the one hand, the sparseness changes from generation to generation; on the other hand, sparseness itself guides the method towards this balance in the large-scale search space.

We first give the definition of particles; in the remainder of this section we then introduce importance sampling based on kernel density estimation, the trend prediction model, LT-PPM, and the complexity analysis of the algorithm.

Definition 1 (Particle)

For an optimization problem with m optimization goals, a particle is a two-tuple (x, v), where x represents a solution of the multi-objective optimization problem and v is a unit vector in n-dimensional space representing the direction of the particle.

Note that the direction vectors of the first-generation population are randomly initialized. In subsequent offspring populations, the direction vector of each individual is the direction of movement along which its parent particle produced it.

3.2 Importance Sampling Based on Kernel Density Estimation

Importance sampling is defined as follows.

Definition 2 (Importance Sampling)

When calculating the expectation of a statistic f(x) under the original probability distribution p(x), one can instead calculate the expectation of f(x) under a certain importance probability distribution q(x),

    E_p[f(x)] = ∫ f(x) p(x) dx = ∫ f(x) (p(x) / q(x)) q(x) dx = E_q[f(x) p(x) / q(x)]    (2)

Assume that the current population is X = {x_1, x_2, …, x_N} and the corresponding particles are {p_1, p_2, …, p_N}. The probability density of the population in different regions reflects the degree of crowding in the search space: where the probability density is high, the population is denser. Therefore, the Density of particle p_i can be defined as its kernel density estimate,

    Density(p_i) = (1/N) Σ_{j=1}^{N} K_h(x_i − x_j)    (3)

where K_h is the kernel function and the parameter h is used to adjust the width of the kernel density curve: when h is larger, the distribution curve is smoother; when h is smaller, the distribution curve is steeper.

The Sparseness of particle p_i is defined as the reciprocal of Density,

    Sparseness(p_i) = 1 / Density(p_i)    (4)

Based on the Sparseness of each particle, the importance sampling for our method can be defined. Since each particle is selected uniformly at random, particle p_i obeys a uniform distribution p(p_i) = 1/N. Then we have the importance distribution,

    q(p_i) = Sparseness(p_i) / Σ_{j=1}^{N} Sparseness(p_j)    (5)

The number of offspring that particle p_i produces is proportional to Sparseness(p_i). The expectation of sparseness describes this sampling process, and its calculation formula is E_p[Sparseness] = Σ_{i=1}^{N} p(p_i) Sparseness(p_i). However, in MOPs it is not easy to determine how many offspring a selected particle should produce, so we do not want to use this sampling scheme directly. The original sampling process can be rewritten as,

    E_p[Sparseness] = E_q[Sparseness(p_i) p(p_i) / q(p_i)] = E_q[c]    (6)

where c = (1/N) Σ_{j=1}^{N} Sparseness(p_j) is a constant independent of i.

This means that we can select particles according to the importance distribution q and then produce the same number of offspring for each selected particle. Algorithm 1 summarizes the above sampling process. In each iteration, the sparseness of each particle is calculated first; the importance distribution q is then constructed; finally, an individual is sampled from the distribution q, and the individual and its velocity are returned. This sampling method does not rely on the analysis of decision variables, avoiding the influence of large-scale decision variables on the computational complexity.

Input: P (Pareto-optimal set), h (bandwidth)
Output: p (sampled particle), x (solution of p), v (velocity of p)
1 N ← size of P;
2 for p_i in P do
3       Density(p_i) ← (1/N) Σ_{j=1}^{N} K_h(x_i − x_j);
4       Sparseness(p_i) ← 1 / Density(p_i);
5 for p_i in P do
6       Construct q(p_i) according to Equation 5;
7 Sample p based on q;
return p, x, v
Algorithm 1 ISample(P, h)
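A minimal sketch of Algorithm 1 in Python, assuming a Gaussian kernel and representing the POS as an (N, n) NumPy array; the function and variable names are our own illustration, not the authors' reference implementation:

```python
import numpy as np

def isample(pos, h, rng=None):
    """Sketch of ISample: pick one particle by importance sampling,
    where sparser particles are more likely to be chosen.

    pos : (N, n) array of solutions in the current Pareto-optimal set.
    h   : kernel bandwidth (assumed Gaussian kernel).
    Returns the index of the sampled particle.
    """
    if rng is None:
        rng = np.random.default_rng()
    n_particles = len(pos)
    # Gaussian kernel density estimate for each particle (Equation 3).
    diffs = pos[:, None, :] - pos[None, :, :]          # (N, N, n)
    sq_dist = np.sum(diffs ** 2, axis=-1)              # (N, N)
    density = np.exp(-sq_dist / (2.0 * h ** 2)).mean(axis=1)
    # Sparseness is the reciprocal of density (Equation 4).
    sparseness = 1.0 / density
    # Importance distribution q (Equation 5): normalized sparseness.
    q = sparseness / sparseness.sum()
    return rng.choice(n_particles, p=q)
```

With this distribution, an isolated particle is sampled far more often than the members of a tight cluster, which is exactly the diversity-preserving behavior the text describes.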

3.3 Trend Prediction Model

After selecting the particle to be expanded via the above selection process, it is necessary to sample from a specific distribution represented by the selected particle to generate new individuals. Physically, if an object is not affected by external forces, it will maintain its original state of motion. Therefore, if a particle moves in the direction v, the displacement direction v′ of the particle at the next moment is equal to v.

In order to effectively move to the POF, the randomness is added to the movement of the particle, so that the direction of movement only provides information on the trend of the movement instead of directly determining the place at the next moment. Such trend information can well guide the particles to update, thereby approaching the POF faster and more accurately.

Definition 3 (Trend Prediction Model, TPM)

For any given direction vector v and distance measurement function d(·, ·), a probability distribution T on the unit hypersphere is called a trend prediction model (TPM). For two vectors u_1 and u_2 in the model, if they satisfy

    d(u_1, v) < d(u_2, v)    (7)

then the vector u_1 is closer to the direction vector v than u_2 and should have a higher probability of occurrence, i.e., T(u_1) > T(u_2).

Input: v (unit direction vector)
Output: v′ (random direction vector)
1 n ← dimension of v;
2 k ← subscript of the first non-zero element of v;
3 for i = 1, …, n do
4       if i = k then
5             b_i ← v;
6       else
7             b_i ← the vector whose i-th element is 1 and whose other elements are 0;
8 Q ← apply Schmidt orthogonalization to the matrix (b_1, …, b_n);
9 u ← sampled from the (n − 1)-dimensional unit hyperspherical coordinate system;
10 v′ ← Q u;
return v′
Algorithm 2 randpick(v)

In the trend prediction model, particle movement has a moving trend, which is indicated by the unit vector v. The method aims to design the TPM distribution so that the closer a sampled vector is to v, the higher its sampling probability. For this purpose, the angle θ between the sampled vector and v is introduced. Given that the distance measurement is the angle, the domain of the distribution is the unit hypersphere, and θ is sampled from a Gaussian distribution with an expectation of 0 and a variance of h, the bandwidth. Since the domain is the unit hypersphere, the TPM is actually a probability distribution over direction vectors: directions close to the vector v (that is, with a small angle to v) have a higher probability of being selected, and other directions have a lower probability.

After sampling an angle θ, it is necessary to randomly select one among all the vectors whose angle with v is θ. To sample from all such vectors, we construct a rotation matrix Q via Schmidt orthogonalization that rotates the vector v to the first coordinate axis. Then, a vector is randomly selected on the (n − 1)-dimensional hypersphere of directions at angle θ. Finally, the sampled vector is rotated back to the original coordinate space through the inverse transformation Q^(-1).
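The angle-based direction sampling described above can be sketched as follows. Here QR factorization stands in for the Schmidt orthogonalization, and `sigma` plays the role of the bandwidth; names and defaults are our own assumptions:

```python
import numpy as np

def randpick(v, sigma=0.3, rng=None):
    """Sketch of randpick: sample a random unit vector whose angle to
    the trend direction v is drawn from a zero-mean Gaussian, so that
    directions close to v are more probable.
    """
    if rng is None:
        rng = np.random.default_rng()
    n = len(v)
    # Orthonormal basis whose first column is v (Gram-Schmidt via QR).
    basis = np.eye(n)
    basis[:, 0] = v
    q, _ = np.linalg.qr(basis)
    if np.dot(q[:, 0], v) < 0:        # QR may flip the sign of column 0
        q[:, 0] = -q[:, 0]
    # Angle to v sampled from N(0, sigma^2); small angles are likely.
    theta = rng.normal(0.0, sigma)
    # Random direction on the (n-1)-sphere orthogonal to v.
    w = rng.normal(size=n - 1)
    w /= np.linalg.norm(w)
    # Combine: cos(theta) along v, sin(theta) in the orthogonal plane,
    # then rotate back to the original coordinate space.
    local = np.concatenate(([np.cos(theta)], np.sin(theta) * w))
    return q @ local
```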

Algorithm 2 gives the process of generating offspring direction vectors from a parent individual. The complete trend prediction model is given in Algorithm 3, where ISample(POS) is Algorithm 1. In the specific strategy for generating offspring, after sampling the direction vector by Algorithm 2, it is necessary to determine the particle's movement step length. The actual step length is sampled from a normal distribution with an expectation of 0 and a variance of h.

Input: P (Pareto-optimal set), h (bandwidth), k (number of samples)
Output: S (sampled set)
1 S ← ∅;
2 for i = 1, …, k do
3       (p, x, v) ← ISample(P, h);
4       v′ ← randpick(v);
5       l ← sampled from the normal distribution N(0, h);
6       x′ ← x + |l| · v′;
7       S ← S ∪ {(x′, v′)};
8 return S;
Algorithm 3 TPM(P, h, k) (Trend Prediction Model)
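A simplified sketch of the offspring-generation step of Algorithm 3. For brevity, the parent here is chosen uniformly rather than by the importance sampling of Algorithm 1, and the trend distribution is approximated by Gaussian perturbation of the parent direction; both are simplifications of the procedure in the text:

```python
import numpy as np

def tpm(solutions, directions, h, k, rng=None):
    """Expand k offspring from the current population.

    solutions  : (N, n) array of parent solutions.
    directions : (N, n) array of parent unit direction vectors.
    h          : bandwidth controlling perturbation and step length.
    """
    if rng is None:
        rng = np.random.default_rng()
    n = solutions.shape[1]
    children, child_dirs = [], []
    for _ in range(k):
        i = rng.integers(len(solutions))
        # Perturb the parent's trend direction and renormalize,
        # so directions close to the parent's are more probable.
        d = directions[i] + rng.normal(0.0, h, size=n)
        d /= np.linalg.norm(d)
        # Step length drawn from a zero-mean Gaussian with variance h.
        step = abs(rng.normal(0.0, np.sqrt(h)))
        children.append(solutions[i] + step * d)
        child_dirs.append(d)
    return np.array(children), np.array(child_dirs)
```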

3.4 Large scale TPM based Probabilistic Prediction Model (LT-PPM)

Combining the importance sampling and trend prediction model, the complete algorithm implementation of LT-PPM is given in Algorithm 4. The key steps of the algorithm are:

Generating: Select the particles to be expanded through importance sampling based on kernel density estimation, and then predict the next generation population through the trend prediction model;

Filtering: Screen out outstanding individuals from the parent population and offspring population.

Input: R (size of restricted population), N (size of population), E (the maximum number of evaluations), λ (attenuation coefficient)
Output: P
1 P ← random initial population;
2 while used evaluations < E do
3       S ← TPM(P, h, N);
4       Update the POS and POF based on P ∪ S;
5       Remove the densest individuals repeatedly until |P| ≤ R;
6       h ← λ h;
7 return P;
Algorithm 4 Large scale TPM based PPM
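Putting the pieces together, the Generating-Filtering loop can be sketched in miniature. Bounded decision variables in [0, 1], random unit directions instead of the full TPM, and a simple nondominance-first filter are all simplifying assumptions of this illustration, not the paper's exact procedure:

```python
import numpy as np

def nondominated(objs):
    """Boolean mask of non-dominated rows of an (N, m) objective array."""
    n = len(objs)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(objs[j] <= objs[i]) and np.any(objs[j] < objs[i]):
                mask[i] = False
                break
    return mask

def lt_ppm(evaluate, n_var, pop_size, max_evals, h=0.5, lam=0.9, rng=None):
    """Minimal generating-filtering loop in the spirit of Algorithm 4."""
    if rng is None:
        rng = np.random.default_rng()
    pop = rng.random((pop_size, n_var))
    evals = pop_size
    while evals < max_evals:
        # Generating: expand offspring with random unit directions and
        # Gaussian step lengths (a stand-in for the full TPM).
        idx = rng.integers(pop_size, size=pop_size)
        dirs = rng.normal(size=(pop_size, n_var))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        steps = np.abs(rng.normal(0.0, h, size=(pop_size, 1)))
        children = np.clip(pop[idx] + steps * dirs, 0.0, 1.0)
        evals += pop_size
        # Filtering: keep pop_size individuals, nondominated ones first.
        merged = np.vstack([pop, children])
        objs = np.array([evaluate(x) for x in merged])
        order = np.argsort(~nondominated(objs), kind="stable")
        pop = merged[order][:pop_size]
        h *= lam                      # attenuate the bandwidth
    return pop
```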

The algorithm framework of LT-PPM is briefly illustrated in conjunction with Figures 1(a) to 1(d). In a certain generation, the LT-PPM algorithm maintains a population and calculates the corresponding POS and POF (the solid line in Figure 1(a) represents the POF of this generation, and the dashed line represents the true POF). The non-parametric kernel density estimation method is then used to estimate the probability density of the distribution corresponding to the POS, and the importance sampling method is used to select a particle. The principle is that particles with higher sparseness have a higher probability of being selected (Figure 1(b)), while particles in dense regions have a lower probability of being selected. After that, the trend prediction model is applied to the selected particle to obtain a series of new solutions (Figure 1(c), where gray dots represent the newly generated population). Finally, the POS and POF are updated according to the newly generated individuals (Figure 1(d)).

Figure 1: Algorithm Framework of LT-PPM

3.5 Complexity Analysis

Assume an LSMOP with m objectives and a population of size N; then the time to calculate all objectives is O(mN). In our algorithm, N individuals are sampled in each iteration, and each individual requires O(N) time to sample, so the time complexity of the sampling process in each iteration is O(N²). Suppose the maximum number of evaluations is E; the actual number of iterations is then t = E/N, so the total complexity of the algorithm is O(t(mN + N²)).

The complexity of the algorithm is proportional to the square of the number of population and the number of objectives, and has no direct relation with the scale of decision variables of the problem. Therefore, our proposed LT-PPM can avoid the influence of large-scale decision variables on the computational complexity in solving large-scale multi-objective optimization problems.

4 Experiments and Analysis

4.1 Algorithms in Comparison and Test Problems

The proposed LT-PPM was evaluated on the large-scale multi-objective problems LSMOP1-LSMOP9 Cheng et al. (2017) and compared with several state-of-the-art algorithms, including LMEA Zhang et al. (2018), LMOCSO Tian et al. (2019), NSGAIII Deb and Jain (2014), and RMMEDA Zhang et al. (2008). Following Tian et al. (2017), the parameters of the LMEA, RMMEDA, and LMOCSO algorithms are set to their default values. For all test problems, the number of optimization objectives is set to 3. The remaining parameters are set as follows.

  1. Population size: The population size is set to 300.

  2. Termination condition: The maximum evaluation times of all compared LSMOAs is set to 100000.

  3. Parameter of LT-PPM: The bandwidth attenuation coefficient λ in Algorithm 4 is set to 0.9.

4.2 Performance Indicators

  1. Schott's Spacing (SP) Ismayilov and Topcuoglu (2018): It measures the uniformity of the solutions found by the algorithm. The smaller the SP value, the more uniform the distribution of solutions obtained by the algorithm. It is calculated by the following formula:

        SP = sqrt( (1 / (|A| − 1)) Σ_{i=1}^{|A|} (d̄ − d_i)² )    (8)

    where A is the solution set obtained by the algorithm, d_i represents the Euclidean distance between the i-th solution in A and its closest neighboring solution in A, and d̄ represents the mean value of all d_i.
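A small sketch of the SP computation under the common formulation (Euclidean nearest-neighbor distances within the obtained set); the function name is our own:

```python
import numpy as np

def spacing(front):
    """Schott's Spacing (SP) over an obtained set of objective vectors.

    A perfectly evenly spaced front has SP = 0.
    """
    a = np.asarray(front, dtype=float)
    # Pairwise Euclidean distances, ignoring self-distances.
    diff = a[:, None, :] - a[None, :, :]
    dist = np.sqrt(np.sum(diff ** 2, axis=-1))
    np.fill_diagonal(dist, np.inf)
    d = dist.min(axis=1)          # nearest-neighbor distance per point
    return np.sqrt(np.sum((d.mean() - d) ** 2) / (len(a) - 1))
```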

  2. Inverted Generational Distance (IGD) Zitzler et al. (2003): IGD is a metric used to quantify the convergence of the solution set obtained by a multi-objective optimization algorithm. A smaller IGD value indicates better convergence. IGD is defined as

        IGD(P*, P) = (1 / |P*|) Σ_{x ∈ P*} min_{y ∈ P} dist(x, y)    (9)

    where P* is the true POF of the multi-objective optimization problem, P is the approximation of the POF obtained by the multi-objective optimization algorithm, dist(x, y) is the Euclidean distance between x and y, and |P*| is the number of solutions in P*.
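Correspondingly, IGD can be computed as the mean distance from each reference point on the true POF to its nearest obtained point (function name our own):

```python
import numpy as np

def igd(true_pof, approx_pof):
    """Inverted Generational Distance: mean distance from each point of
    the true POF to its closest point in the obtained approximation."""
    t = np.asarray(true_pof, dtype=float)
    a = np.asarray(approx_pof, dtype=float)
    diff = t[:, None, :] - a[None, :, :]
    dist = np.sqrt(np.sum(diff ** 2, axis=-1))   # (|P*|, |P|)
    return dist.min(axis=1).mean()
```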

4.3 Performance on LSMOP Problems and Discussion

Problem Des. NSGAIII RMMEDA LMEA LMOCSO LT-PPM
1000 4.37E-01 4.36E-01 4.72E-01 4.75E-01 4.15E-01
LSMOP1 2000 4.43E-01 4.28E-01 4.33E-01 5.16E-01 4.93E-01
5000 4.35E-01 4.32E-01 4.19E-01 5.33E-01 3.15E-01
1000 1.68E-02 4.80E-01 5.34E-01 6.19E-02 3.10E-01
LSMOP2 2000 1.37E-02 4.81E-01 5.22E-01 5.27E-02 3.10E-01
5000 1.47E-02 5.44E-01 5.35E-01 5.24E-02 3.16E-01
1000 8.62E-01 9.27E-01 1.07E+00 6.67E-01 1.76E-01
LSMOP3 2000 8.77E-01 9.72E-01 1.02E+00 8.36E-01 2.09E-01
5000 9.23E-01 9.03E-01 1.09E+00 9.86E-01 3.66E-01
1000 5.24E-02 4.81E-01 5.12E-01 1.11E-01 2.97E-01
LSMOP4 2000 2.63E-02 5.13E-01 5.22E-01 7.39E-02 3.10E-01
5000 1.63E-02 5.14E-01 5.19E-01 5.62E-02 2.99E-01
1000 6.61E-01 7.61E-01 7.25E-01 7.13E-01 4.39E-01
LSMOP5 2000 5.75E-01 7.04E-01 7.74E-01 6.58E-01 4.10E-01
5000 5.53E-01 6.86E-01 7.68E-01 7.04E-01 3.70E-01
1000 9.77E-01 1.02E+00 1.15E+00 1.30E+00 3.08E-01
LSMOP6 2000 9.44E-01 1.03E+00 1.15E+00 1.24E+00 2.58E-01
5000 9.57E-01 1.15E+00 1.17E+00 9.31E-01 3.07E-01
1000 8.59E-01 8.73E-01 9.46E-01 3.57E-01 2.86E-01
LSMOP7 2000 5.97E-01 9.11E-01 9.11E-01 1.02E+00 3.13E-01
5000 5.21E-01 9.37E-01 8.99E-01 3.91E-01 3.44E-01
1000 5.24E-01 7.39E-01 7.23E-01 9.56E-01 3.39E-01
LSMOP8 2000 4.54E-01 7.53E-01 7.63E-01 8.58E-01 3.29E-01
5000 3.89E-01 7.07E-01 7.42E-01 8.99E-01 3.19E-01
1000 8.58E-01 7.54E-01 9.46E-01 8.93E-01 3.66E-01
LSMOP9 2000 8.63E-01 8.16E-01 9.05E-01 9.67E-01 3.87E-01
5000 8.50E-01 8.26E-01 9.17E-01 9.77E-01 5.32E-01
Table 1: Mean Values of SP Metric Obtained by Compared Algorithms Over LSMOP Problems With 3-Objectives
Problem Des. NSGAIII RMMEDA LMEA LMOCSO LT-PPM
1000 5.4E+00 8.8E+00 1.1E+01 1.3E+00 6.5E-01
LSMOP1 2000 7.3E+00 7.8E+00 1.1E+01 1.5E+00 5.9E-01
5000 9.8E+00 8.6E+00 1.1E+01 1.4E+00 7.5E-01
1000 3.1E-02 3.4E-02 3.4E-02 2.6E-02 3.9E-02
LSMOP2 2000 2.0E-02 2.5E-02 2.4E-02 1.9E-02 3.1E-02
5000 1.5E-02 2.0E-02 2.0E-02 1.4E-02 2.7E-02
1000 1.4E+01 1.6E+01 5.5E+01 9.9E+00 4.9E+00
LSMOP3 2000 1.7E+01 1.6E+01 4.2E+01 1.1E+01 4.7E+00
5000 1.9E+01 1.7E+01 2.1E+02 1.1E+01 5.1E+00
1000 1.1E-01 1.1E-01 1.1E-01 8.3E-02 1.1E-01
LSMOP4 2000 5.8E-02 6.3E-02 6.2E-02 4.8E-02 6.4E-02
5000 2.8E-02 3.2E-02 3.2E-02 2.5E-02 3.7E-02
1000 9.5E+00 1.1E+01 1.9E+01 3.0E+00 6.4E-01
LSMOP5 2000 1.2E+01 1.2E+01 1.9E+01 3.4E+00 4.8E-01
5000 1.8E+01 1.3E+01 2.0E+01 3.3E+00 7.6E-01
1000 3.7E+03 7.2E+03 3.1E+04 3.5E+02 1.9E+00
LSMOP6 2000 8.8E+03 1.2E+04 3.5E+04 5.0E+02 1.0E+00
5000 1.8E+04 5.8E+03 3.2E+04 5.6E+02 8.7E+00
1000 1.1E+00 1.1E+00 5.4E+02 8.2E-01 2.8E+00
LSMOP7 2000 1.0E+00 1.0E+00 1.4E+02 7.2E-01 4.0E+00
5000 9.7E-01 9.7E-01 1.2E+02 6.5E-01 2.1E+01
1000 5.8E-01 5.8E-01 5.9E-01 5.0E-01 2.0E-01
LSMOP8 2000 5.6E-01 5.8E-01 5.6E-01 4.9E-01 1.9E-01
5000 5.6E-01 5.5E-01 5.6E-01 4.9E-01 1.9E-01
1000 2.2E+01 4.1E+01 1.4E+02 2.4E+00 1.4E+00
LSMOP9 2000 4.3E+01 7.3E+01 1.4E+02 8.8E+00 1.4E+00
5000 6.9E+01 8.7E+01 1.5E+02 4.2E+01 1.4E+00
Table 2: Mean Values of IGD Metric Obtained by Compared Algorithms Over LSMOP Problems With 3-Objectives

Table 1 and Table 2 show the statistical results of the average SP and IGD values for 10 runs respectively, where the best result on each test instance is shown in bold.

From Table 1 and Table 2, it is obvious that the performance of the proposed LT-PPM is better overall than that of the compared algorithms. Table 1 lists the SP values of the five algorithms. LT-PPM performed best in 20 out of 27 test instances, followed by NSGAIII with 6 best results and RMMEDA with 1 best result. Specifically, under all decision variable settings, LT-PPM performed better on LSMOP3, LSMOP5, LSMOP6, LSMOP7, LSMOP8, and LSMOP9. In the other test instances, LT-PPM is slightly below the corresponding best-performing algorithm.

In Table 2, LT-PPM got 18 of the 27 best results, LMOCSO got 9 best results, and the other algorithms did not get the best results. Specifically, under all decision variable settings, LT-PPM performs better on LSMOP1, LSMOP3, LSMOP5, LSMOP6, LSMOP8 and LSMOP9. LMOCSO achieved better convergence on problems LSMOP2, LSMOP4 and LSMOP7.

The above experimental results show that for most test instances, LT-PPM can obtain a set of solutions with good convergence and diversity. It can be seen that the proposed LT-PPM can obtain competitive performance on the LSMOP test problem, which confirms the effectiveness of LT-PPM in solving the challenging large-scale decision variable problem.

4.4 Experiment on Time Complexity and Discussion

Figure 2: Experiment on Time Complexity

Figure 2 compares the CPU running time of the compared algorithms on the LSMOP1 problem, where the abscissa is the number of decision variables and the ordinate is the CPU time. It is worth noting that when the number of decision variables is greater than 500, the running time of RMMEDA increases greatly.

It can be seen from the results that the time consumption of the proposed algorithm is not the lowest when there are few decision variables. However, as the number of decision variables increases, its running time grows more slowly than that of the other algorithms; at 5000 dimensions, its time consumption is the lowest.

These results confirm that the running time of our algorithm is not directly tied to the number of decision variables. The algorithm can therefore maintain good convergence and diversity when solving large-scale multi-objective optimization problems while avoiding the impact of large-scale decision variables on computational complexity.

5 Conclusion and Future Works

This paper proposes an algorithm, called LT-PPM, for solving LSMOPs. The TPM component of the proposed method exploits heuristic information from the motion of individuals to predict the POF of the next generation. Meanwhile, the importance sampling technique makes the algorithm's cost independent of the number of decision variables, allowing faster exploration of the large-scale search space. Together, importance sampling and the TPM adjust the sparsity of the population, thereby balancing exploitation and exploration and helping the algorithm escape local optima.
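The core idea behind the decision-variable independence claimed above can be sketched as weighted selection over individuals: parents are drawn with probability proportional to a fitness-style weight, so the selection cost scales with the population size rather than with the dimensionality of each individual. The sketch below is illustrative only; the `sparsity` score is a hypothetical stand-in for the paper's sparsity measure:

```python
import random

def importance_sample(population, weight_fn, k, rng=None):
    """Draw k individuals with probability proportional to weight_fn(ind).
    The selection cost depends on the population size, not on how many
    decision variables each individual carries."""
    rng = rng or random.Random(0)
    weights = [weight_fn(ind) for ind in population]
    return rng.choices(population, weights=weights, k=k)

# individuals with 1000 decision variables each; the toy "sparsity" score
# (an assumption, not the paper's definition) favors atypical individuals
population = [[0.1] * 1000, [0.5] * 1000, [0.9] * 1000]
sparsity = lambda ind: abs(sum(ind) / len(ind) - 0.5) + 1e-6
parents = importance_sample(population, sparsity, k=2)
print(len(parents))  # 2
```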

Instead of the usual modeling-sampling process, the LT-PPM algorithm adopts a Generating-Filtering strategy, which lets the algorithm focus on using heuristic information to generate individuals and then screen out the dominant population. Our method is compared with several other large-scale multi-objective optimization algorithms on 9 LSMOP benchmarks. The experimental results show that LT-PPM performs well in convergence, distribution, and robustness.
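The "filtering" half of a generating-filtering step amounts to Pareto non-dominance screening: after new candidates are generated, only solutions not dominated by any other are retained. A minimal sketch for minimization problems (a generic dominance filter, not the paper's full environmental selection):

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): a is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def filter_nondominated(objectives):
    """Keep only solutions that no other solution dominates."""
    return [p for p in objectives
            if not any(dominates(q, p) for q in objectives if q != p)]

# (3, 3) is dominated by (2, 2), so it is filtered out
pts = [(1, 4), (2, 2), (4, 1), (3, 3)]
print(filter_nondominated(pts))  # [(1, 4), (2, 2), (4, 1)]
```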

This paper presents preliminary work on applying trend prediction models to large-scale evolutionary computation. Future work may take the following directions. Several details of the method can still be improved: for large-scale problems with complex search spaces, such as those with constraints or discontinuous Pareto optimal regions, the proposed framework leaves considerable room for improvement. In addition, a wealth of advanced machine learning techniques can inspire further innovations for practical applications with large-scale decision variables.

Acknowledgements
This work was supported by the National Natural Science Foundation of China (No. 61673328).

References

  • [1] L. M. Antonio and C. A. C. Coello (2013) Use of cooperative coevolution for solving large scale multiobjective optimization problems. In 2013 IEEE Congress on Evolutionary Computation, pp. 2758–2765. External Links: Document Cited by: §1, §2.2.
  • [2] F. v. d. Bergh and A. P. Engelbrecht (2004) A cooperative approach to particle swarm optimization. IEEE Transactions on Evolutionary Computation 8 (3), pp. 225–239. External Links: ISSN 1941-0026, Document Cited by: §2.2.
  • [3] J. Cheng, G. G. Yen, and G. Zhang (2015) A many-objective evolutionary algorithm with enhanced mating and environmental selections. IEEE Transactions on Evolutionary Computation 19 (4), pp. 592–605. External Links: ISSN 1941-0026, Document Cited by: §2.2.
  • [4] R. Cheng, Y. Jin, M. Olhofer, and B. Sendhoff (2017) Test problems for large-scale multiobjective and many-objective optimization. IEEE Transactions on Cybernetics 47 (12), pp. 4108–4121. External Links: ISSN 2168-2275, Document Cited by: §4.1.
  • [5] K. Deb and H. Jain (2014) An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part i: solving problems with box constraints. IEEE Transactions on Evolutionary Computation 18 (4), pp. 577–601. External Links: ISSN 1941-0026, Document Cited by: §4.1.
  • [6] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan (2002) A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation 6 (2), pp. 182–197. External Links: ISSN 1941-0026, Document Cited by: §1.
  • [7] J. J. Durillo, A. J. Nebro, C. A. C. Coello, J. Garcia-Nieto, F. Luna, and E. Alba (2010) A study of multiobjective metaheuristics when solving parameter scalable problems. IEEE Transactions on Evolutionary Computation 14 (4), pp. 618–635. External Links: Document Cited by: §1.
  • [8] D. Hadka (2013) Borg: an auto-adaptive many-objective evolutionary computing framework. Evolutionary Computation 21 (2), pp. 231. Cited by: §2.2.
  • [9] C. He, S. Huang, R. Cheng, K. C. Tan, and Y. Jin (2020) Evolutionary multiobjective optimization driven by generative adversarial networks (GANs). IEEE Transactions on Cybernetics, pp. 1–14. External Links: ISSN 2168-2275, Document Cited by: §2.2.
  • [10] C. He, L. Li, Y. Tian, X. Zhang, R. Cheng, Y. Jin, and X. Yao (2019) Accelerating large-scale multiobjective optimization via problem reformulation. IEEE Transactions on Evolutionary Computation, pp. 1–1. Cited by: §1, §2.2.
  • [11] W. Hong, P. Yang, and K. Tang (2021) Evolutionary computation for large-scale multi-objective optimization: a decade of progresses. International Journal of Automation and Computing 18 (2), pp. 155–169. External Links: ISSN 1751-8520, Document, Link Cited by: §1.
  • [12] W. Hong, K. Tang, A. Zhou, H. Ishibuchi, and X. Yao (2019-06) A scalable indicator-based evolutionary algorithm for large-scale multiobjective optimization. IEEE Transactions on Evolutionary Computation 23 (3), pp. 525–537. External Links: Document, ISSN 1941-0026 Cited by: §2.2.
  • [13] G. Ismayilov and H. R. Topcuoglu (2018) Dynamic multi-objective workflow scheduling for cloud computing based on evolutionary algorithms. In 2018 IEEE/ACM International Conference on Utility and Cloud Computing Companion (UCC Companion), Vol. , pp. 103–108. External Links: Document Cited by: item 1.
  • [14] M. Jiang, Z. Huang, G. Jiang, M. Shi, and X. Zeng (2017) Motion generation of multi-legged robot in complex terrains by using estimation of distribution algorithm. In 2017 IEEE Symposium Series on Computational Intelligence (SSCI), Vol. , pp. 1–6. External Links: Document Cited by: §2.1.
  • [15] M. Jiang, L. Qiu, Z. Huang, and G. G. Yen (2018) Dynamic multi-objective estimation of distribution algorithm based on domain adaptation and nonparametric estimation. Information Sciences 435, pp. 203 – 223. External Links: ISSN 0020-0255, Document, Link Cited by: §2.1.
  • [16] M. Jiang, Z. Wang, H. Hong, and G. G. Yen (2021) Knee point-based imbalanced transfer learning for dynamic multiobjective optimization. IEEE Transactions on Evolutionary Computation 25 (1), pp. 117–129. External Links: Document Cited by: §1.
  • [17] M. Köppen and K. Yoshida (2007) Substitute distance assignments in NSGA-II for handling many-objective optimization problems. In Evolutionary Multi-Criterion Optimization, S. Obayashi, K. Deb, C. Poloni, T. Hiroyasu, and T. Murata (Eds.), Berlin, Heidelberg, pp. 727–741. External Links: ISBN 978-3-540-70928-2 Cited by: §2.2.
  • [18] M. Li and J. Wei (2018) A cooperative co-evolutionary algorithm for large-scale multi-objective optimization problems. In Proceedings of the Genetic and Evolutionary Computation Conference Companion, GECCO ’18, New York, NY, USA, pp. 1716–1721. External Links: ISBN 9781450357647, Link, Document Cited by: §2.2.
  • [19] R. Liu, J. Liu, Y. Li, and J. Liu (2020) A random dynamic grouping based weight optimization framework for large-scale multi-objective optimization problems. Swarm and Evolutionary Computation 55, pp. 100684. External Links: ISSN 2210-6502, Document, Link Cited by: §2.2.
  • [20] X. Ma, F. Liu, Y. Qi, X. Wang, L. Li, L. Jiao, M. Yin, and M. Gong (2016) A multiobjective evolutionary algorithm based on decision variable analyses for multiobjective optimization problems with large-scale variables. IEEE Transactions on Evolutionary Computation 20 (2), pp. 275–298. Cited by: §2.2.
  • [21] E. T. Oldewage, A. P. Engelbrecht, and C. W. Cleghorn (2017) The merits of velocity clamping particle swarm optimisation in high dimensional spaces. In 2017 IEEE Symposium Series on Computational Intelligence (SSCI), Vol. , pp. 1–8. External Links: Document Cited by: §1.
  • [22] F. d. Pierro, S. Khu, and D. A. Savic (2007) An investigation on preference order ranking scheme for multiobjective evolutionary optimization. IEEE Transactions on Evolutionary Computation 11 (1), pp. 17–45. External Links: ISSN 1941-0026, Document Cited by: §2.2.
  • [23] A. Ponsich, A. L. Jaimes, and C. A. C. Coello (2013) A survey on multiobjective evolutionary algorithms for the solution of the portfolio optimization problem and other finance and economics applications. IEEE Transactions on Evolutionary Computation 17 (3), pp. 321–344. Cited by: §1.
  • [24] C. Qian (2020-08) Distributed pareto optimization for large-scale noisy subset selection. IEEE Transactions on Evolutionary Computation 24 (4), pp. 694–707. External Links: Document, ISSN 1941-0026 Cited by: §1.
  • [25] H. Qian and Y. Yu (2017) Solving high-dimensional multi-objective optimization problems with low effective dimensions. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI’17, pp. 875–881. Cited by: §1.
  • [26] S. Qin, C. Sun, Y. Jin, Y. Tan, and J. Fieldsend (2021) Large-scale evolutionary multi-objective optimization assisted by directed sampling. IEEE Transactions on Evolutionary Computation (), pp. 1–1. External Links: Document, ISSN 1941-0026 Cited by: §2.2.
  • [27] R. Shang, K. Dai, L. Jiao, and R. Stolkin (2016) Improved memetic algorithm based on route distance grouping for multiobjective large scale capacitated arc routing problems. IEEE Transactions on Cybernetics 46 (4), pp. 1000–1013. External Links: ISSN 2168-2275, Document Cited by: §2.2.
  • [28] Z. P. Stanko, T. Nishikawa, and S. R. Paulinski (2015) Large-scale multi-objective optimization for the management of seawater intrusion, santa barbara, ca. Conference Proceedings In Agu Fall Meeting, Cited by: §1.
  • [29] K. Tang, J. Wang, X. Li, and X. Yao (2017) A scalable approach to capacitated arc routing problems based on hierarchical decomposition. IEEE Transactions on Cybernetics 47 (11), pp. 3928–3940. External Links: Document Cited by: §1.
  • [30] Y. Tian, R. Cheng, X. Zhang, F. Cheng, and Y. Jin (2018) An indicator-based multiobjective evolutionary algorithm with reference point adaptation for better versatility. IEEE Transactions on Evolutionary Computation 22 (4), pp. 609–622. External Links: ISSN 1941-0026, Document Cited by: §1.
  • [31] Y. Tian, R. Cheng, X. Zhang, and Y. Jin (2017) PlatEMO: a matlab platform for evolutionary multi-objective optimization [educational forum]. IEEE Computational Intelligence Magazine 12 (4), pp. 73–87. External Links: ISSN 1556-6048, Document Cited by: §4.1.
  • [32] Y. Tian, C. Lu, X. Zhang, K. C. Tan, and Y. Jin (2021) Solving large-scale multiobjective optimization problems with sparse optimal solutions via unsupervised neural networks. IEEE Transactions on Cybernetics 51 (6), pp. 3115–3128. External Links: Document Cited by: §1.
  • [33] Y. Tian, X. Zhang, C. Wang, and Y. Jin (2020) An evolutionary algorithm for large-scale sparse multiobjective optimization problems. IEEE Transactions on Evolutionary Computation 24 (2), pp. 380–393. External Links: Document Cited by: §1.
  • [34] Y. Tian, X. Zheng, X. Zhang, and Y. Jin (2019) Efficient large-scale multi-objective optimization based on a competitive swarm optimizer. IEEE Transactions on Cybernetics, pp. 1–13. Cited by: §2.1, §2.2, §4.1.
  • [35] G. Wang and H. Jiang (2007) Fuzzy-dominance and its application in evolutionary many objective optimization. In 2007 International Conference on Computational Intelligence and Security Workshops (CISW 2007), Vol. , pp. 195–198. External Links: Document Cited by: §2.2.
  • [36] H. Wang, L. Jiao, R. Shang, S. He, and F. Liu (2015) A memetic optimization strategy based on dimension reduction in decision space. Evolutionary Computation 23 (1). Cited by: §1.
  • [37] R. Wang, R. C. Purshouse, and P. J. Fleming (2013) Preference-inspired coevolutionary algorithms for many-objective optimization. IEEE Transactions on Evolutionary Computation 17 (4), pp. 474–494. External Links: ISSN 1941-0026, Document Cited by: §2.2.
  • [38] S. Yang, M. Li, X. Liu, and J. Zheng (2013) A grid-based evolutionary algorithm for many-objective optimization. IEEE Transactions on Evolutionary Computation 17 (5), pp. 721–736. External Links: ISSN 1941-0026, Document Cited by: §2.2.
  • [39] Z. Yang, K. Tang, and X. Yao (2008) Large scale evolutionary optimization using cooperative coevolution. Information Sciences 178 (15), pp. 2985–2999. Cited by: §1.
  • [40] Z. Yang, K. Tang, and X. Yao (2008) Large scale evolutionary optimization using cooperative coevolution. Information Sciences 178 (15), pp. 2985–2999. Cited by: §2.2, §2.2.
  • [41] Y. Yuan, H. Xu, B. Wang, and X. Yao (2016) A new dominance relation-based evolutionary algorithm for many-objective optimization. IEEE Transactions on Evolutionary Computation 20 (1), pp. 16–37. External Links: ISSN 1941-0026, Document Cited by: §2.2.
  • [42] Q. Zhang and H. Li (2007) MOEA/D: a multiobjective evolutionary algorithm based on decomposition. IEEE Transactions on Evolutionary Computation 11 (6), pp. 712–731. External Links: ISSN 1941-0026, Document Cited by: §1.
  • [43] Q. Zhang, A. Zhou, and Y. Jin (2008) RM-MEDA: a regularity model-based multiobjective estimation of distribution algorithm. IEEE Transactions on Evolutionary Computation 12 (1), pp. 41–63. External Links: ISSN 1941-0026, Document Cited by: §4.1.
  • [44] X. Zhang, Y. Tian, R. Cheng, and Y. Jin (2018) A decision variable clustering-based evolutionary algorithm for large-scale many-objective optimization. IEEE Transactions on Evolutionary Computation 22 (1), pp. 97–112. External Links: ISSN 1941-0026, Document Cited by: §1, §1, §2.2, §4.1.
  • [45] X. Zhang, Y. Tian, and Y. Jin (2015) A knee point-driven evolutionary algorithm for many-objective optimization. IEEE Transactions on Evolutionary Computation 19 (6), pp. 761–776. External Links: ISSN 1941-0026, Document Cited by: §2.2.
  • [46] H. Zille, H. Ishibuchi, S. Mostaghim, and Y. Nojima (2018) A framework for large-scale multiobjective optimization based on problem transformation. IEEE Transactions on Evolutionary Computation 22 (2), pp. 260–275. External Links: ISSN 1941-0026, Document Cited by: §2.2.
  • [47] E. Zitzler, L. Thiele, M. Laumanns, C. M. Fonseca, and V. G. d. Fonseca (2003) Performance assessment of multiobjective optimizers: an analysis and review. IEEE Transactions on Evolutionary Computation 7 (2), pp. 117–132. External Links: ISSN 1941-0026, Document Cited by: item 2.
  • [48] X. Zou, Y. Chen, M. Liu, and L. Kang (2008) A new evolutionary algorithm for solving many-objective optimization problems. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 38 (5), pp. 1402–1412. External Links: ISSN 1941-0492, Document Cited by: §2.2.