I. Introduction
Many real-world optimization problems [1][2] involve multiple objective functions that conflict with each other and change over time. Such problems are called dynamic multiobjective optimization problems (DMOPs) [3]. For example, the design of a job scheduling system [4] involves a number of decision variables, such as procedures, components, and operation time, which determine objective functions of energy consumption, production, and stability. These conflicting objective functions change over time. Hence, efficient dynamic multiobjective optimization algorithms (DMOAs) should rapidly adjust scheduling schemes according to the changing environment, and this ability is critical to robust scheduling systems.
In recent years, a variety of DMOAs have been proposed to solve DMOPs. Existing methods can be roughly grouped into three categories. The first category of DMOAs is based on maintaining diversity. Gong et al. [5] proposed a general framework that decomposes decision variables into two subpopulations according to the interval similarity between each decision variable and the interval parameters, and adopts a change-intensity-based strategy to track the POF. In [6], Jiang et al. developed a framework based on domain adaptation and nonparametric estimation to balance the exploration and exploitation of DMOPs from temporal and spatial views. The second category consists of memory-based methods. Chen et al. [7] implemented a dynamic two-archive strategy that simultaneously maintains two co-evolving populations: one population focuses on convergence while the other focuses on diversity. Branke [8] proposed a memory scheme to enhance the evolutionary process, in which some excellent solutions are saved and used to guide the search towards optimal solutions. The third category of DMOAs is based on prediction. Muruganantham et al. [9] presented a population prediction strategy based on the Kalman filter [10], which guides the search for new Pareto-optimal solutions and generates a large number of high-quality initial individuals; a decomposition-based differential evolution algorithm then finds the optimal solutions for the current environment. Rong et al. [11] presented a prediction model that tracks the moving POS by clustering the whole population into several subpopulations, where the number of clusters depends on the intensity of environmental change. Zhou et al. [12] proposed a population prediction method that predicts a whole population instead of some isolated points: center points are used to predict the next center point, and the previous manifolds are used to estimate the next manifold; the optimal population is then determined by a decomposition-based differential evolution algorithm. Hu et al. [13] designed a promising approach based on an Incremental Support Vector Machine (ISVM) [14] classifier for solving DMOPs: the ISVM is trained from past Pareto-optimal sets, and high-quality initial individuals are then filtered through the trained classifier. Jiang et al. [15] presented a framework based on transfer learning [16] to predict an effective initial population for solving DMOPs; transfer component analysis (TCA) [16] is used in this framework for the domain adaptation problem [17]. Traditional machine learning approaches usually assume that samples are independent and identically distributed (IID). This hypothesis is broken when dealing with DMOPs, since the solution distribution fails to satisfy it. Although a transfer-learning-based DMOA exists, it suffers from poor diversity when samples cluster in the high-dimensional latent space created by TCA.
In this paper, a regression transfer learning prediction based DMOA (RTLP-DMOA) is proposed. The algorithm aims to generate an excellent initial population to enhance the ability of existing multiobjective optimization algorithms on DMOPs. When the environment changes, a regression transfer learning prediction model is constructed from historical population information to predict objective values in the new environment. Then, with the assistance of this regression prediction model, high-quality solutions with better predicted objective values are identified and selected as the initial population, which significantly improves the performance of the evolutionary process.
The contributions of this work are as follows: 1) The proposed algorithm makes full use of historical information and predicts a high-quality initial population to improve the evolutionary performance of existing static multiobjective optimization algorithms (SMOAs) on DMOPs. 2) The proposed algorithm overcomes the difficulty that solution distributions fail to meet the IID hypothesis. Compared with other prediction methods, RTLP-DMOA is promising.
The rest of the paper is organized as follows. Section II describes the basic concepts of DMOPs and presents the transfer learning method used in RTLP-DMOA. Section III details the designed RTLP-DMOA. Section IV presents experimental results and analysis. Conclusions are drawn in Section V.
II. Preliminary Studies
II-A Dynamic Multiobjective Optimization
The mathematical form of DMOPs is as follows:
(1) $\min_{x \in \Omega} F(x, t) = \left(f_1(x, t), f_2(x, t), \ldots, f_M(x, t)\right)^{T}$

where $x = (x_1, x_2, \ldots, x_n) \in \Omega$ is the $n$-dimensional decision vector, $t$ is the environment (time) variable, and $F(x, t)$ is the $M$-dimensional objective vector. The goal of DMOAs is to find solutions at environment $t$ such that all objectives are as small as possible. Nevertheless, no single solution can minimize all conflicting objectives simultaneously. Hence, a trade-off notion called Pareto dominance is introduced to compare solutions. The set of optimal trade-off solutions is called the Pareto-optimal solutions (POS) in the decision space and the Pareto-optimal front (POF) in the objective space [18].
Definition 1
(Dynamic Decision Vector Domination) At environment $t$, a decision vector $x_1$ Pareto-dominates another vector $x_2$, denoted by $x_1 \succ_t x_2$, if and only if

(2) $\begin{cases} f_i(x_1, t) \le f_i(x_2, t), & \forall\, i \in \{1, \ldots, M\} \\ f_j(x_1, t) < f_j(x_2, t), & \exists\, j \in \{1, \ldots, M\} \end{cases}$
Definition 2
(Dynamic Pareto-Optimal Set, DPOS) If a decision vector $x^*$ at environment $t$ satisfies

(3) $\nexists\, x \in \Omega : x \succ_t x^*$
then $x^*$ is called a dynamic Pareto-optimal solution, and the set of all dynamic Pareto-optimal solutions is called the dynamic POS (DPOS).
Definition 3
(Dynamic Pareto-Optimal Front, DPOF) The DPOF is the Pareto-optimal front of the DPOS for a DMOP at environment $t$, i.e., $\mathrm{DPOF} = \{F(x, t) \mid x \in \mathrm{DPOS}\}$.
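For a fixed environment $t$, the dominance test of Definition 1 reduces to a component-wise comparison of objective vectors (minimization assumed). A minimal sketch, with illustrative names:

```python
import numpy as np

def dominates(f1, f2):
    """Return True if objective vector f1 Pareto-dominates f2 at a fixed
    environment t (Definition 1): f1 is no worse in every objective and
    strictly better in at least one (minimization assumed)."""
    f1, f2 = np.asarray(f1), np.asarray(f2)
    return bool(np.all(f1 <= f2) and np.any(f1 < f2))
```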
II-B TrAdaBoost.R2
TrAdaBoost [19] is a classification algorithm based on boosting. The aim of TrAdaBoost is to filter out source-domain samples that are dissimilar to those in the target domain, thereby improving classification accuracy. The source data set is combined with the target data set to form a single training set. At each boosting step, TrAdaBoost increases the relative weights of misclassified target instances; when a source instance is misclassified, however, its weight is decreased. In this way, TrAdaBoost exploits the source instances that are most similar to the target data while ignoring dissimilar ones. In [20], the authors introduced a TrAdaBoost-based algorithm for the transfer regression task, called TrAdaBoost.R2.
TrAdaBoost.R2 is an ensemble method in which each weak regression hypothesis $h_k$ ($k = 1, \ldots, K$) maps the source domain data and the target domain data to predicted outputs. A strong regression hypothesis is determined by combining these weak hypotheses. In each training round, TrAdaBoost.R2 increases the relative weights of instances from the target domain and decreases the weights of instances from the source domain. When the regression error of an instance under $h_k$ is large, it has a substantial influence on the change of that instance's weight. In this way, TrAdaBoost.R2 reuses the source instances that are most similar to the target data and ignores dissimilar ones. In the next round, the modified weights are input into the next regression hypothesis $h_{k+1}$: instances dissimilar to the target domain have a weakened impact on the learning process, while instances with large weights help the learning algorithm train better regressors.
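The opposite weight dynamics of the two domains can be seen in a single round of TrAdaBoost.R2-style updates; the numbers below are illustrative, not taken from the paper:

```python
# One round of TrAdaBoost.R2-style weight updates (illustrative values).
beta = 0.8      # fixed factor for source instances, beta < 1
beta_k = 0.5    # round-k factor derived from the weighted target error
e = 0.9         # large adjusted error of an instance, e in [0, 1]

w_source = 1.0 * beta ** e       # source weight shrinks on a large error
w_target = 1.0 * beta_k ** (-e)  # target weight grows on a large error
```

A source instance with a large error is pushed out of the effective training set, while a target instance with a large error receives more attention in the next round.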
III. Proposed Algorithm
The framework of RTLP-DMOA is illustrated in Algorithm 1. In brief, RTLP-DMOA randomly initializes a population of size $N$ and then executes an SMOA to optimize the population at environment $t$. If an environmental change is detected, the environment variable is updated to $t + 1$. The last population is then input into the regression transfer procedure, where a strong regression hypothesis $h_t$ is determined from historical information to predict the objective vectors of individuals in the new environment. Next, in the initial population prediction procedure, $h_t$ is employed to predict objective vectors, and high-quality individuals are selected according to their predicted values. These individuals are regarded as an excellent initial population and input into the SMOA to accelerate the evolutionary process. The details of RTLP-DMOA are presented in the following subsections.
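The loop described above can be sketched as follows; the function arguments stand in for the procedures of Algorithm 1 and are assumed interfaces, not the authors' code:

```python
def rtlp_dmoa(random_pop, change_detected, regression_transfer,
              predict_initial_population, smoa_step, n_env):
    """High-level sketch of the RTLP-DMOA loop (all arguments are
    assumed interfaces, not the authors' actual code)."""
    population = random_pop()                   # random initial population
    for t in range(n_env):
        if t > 0 and change_detected(t):
            h = regression_transfer(population, t)          # strong hypothesis
            population = predict_initial_population(h, t)   # filtered start
        population = smoa_step(population, t)   # run the static optimizer
    return population
```

Any SMOA can be plugged in as `smoa_step`; the paper later chooses RM-MEDA.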
III-A Regression Transfer
The regression transfer process returns a strong regression hypothesis $h_t$ for environment $t$ that adapts to the solution distribution of the current environment. Given an individual $x$, $h_t$ outputs a predicted objective vector for $x$. Therefore, in the subsequent steps of RTLP-DMOA, excellent individuals with better predicted objective vectors can be selected as members of the initial population.
The strong regression hypothesis $h_t$ is an ensemble of several weak regression hypotheses $h_k$ ($k = 1, \ldots, K$, where $K$ is the maximum number of training iterations). These weak hypotheses are trained with past population information. The last population, together with its objective values, is regarded as the source domain set $T_S$ (of size $n_S$). The target domain set $T_T$ (of size $n_T$) comprises individuals sampled from $[L_t, U_t]$ in the current decision space together with their objective values, where $L_t$ and $U_t$ are the lower and upper bounds of the decision variables at environment $t$. $T_S$ and $T_T$ are combined into a single set of $n = n_S + n_T$ individuals as the training data.
The process for training the weak regression hypotheses is as follows. First, the weight vector $w^1$ is initialized as $w_i^1 = 1/n$, where $w_i^k$ denotes the weight of the $i$-th training individual in round $k$ at environment $t$. In the main training loop, a Support Vector Regression (SVR) [21] is implemented as the base learner to obtain the weak regression hypothesis $h_k$ from $T_S$ and $T_T$. Then, the adjusted error $e_i^k$ of each individual is calculated as

(4) $e_i^k = \dfrac{\lvert h_k(x_i) - y_i \rvert}{D_k}$

where $D_k$ is the maximum error, described as

(5) $D_k = \max_{1 \le i \le n} \lvert h_k(x_i) - y_i \rvert$
The adjusted error $e_i^k$ grows as the difference between the predicted objective vector and the true objective vector grows, and the weighted adjusted error $\epsilon_k$ of $h_k$ is calculated over the target domain as

(6) $\epsilon_k = \sum_{i = n_S + 1}^{n} \dfrac{w_i^k\, e_i^k}{\sum_{j = n_S + 1}^{n} w_j^k}$
When $\epsilon_k$ is small, $\beta_k$ becomes smaller. Next, the weight vector is updated according to $\beta$ and $\beta_k$: if a training individual from the source domain has a larger $e_i^k$, it is likely more dissimilar to the distribution of the target domain, so its training weight is reduced more; if a training individual from the target domain has a larger $e_i^k$, its training weight is increased more so that $h_{k+1}$ adapts to the target domain. The weights are updated as

(7) $w_i^{k+1} = \begin{cases} w_i^k\, \beta^{\,e_i^k}, & 1 \le i \le n_S \\ w_i^k\, \beta_k^{-e_i^k}, & n_S < i \le n \end{cases}$

where $\beta = 1 / \left(1 + \sqrt{2 \ln n_S / K}\right)$ and $\beta_k = \epsilon_k / (1 - \epsilon_k)$. In this way, individuals adapted to the solution distribution of the target domain receive large weights, while the others receive small weights. The modified weights are then input into the next SVR to learn $h_{k+1}$. Thus, in the next round, low-weight individuals that are dissimilar to the target domain have a weakened impact on the learning process, and individuals with large weights help the learning algorithm train better regression hypotheses. The weak regression hypotheses $h_k$ gradually adapt to the target domain. After $K$ iterations, we obtain the final weak regression hypotheses and combine them to acquire the strong regression hypothesis $h_t$.
The details of regression transfer are shown in Procedure Regression Transfer.
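The training loop above can be sketched as follows. The paper uses SVR [21] as the base learner; to keep this sketch dependency-free, a weighted linear least-squares fit stands in for the SVR, and the combination rule for the strong hypothesis (a median over the later weak hypotheses) is an illustrative choice:

```python
import numpy as np

def regression_transfer(X_src, y_src, X_tgt, y_tgt, K=10):
    """TrAdaBoost.R2-style regression transfer sketch. A weighted linear
    least-squares fit stands in for the SVR base learner of the paper."""
    X = np.vstack([X_src, X_tgt])
    y = np.concatenate([y_src, y_tgt])
    n_s, n = len(X_src), len(X)
    w = np.full(n, 1.0 / n)                          # uniform initial weights
    beta = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n_s) / K))
    A = np.hstack([X, np.ones((n, 1))])              # linear model with bias term
    hypotheses = []
    for _ in range(K):
        sw = np.sqrt(w)                              # weighted least squares
        coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
        err = np.abs(A @ coef - y)
        D = err.max() if err.max() > 0 else 1.0      # maximum error, Eq. (5)
        e = err / D                                  # adjusted errors, Eq. (4)
        eps = float(np.sum(w[n_s:] * e[n_s:]) / np.sum(w[n_s:]))  # Eq. (6)
        eps = min(max(eps, 1e-3), 0.499)             # keep beta_k well defined
        beta_k = eps / (1.0 - eps)
        w[:n_s] *= beta ** e[:n_s]                   # shrink dissimilar source weights
        w[n_s:] *= beta_k ** (-e[n_s:])              # grow hard target weights, Eq. (7)
        w /= w.sum()
        hypotheses.append(coef)
    def h(Xq):                                       # strong hypothesis: median of
        Xq = np.atleast_2d(Xq)                       # the later weak hypotheses
        Aq = np.hstack([Xq, np.ones((len(Xq), 1))])
        return np.median([Aq @ c for c in hypotheses[K // 2:]], axis=0)
    return h
```

With identical source and target inputs whose outputs differ by a constant offset, the ensemble's predictions drift toward the target values as the source weights decay.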
III-B Initial Population Prediction
In this subsection, initial population prediction is utilized to identify excellent solutions for the initial population with the assistance of $h_t$.
To begin with, a test population $C_t$ is sampled from $[L_t, U_t]$, where $L_t$ and $U_t$ are the lower and upper bounds of the decision variables at environment $t$. Then, the objective values of $C_t$ are predicted with $h_t$, and the non-dominated fronts are determined by fast non-dominated sorting [22] according to the predicted objective values. We select the first non-dominated fronts as the initial population, limiting its size to at most the population size $N$. Next, Gaussian-noise perturbations of the selected individuals are added until the population size reaches $N$. This initial population accelerates the evolutionary process and improves evolutionary performance in the current environment.
The details of initial population prediction are presented in Procedure Initial Population Prediction.
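Procedure Initial Population Prediction can be sketched as follows; only the first non-dominated front is kept here for brevity (full fast non-dominated sorting would peel further fronts), and the bounds, sizes, and noise scale are illustrative assumptions:

```python
import numpy as np

def predict_initial_population(h, lower, upper, pop_size=100, n_test=500, sigma=0.01):
    """Sketch of Procedure Initial Population Prediction. `h` maps a decision
    vector to its predicted objective vector; sizes and the noise scale
    sigma are illustrative assumptions."""
    rng = np.random.default_rng(0)
    test = rng.uniform(lower, upper, size=(n_test, len(lower)))  # sample C_t
    obj = np.array([h(x) for x in test])          # predicted objective vectors
    # Keep the first non-dominated front under the predicted objectives.
    front = [i for i in range(n_test)
             if not any(np.all(obj[j] <= obj[i]) and np.any(obj[j] < obj[i])
                        for j in range(n_test) if j != i)]
    init = test[front][:pop_size]
    while len(init) < pop_size:                   # pad with Gaussian perturbations
        base = init[rng.integers(len(init))]
        noisy = np.clip(base + rng.normal(0.0, sigma, len(lower)), lower, upper)
        init = np.vstack([init, noisy])
    return init
```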
IV. Experiments
IV-A Compared Algorithms
IV-B Test Problems
All compared algorithms are evaluated on 8 benchmark DMOPs selected from the FDA [26] and DMOP [23] test suites. The FDA suite comprises FDA1, FDA2, FDA3, FDA4, and FDA5; the DMOP suite contains dMOP1, dMOP2, and dMOP3.
DMOPs are divided into three categories: in Type I problems the POS changes but the POF does not; in Type II problems both the POS and the POF change; in Type III problems the POF changes but the POS does not.
FDA1, FDA4, and dMOP3 belong to Type I. FDA3, FDA5, and dMOP2 belong to Type II. Type III contains FDA2 and dMOP1.
The dynamics of a DMOP are controlled by

(8) $t = \dfrac{1}{n_t} \left\lfloor \dfrac{\tau}{\tau_t} \right\rfloor$

where $\tau$, $n_t$, and $\tau_t$ refer to the generation counter, the severity of change, and the frequency of change, respectively.
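Eq. (8) is a direct computation; a one-line transcription:

```python
from math import floor

def time_variable(tau, n_t, tau_t):
    """Environment variable of Eq. (8): t = (1 / n_t) * floor(tau / tau_t),
    where tau is the generation counter, n_t the severity of change, and
    tau_t the frequency of change."""
    return (1.0 / n_t) * floor(tau / tau_t)
```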
IV-C Performance Indicators
1) The Inverted Generational Distance (IGD) metric [27] measures the convergence of the obtained solutions; a smaller IGD value indicates better convergence. IGD is defined as

(9) $\mathrm{IGD}(P^*, P) = \dfrac{\sum_{v \in P^*} d(v, P)}{|P^*|}$

where $P^*$ is a set of points uniformly sampled from the true POF of a multiobjective optimization problem, $P$ is an approximation of the POF obtained by a multiobjective optimization algorithm, $d(v, P)$ is the minimum Euclidean distance between $v$ and the points in $P$, and $|P^*|$ is the number of individuals in $P^*$.
The MIGD [7] metric is a variant of IGD, defined as the average of the IGD values over all environments during a run:

(10) $\mathrm{MIGD} = \dfrac{1}{|T|} \sum_{t \in T} \mathrm{IGD}(P_t^*, P_t)$

where $T$ is a set of discrete time points during a run and $|T|$ is the cardinality of $T$.
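Eqs. (9) and (10) translate directly into code; a minimal sketch:

```python
import numpy as np

def igd(pof_true, pof_approx):
    """IGD of Eq. (9): mean distance from each true-POF point to its
    nearest obtained point; smaller is better."""
    pof_true, pof_approx = np.asarray(pof_true), np.asarray(pof_approx)
    d = np.linalg.norm(pof_true[:, None, :] - pof_approx[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

def migd(true_fronts, approx_fronts):
    """MIGD of Eq. (10): average IGD over the sampled environments."""
    return float(np.mean([igd(p, q) for p, q in zip(true_fronts, approx_fronts)]))
```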
2) The Maximum Spread (MS) [24] quantifies the extent to which the obtained solutions cover the true POF; a larger MS value indicates better coverage of the true POF. MS is calculated as

(11) $\mathrm{MS} = \sqrt{\dfrac{1}{M} \sum_{k=1}^{M} \left[ \dfrac{\min(F_k^{\max}, f_k^{\max}) - \max(F_k^{\min}, f_k^{\min})}{F_k^{\max} - F_k^{\min}} \right]^2}$

where $F_k^{\max}$ and $F_k^{\min}$ represent the maximum and minimum of the $k$-th objective in the true POF, respectively, and $f_k^{\max}$ and $f_k^{\min}$ represent the maximum and minimum of the $k$-th objective in the obtained POF, respectively. This metric is also adapted for evaluating DMOAs.
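Eq. (11) can be computed per objective and then aggregated; a minimal sketch (identical fronts give MS = 1):

```python
import numpy as np

def maximum_spread(pof_true, pof_approx):
    """MS of Eq. (11): per-objective overlap of the obtained front with the
    true front, aggregated over the M objectives; larger is better."""
    F, f = np.asarray(pof_true), np.asarray(pof_approx)
    F_max, F_min = F.max(axis=0), F.min(axis=0)
    f_max, f_min = f.max(axis=0), f.min(axis=0)
    ratio = (np.minimum(F_max, f_max) - np.maximum(F_min, f_min)) / (F_max - F_min)
    return float(np.sqrt(np.mean(ratio ** 2)))
```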
IV-D Parameter Settings
The parameter settings of RTLP-DMOA are as follows. We set the population size $N$ to 100 and the number of training iterations $K$ to 10. The sizes of the target domain set $T_T$ and the test population $C_t$ are set to 50 and 500, respectively. We choose RM-MEDA [27] as the SMOA optimizer for RTLP-DMOA, with the number of clusters in RM-MEDA set to 4. The parameters of the SVR are set to their defaults [28].
Consistent with the experimental configuration of [25], we fix the severity of change $n_t$ to 10, and the frequency of change $\tau_t$ takes the values 5 and 10. The iteration budget of the compared algorithms includes 50 iterations at the initial time; hence, under each configuration, the problem changes a fixed number of times determined by the remaining budget and $\tau_t$.
IV-E Experimental Results
Experimental comparisons of RTLP-DMOA with three other state-of-the-art DMOAs are reported in this subsection. MIGD and MS values are presented in Tables I and II, respectively. The best metric values are highlighted in bold.
Problem  ($\tau_t$, $n_t$)  RTLP-RMMEDA  dCOEA  PPS  SGEA 

FDA1  (5,10)  0.0051(0.0013)  0.0661(0.0128)  0.2061(0.0769)  0.0338(0.0081) 
(10,10)  0.0049(0.0011)  0.0413(0.0068)  0.0476(0.0204)  0.0132(0.0025)  
FDA2  (5,10)  0.0228(0.0046)  0.0774(0.0390)  0.0888(0.0348)  0.0121(0.0014) 
(10,10)  0.0223(0.0529)  0.0491(0.0329)  0.0619(0.0107)  0.0083(0.0006)  
FDA3  (5,10)  0.1425(0.0066)  0.2640(0.0355)  0.4143(0.0101)  0.0612(0.0327) 
(10,10)  0.1455(0.0081)  0.1910(0.0338)  0.2003(0.0183)  0.0405(0.0180)  
FDA4  (5,10)  0.1116(0.0092)  0.1604(0.0066)  0.3191(0.0203)  0.1603(0.0642) 
(10,10)  0.1189(0.0091)  0.1296(0.0048)  0.2196(0.0215)  0.1241(0.0664)  
FDA5  (5,10)  0.3615(0.0027)  0.4387(0.0469)  0.6577(0.0318)  0.5221(0.0395) 
(10,10)  0.3612(0.0053)  0.3691(0.0403)  0.5037(0.0355)  0.4002(0.0088)  
DMOP1  (5,10)  0.0469(0.0620)  0.0702(0.0157)  0.4182(0.1674)  0.0136(0.0079) 
(10,10)  0.0495(0.0085)  0.0395(0.0066)  0.0499(0.0091)  0.0084(0.0057)  
DMOP2  (5,10)  0.0425(0.0101)  0.1103(0.0207)  0.1563(0.0126)  0.0345(0.0036) 
(10,10)  0.0427(0.0097)  0.0850(0.0098)  0.4293(0.0195)  0.0162(0.0005)  
DMOP3  (5,10)  0.0047(0.0081)  0.0512(0.0101)  0.1717(0.0804)  0.1734(0.0858) 
(10,10)  0.0044(0.0077)  0.0287(0.0123)  0.1134(0.0079)  0.1252(0.0143) 
TABLE I: Mean and standard deviation values of the MIGD metric for different dynamic test settings
As the experimental results in Table I show, the proposed RTLP-DMOA performs better than the other three algorithms on 9 out of 16 test instances in terms of MIGD. In particular, RTLP-DMOA outperforms the compared algorithms on FDA1, FDA4, FDA5, and DMOP3 under all configurations. RTLP-DMOA achieves good MIGD performance on the tri-objective problems because the transfer-learning-based prediction method has a strong ability to explore complicated, differing solution distributions. However, it performs worse than SGEA on FDA3, DMOP1, and DMOP2 under all dynamic test settings. Overall, the MIGD results indicate that the proposed RTLP-DMOA maintains better convergence than the three other state-of-the-art DMOAs on most test functions.
Problem  ($\tau_t$, $n_t$)  RTLP-RMMEDA  dCOEA  PPS  SGEA 

FDA1  (5,10)  0.9983(0.0026)  0.8697(0.0249)  0.8721(0.0333)  0.9441(0.0378) 
(10,10)  0.9985(0.0024)  0.8921(0.0211)  0.9635(0.0149)  0.9782(0.0110)  
FDA2  (5,10)  0.9988(0.0047)  0.8267(0.0505)  0.9013(0.0497)  0.9934(0.0053) 
(10,10)  0.9939(0.0036)  0.8672(0.0285)  0.9356(0.0121)  0.9930(0.0034)  
FDA3  (5,10)  0.8809(0.0035)  0.5031(0.0427)  0.6001(0.0404)  0.8843(0.0711) 
(10,10)  0.8585(0.0253)  0.5873(0.0356)  0.6180(0.0299)  0.9437(0.0775)  
FDA4  (5,10)  1.0000(0.0000)  0.9649(0.7774)  0.9984(0.0008)  0.9997(0.0001) 
(10,10)  1.0000(0.0000)  0.9702(0.0063)  0.9990(0.0001)  0.9996(0.0001)  
FDA5  (5,10)  1.0000(0.0000)  0.9304(0.0380)  0.9974(0.0024)  0.9997(0.0001) 
(10,10)  1.0000(0.0000)  0.9551(0.0369)  0.9979(0.0039)  0.9995(0.0001)  
DMOP1  (5,10)  0.9961(0.0011)  0.8643(0.0414)  0.9301(0.0667)  0.9555(0.0305) 
(10,10)  0.9823(0.0006)  0.8881(0.0255)  0.9782(0.0339)  0.9849(0.0179)  
DMOP2  (5,10)  0.9962(0.0028)  0.7556(0.0563)  0.8513(0.0139)  0.9502(0.0130) 
(10,10)  0.9980(0.0263)  0.8145(0.0253)  0.9600(0.0147)  0.9810(0.0004)  
DMOP3  (5,10)  0.9969(0.0013)  0.8782(0.0136)  0.8559(0.0315)  0.5031(0.0248) 
(10,10)  0.9991(0.0014)  0.9104(0.0093)  0.8880(0.0183)  0.5838(0.0296) 

TABLE II: Mean and standard deviation values of the MS metric for different dynamic test settings
Table II shows that the proposed RTLP-DMOA obtains the best results on 13 out of 16 instances in terms of MS. Apart from FDA3 and DMOP1, RTLP-DMOA performs better than the compared algorithms under all configurations. It is worth noting that RTLP-DMOA achieves the maximum MS value on the tri-objective problems FDA4 and FDA5. Nevertheless, RTLP-DMOA is slightly worse than SGEA on FDA3. Overall, the diversity of the solutions obtained by RTLP-DMOA is substantially better than that of the other three algorithms in most cases.
IV-F Discussion
In this subsection, we perform a comparative experiment to verify whether combining with the regression transfer learning prediction improves performance, by comparing RTLP-RMMEDA with RM-MEDA. RM-MEDA was originally designed for static multiobjective problems and is not directly applicable to DMOPs. Table III indicates that RTLP-RMMEDA performs better than RM-MEDA on all test functions at the ($\tau_t$, $n_t$) = (5, 10) configuration for MIGD values, improving on RM-MEDA by 22.66%–96.39%. Table IV indicates that RTLP-RMMEDA performs better than RM-MEDA on all test instances for MS values, improving on RM-MEDA by 0.08%–39.88%. This ablation study reveals that the designed regression transfer learning prediction can significantly improve the performance of SMOAs.
Problem  RM-MEDA  RTLP-RMMEDA 

FDA1  0.1309(0.0287)  0.0051(0.0013) 
FDA2  0.1429(0.0333)  0.0228(0.0046) 
FDA3  0.2110(0.0285)  0.1425(0.0066) 
FDA4  0.1691(0.0140)  0.1116(0.0092) 
FDA5  0.5522(0.0160)  0.3615(0.0027) 
DMOP1  0.4187(0.0689)  0.0469(0.0620) 
DMOP2  0.0696(0.0149)  0.0425(0.0101) 
DMOP3  0.0235(0.0125)  0.0047(0.0081) 

TABLE III: MIGD values of RM-MEDA and RTLP-RMMEDA
Problem  RM-MEDA  RTLP-RMMEDA 

FDA1  0.8515(0.0365)  0.9983(0.0026) 
FDA2  0.9447(0.0098)  0.9988(0.0047) 
FDA3  0.6634(0.1033)  0.8809(0.0035) 
FDA4  0.9992(0.0002)  1.0000(0.0000) 
FDA5  0.9988(0.0002)  1.0000(0.0000) 
DMOP1  0.7121(0.0759)  0.9961(0.0011) 
DMOP2  0.9274(0.0190)  0.9962(0.0028) 
DMOP3  0.9692(0.0105)  0.9969(0.0013) 

TABLE IV: MS values of RM-MEDA and RTLP-RMMEDA
V. Conclusion
This paper has proposed RTLP-DMOA for solving DMOPs. When the environment changes, a regression hypothesis that adapts to the new solution distribution is derived to predict objective values. Excellent individuals are then identified according to their predicted objective values and selected as the initial population, which improves the performance of the evolutionary process.
The experimental comparison results show that the proposed RTLP-DMOA is very competitive on most test functions. In future work, we will integrate advanced machine learning methods into evolutionary computation to enhance the evolutionary performance of existing static multiobjective optimization algorithms and to solve real-world problems [29][30].

Acknowledgment
This work was supported by the National Natural Science Foundation of China (Grant No. 61673328) and the Shenzhen Scientific Research and Development Funding Program (Grant No. JCYJ20180307123637294).
References

[1] M. Jiang, Y. Yu, X. Liu, F. Zhang, and Q. Hong, "Fuzzy neural network based dynamic path planning," in 2012 International Conference on Machine Learning and Cybernetics, vol. 1, July 2012, pp. 326–330.
[2] C. Raquel and X. Yao, "Dynamic multi-objective optimization: A survey of the state-of-the-art," in Studies in Computational Intelligence. Springer Science + Business Media, 2013, pp. 85–106.
[3] Y. Yang, Y. Sun, and Z. Zhu, "Multi-objective memetic algorithm based on request prediction for dynamic pickup-and-delivery problems," in Evolutionary Computation, 2017.
[4] W. Du, W. Zhong, Y. Tang, W. Du, and Y. Jin, "High-dimensional robust multi-objective optimization for order scheduling: A decision variable classification approach," IEEE Transactions on Industrial Informatics, vol. 15, no. 1, pp. 293–304, Jan. 2019.
[5] D. Gong, B. Xu, Y. Zhang, Y. Guo, and S. Yang, "A similarity-based cooperative co-evolutionary algorithm for dynamic interval multiobjective optimization problems," IEEE Transactions on Evolutionary Computation, 2019.
[6] M. Jiang, L. Qiu, Z. Huang, and G. G. Yen, "Dynamic multi-objective estimation of distribution algorithm based on domain adaptation and nonparametric estimation," Information Sciences, vol. 435, pp. 203–223, 2018.
[7] R. Chen, K. Li, and X. Yao, "Dynamic multiobjectives optimization with a changing number of objectives," IEEE Transactions on Evolutionary Computation, vol. 22, no. 1, pp. 157–171, Feb. 2018.
[8] J. Branke, "Memory enhanced evolutionary algorithms for changing optimization problems," in Proceedings of the 1999 Congress on Evolutionary Computation. IEEE, 1999.
[9] A. Muruganantham, K. C. Tan, and P. Vadakkepat, "Evolutionary dynamic multiobjective optimization via Kalman filter prediction," IEEE Transactions on Cybernetics, vol. 46, no. 12, pp. 2862–2873, 2016.
[10] G. Welch, "Kalman filter," Boston, MA: Springer US, 2014, pp. 435–437.
[11] M. Rong, D. Gong, Y. Zhang, Y. Jin, and W. Pedrycz, "Multidirectional prediction approach for dynamic multiobjective optimization problems," IEEE Transactions on Cybernetics, pp. 1–13, 2018.
[12] A. Zhou, Y. Jin, and Q. Zhang, "A population prediction strategy for evolutionary dynamic multiobjective optimization," IEEE Transactions on Cybernetics, vol. 44, no. 1, pp. 40–53, 2014.
[13] W. Hu, M. Jiang, X. Gao, K. C. Tan, and Y. Cheung, "Solving dynamic multi-objective optimization problems using incremental support vector machine," in 2019 IEEE Congress on Evolutionary Computation (CEC), June 2019, pp. 2794–2799.
[14] B. Gu, V. S. Sheng, K. Y. Tay, W. Romano, and S. Li, "Incremental support vector learning for ordinal regression," IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 7, pp. 1403–1416, July 2015.
[15] M. Jiang, Z. Huang, L. Qiu, W. Huang, and G. G. Yen, "Transfer learning-based dynamic multiobjective optimization algorithms," IEEE Transactions on Evolutionary Computation, vol. PP, no. 99, pp. 1–1, 2017.
[16] M. Jiang, W. Huang, Z. Huang, and G. G. Yen, "Integration of global and local metrics for domain adaptation learning via dimensionality reduction," IEEE Transactions on Cybernetics, vol. 47, no. 1, pp. 38–51, Jan. 2017.
[17] J. Lu, V. Behbood, P. Hao, H. Zuo, S. Xue, and G. Zhang, "Transfer learning using computational intelligence: A survey," Knowledge-Based Systems, vol. 80, pp. 14–23, 2015.
[18] K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms. John Wiley & Sons, 2001, vol. 16.
[19] W. Dai, Q. Yang, G.-R. Xue, and Y. Yu, "Boosting for transfer learning," in International Conference on Machine Learning, 2007.
[20] D. Pardoe and P. Stone, "Boosting for regression transfer," in International Conference on Machine Learning, 2010.
[21] D. Basak, S. Pal, and D. C. Patranabis, "Support vector regression," Neural Information Processing – Letters and Reviews, vol. 11, no. 10, pp. 203–224, 2007.
[22] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, Apr. 2002.
[23] C.-K. Goh and K. C. Tan, "A competitive-cooperative coevolutionary paradigm for dynamic multiobjective optimization," IEEE Transactions on Evolutionary Computation, vol. 13, no. 1, pp. 103–127, 2009.
[24] A. Zhou, Y. Jin, and Q. Zhang, "A population prediction strategy for evolutionary dynamic multiobjective optimization," IEEE Transactions on Cybernetics, vol. 44, no. 1, pp. 40–53, Jan. 2014.
[25] S. Jiang and S. Yang, "A steady-state and generational evolutionary algorithm for dynamic multiobjective optimization," IEEE Transactions on Evolutionary Computation, vol. 21, no. 1, pp. 65–82, 2017.
[26] M. Farina, K. Deb, and P. Amato, "Dynamic multiobjective optimization problems: Test cases, approximations, and applications," IEEE Transactions on Evolutionary Computation, vol. 8, no. 5, pp. 425–442, 2004.
[27] Q. Zhang, A. Zhou, and Y. Jin, "RM-MEDA: A regularity model-based multiobjective estimation of distribution algorithm," IEEE Transactions on Evolutionary Computation, vol. 12, no. 1, pp. 41–63, 2008.
[28] C.-C. Chang and C.-J. Lin, "LIBSVM: A library for support vector machines," ACM Transactions on Intelligent Systems and Technology, vol. 2, no. 3, pp. 27:1–27:27, May 2011.
[29] J. Min, C. Zhou, and S. Chen, "Embodied concept formation and reasoning via neural-symbolic integration," Neurocomputing, vol. 74, no. 1, pp. 113–120, 2010.
[30] W. Yin, J. Min, Z. Huang, C. Fei, and C. Zhou, "An NP-complete fragment of fibring logic," Annals of Mathematics and Artificial Intelligence, vol. 75, no. 3–4, pp. 391–417, 2015.