Evolutionary Multitasking for Multiobjective Continuous Optimization: Benchmark Problems, Performance Metrics and Baseline Results

06/08/2017, by Yuan Yuan, et al.

In this report, we suggest nine test problems for multi-task multi-objective optimization (MTMOO), each of which consists of two multiobjective optimization tasks that need to be solved simultaneously. The relationship between tasks varies across the test problems, which helps enable a comprehensive evaluation of MTMOO algorithms. It is expected that the proposed test problems will foster progress in the field of MTMOO research.


I Introduction

With the explosion in the variety and volume of incoming information streams, it is highly desirable that intelligent systems and algorithms be capable of efficient multitasking. Evolutionary algorithms (EAs) are population-based search algorithms with an inherent ability to handle multiple optimization tasks at once. By exploiting this characteristic of EAs, evolutionary multitasking [1, 2] has become a new paradigm in evolutionary computation (EC). It signifies a multitasking search involving multiple optimization tasks at a time, with each task contributing a unique factor influencing the evolution of a single population of individuals. The first tutorial on evolutionary multitasking ("Evolutionary Multitasking and Implications for Cloud Computing") was presented at the IEEE Congress on Evolutionary Computation (CEC) 2015 in Sendai, Japan. Since then, a series of studies on this topic has been conducted from various perspectives, which not only demonstrate the potential of evolutionary multitasking [3, 4, 5, 6, 7, 8] but also explain, to some extent, why this paradigm works [9, 10].

Evolutionary multiobjective optimization (EMO) [11, 12, 13] has been a popular topic in the field of EC over the past 20 years, mainly for two reasons. On the one hand, many optimization problems in real-world applications are in essence multiobjective optimization problems (MOPs). On the other hand, owing to their population-based nature, EAs are able to approximate the whole Pareto set (PS) or Pareto front (PF) of an MOP in a single run, which makes them well suited to solving MOPs. A wealth of EMO techniques has been developed to date, and they can be roughly classified into three categories: Pareto dominance-based algorithms [14, 15, 16], decomposition-based algorithms [17, 18, 19, 20], and indicator-based algorithms [21, 22, 23].

Since evolutionary multitasking has shown promise in solving multiple tasks simultaneously and EMO is one of the central topics in EC, understanding the behavior of algorithms at the intersection of the two paradigms may have noteworthy implications. The combination of the two paradigms is referred to herein as multiobjective multifactorial optimization (MO-MFO), meaning that a number of multiobjective optimization tasks are solved simultaneously by evolving a single population of individuals using EAs. MO-MFO was first discussed in [4], where proof-of-concept results on two synthetic benchmarks and a real-world case study were presented.

In this report, we suggest nine test problems for MO-MFO, each of which consists of two multiobjective optimization tasks that need to be solved simultaneously. The relationship between tasks varies across the test problems, which helps enable a comprehensive evaluation of MO-MFO algorithms. It is expected that the proposed test problems will foster progress in the field of MO-MFO research.

The rest of this report is organized as follows. Section II presents the definitions of the proposed test problems. Section III describes how to evaluate the performance of an algorithm on the test problems. Section IV provides the baseline results obtained by an MO-MFO algorithm (i.e., MO-MFEA [4]) and its underlying basic MOEA (i.e., NSGA-II [16]) on the proposed benchmark problems.

II Definitions of Benchmark Problems

In previous studies on evolutionary (single-objective) multitasking [1, 2, 9], it was found that the degree of intersection of the global optima and the similarity of the fitness landscapes are two important ingredients behind the complementarity between different optimization tasks. Motivated by this, the characterization of the multiobjective optimization tasks in the test problems presented below focuses on one particular function in each task, and the relationship between the two multiobjective optimization tasks in a test problem can be translated into the relationship between these functions. This is based on the fact that the Pareto optimal solutions of a multiobjective task are attained if and only if the corresponding function reaches its global minimum.

Suppose the global minima of these functions in the two multiobjective tasks are known. Each dimension of the two minima is first normalized to the same range, yielding two normalized vectors. If the normalized vectors agree in every dimension, we say the global minima have complete intersection; if there exists no dimension in which their values are equal, we say there is no intersection; all other relationships between the two vectors are referred to as partial intersection.
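As a concrete illustration, the intersection check described above can be sketched in Python. All names here (`intersection_degree`, the bound arrays, the tolerance) are illustrative assumptions rather than part of the report; the sketch normalizes each optimum into the unit range using box bounds and compares the dimensions the two tasks share.

```python
import numpy as np

def intersection_degree(opt1, opt2, bounds1, bounds2, tol=1e-9):
    """Classify the overlap of two global minima after normalizing
    each dimension to [0, 1] using the tasks' box bounds.
    (Illustrative sketch; names are not from the report.)"""
    # Normalize each optimum into the unit hypercube.
    n1 = (opt1 - bounds1[0]) / (bounds1[1] - bounds1[0])
    n2 = (opt2 - bounds2[0]) / (bounds2[1] - bounds2[0])
    # Compare over the dimensions the two tasks share.
    d = min(len(n1), len(n2))
    equal = np.isclose(n1[:d], n2[:d], atol=tol)
    if equal.all():
        return "complete"
    if not equal.any():
        return "none"
    return "partial"
```

Note that the tolerance guards against floating-point noise in the normalization; an exact equality test would misclassify minima that coincide only up to rounding.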

To compute the similarity between the fitness landscapes of the two functions, 1,000,000 points are randomly sampled in the unified search space [1], and Spearman's rank correlation coefficient between the two functions' values at these points is taken as the similarity. Depending on the interval in which it lies, the similarity is regarded as high, medium, or low.
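This similarity estimate can likewise be sketched in a few lines. The version below is a minimal sketch, with a hand-rolled, tie-free Spearman rank correlation and far fewer samples than the report's 1,000,000; the function and parameter names are illustrative assumptions.

```python
import numpy as np

def spearman(a, b):
    """Spearman's rank correlation for tie-free samples:
    the Pearson correlation of the two rank vectors."""
    ra = a.argsort().argsort().astype(float)
    rb = b.argsort().argsort().astype(float)
    return float(np.corrcoef(ra, rb)[0, 1])

def landscape_similarity(f1, f2, dim, n_samples=10_000, seed=0):
    """Sample random points in the unified search space [0, 1]^dim and
    return the rank correlation of the two functions' values there.
    (Illustrative sketch; the report samples 1,000,000 points.)"""
    rng = np.random.default_rng(seed)
    x = rng.random((n_samples, dim))      # points in [0, 1]^dim
    return spearman(f1(x), f2(x))
```

Because rank correlation is invariant under monotone transformations, two landscapes that order the sampled points identically (e.g., a function and a scaled copy of it) yield a similarity of 1 regardless of their absolute values.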

We consider three degrees of intersection of the global minima, i.e., complete, partial, and no intersection, and within each degree, three levels of similarity in the fitness landscape, i.e., high, medium, and low similarity. Accordingly, there are nine test problems in total. Note that many practical settings give rise to a third condition for categorizing potential multitask optimization settings, namely one based on the phenotypic overlap of the decision variables [2]. To elaborate, a pair of variables from distinct tasks may bear the same semantic (or contextual) meaning, which creates scope for knowledge transfer between them. However, due to the lack of substantial contextual meaning in synthetic benchmark functions, such a condition for describing the similarity/overlap between tasks is not applied in this technical report.
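The 3 x 3 design above directly implies the nine problem names used in the rest of the report (CIHS through NILS); a short sketch makes the naming grid explicit.

```python
from itertools import product

# The 3x3 benchmark grid: degree of optima intersection
# (Complete / Partial / No) crossed with landscape similarity
# (High / Medium / Low); e.g. CIHS = Complete Intersection,
# High Similarity.
problems = [f"{i}I{s}S" for i, s in product("CPN", "HML")]
# problems == ['CIHS', 'CIMS', 'CILS', 'PIHS', 'PIMS', 'PILS',
#              'NIHS', 'NIMS', 'NILS']
```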

The definitions of the proposed test problems are described in detail as follows. Note that five shift vectors and four rotation matrices are involved; their data are available online (the data of the rotation matrices and shift vectors in the test problems can be downloaded from https://drive.google.com/open?id=0B8WAZ9HjQsUSdUY5UzBLN0NPd2M):

1) Complete Intersection with High Similarity (CIHS)

The first multiobjective task is defined as follows:

(1)

The second multiobjective task is defined as follows:

(2)

For this test problem, the number of decision variables is set to 50 for both tasks, and the similarity between the two fitness landscapes is 0.97. The PS and PF of the first task are given as follows:

(3)

The PS and PF of the second task are given as follows:

(4)

2) Complete Intersection with Medium Similarity (CIMS)

The first multiobjective task is defined as follows:

(5)

The second multiobjective task is defined as follows:

(6)

For this test problem, the number of decision variables is set to 10 for both tasks, and the similarity between the two fitness landscapes is 0.52. The PS and PF of the first task are given as follows:

(7)

The PS and PF of the second task are given as follows:

(8)

3) Complete Intersection with Low Similarity (CILS)

The first multiobjective task is defined as follows:

(9)

The second multiobjective task is defined as follows:

(10)

For this test problem, the number of decision variables is set to 50 for both tasks, and the similarity between the two fitness landscapes is 0.07. The PS and PF of the first task are given as follows:

(11)

The PS and PF of the second task are given as follows:

(12)

4) Partial Intersection with High Similarity (PIHS)

The first multiobjective task is defined as follows:

(13)

The second multiobjective task is defined as follows:

(14)

For this test problem, the number of decision variables is set to 50 for both tasks, and the similarity between the two fitness landscapes is 0.99. The PS and PF of the first task are given as follows:

(15)

The PS and PF of the second task are given as follows:

(16)

5) Partial Intersection with Medium Similarity (PIMS)

The first multiobjective task is defined as follows:

(17)

The second multiobjective task is defined as follows:

(18)

For this test problem, the number of decision variables is set to 50 for both tasks, and the similarity between the two fitness landscapes is 0.55. The PS and PF of the first task are given as follows:

(19)

The PS and PF of the second task are given as follows:

(20)

6) Partial Intersection with Low Similarity (PILS)

The first multiobjective task is defined as follows:

(21)

The second multiobjective task is defined as follows:

(22)

For this test problem, the number of decision variables is set to 50 for both tasks, and the similarity between the two fitness landscapes is 0.002. The PS and PF of the first task are given as follows:

(23)

The PS and PF of the second task are given as follows:

(24)

7) No Intersection with High Similarity (NIHS)

The first multiobjective task is defined as follows:

(25)

The second multiobjective task is defined as follows:

(26)

For this test problem, the number of decision variables is set to 50 for both tasks, and the similarity between the two fitness landscapes is 0.94. The PS and PF of the first task are given as follows:

(27)

The PS and PF of the second task are given as follows:

(28)

8) No Intersection with Medium Similarity (NIMS)

The first multiobjective task is defined as follows:

(29)

The second multiobjective task is defined as follows:

(30)

For this test problem, the number of decision variables is set to 20 for both tasks, and the similarity between the two fitness landscapes is 0.51. The PS and PF of the first task are given as follows:

(31)

The PS and PF of the second task are given as follows:

(32)

9) No Intersection with Low Similarity (NILS)

The first multiobjective task is defined as follows:

(33)

The second multiobjective task is defined as follows:

(34)

For this test problem, the number of decision variables is set to 25 for the first task and 50 for the second task, and the similarity between the two fitness landscapes is 0.001. The PS and PF of the first task are given as follows:

(35)

The PS and PF of the second task are given as follows:

(36)

Table I summarizes the proposed test problems together with their properties, including the similarity between the two tasks in each test problem. Fig. 1 shows four different kinds of Pareto fronts of the multiobjective optimization tasks in the test problems.

Fig. 1: Four different kinds of Pareto fronts involved in the proposed test problems.
Problem  Similarity  Task 1 Properties                    Task 2 Properties
CIHS     0.97        concave, unimodal, separable         concave, unimodal, separable
CIMS     0.52        concave, multimodal, nonseparable    concave, unimodal, nonseparable
CILS     0.07        concave, multimodal, separable       convex, multimodal, nonseparable
PIHS     0.99        convex, unimodal, separable          convex, multimodal, separable
PIMS     0.55        concave, unimodal, nonseparable      concave, multimodal, nonseparable
PILS     0.002       concave, multimodal, nonseparable    concave, multimodal, nonseparable
NIHS     0.94        concave, multimodal, nonseparable    convex, unimodal, separable
NIMS     0.51        concave, multimodal, nonseparable    concave, unimodal, nonseparable
NILS     0.001       concave, multimodal, nonseparable    concave, multimodal, nonseparable
TABLE I: Summary of the proposed test problems for Evolutionary Multiobjective Multitasking.

III Performance Evaluation

This section describes the process used to evaluate the performance of an algorithm on the proposed test problems. All the test problems should be treated as black-box problems, i.e., their analytic forms are not known to the algorithms.

III-A Performance Metric

The inverted generational distance (IGD) [24] is used to evaluate the performance of an algorithm on each task of the considered test problem. Let the approximate set be the set of nondominated objective vectors obtained for a task by the algorithm, and let the reference set be a set of objective vectors uniformly distributed over the PF of that task. Both sets are first normalized using the maximum and minimum objective values in the reference set; the IGD of the approximate set is then calculated as: