Joint Device Association, Resource Allocation and Computation Offloading in Ultra-Dense Multi-Device and Multi-Task IoT Networks

With the emergence of ever more applications for Internet-of-Things (IoT) mobile devices (IMDs), the tension between mobile energy demand and limited battery capacity has become increasingly prominent. In addition, in ultra-dense IoT networks, the ultra-densely deployed small base stations (SBSs) consume a large amount of energy. To reduce the network-wide energy consumption and extend the standby time of IMDs and SBSs, we jointly perform device association, computation offloading and resource allocation to minimize the network-wide energy consumption of ultra-dense multi-device and multi-task IoT networks, under proportional computation resource allocation and devices' latency constraints. To further balance network loads and fully utilize the computation resources, we also take multi-step computation offloading into account. Since the resulting problem is nonlinear and mixed-integer, we utilize the hierarchical adaptive search (HAS) algorithm to find its solution, and then analyze the algorithm's convergence, computation complexity and parallel implementation. Comparisons with other algorithms show that HAS can greatly reduce the network-wide energy consumption under devices' latency constraints.


I Introduction

With the development of information technologies and the Internet of Things (IoT), more and more new applications have emerged, e.g., pilotless automobiles, virtual reality, augmented reality, and panoramic video [1, 2]. Most of these emerging applications are computation-intensive and delay-sensitive. In reality, however, IoT mobile devices (IMDs) cannot well meet the computation power or storage requirements of these applications. To tackle this problem, external computation or storage capacity is introduced. Consequently, the cloud computing framework was proposed, which provides services for IMDs (users) by pooling massive computation and storage resources into a data center [3, 4].

As we know, the cloud computing framework is highly centralized. In such a framework, the computation data of IMDs first needs to be uploaded to a cloud center, and then the computation tasks are executed in this center. However, such a large amount of data transfer incurs severe network congestion and high operation cost. To shorten the distance between IMDs and the computation center, fog computing was proposed [5], which is closer to the data source than cloud computing. In fog computing networks, IMDs’ computation tasks and applications can be handled at the edge of networks rather than in the cloud. Following this, to further promote the “local processing power” concept of fog computing, mobile edge computing (MEC) emerged. MEC greatly shortens the distance between IMDs and the computation center, and executes IMDs’ computation tasks and applications with lower latency, lower energy consumption and higher security [6].

To further shorten the distance between IMDs and the computation center, ultra-dense small base stations (SBSs) are deployed in heterogeneous cellular networks (HCNs) [7, 8, 9], where all SBSs and macro BSs (MBSs) are equipped with edge computing servers. Although such a framework can greatly enhance service coverage and balance computation tasks among BSs, the deployment of ultra-dense SBSs results in substantial energy consumption [10, 11, 12, 13].

Evidently, under limited network resources, how to reduce the network-wide energy consumption and extend the standby time of mobile terminal devices (IMDs) and SBSs is a hot topic, one which jointly involves computation offloading, user (device) association and resource allocation. Notably, the computation offloading problem may be to decide whether IMDs’ computation tasks are executed on edge servers or on mobile terminals, or to decide how much of IMDs’ computation tasks is executed on edge servers versus mobile terminals. In addition, the device association problem is to decide which BS each IMD should select.

I-A Related Work

Although spectrum sharing can improve spectrum utilization efficiency in 5G wireless heterogeneous networks, it results in severe interference among users. To effectively reduce network interference and enhance system performance, a variety of resource management methods and strategies have been advocated. Under users' latency constraints, Tan et al. in [14] introduced orthogonal frequency division multiple access (OFDMA) into MEC networks, and jointly performed task offloading and resource allocation to minimize the total energy consumption of all users. After introducing non-orthogonal multiple access (NOMA) technologies into MEC networks, Xue et al. in [15] jointly considered task offloading and resource allocation to maximize the task processing capacity of multi-server and multi-task MEC networks.

Meanwhile, many efforts have been made on multi-user, multi-server network scenarios. Ding et al. in [16] developed a code-oriented partitioning offloading mechanism to decide the CPU frequency, execution location and transmission power of users while minimizing the weighted sum of computation time and energy consumption for multiple users and multiple MEC servers under limited computation power and a bounded waiting task queue. The formulated problem was then converted into a convex form, and a decentralized computation offloading algorithm was established. Cheng et al. in [17] jointly investigated computation offloading and resource allocation to minimize users’ energy consumption and task delay in uplink ultra-dense NOMA-MEC-enabled networks. A novel mean-field deep deterministic policy gradient (MF-DDPG) algorithm was then proposed to obtain the optimal solution of the formulated problem.

It is easy to find that the aforementioned studies concentrate on multi-user systems, but not multi-task ones. With the constant updating of applications, many users need to process multiple tasks. Evidently, it is necessary to study algorithm design for multi-user multi-task systems. Chen M et al. in [18] jointly optimized the offloading decision and the computational and communicational resource allocation to minimize the overall cost of computation, energy and delay of users for a general multi-user MCC (mobile cloud computing) system, in which each user has multiple independent computation tasks. An efficient three-step algorithm was then established to find a locally optimal solution of the formulated problem. Ning et al. in [19] studied the problem of minimizing the total execution delay of all users' computation tasks in scenarios with multiple dependent tasks, utilized the branch-and-bound algorithm to solve the single-user computation offloading problem, and designed an iterative heuristic algorithm to solve the multi-user one. Chen W et al. in [20] investigated a multi-user multi-task computation offloading problem of maximizing overall revenue in MEC networks with energy harvesting, used Lyapunov optimization to determine the energy harvesting policy, and then introduced centralized and distributed greedy maximum scheduling algorithms to resolve the formulated problem. Chen J et al. in [21] mainly solved the dependent task offloading problem of minimizing the average energy-time cost in multi-user scenarios, which was modeled as a Markov decision process; an Actor-Critic mechanism was then proposed to attain the solution of the formulated problem. In addition, Guo S et al. in [22] studied the problem of minimizing the energy-efficiency cost under task-dependency requirements and completion time deadline constraints, and presented a distributed dynamic offloading and resource scheduling (eDors) algorithm to obtain its solution. Guo H et al. in [1] jointly optimized the offloading decision, local computation capacity and power allocation to minimize the weighted time and energy consumption for ultra-dense IoT networks with multi-task, multi-device and multi-server implementation.

Furthermore, with the development of emerging industries, the demand for information is gradually increasing. In many places, e.g., shopping malls and stations, task requests are highly intensive. In this case, MEC servers may have insufficient computation resources for the served users, causing long user delay and high energy consumption. To solve this kind of problem, more and more scholars have concentrated on joint computation offloading and resource allocation. Kan et al. in [23] considered the allocation of both radio and computation resources to minimize the network cost, and proposed a heuristic algorithm. Labidi et al. in [24] jointly performed resource scheduling and computation offloading to minimize the average energy consumption of users, and proposed offline dynamic programming approaches. Hu et al. in [25] considered a problem of joint request offloading and resource scheduling to minimize the response delay of requests in MEC-enabled ultra-dense networks, and designed a cross-direction genetic algorithm (GA) to obtain the solution of the formulated problem. Elgendy et al. in [26] tried to minimize the weighted sum of users' energy consumption by jointly optimizing task compression, offloading and security for multi-user multi-task MEC networks, and utilized linearization and relaxation approaches to obtain the solution of the formulated problem. Dai et al. in [11] jointly performed computation capacity optimization, user association, power control and computation offloading to minimize the network-wide energy consumption of multi-user multi-task networks.

Among the efforts mentioned above, only a few take into account computation offloading in ultra-dense IoT networks, and very few consider two-step computation offloading. In addition, very few efforts have been made towards energy efficiency optimization under proportional computation resource allocation.

I-B Contributions and Organization

In this paper, a problem of minimizing the network-wide energy consumption is formulated, which jointly optimizes device association, power control, computation offloading and resource partition. Specifically, the main contributions of this paper are listed as follows.

  1. Multi-Step Computation Offloading in Ultra-Dense IoT Networks: We consider both one-step and two-step computation offloading in ultra-dense IoT networks. In one-step computation offloading, IMDs can be associated with the MBS and offload computation tasks to the MBS directly. In two-step computation offloading, IMDs first offload partial tasks to SBSs, and then the SBSs may offload partial tasks to the MBS. To enhance the performance of multi-step computation offloading, we optimize the frequency (band) partition used for avoiding inter-tier interference. In addition, equal frequency allocation is advocated to eliminate intra-tier and intra-cell interference. Evidently, multi-step computation offloading under such an interference management mechanism is a new investigation.

  2. Computation Offloading Problem Formulation under Proportional Computation Resource Allocation: Under the proportional computation resource allocation, a problem of network-wide energy consumption minimization is formulated for a multi-device multi-task system. In such a problem, the device association, power control, computation offloading and frequency band resource partition are jointly optimized under the IMDs’ latency constraints. It is evident that this is a new formulation and investigation.

  3. Solution of the Formulated Problem: Considering that the formulated problem is nonlinear and mixed-integer, we utilize the hierarchical adaptive search (HAS) algorithm to find its solution. To this end, we make some proper changes to the gene encoding, selection, crossover and mutation of individuals. In addition, unlike existing efforts, the optimized parameters are encoded into two types of chromosomes, which have different lengths.

  4. Convergence and Computation Complexity Analyses: For the algorithms developed in this paper, we provide detailed analyses of convergence, computation complexity and parallel implementation.

The rest of this paper is organized as follows. Section II introduces the system model, including the network, communication, computation and multi-task models; Section III formulates the problem of minimizing the network-wide energy consumption under IMDs’ latency constraints; Section IV utilizes HAS to solve the formulated problem; Section V provides the detailed algorithm analysis, including convergence, complexity and parallel implementation analyses; Section VI gives the simulation results and analyses; Section VII provides the conclusion and further discussion.

II System Model

In this section, we first present the network model, i.e., ultra-dense heterogeneous cellular networks with multiple IMDs, tasks and MEC servers. Then, the communication, computation and multi-task models are described in detail.

II-A Network Model

In this paper, we concentrate on MEC-enabled ultra-dense IoT networks with multi-task, multi-device and multi-server implementation, as illustrated in Fig. 1. In such networks, ultra-dense SBSs are deployed in each macrocell, and every BS is equipped with one MEC server. Generally, in any macrocell, the number of SBSs is greater than or equal to that of IMDs. Without loss of generality, we consider one MBS and SBSs in MEC-enabled ultra-dense networks, where the set of SBSs is represented as ; the index of the MBS is given by ; indicates the set of all BSs; IMDs lie in the set . In this paper, we assume that all SBSs are connected to the MBS via wired links, and any IMD has a computation-intensive and latency-sensitive application to be executed within a certain deadline. In addition, we consider that each application of any IMD has relatively independent tasks, which are denoted by .

As illustrated in Fig. 1, an effective interference management mechanism is introduced to eliminate the network interference in this paper. Specifically, the whole frequency band is divided into two parts, and , which are used by the MBS and SBSs respectively. Then, the frequency band is allocated to SBSs equally, and the frequency band of each SBS is allocated equally to its associated IMDs. Notably, the widths of frequency bands , and are , and respectively, where is the frequency band partitioning factor. In this way, the inter-tier interference is cancelled, the intra-tier interference is eliminated, and the intra-cell interference is avoided completely. Although the spectrum utilization ratio of such an interference management mechanism is relatively low, the frequency band allocated to each IMD should be sufficient, since each SBS often serves at most one IMD in ultra-dense IoT networks.

Fig. 1: MEC-enabled ultra-dense IoT Networks with multi-task, multi-device and multi-server implementation.
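The equal-split band allocation described above can be sketched in a few lines. This is an illustrative sketch, not the paper's code: the names `W` (total bandwidth), `theta` (partitioning factor), `num_sbs` and `assoc_counts` are all assumptions of this example.

```python
# Hedged sketch of the band-partition rule: one slice for the MBS tier, the
# rest split equally among SBSs, then equally among each SBS's associated IMDs.
def per_imd_bandwidth(W, theta, num_sbs, assoc_counts):
    """Return the MBS-tier band and the per-IMD bandwidth under each SBS."""
    w_mbs = theta * W                  # band reserved for the MBS tier
    w_per_sbs = (1 - theta) * W / num_sbs  # equal split among SBSs
    # Each SBS splits its band equally among its associated IMDs.
    return w_mbs, [w_per_sbs / n if n > 0 else 0.0 for n in assoc_counts]

w_mbs, w_imd = per_imd_bandwidth(W=20e6, theta=0.3, num_sbs=4,
                                 assoc_counts=[1, 2, 0, 1])
```

Because the splits are orthogonal, no two transmissions share a subband, which is exactly why the inter-tier, intra-tier and intra-cell interference terms vanish.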

II-B Communication Model

Since the data size of the computation results downloaded from any BS is very small, the time used for downloading them is negligible. That is to say, we only need to concentrate on the uplink transmission.

If IMD is associated with SBS , the uplink signal-to-interference-plus-noise ratio (SINR) from IMD to SBS can be given by

(1)

where represents the transmission power of IMD ; denotes the channel gain between IMD and SBS ; is the noise power.

Under the interference management mechanism mentioned above, the uplink data rate from IMD to SBS can be given by

(2)

where represents the number of IMDs associated with SBS ; represents the bandwidth of any IMD associated with SBS ; denotes the association index between IMD and SBS ; if IMD is associated with SBS , otherwise.

If IMD is associated with the MBS, the uplink SINR from IMD to the MBS can be given by

(3)

where represents the channel gain between IMD and MBS.

Similarly, the uplink data rate from IMD to MBS can be given by

(4)

where represents the number of IMDs associated with MBS; represents the bandwidth of any IMD associated with MBS.
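Under the orthogonal band allocation there are no interference terms, so the rate models in (1)-(4) reduce to a Shannon rate over the IMD's own subband. The following is a hedged sketch with assumed names: `p` is the transmit power, `g` the channel gain, `n0` the noise power, and `b` the bandwidth the IMD receives under the equal-allocation rule.

```python
import math

# Sketch of the interference-free uplink model: SINR degenerates to SNR
# because orthogonal bands remove all interference terms.
def uplink_rate(p, g, n0, b):
    sinr = p * g / n0              # no interference under orthogonal bands
    return b * math.log2(1 + sinr) # Shannon rate over bandwidth b (bit/s)

r = uplink_rate(p=0.1, g=1e-7, n0=1e-9, b=1e6)
```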

II-C Computation Model

We assume that any IMD has an application composed of tasks. The -th task of this application can be expressed as , where denotes the data size of the -th computation task of IMD , and is the number of CPU cycles required to compute one bit of task . In addition, the execution time of any IMD cannot exceed its deadline .

In this paper, we consider that the computation offloading procedure can include the following two steps. In the first step, part of the -th task of IMD is offloaded to SBS for processing. In the second step, part of the -th task offloaded to SBS is further offloaded to the MBS for processing. Specifically, when IMD is associated with SBS , is offloaded to this BS for processing and is calculated locally. Then, is offloaded to the MBS for processing, and the remaining is processed at SBS . Certainly, we can also let IMD be associated with the MBS directly. In this case, is offloaded to the MBS for processing and is calculated locally.
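The nested split described above can be made concrete with a small sketch. The names `d_total`, `d_sbs` and `d_mbs` are assumptions of this example, standing for the task size, the amount offloaded to the SBS, and the amount the SBS forwards to the MBS.

```python
# Hedged sketch of the two-step split: for a task of d_total bits, d_sbs bits
# go to the associated SBS, of which d_mbs bits are forwarded on to the MBS;
# whatever is not offloaded is computed locally on the IMD.
def split_task(d_total, d_sbs, d_mbs):
    assert 0 <= d_mbs <= d_sbs <= d_total, "offloading amounts must be nested"
    d_local = d_total - d_sbs        # computed on the IMD
    d_at_sbs = d_sbs - d_mbs         # computed on the SBS
    return d_local, d_at_sbs, d_mbs  # computed on IMD / SBS / MBS

parts = split_task(d_total=1e6, d_sbs=6e5, d_mbs=2e5)
```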

Next, the time and energy consumption for computation offloading will be discussed in different scenarios.

1) Local computation: When IMD is associated with BS , the amount of the -th task processed locally is , and the local execution time used for processing the -th task of IMD associated with BS can be given by

(5)

where represents the computation capacity allocated by IMD to the -th task.

For ease of algorithm design, the computation capacity of IMD is allocated to the -th task according to the CPU occupation ratios of all tasks. Specifically, the computation capacity of IMD allocated to the -th task can be given by

(6)

where represents the total computation capacity of IMD .
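The proportional rule in (6) shares a device's CPU in proportion to each task's cycle demand. A minimal sketch, with assumed names: `F` is the device's total CPU frequency and `loads[k]` is task k's demand in CPU cycles (data size times cycles per bit).

```python
# Hedged sketch of proportional computation resource allocation: each task
# receives a share of the total frequency F proportional to its cycle demand.
def proportional_allocation(F, loads):
    total = sum(loads)
    return [F * w / total for w in loads]

caps = proportional_allocation(F=2e9, loads=[1e8, 3e8])  # Hz granted per task
```

The same proportional rule is later reused for the SBS and MBS capacities in (9) and (10), only with the server's total capacity and the set of tasks it actually hosts.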

When IMD is associated with BS , the local computation energy consumption used for executing the remaining amount of -th task can be given by

(7)

where is the effective switched capacitance depending on the chip architecture.

2) Offloading to SBS: When IMD adopts two-step computation offloading to complete its -th task, the time used for this type of operation includes four parts. In detail, the first part is the uplink transmission time used for uploading tasks to SBSs, the second one is the execution time of tasks at SBS, the third one is the uplink transmission time used for uploading tasks to MBS, and the last one is the execution time of tasks at MBS. Therefore, under the two-step computation offloading, when IMD is associated with SBS , the time used for completing its -th task can be given by

(8)

where denotes the wired backhauling rate from SBS to MBS; is the computation capacity allocated to -th task of IMD by SBS , and is the one allocated to -th task of IMD by MBS; in the right side of (8), the first, second, third and fourth terms are the uplink transmission time used for uploading tasks to SBSs, the execution time of tasks at SBS, the uplink transmission time used for uploading tasks to MBS, and the execution time of tasks at MBS respectively.
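The four-term structure of (8) can be sketched directly. This is an illustrative example, not the paper's notation: `d_sbs`/`d_mbs` are the offloaded and forwarded bits, `c` is cycles per bit, `r_up` and `r_bh` the radio-uplink and wired-backhaul rates, and `f_sbs`/`f_mbs` the CPU frequencies granted to this task.

```python
# Hedged sketch of the two-step completion time in (8): upload to the SBS,
# execute the SBS's share, backhaul the rest to the MBS, execute it there.
def two_step_time(d_sbs, d_mbs, c, r_up, r_bh, f_sbs, f_mbs):
    t_up = d_sbs / r_up                  # IMD -> SBS radio upload
    t_sbs = (d_sbs - d_mbs) * c / f_sbs  # execution at the SBS
    t_bh = d_mbs / r_bh                  # SBS -> MBS wired backhaul
    t_mbs = d_mbs * c / f_mbs            # execution at the MBS
    return t_up + t_sbs + t_bh + t_mbs

t = two_step_time(d_sbs=6e5, d_mbs=2e5, c=1000,
                  r_up=1e6, r_bh=1e8, f_sbs=2e9, f_mbs=4e9)
```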

When IMD is associated with SBS , the computation capacity of SBS is allocated to the -th task of IMD according to the CPU occupation ratios of all tasks. Specifically, under the two-step computation offloading, when IMD is associated with SBS , the computation capacity of SBS allocated to the -th task of IMD can be given by

(9)

where represents the total computation capacity of SBS .

Since SBSs can upload tasks to the MBS for processing when IMDs are associated with these SBSs, and IMDs can also directly upload tasks to the MBS for execution when they are associated with the MBS, the data processed at the MBS includes the following two parts. The first part is the amount of data uploaded by the associated SBSs, given by . The second part is the amount of data uploaded by the IMDs associated with the MBS, given by . Consequently, according to the CPU occupation ratios of all tasks, the computation capacity of the MBS allocated to the -th task of IMD can be given by

(10)

where is the total computation capacity of MBS.

When IMD adopts two-step computation offloading to complete its -th task, the energy consumption used for this type of operation should include the following four parts. Specifically, the first part is the energy consumption used for uploading tasks to SBSs, the second one is the execution energy consumption of tasks at SBS, the third one is the energy consumption used for uploading tasks to MBS, and the last one is the execution energy consumption of tasks at MBS. Therefore, under the two-step computation offloading, when IMD is associated with SBS , the energy consumption used for executing its -th task can be given by

(11)

where and represent the energy consumption of each CPU cycle at SBS and MBS respectively; denotes the power consumption per second on wired line; in the right side of (11), the first, second, third and fourth terms are the energy consumption used for uploading tasks to SBSs, the execution energy consumption of tasks at SBS, the energy consumption used for uploading tasks to MBS, and the execution energy consumption of tasks at MBS respectively.
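The energy in (11) mirrors the four time terms. A hedged sketch with assumed names: `p_tx` is the IMD's transmit power, `e_sbs`/`e_mbs` the energy per CPU cycle at the SBS/MBS, and `p_wire` the wired-line power; the data split and rates follow the two-step model described in the text.

```python
# Hedged sketch of the four-term two-step offloading energy in (11):
# radio upload + SBS execution + wired backhaul + MBS execution.
def two_step_energy(d_sbs, d_mbs, c, p_tx, r_up, r_bh, p_wire, e_sbs, e_mbs):
    e_up = p_tx * d_sbs / r_up                # transmit power x upload time
    e_exec_sbs = (d_sbs - d_mbs) * c * e_sbs  # cycles x energy/cycle at SBS
    e_bh = p_wire * d_mbs / r_bh              # wired power x backhaul time
    e_exec_mbs = d_mbs * c * e_mbs            # cycles x energy/cycle at MBS
    return e_up + e_exec_sbs + e_bh + e_exec_mbs

e = two_step_energy(d_sbs=6e5, d_mbs=2e5, c=1000, p_tx=0.1,
                    r_up=1e6, r_bh=1e8, p_wire=0.5, e_sbs=1e-9, e_mbs=1e-9)
```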

3) Offloading to MBS: When IMD adopts one-step computation offloading to complete its -th task, it is associated with MBS. At this time, the time used for this type of operation can be given by

(12)

where the first term in the right side of (12) is the uplink transmission time used for uploading tasks to MBS, and the second one is the execution time of tasks at MBS.

When IMD adopts one-step computation offloading to complete its -th task, the energy consumption used for this type of operation can be given by

(13)

where the first term in the right side of (13) is the energy consumption used for uploading tasks to MBS, and the second one is the execution energy consumption of tasks at MBS.

II-D Multi-task Model

To match practical implementation, we assume that all computation tasks are executed sequentially. That is to say, for any IMD , its -th task can be executed only after its first tasks are completed. In addition, we assume that local execution and computation offloading are performed simultaneously. Therefore, the total time of IMD used for completing its computation tasks is the maximum of the local execution and computation offloading times, and it can be given by

(14)

In contrast, the total energy consumption of IMD used for completing its computation tasks is the sum of the local execution and computation offloading energy consumption, and it can be given by

(15)
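The structure of (14)-(15) follows directly from the multi-task model: tasks run sequentially within each branch, the local and offloading branches run in parallel, and energies always accumulate. A minimal sketch with assumed per-task lists:

```python
# Hedged sketch of (14)-(15): per-branch times add (sequential tasks), the
# device finishes at the max of the two parallel branches, energies sum.
def device_time_energy(t_local, t_offload, e_local, e_offload):
    total_time = max(sum(t_local), sum(t_offload))  # parallel branches
    total_energy = sum(e_local) + sum(e_offload)    # energies accumulate
    return total_time, total_energy

tt, ee = device_time_energy(t_local=[0.3, 0.5], t_offload=[0.4, 0.45],
                            e_local=[0.1, 0.2], e_offload=[0.3, 0.35])
```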

III Problem Formulation

In order to reduce the network-wide energy consumption and extend the standby time of mobile terminal devices (IMDs) and SBSs, we jointly perform device association, computation offloading and resource allocation to minimize the network-wide energy consumption under IMDs’ latency constraints for ultra-dense multi-device and multi-task IoT networks. It is noteworthy that proportional computation resource allocation is applied before the problem formulation. Specifically, the optimization problem is formulated as

(16)

where , , and ; takes a small enough value to avoid division by zero, e.g., ; indicates that the task execution time of IMD cannot be greater than the deadline ; and show that any IMD can only be associated with one BS; gives the lower bound () and upper bound () of the transmission power of IMD ; gives the lower bound () and upper bound (1) of the frequency band partitioning factor. In addition, as shown in , when IMD is associated with SBS , this IMD can offload bits of the -th task to SBS , and then SBS can offload bits of its received part of the -th task to the MBS. Evidently, and should be greater than or equal to , but less than or equal to the data size of the -th task of IMD . Meanwhile, should be less than or equal to . As revealed in , when IMD is associated with the MBS, this IMD can offload bits of the -th task to the MBS. It is evident that should be greater than or equal to , but less than or equal to the data size of the -th task of IMD .

It is easy to find that the formulated problem (16) is nonlinear and mixed-integer, and the optimized parameters are highly coupled. This means such a problem is nonconvex. In ultra-dense IoT networks, problem (16) is often a large-scale mixed-integer nonlinear program. At this scale, it is evident that an exhaustive search over all possible solutions is impractical and infeasible.

IV Algorithm Design

To solve problem (16), we utilize the hierarchical adaptive search (HAS) algorithm [13], which combines an adaptive genetic algorithm (GA) with diversity-guided mutation (DGM) [27] and an adaptive particle swarm optimization (PSO) algorithm [28]. In the whole algorithm, the adaptive GA with DGM is first used for coarse-grained search, and the adaptive PSO is then employed for fine-grained search. Compared with the efforts in [29], such an algorithm can avoid premature convergence, improve the convergence speed, and finally achieve a better solution.

IV-A Adaptive GA with DGM

It is easy to find that GA is essentially a series of operations on chromosome patterns. In other words, the selection operation is used to inherit good patterns from the current population into the next generation, the crossover operation is utilized to recombine patterns, and the mutation operation is adopted to mutate patterns [27]. Through these genetic operations, the chromosome patterns gradually evolve in a better direction, and the optimal solution of the formulated problem can finally be obtained. That is to say, to solve problem (16), GA starts with a set of initial feasible solutions, and then repeatedly performs the selection, crossover and mutation operations until an acceptable solution is achieved or GA converges.

Fig. 2: The chromosome and its structure of individual .

1) Chromosome

In GA, any individual is defined by a specific chromosome, and any chromosome can represent a solution of the optimization problem. For simplicity of algorithm design, the real coding mechanism is utilized in this paper. Specifically, the chromosome and chromosome structure of individual can be found in Fig. 2, where represents the set of individuals. In this figure, five optimized parameters , , , , are encoded as , , , , , where , and represents the index of the BS associated with IMD in the individual ; , and is the transmission power of IMD in the individual ; , and denotes the amount of data offloaded to the associated SBS by virtual IMD in the individual ; , and represents the amount of data offloaded to the associated MBS by virtual IMD in the individual ; is the frequency band partitioning factor in individual . Notably, the set of virtual IMDs is given by , and is its length. Evidently, the index of any virtual IMD can be easily converted into the indices of the real IMD and task, and conversely, the indices of any real IMD and task can be converted into the index of a virtual IMD.
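The virtual-IMD indexing amounts to flattening (device, task) pairs into a single index so that per-task genes form one flat vector. A hedged sketch, assuming `K` tasks per device (the names here are illustrative, not the paper's):

```python
# Hedged sketch of virtual-IMD index conversion: device i's task k maps to one
# flat index, and the mapping is invertible, as the text requires.
def to_virtual(i, k, K):
    return i * K + k          # (device i, task k) -> virtual index

def from_virtual(v, K):
    return divmod(v, K)       # virtual index -> (device, task)

v = to_virtual(i=2, k=1, K=3)
assert from_virtual(v, K=3) == (2, 1)  # round trip recovers (i, k)
```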

2) Fitness Function

To evaluate how fit an individual is, a fitness function is widely used in GA. By direct observation, we can easily find that the constraint is in a nonlinear, mixed-integer and coupled form. Evidently, it may be very difficult to satisfy such a constraint in the initialization and genetic operations of GA. To properly deal with the constraint , a penalty function is introduced into the definition of the fitness function, which can be used for preventing individuals from falling into the infeasible region. In this way, the established population can always find the feasible optimum.

To minimize the network-wide energy consumption, the negative objective function of problem (16) can well be used as a fitness function. However, to meet IMDs’ latency constraints at the same time, the constraint needs to be incorporated into the fitness function as a penalty term. Consequently, the fitness function of individual can be defined as

(17)

where is the penalty factor of IMD .
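The exact penalty term is given in (17); as a hedged illustration, one common choice is to penalize each device's deadline violation linearly, scaled by its penalty factor. All names below (`energy`, `times`, `deadlines`, `rho`) are assumptions of this sketch.

```python
# Hedged sketch of a penalized fitness: negative network-wide energy, minus a
# per-device penalty that activates only when the deadline is exceeded, so
# feasible low-energy individuals score highest.
def fitness(energy, times, deadlines, rho):
    penalty = sum(r * max(0.0, t - d)
                  for r, t, d in zip(rho, times, deadlines))
    return -energy - penalty

f = fitness(energy=2.0, times=[0.9, 1.2], deadlines=[1.0, 1.0],
            rho=[10.0, 10.0])  # second device violates its deadline by 0.2
```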

3) Population Initialization

To meet the constraints -, the initial population can be generated by using the following rules. Specifically, the initial genes of any individual can be given by

(18)

where returns the row subscript and column subscript in an matrix corresponding to linear index ; randomly outputs an element from the set ; randomly generates a number between 0 and .

4) Selection

The selection operation in GA is used for screening individuals of a population; its main task is to select some individuals from the parent population in some way and let them be inherited into the next generation. After the selection operation, high-fitness individuals are more likely to be inherited into the next generation, while low-fitness ones have a lower probability of being inherited. In this paper, the tournament method is used for selecting individuals, which is a good option for selecting good individuals rather than only the best ones. Moreover, to improve the performance of GA, the historical best individual is always preserved in the population during the selection operation. Specifically, when the historical best individual is not chosen into the next generation, the worst selected individual is replaced with the historical best individual. In addition, the historical best individual is updated at each iteration.

5) Crossover

The crossover operation refers to the exchange of some gene segments between two paired chromosomes according to the crossover probability, finally establishing two new individuals. Such an operation is an important feature of GA that distinguishes it from other evolutionary algorithms. It plays a key role in GA and is the main method for generating new individuals. Meanwhile, it maintains the diversity of the population, which is conducive to convergence and helps avoid falling into local optima. In this paper, we always select two neighboring individuals and to exchange the corresponding gene segments. In order to propagate the building blocks of the best/better individuals [30], the crossover probability between individuals and can be given by

(19)

where and represent constant coefficients; denotes the minimum of the fitness values of individuals and ; and represent the average and minimum fitness values of the population, respectively.
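The exact form of (19) did not survive extraction, so as an illustration only, a common Srinivas–Patnaik-style adaptive rule with the same intent (protecting the building blocks of better pairs by lowering their crossover probability) might be sketched as below; the coefficients `k1`, `k3` and the use of the population maximum are assumptions, not the paper's formula:

```python
def adaptive_pc(f_prime, f_max, f_avg, k1=1.0, k3=1.0):
    """Srinivas-Patnaik-style adaptive crossover probability: above-average
    pairs (f_prime >= f_avg) cross with a probability that shrinks as
    f_prime approaches the population maximum; below-average pairs use k3."""
    if f_prime >= f_avg:
        return k1 * (f_max - f_prime) / max(f_max - f_avg, 1e-12)
    return k3
```

Under this rule the best pair in the population (with `f_prime == f_max`) is never disrupted by crossover, while weak pairs always cross with probability `k3`.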

6) Mutation

The mutation operation refers to the replacement of some gene values in an individual's coding string with other gene values according to the mutation probability, which finally forms a new individual. Such an operation in GA is an auxiliary method for generating new individuals; it determines the local search ability of GA while maintaining the diversity of the population. In GA, the crossover and mutation operations work together to complete the global and local search of the solution space. In this paper, an individual is randomly selected to mutate, and its mutation probability can be given by

(20)

where and denote constant coefficients; represents the fitness value of individual ; represents the maximum fitness value of the population.

Based on the mutation probability (20), any chromosome of individual can be mutated by using

(21)
(22)
(23)
(24)
(25)

where ; and are randomly generated following the uniform distribution on [0, 1], which always keeps the optimized parameters in the feasible domain. In the rules (21)-(25), and are used for controlling the mutation magnitude of genes and the search direction, respectively. A larger implies a higher mutation magnitude; means that a gene mutates toward its maximum, while means that it mutates toward its minimum. It is easy to verify that the mutation of any gene will not jump out of the feasible region.
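The feasibility-preserving mutation just described can be illustrated with the following sketch, assuming a generic bounded gene; the direction flag, the magnitude parameter, and the interval bounds `lo`/`hi` are stand-ins for the paper's stripped symbols:

```python
import random

def mutate_gene(x, lo, hi, magnitude, toward_max):
    """Bounded mutation: move the gene toward its upper bound (toward_max=True)
    or its lower bound, scaled by a uniform [0, 1] draw and `magnitude`,
    so the result always stays inside the feasible interval [lo, hi]."""
    r = random.random()
    if toward_max:
        return x + magnitude * r * (hi - x)
    return x - magnitude * r * (x - lo)
```

Since the step is proportional to the remaining distance to the bound, the mutated gene can approach but never leave the feasible interval, matching the claim that mutation never jumps out of the feasible region.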

In order to deal with the premature convergence problem of traditional GA, DGM can be executed before the adaptive mutation and crossover operations. To this end, we introduce a diversity measure used to govern the alternation between exploiting and exploring behaviors [31]. Specifically, the diversity measure for an N-dimensional numerical problem can be defined as

(26)

where , , , and represent the lengths of the diagonals of the feasible domains of , , , and , respectively;

(27)

Then, DGM can be executed according to a specified probability, i.e.,

(28)

where and are constant coefficients; and represent the diversity thresholds.
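A minimal sketch of the diversity measure (26) and the threshold-based DGM trigger (28), assuming the distance-to-average-point form of [31] and a linear interpolation between the two thresholds; the function names, default probabilities, and the interpolation are illustrative assumptions:

```python
import math

def diversity(population, diag_len):
    """Distance-to-average-point diversity: the mean Euclidean distance of
    individuals to the population centroid, normalised by the population
    size and the length of the search-space diagonal."""
    n, dim = len(population), len(population[0])
    mean = [sum(ind[d] for ind in population) / n for d in range(dim)]
    total = sum(math.sqrt(sum((ind[d] - mean[d]) ** 2 for d in range(dim)))
                for ind in population)
    return total / (n * diag_len)

def dgm_probability(div, d_low, d_high, p_high=0.9, p_low=0.1):
    """Execute DGM with a high probability when diversity collapses below
    d_low and a low probability above d_high; interpolate in between."""
    if div <= d_low:
        return p_high
    if div >= d_high:
        return p_low
    return p_high + (p_low - p_high) * (div - d_low) / (d_high - d_low)
```

The trigger thus pushes the population back into an exploring phase precisely when the diversity measure indicates that exploitation has made the population collapse around one point.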

Up to now, adaptive GA with DGM can be summarized in Algorithm 1, where denotes the number of iterations.

Algorithm 1: Adaptive GA with DGM
1: Initialization:
2:    .
3:    Initialize the population including individuals using (18).
4:    Calculate the fitness values of all individuals using (17).
5:    Find the current best individual with maximum fitness values.
6:    Replace historical best individual with current best individual if
7:      the former individual has a smaller fitness value than the latter one.
8: While do
9:    Generate a new population including individuals selected by
10:      tournament method.
11:   Replace worst individual with historical best individual if the
12:     latter is not selected into the next generation.
13:   Execute DGM using (21)-(25) under the probability (28).
14:   Calculate the fitness values of all individuals using (17).
15:   Adaptively cross any two neighbouring individuals under (19).
16:   Adaptively mutate using (21)-(25) under the probability (20).
17:   Calculate the fitness values of all individuals using (17).
18:   Find the current best individual in this generation.
19:   Replace historical best individual with current best individual if
20:     the former individual has a smaller fitness value than the latter one.
21:   Update the iteration index: .
22: EndWhile

Iv-B Adaptive PSO

As a population-based intelligent optimization algorithm, the PSO algorithm was proposed by mimicking the foraging behavior of bird flocks. In PSO, each particle has two attributes, position and velocity, where the position represents a solution of the optimization problem and the velocity determines how the solution evolves. Specifically, the velocity of any particle (individual) can be updated by

(29)
(30)
(31)
(32)
(33)

where , , , and are the velocities of , , , and at the -th iteration; , , , and are the positions of , , , and at the -th iteration; represents the inertia weight of particle at the -th iteration; and denote the self-learning and social-learning factors, respectively; , , , , and are random numbers; , , , and are the historical optimal positions of particle at the -th iteration; , , , and are the global historical optimal positions, i.e., the positions of the global historical best particle at the -th iteration. In this paper, the particle that has the historical optimal position is regarded as the personal best particle, and the one that has the global historical optimal position is regarded as the global best particle. Evidently, the global best particle is selected from the personal best particles.

In (29)-(33), the inertia weight of any individual can be updated by

(34)

where and represent the maximum and minimum inertia weights respectively; is the number of iterations of adaptive PSO.
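Assuming (34) follows the standard linearly decreasing schedule between the maximum and minimum inertia weights (the paper's exact form may differ), it can be sketched as:

```python
def inertia_weight(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decrease the inertia weight from w_max to w_min over
    t_max iterations, shifting the swarm from exploration to exploitation."""
    return w_max - (w_max - w_min) * t / t_max
```

A large early inertia weight lets particles fly past local basins, while the small late weight makes them settle into fine-grained local search.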

After updating the velocities of particles, the position of any particle can be updated by

(35)
(36)
(37)
(38)
(39)
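Putting the velocity updates (29)-(33) and position updates (35)-(39) together, one block of optimization variables can be advanced as in the following sketch; the clamping to a feasible box and all variable names are our assumptions, since the paper's symbols were lost in extraction:

```python
import random

def pso_step(x, v, pbest, gbest, w, c1=2.0, c2=2.0, lo=0.0, hi=1.0):
    """One PSO update per dimension: inertia plus cognitive (pbest) and
    social (gbest) attraction, then a position move clamped to [lo, hi]."""
    new_x, new_v = [], []
    for xi, vi, pi, gi in zip(x, v, pbest, gbest):
        r1, r2 = random.random(), random.random()
        vnew = w * vi + c1 * r1 * (pi - xi) + c2 * r2 * (gi - xi)
        xnew = min(max(xi + vnew, lo), hi)
        new_x.append(xnew)
        new_v.append(vnew)
    return new_x, new_v
```

Clamping the position keeps every candidate solution inside the feasible domain, analogous to the feasibility-preserving mutation used in the GA part.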

In order to keep the global best particle moving toward the local minimum, according to the rules in [32], the velocity of the global best particle is updated again by

(40)
(41)
(42)
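If [32] refers to the guaranteed-convergence PSO rule (an assumption on our part, since the equations were stripped), the global best particle is repositioned around the global optimum found so far and perturbed within a shrinking radius; a sketch under that assumption, with all names illustrative:

```python
import random

def gbest_velocity(x, v, gbest, w, rho):
    """Guaranteed-convergence-style update for the global best particle:
    reset it onto gbest, keep its inertia term, and add a uniform random
    perturbation of radius rho so it keeps probing around the optimum."""
    return [-xi + gi + w * vi + rho * (1.0 - 2.0 * random.random())
            for xi, vi, gi in zip(x, v, gbest)]
```

With inertia and perturbation switched off, the particle lands exactly on the current global best position, so it never stagnates at a point other than the swarm's best-known solution.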