Workflows are widely exploited for modeling complicated applications, such as DNN-based applications. Such a workflow is usually computing-intensive, typically composed of tens of interdependent tasks. Indeed, workflow scheduling is essential, as its result directly affects the performance of workflow applications [HAN2020101837]. Due to the complicated structure of a workflow and the sophisticated data dependencies between its tasks, completing workflow scheduling in time can be rather challenging, even with the use of a high-performance computing platform.
To tackle the aforementioned challenge, some research has focused on workflow scheduling in the cloud computing environment [7508444, ISMAYILOV2020307, 9095379]. Such work is mainly oriented towards cloud service providers (e.g., Amazon EC2, Rackspace, and GoGrid), which provide virtual resources to end customers [JOFRE2014184]. The underlying techniques attempt to rationally schedule the dependent tasks among the virtual resources through the pay-as-you-go model. However, they are generally aimed at reducing the workflow completion time and improving resource utilization, with far less focus on optimizing the execution cost of workflow applications. They also mostly ignore the performance variation between virtual machines with different configurations. Moreover, scheduling workflows in the cloud may increase the traffic load of core networks and cause high latency due to massive data transmission between clients and the cloud.
Edge computing provides an essential technology for improving workflow scheduling performance [SUN2020101799]. It enhances the computing ability of a mobile network by deploying computation and storage resources around the edges of the network, thereby providing services with high bandwidth and low delay to clients. Workflow scheduling in edge-cloud environments not only meets the compute-intensive requirements of workflow applications, but also effectively reduces data transmission delays by scheduling data-intensive tasks to the edge and compute-intensive tasks to the cloud. Nonetheless, the service nodes (namely, the virtual machines in the cloud and the servers in the edge) are generally heterogeneous: their processing capacities can be rather different, as can their cost-performance in terms of load and energy consumption. To reduce the workflow execution cost while satisfying deadline constraints, significant difficulties remain in rationally scheduling the data-dependent tasks for efficient data transmission and task execution.
The existing studies on workflow scheduling are mainly carried out subject to certain conditions (e.g., assuming that the performance of the service nodes, the bandwidth, and other factors are steady, without fluctuation) [SAEEDI2020106649, GU2020106, BISWAS2019101932]. However, in practical scheduling processes, the CPUs of the service nodes and the bandwidths between them often fluctuate, which may have a considerable impact on workflow scheduling. Whilst uncertainties in scheduling have been addressed in the literature, the relevant work is mainly focused on the Fuzzy Job Shop Scheduling Problem (FJSSP) [8626515, 9120283] or task scheduling for real-time embedded systems. Little has been done on the challenging problem of fuzzy scheduling for workflow applications.
In our previous work [8476198, 8941306], we addressed the problems of cost-driven scheduling for deadline-based workflows across multiple clouds, and of cost-driven offloading for DNN-based applications over the cloud, edge, and end devices. Building on the experience accumulated in the field of workflow scheduling, in this paper we further address a significant issue facing workflow scheduling: reducing the workflow execution cost caused by task computation and data transmission, while satisfying the required deadline constraints in uncertain edge-cloud environments. In particular, a novel cost-driven workflow scheduling strategy is proposed on the basis of an Adaptive Discrete Particle Swarm Optimization (ADPSO) algorithm, which employs the operators of the genetic algorithm. In support of this development, Triangular Fuzzy Numbers (TFNs) are adopted to represent task processing time and data transferring time in the uncertain environments. The strategy can obtain an optimal fuzzy total execution cost for a workflow within its deadline with respect to commonly adopted benchmark datasets, offering three major contributions to the relevant literature:
An order-server nesting strategy to encode the workflow scheduling problem in uncertain edge-cloud environments.
An adaptive discrete Particle Swarm Optimization (PSO) algorithm employing the operators of Genetic Algorithm (GA) to improve the performance and searching efficiency of the scheduling strategy.
A workflow scheduling strategy to reduce fuzzy workflow execution cost caused by task computation and data transmission, while meeting the required deadline constraints.
The rest of this paper is organized as follows. Section II briefly reviews the related work. Section III presents the problem definition of workflow scheduling in uncertain edge-cloud environments. Section IV describes our proposed workflow scheduling strategy in detail. Section V analyses the performance of our strategy through experimental studies in comparison with the state-of-the-art scheduling strategies. Finally, Section VI summarizes the work and outlines relevant future research directions.
II Related Work
A workflow model, used to simulate and analyze the workflow applications in the real world, consists of a set of computational tasks linked through control and data dependencies . Workflow scheduling is essential as its result could directly affect the performance of workflow applications.
Many research efforts have been devoted to workflow scheduling in cloud computing. Yuan et al.  considered the cost minimization of data centers in a private cloud. They proposed a Temporary Task Scheduling Algorithm (TTSA) that could efficiently schedule all arriving tasks to the private or public clouds. This method effectively reduced the cost of the private cloud while satisfying all tasks’ delay constraints. Meng et al.  proposed a security-aware scheduling method based on the PSO algorithm for real-time resource allocation across heterogeneous clouds. Experimental results showed that this strategy could achieve a good balance between scheduling and security performance. Pham et al.  considered the fulfilment and interruption rates of volatile resources in order to reflect the instability of the cloud infrastructure. In that work, a novel evolutionary multi-objective workflow scheduling approach was proposed for generating a set of trade-off solutions, whose makespan and cost were superior to those of the state-of-the-art algorithms. Paknejad et al. [PAKNEJAD202112]
proposed an enhanced multi-objective co-evolutionary algorithm, called ch-PICEA-g, for workflow scheduling in the cloud environment. Experimental results indicated that the proposed algorithm outperformed its counterparts in terms of different performance metrics, such as cost, makespan, and energy consumption. Such work is of practical significance for scheduling workflows in cloud computing. However, it might increase the traffic load of core networks and cause high latency due to massive data transmission between clients and the cloud.
Edge computing can effectively reduce the system delay of workflow scheduling [Weisong2019Edge, 7931566, JARARWEH202042]. Workflow scheduling in edge-cloud environments has recently drawn great interest. For instance, Xie et al. [XIE2019361] designed a novel Directional and Non-local-Convergent PSO (DNCPSO) algorithm to simultaneously optimize the completion time and execution cost of the workflow. Experimental results demonstrated that DNCPSO could achieve better performance than other classic algorithms. Peng et al.  proposed a node reliability model to evaluate resource reliability in Mobile Edge Computing (MEC) environments, defining workflow scheduling as an optimization problem and solving it with an algorithm based on Krill-Herd [GANDOMI20124831]. Experiments based on real workflow applications and mobile user contract tracking proved that this method performed significantly better than the traditional methods in terms of success rate and makespan. However, existing research on workflow scheduling in edge-cloud environments hardly considers comprehensive cost optimization for task computation and data transmission.
In real-world practice, the performance of service nodes and the bandwidth may fluctuate while scheduling a workflow application. Initial work exists that deals with scheduling in uncertain computing environments, but such work is mainly oriented towards intelligent manufacturing systems. In particular, Lei [LEI2010610] represented fuzzy processing time and fuzzy due-date with TFNs and trapezoidal fuzzy numbers, respectively, while introducing an improved fuzzy max operation to investigate the FJSSP. In that work, so-called availability constraints are employed for maximizing the satisfaction level of customers. Sun et al.  also used TFNs to describe processing time to cope with the FJSSP, where an effective hybrid Cooperative Evolution Algorithm (hCEA) was proposed for minimizing the fuzzy makespan. Fortemps  expressed an uncertain duration as a six-point fuzzy number, thereby establishing a fuzzy scheduling model to minimize the fuzzy completion time for the job shop scheduling problem. Similarly, Li et al. [LI2019105585] used TFNs to capture the uncertainty of fuzzy processing time and introduced a uniform parallel machine scheduling model with such a processing time representation under fuzzy resource consumption constraints, minimizing the makespan.
Despite the aforementioned remarkable developments in the relevant research area, an important open issue of great practical significance remains for fuzzy workflow scheduling: workflow scheduling that considers fuzzy task processing time and fuzzy data transferring time in uncertain edge-cloud environments. Inspired by this observation, the remainder of this paper establishes a novel approach to cost-driven scheduling for deadline-based workflows in uncertain edge-cloud environments.
III System Model and Definitions
In this section, workflow scheduling in certain environments is described first. We then elaborate on workflow scheduling in uncertain environments. Thirdly, the operations on TFNs in fuzzy workflow scheduling are introduced in detail. Finally, an example of cost-driven scheduling for a deadline-based workflow application in uncertain edge-cloud environments is illustrated.
III-A Workflow Scheduling in Certain Environments
The workflow scheduling framework proposed in this study consists of three main components, i.e., the edge-cloud environments, a deadline-based workflow, and a cost-driven scheduler.
A certain environment means that there is no fluctuation during workflow scheduling and execution. The edge-cloud environments consist of the cloud and the edge, which host different computing nodes (i.e., virtual machines in the cloud and servers in the edge). For simplicity, we use ‘servers’ as a uniform term for the computing nodes in both the cloud and the edge. There are servers in the cloud, and servers in the edge. A server is denoted by Eq. (1).
where and are the booting time and shutdown time of server , respectively; is the processing capacity of server ; is the computation cost per time unit , which is a specific time unit for server ; refers to the platform to which the server belongs. Note that when , belongs to the cloud with powerful processing capacity. Otherwise, belongs to the edge with normal processing capacity.
The bandwidth between any two different servers is denoted by Eq. (2).
where is the value of bandwidth , ; is the data transmission cost per GB from server to .
A workflow can be described as a directed acyclic graph (DAG) , where is a finite set of tasks, and is a finite set of directed arcs. Each directed arc indicates that there is a dataset transferred from task to , and cannot be executed until is finished. For an arc , is called the immediate predecessor task of , and is called the immediate successor task of . In addition, a workflow has a corresponding deadline constraint . When a workflow is completed within its deadline based on a specific scheduling strategy, this strategy is called as a feasible solution.
Suppose that the processing time of task on server is described as . Owing to its popularity, the serial processing model  is adopted herein, which dictates that a task is processed on only one server, and a server processes only one task at a time. The data transferring time is denoted by Eq. (3).
where is the time to transfer dataset from server to . If and are the same server, the data transferring time is 0.
A cost-driven scheduler aims to reduce the workflow execution cost mainly caused by task computation and data transmission, while satisfying its deadline in edge-cloud environments. A scheduling strategy is denoted by Eq. (4).
where is the mapping from tasks and datasets to servers; is the workflow completion time, and is the workflow execution cost in edge-cloud environments under a given scheduling strategy.
There are two subsets in the mapping : indicates that task is executed on server , and implies that dataset is transferred from server to . When the subset is determined, the other subset will be determined. Therefore, the mapping is equivalent to as Eq. (5).
When the mapping is determined, the servers processing all tasks are determined with a specific scheduling strategy . Due to the data dependencies between the tasks in a workflow, the execution order of each task is relatively fixed. Each task will have the start time and end time once the corresponding is determined. The workflow completion time can be denoted by Eq. (6).
where is the task computation cost, and is the data transmission cost.
In summary, the scheduling strategy for a workflow in certain environments can be described by Eq. (10), which indicates that the scheduler seeks to minimize the total workflow execution cost , while satisfying the deadline .
III-B Workflow Scheduling in Uncertain Environments
Uncertain environments mean that there are fluctuations during workflow scheduling and execution. The task processing time and data transferring time are uncertain due to the fluctuations of server processing capacity and bandwidth, respectively. Fuzzy sets are employed to reflect the uncertainties during workflow execution [Zadeh1965Fuzzy], while the task processing time and data transferring time are represented as TFNs.
where is the normal (namely, the most possible) value of the fuzzy variable ; and are the lower and upper limit values of , respectively. When , is a certain (crisp) number. A fuzzy variable in uncertain environments corresponds to a variable in certain environments. According to the principles of fuzzy set theory [ext-principle], the scheduling strategy for a workflow in uncertain environments can be defined as Eq. (12).
where is the fuzzy total workflow execution cost, and is the fuzzy workflow completion time. Both fuzzy variables (i.e., and ) are represented by TFNs. For the target , an equivalent representation through its mean value and standard deviation can be introduced [LEE1988887], where the objective function given in Eq. (12) is equivalent to Eq. (13).
where and are the mean value and standard deviation of , and is the weighting factor of . According to the work of Lee and Li [LEE1988887], the mean value and standard deviation of a TFN can be defined through uniform distribution and proportional distribution. Therefore, and can be computed as Eqs. (14) and (15), respectively.
In the process of workflow scheduling, the actual task processing time and data transferring time are more likely to be longer than the estimated values [PALACIOS201574]. A new fuzzification method based on that of Sun et al.  is proposed to describe the uncertain values (namely, task processing time and data transferring time). Therefore, the related parameters of a TFN are redefined as follows: is the estimated time; and are randomly selected from the interval and , respectively, where and are adjustment coefficients, satisfying that , and .
Such a TFN will satisfy the constraint . Therefore, the mean value of a TFN becomes that in Eq. (16), which is more likely to be longer than its estimate.
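As an illustration of the fuzzification and defuzzification steps above, the following Python sketch builds a TFN around a crisp time estimate and computes the mean and standard deviation under the Lee-Li uniform-distribution formulation. The adjustment coefficients `d1` and `d2` and their default values are hypothetical placeholders standing in for the paper's coefficients, which are not reproduced here.

```python
import math
import random

def fuzzify(t, d1=0.85, d2=1.2, rng=random.Random(0)):
    """Build a TFN (a, b, c) around a crisp time estimate t.
    d1 < 1 and d2 > 1 are hypothetical adjustment coefficients:
    the lower limit is drawn from [d1*t, t] and the upper limit
    from [t, d2*t], so delays are more likely than speed-ups."""
    return (rng.uniform(d1 * t, t), t, rng.uniform(t, d2 * t))

def tfn_mean(x):
    """Lee-Li mean of a TFN under the uniform-distribution view."""
    a, b, c = x
    return (a + b + c) / 3

def tfn_std(x):
    """Lee-Li standard deviation of a TFN (uniform distribution)."""
    a, b, c = x
    return math.sqrt((a*a + b*b + c*c - a*b - a*c - b*c) / 18)
```

Note that for a crisp value (t, t, t) the standard deviation collapses to zero, recovering the certain-environment model as a special case.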
In summary, the scheduling strategy for a workflow in uncertain edge-cloud environments can be formalized as Eq. (18).
III-C Operations for TFNs in Fuzzy Workflow Scheduling
To construct a feasible schedule for a workflow in uncertain edge-cloud environments, the operations for TFNs (i.e., addition, ranking, max, and multiplication) need to be introduced as follows.
III-C1 Addition Operation
Addition operation is used to calculate the end time of tasks. Suppose that the start time and processing time of a task are denoted by and , respectively. The end time of such a task is calculated by Eq. (19) .
III-C2 Ranking Operation
Ranking operation is required to calculate the maximum end time of all the immediate predecessors of task . Suppose that the end time of one predecessor and that of another predecessor are and , respectively. The maximum end time of such two predecessors is then calculated with respect to three different situations, following the ranking criterion proposed by Sakawa et al. [Sakawa2000Fuzzy].
If , then .
If and , then .
If and , then .
Note that multiple ranking operations are recursively performed if there are more than two predecessor tasks.
III-C3 Max Operation
Max operation is needed to calculate the start time of tasks. Suppose that the maximum end time of all immediate predecessors of task is , and the last idle time of a server before processing task is . Then, the membership function of ’s start time is computed by Eq. (20).
According to the max criterion proposed by Lei [LEI2010610], the start time of task can be approximated by Eq. (21) .
III-C4 Multiplication Operation
Multiplication operation is carried out to calculate the task computation cost and data transmission cost as addressed by Eqs. (8) and (9), respectively. The product of a TFN and a real number is computed by Eq. (22) .
Similarly, the quotient of a TFN divided by a real number can be transformed into the product of such a TFN and another real number as Eq. (23).
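The four TFN operations can be summarized in a few lines of Python. The ranking key follows the standard three-level Sakawa criterion (first (a + 2b + c)/4, then the modal value b, then the spread c − a), which we assume matches the paper's elided formulas, and the max operation uses Lei's approximation of simply returning the larger operand under that ranking.

```python
def tfn_add(s, t):
    """Component-wise addition (used to get a task's end time)."""
    return tuple(si + ti for si, ti in zip(s, t))

def rank_key(s):
    """Sakawa's three-level ranking criterion for a TFN (a, b, c):
    compare (a + 2b + c)/4 first, then the modal value b,
    then the spread c - a."""
    a, b, c = s
    return ((a + 2 * b + c) / 4, b, c - a)

def tfn_max(s, t):
    """Lei-style approximate max: the larger operand under ranking."""
    return s if rank_key(s) >= rank_key(t) else t

def tfn_scale(s, k):
    """Product of a TFN and a non-negative real number; division by
    k > 0 is tfn_scale(s, 1.0 / k)."""
    return tuple(k * x for x in s)
```

When a task has more than two predecessors, `tfn_max` is applied recursively over their end times, mirroring the recursion noted for the ranking operation.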
III-D Illustration of Cost-Driven Workflow Scheduling
Fig. 2 presents an example of cost-driven scheduling for a deadline-based workflow in uncertain edge-cloud environments. The edge-cloud environments consist of four servers, where belong to the cloud and belong to the edge. The workflow application has 8 tasks and 9 datasets, whose deadline is s. The time unit is set to 60s. Table I lists the relevant parameters for the bandwidths between different servers. Table II presents the computation cost per hour for all servers. Table III shows the fuzzy processing time for each task on the available servers.
| (2560.97, 2800, 3484.58) | (618.57, 700, 884.27) | (3406.72, 3500, 3828.08) | (4812.15, 5250, 5974.03) |
| (13918.81, 14000, 17808.50) | (3323.98, 3500, 3835.46) | (16819.00, 17500, 20539.04) | (25343.82, 26250, 28794.55) |
| (1144.78, 1200, 1450.83) | (298.19, 300, 303.49) | (1323.77, 1500, 1683.88) | (2089.08, 2250, 2479.93) |
| (701.05, 800, 1015.21) | (176.64, 200, 258.50) | (973.31, 1000, 1174.42) | (1427.83, 1500, 1936.48) |
| (1717.10, 2000, 2297.11) | (482.74, 500, 590.05) | (2468.10, 2500, 2889.79) | (3211.60, 3750, 4865.82) |
| (210.18, 240, 274.81) | (51.30, 60, 77.00) | (283.40, 300, 375.52) | (403.55, 450, 568.21) |
| (399.99, 400, 413.68) | (96.64, 100, 107.72) | (483.87, 500, 576.55) | (689.23, 750, 881.02) |
| (7849.11, 8000, 10081.39) | (1817.86, 2000, 2204.68) | (9113.32, 10000, 12923.71) | (14511.28, 15000, 16312.29) |
Fig. 2(c) depicts the workflow scheduling result based on the random scheduling strategy [ZHOU20206154], which randomly schedules each task to a server, with all tasks executed according to their data dependencies. The fuzzy completion time of the workflow application is (11599.95s, 11599.96s, 11613.64s), which exceeds the corresponding deadline (i.e., s). The fuzzy execution cost based on the random scheduling strategy is (38.61$, 38.84$, 39.6$), whose equivalent defuzzified value is 39.14$. Fig. 2(d) depicts the optimal workflow scheduling. The fuzzy completion time is (7036.56s, 7052.69s, 7129.24s), which meets the deadline constraint. The fuzzy execution cost is (28.41$, 29.84$, 32.79$), whose equivalent defuzzified value is 30.93$. The optimal execution cost is thus reduced by 21% compared to that of the random scheduling strategy.
IV Our Proposed Workflow Scheduling Strategy
The goal of a workflow scheduling strategy is to find the best mapping from all tasks in a workflow to the different servers in edge-cloud environments, such that the workflow execution cost is optimal within the corresponding deadline . The tasks on a server have a strict execution order based on their data dependencies. A task can be executed on different servers, and a server can process multiple tasks in sequence. Therefore, finding the best mapping from all tasks to different servers is an NP-hard problem [HOSSEINISHIRVANI2020103501]. PSO is one of the effective algorithms for addressing such problems. Therefore, we propose a workflow scheduling strategy based on a modified PSO algorithm (i.e., ADPSO). The traditional PSO algorithm is introduced first, followed by a detailed description of ADPSO.
IV-A Traditional PSO Algorithm
PSO is an efficient evolutionary technique inspired by the social behavior of bird flocks. Kennedy and Eberhart first presented the PSO algorithm in 1995, and it has been broadly investigated and utilized ever since. The particle is the central concept in PSO, usually representing a candidate solution to an optimization problem. Each particle in the population at iteration has its own position and velocity , which determine its direction and magnitude at the next iteration. The velocity of each particle is affected by its personal best particle and the global best particle . Each particle constantly updates its own velocity and position in the potential solution space to obtain better fitness. The iterative updates of the velocity and position of each particle are given by Eqs. (24) and (25), respectively.
where is an inertia weight, which determines to what extent the velocity of current particles will affect the corresponding particles of the next generation, having a great impact on the convergence of PSO; and are acceleration coefficients, which denote the cognitive ability of a particle for its personal and global best particle, respectively; and are the random numbers on the interval [0,1), used to enhance the searching ability of PSO.
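A minimal sketch of the update rules of Eqs. (24) and (25) for a single continuous particle follows; the parameter values are illustrative defaults, not the settings used in the paper.

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5,
             rng=random.Random(1)):
    """One iteration of standard continuous PSO for one particle:
    the Eq. (24)-style velocity update (inertia + cognitive pull
    towards pbest + social pull towards gbest), followed by the
    Eq. (25)-style position update."""
    r1, r2 = rng.random(), rng.random()
    v_new = [w * vi + c1 * r1 * (pi - xi) + c2 * r2 * (gi - xi)
             for xi, vi, pi, gi in zip(x, v, pbest, gbest)]
    x_new = [xi + vi for xi, vi in zip(x, v_new)]
    return x_new, v_new
```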
The traditional PSO algorithm is designed for continuous optimization problems. However, workflow scheduling in edge-cloud environments is a discrete optimization problem. Therefore, the algorithm needs to be adapted with a new problem encoding and a new population update scheme.
The proposed ADPSO is described from five aspects: problem encoding, fitness function, population update, mapping from a particle to a fuzzy scheduling, and parameter settings.
IV-B1 Problem encoding
Problem encoding affects the searchability of a PSO-based algorithm and is expected to meet three major principles: Viability, Completeness, and Non-redundancy. Inspired by the work in , an order-server nesting strategy is developed to encode the cost-driven workflow scheduling problem in uncertain edge-cloud environments. In particular, the particle at iteration (i.e., ) is denoted by Eq. (26).
where , indicates the assignment of task , meaning that is executed on the server with a specified order . There are two criteria for task execution on a server as follows.
Criterion 1: If two concurrent tasks without data dependencies (i.e., there are no direct or indirect data dependencies between the tasks) are scheduled to the same server, the task with a larger order value will be processed earlier. If two tasks have the same order value, the one entering pending queue earlier will be processed first.
Criterion 2: If two tasks with data dependencies are scheduled to the same server, the predecessor one is processed first.
Fig. 3 depicts an encoded particle corresponding to the scheduling result of Fig. 2(d). After task is executed, tasks and are both scheduled to server . Since there are no data dependencies between and , and has a larger order value (i.e., 3.9), it is processed first based on Criterion 1. Tasks , and are next scheduled to the corresponding servers. At this moment, , and are in the pending queue of server . Whilst has the same order value (i.e., 1.5) as , it will be processed before because it enters the pending queue of earlier. The remaining tasks are processed similarly based on the two criteria.
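Under the two criteria, selecting the next task from a server's pending queue reduces to a simple sort. The tuple layout below is an assumed representation for illustration, and Criterion 2 is presumed already enforced by only enqueuing tasks whose predecessors have finished.

```python
def next_task(pending):
    """Select the next task from a server's pending queue.
    pending: list of (task_id, order_value, arrival_index) tuples for
    tasks whose predecessors have all finished. Criterion 1: a larger
    order value is processed earlier; ties are broken by earlier
    entry into the queue (smaller arrival_index)."""
    return min(pending, key=lambda t: (-t[1], t[2]))
```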
IV-B2 Fitness function
Fitness function is used to evaluate the performance of particles. In general, a particle with a small fitness value represents a better candidate solution. This study aims to minimize the fuzzy total execution cost of scheduling a workflow within its deadline . Therefore, a particle corresponding to a scheduling result with a smaller fuzzy execution cost can be regarded as a better solution. However, the problem encoding strategy may not meet the Viability principle, which dictates that the fuzzy completion time of a workflow must not exceed its deadline . Therefore, we compare the performance of two particles following three different situations.
Situation 1: Both particles corresponding to the scheduling results are feasible. The one with a smaller fuzzy total execution cost is deemed better, and the fitness function is defined by Eq. (27).
Situation 2: One particle corresponding to the scheduling result is feasible, and the other is infeasible. The feasible particle is naturally deemed better, and the fitness function is defined by Eq. (28).
Situation 3: Both particles corresponding to the scheduling results are infeasible. The one with less fuzzy completion time is deemed better, which is more likely to become feasible after update operations. The fitness function is defined by Eq. (29).
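The three-situation comparison can be sketched as follows, assuming each particle has been defuzzified to a (cost, makespan) pair of real numbers:

```python
def better(p, q, deadline):
    """Compare two particles, each reduced to a (cost, makespan)
    pair. Situation 1: both feasible, the cheaper one wins.
    Situation 2: only one feasible, it wins.
    Situation 3: both infeasible, the smaller makespan wins
    (it is more likely to become feasible after updates)."""
    p_ok, q_ok = p[1] <= deadline, q[1] <= deadline
    if p_ok and q_ok:
        return p if p[0] <= q[0] else q
    if p_ok != q_ok:
        return p if p_ok else q
    return p if p[1] <= q[1] else q
```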
IV-B3 Population update
The update of each particle is affected by three factors: inertia, individual cognition, and social cognition. To strengthen the searchability and avoid premature convergence of the proposed scheduling strategy, ADPSO employs the mutation and crossover operators of GA. The iterative update of a particle at iteration for the workflow scheduling is defined as Eq. (30).
where and are mutation operation and crossover operation, respectively; and are acceleration coefficients; and are random numbers on the interval .
For the inertia part, the mutation operator of GA [doi:10.1080/00207540902814348] is introduced to perform the updating as Eq. (31).
where denotes the dual mutation operator, which includes the neighborhood mutation operator for the task order and the adaptive multi-point mutation operator for the number of servers.
The neighborhood mutation operator randomly chooses three locations in a particle and generates all sort combinations (i.e., permutations) of the task order values in the corresponding fields. It then randomly selects one of these combinations as the particle to feed to the adaptive multi-point mutation operator. Fig. 4(a) depicts the neighborhood mutation operator. It randomly chooses the locations and , generates all sort combinations of the task order values, and then randomly selects the second combination as the input to the adaptive multi-point mutation operator.
The adaptive multi-point mutation operator randomly chooses locations (i.e., number of mutations) in a particle, and mutates each location’s number of servers in the interval . Fig. 4(b) depicts this adaptive multi-point mutation operator. It randomly chooses locations (i.e., and ), and mutates the corresponding number of servers from (2,4,2) to (1,3,4).
where is the two-point crossover operator. It randomly selects two locations in particle A, and then replaces the corresponding segment between the two locations of A with the same interval in particle B. Fig. 5 depicts this crossover operator in action. It randomly selects the locations and in a mutated particle, and replaces the segment between and with the same interval in (or ).
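The three operators can be sketched as follows. Splitting the particle into a list of order values and a list of server indices is an assumed representation of the order-server nesting encoding, used here only for illustration.

```python
import itertools
import random

def neighborhood_mutation(orders, rng):
    """Pick three positions and replace their order values with a
    random permutation of themselves (one of the 3! combinations)."""
    i, j, k = rng.sample(range(len(orders)), 3)
    out = list(orders)
    out[i], out[j], out[k] = rng.choice(
        list(itertools.permutations((out[i], out[j], out[k]))))
    return out

def multipoint_mutation(servers, n_mut, n_servers, rng):
    """Re-draw the server index at n_mut random positions."""
    out = list(servers)
    for i in rng.sample(range(len(out)), n_mut):
        out[i] = rng.randrange(1, n_servers + 1)
    return out

def two_point_crossover(a, b, rng):
    """Replace the segment of particle a between two random cut
    points with the corresponding segment of particle b."""
    i, j = sorted(rng.sample(range(len(a)), 2))
    return a[:i] + b[i:j] + a[j:]
```

Note that the neighborhood mutation only permutes existing order values, so the multiset of orders is preserved, while the multi-point mutation changes server assignments outright.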
IV-B4 Mapping from a particle to a fuzzy scheduling
The mapping from a particle to a fuzzy scheduling in uncertain edge-cloud environments is summarized in Algorithm 1. The inputs are the workflow , all available servers , and an encoded particle . The output is the corresponding fuzzy workflow scheduling based on the particle . The algorithm first initializes the mapping to an empty set and to (0,0,0). The fuzzy task processing time on different servers and the fuzzy data transferring time between servers are then calculated (line 3). According to the encoded particle , each task is scheduled on server with the order . For a given task , its start time is equal to the booting time of server if it is an entry task. Otherwise, the task cannot start until the last dataset is transferred to from its parents (lines 13-22). The end time of is then the sum of its start time and its processing time on server (line 24). According to Eqs. (6) and (7), and are subsequently calculated. Note that if the fuzzy completion time exceeds the corresponding deadline (i.e., ), the algorithm stops immediately and returns a symbolic value of False, meaning that the particle is infeasible (lines 27-29). Finally, it returns the fuzzy scheduling strategy if the particle is feasible (line 31).
IV-B5 Parameter settings
The inertia weight influences the convergence and searchability of a PSO-based algorithm. A larger inertia weight helps the algorithm jump out of local optima, improving its global searchability. By contrast, a smaller inertia weight improves the algorithm’s local searchability. This study proposes a new mechanism that adaptively adjusts the value of the inertia weight based on the particle’s current state, thereby enhancing the algorithm’s overall searchability, as shown in Eq. (34).
where and represent the predefined maximum and minimum values of , represents the number of encoding positions that differ between the current particle and the global best particle , and represents the size of the particle’s encoding space. This mechanism adaptively adjusts the algorithm’s searchability according to the difference between the global best particle and the current particle. When is relatively small, the difference between and is small. Thus, the particle’s local searchability is enhanced, increasing the algorithm’s convergence. Conversely, a larger value of increases the magnitude of , enhancing the particle’s global searchability.
Regarding the adaptive multi-point mutation, the mutation number is adaptively adjusted according to the change of the inertia weight , and its adjustment strategy is implemented by Eq. (35).
where and are the predefined maximum and minimum values of the mutation number . The inertia weight has a positive effect on the mutation number . When is large, the mutation number is increased to enhance the mutation ability so that the algorithm’s global searchability is intensified. Conversely, when is small, is decreased and, as such, only limited mutation ability is retained to maintain the diversity of the population.
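Since Eqs. (34) and (35) are not reproduced here, the sketch below shows one plausible linear realization of the two adaptive rules described above; the exact functional forms in the paper may differ. The inertia weight grows with the fraction of encoding positions where the particle differs from the global best, and the mutation number rises and falls with the inertia weight.

```python
def adaptive_w(diff, dim, w_max=0.9, w_min=0.4):
    """Hypothetical linear rule for Eq. (34): w grows with the
    fraction of encoding positions (diff out of dim) where the
    current particle differs from the global best. Small difference
    -> small w (local search); large difference -> large w."""
    return w_min + (w_max - w_min) * diff / dim

def adaptive_n_mut(w, w_max=0.9, w_min=0.4, n_max=5, n_min=1):
    """Hypothetical linear rule for Eq. (35): the mutation number
    rises and falls with the inertia weight w."""
    return round(n_min + (n_max - n_min) * (w - w_min) / (w_max - w_min))
```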
The acceleration coefficients are dynamically adjusted according to , where and denote the start values of and , and and denote their end values.
IV-B6 Algorithm flowcharts
Fig. 6 presents the flowchart of ADPSO algorithm, which includes the following steps.
Step 1: Initialize the parameters of ADPSO, including population size , maximum iteration number , inertia weight, and acceleration coefficients. Next, randomly generate the initial population.
Step 2: Calculate each particle’s fitness value according to Eqs. (27-29). Each particle is set to its personal best particle and the particle with the smallest fitness value is set to the global best particle.
Step 3: Update all particles according to Eq. (30), and recalculate the fitness value of each updated particle.
Step 4: Set the updated particle as its new personal best particle if its fitness value is less than that of the existing personal best; otherwise, go to Step 6.
Step 5: Set the updated particle as the global best particle if its fitness value is less than that of the existing global best.
Step 6: Check whether the termination condition is met; if so, output the global best particle and terminate, else, go back to Step 3.
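The six steps above can be condensed into a skeleton loop. Here `update` abstracts the Eq. (30) operator chain and `fitness` the Eqs. (27)-(29) comparison, both passed in as plain functions (smaller fitness is better, as in the paper); the concrete operators are omitted.

```python
def adpso(init_pop, fitness, update, max_iter):
    """Skeleton of the ADPSO loop of Fig. 6 (Steps 1-6)."""
    pop = list(init_pop)                      # Step 1: initial swarm
    pbest = list(pop)                         # Step 2: personal bests
    gbest = min(pop, key=fitness)             # Step 2: global best
    for _ in range(max_iter):                 # Step 6: iterate
        pop = [update(p, pbest[i], gbest)     # Step 3: update swarm
               for i, p in enumerate(pop)]
        for i, p in enumerate(pop):           # Step 4: personal bests
            if fitness(p) < fitness(pbest[i]):
                pbest[i] = p
        gbest = min(pbest + [gbest], key=fitness)  # Step 5: global best
    return gbest
```

As a toy usage example, minimizing `abs` with an update rule that averages each particle with the global best drives the swarm towards zero.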
V Performance Analysis
To validate the effectiveness of the workflow scheduling strategy based on the proposed ADPSO, experimental evaluations are carried out. In particular, the following Research Questions (RQs) are checked with the experiments conducted.
RQ1: Compared with traditional PSO-based algorithms, does ADPSO improve searchability and convergence? (Section V-B)
RQ2: In optimizing the fuzzy workflow execution cost, is ADPSO superior to other algorithms in terms of performance stability? (Section V-C)
RQ3: In workflow scheduling with respect to given deadlines in uncertain edge-cloud environments, does ADPSO help reduce workflow execution cost? (Section V-D)
V-A Basic experimental setup
All experiments are run on the Win10 64-bit operating system with an Intel(R) Core(TM) i5-7200U CPU at 3.60 GHz and 16 GB RAM. Both ADPSO and all compared algorithms are implemented in Python 3.7. Parameters are set according to , where = 100, = 1000, = 0.9, = 0.4, = , = 1, = 0.9, = 0.2, = 0.4, and = 0.9.
Five types of workflow are tested in this study, obtained from different scientific fields, including: CyberShake from earthquake science, Epigenomics from biogenetics, LIGO from gravitational physics, Montage from astronomy, and SIPHT from bioinformatics. Each type of workflow has a different structure, number of tasks, and pattern of data transmissions between tasks, with detailed information stored in an XML file [WorkflowHub]. For each type of workflow, we choose three categories for the experiments: Tiny (approximately 30 tasks), Small (approximately 50 tasks), and Medium (approximately 100 tasks).
There are three cloud servers () and two edge servers () in the edge-cloud environments. Each server has a specific processing capacity and computation cost per time unit. We assume that has the most powerful processing capacity, so the processing time of tasks on can be directly recorded from the corresponding XML file. The processing capacity of or is approximately half or a quarter of that of , while the processing capacity of or is about one-eighth or one-tenth of that of . The computation cost per hour for is set to 15.5 $/h, and the other servers’ computation costs are approximately proportional to their processing capacities.
The bandwidth and data transmission cost between different types of servers are set as in Table I. Each workflow is assumed to have a corresponding deadline constraint, set as Eq. (36), in order to test the algorithm performance.