Parallel processing (or parallel computing) is a field in electrical engineering and computer science concerned with applying many computers running in parallel to solve computationally intensive problems. The main goal of parallel processing is to provide users with performance that no single computer can deliver. Scheduling is an important task that allows parallel systems to perform efficiently and reliably. In general, scheduling can be considered as managing the execution of jobs that require certain resources in such a way that certain optimality and/or feasibility criteria are met. Such optimality metrics include minimal finish time, lowest monetary cost, and so on.
Divisible load is a special but widely encountered type of data that can be divided into fractions of arbitrary size and processed independently in parallel. Divisible loads commonly arise in applications that process a great number of similar data units. Over the past decades, Divisible Load Theory (DLT) has proven to be a powerful tool for scheduling in parallel systems.
This paper mainly studies a parallel system with multiple sources and multiple processors. Sequential load distribution is used in the workload distribution procedure. In contrast to regular single-source load distribution systems, the multiple sources must be scheduled to communicate with the processing nodes in a specific sequence that solves the finish-time optimization problem. When processing nodes have front-end processors, the nodes can compute and communicate at the same time, so both scenarios, processing nodes with and without front-ends, are considered. Numerical tests and simulations show that the multi-source multi-processor system improves significantly on single-source systems by reducing the system's minimal finish time.
In this paper, a monetary cost model is also proposed for estimating the cost of the overall computing power used by the system. The trade-off between monetary cost and minimal finish time is discussed for different situations, and detailed suggestions are given for users who have a time budget, a cost budget, or both.
In the past decade, parallel and distributed systems have come into very general use. To process large-scale, data-intensive loads, multiple processors are required to work in parallel. The most important task in such a scheduling problem is to assign different amounts of data to these parallel computers so that each partition is finished within an acceptable time range. Parallel systems are often used in areas with heavy computation requirements.
In order to study the processing of load for parallel and distributed computing, Divisible Load Theory (DLT) was created. It assumes that communication and computation loads can be partitioned arbitrarily among numerous processors and processed in parallel. In 1988, Cheng's paper first gave an intuitive proof of the Divisible Load Theory's optimality principle. Five years later, a formal proof was given and an extensive search to validate the result was run on an IBM mainframe. Since then, DLT has been developed and studied for multiple network topologies, including bus networks, star networks, tree networks, meshes, grids, etc. Nowadays, DLT has also been extended to further environments, including cloud networks and sensor networks. It has become a powerful tool for modeling data-intensive computational problems.
Potential applications of Divisible Load Theory can be widely found in the fields of image processing, video processing, sensor networks, cloud networks, etc. The following section gives more details of these applications.
1.2.1 Image Processing
Image feature extraction is a heavily used function in computer vision systems. There are two main phases of computation in image feature extraction. In the first phase, the image is segmented into many pieces that are processed independently and locally by different processors; during this procedure, the local features of the image are extracted. In the second phase, local features from different processors are exchanged and then processed to extract the desired features. The first phase of image feature extraction can be modeled with DLT, since the load is arbitrarily divisible and has no precedence relations.
1.2.2 Video Processing
Another application of DLT is video processing. With the rapid growth of digital TV and interactive media over broadcast networks, the need for high-performance computing in broadcasting is greater than ever, and parallel processing is one of the best ways to meet the need for processing considerable amounts of data. The DLT paradigm was first applied to the video encoding process in the design of a parallel video encoder, which was shown to achieve good performance. With the help of DLT, precisely modeling and minimizing the execution time of each phase of the video encoding process becomes an easy task.
1.2.3 Sensor Network
Wireless sensor networks (WSN) consist of spatially distributed autonomous sensors that monitor physical or environmental conditions, such as temperature, sound, and pressure, and cooperatively pass their data through the network to a main location.
The problem of load distribution in a large-scale sensor network has been defined as an optimization problem minimizing the overall finish time of the whole system. By finishing sensing tasks faster, a system obtains the returned results more quickly and also saves energy and monetary cost. Since the data collected by multiple sensors may have no precedence relations, it can be considered a divisible load. In this case, DLT can be applied to sensor network applications to improve their performance.
1.2.4 Cloud Network
Cloud computing is an on-demand service in which shared resources, information, software, and other devices are provided according to the client's requirements at a specific time. Cloud networks provide continuity for large-scale service-oriented applications. Users may require the whole cloud network to process a very data-intensive job. To process such a job, efficient load balancing techniques are needed, which involve reassigning the total load to the individual nodes of the collective system to make resource utilization effective and to improve the response time of the job. The Divisible Load Theory paradigm is a very powerful tool for solving the load balancing problem, as long as the load is arbitrarily divisible.
1.3 Motivations and Contributions
In most previous work on Divisible Load Theory, it is assumed that there is only one source storing the original data. This source node delivers the data fractions to each processing node in a sequential manner, with the result that many processing nodes sit idle while they have no data to process. This wastes computing resources and lowers efficiency. Nowadays, with the rapid development of networking and cloud computing, it is very practical to store the original data in different databanks and later send it to different processors for further computation.
It is also very practical to adapt the multi-source topology to traditional networks. In this paper, a two-level tree network fed by a data originator node is considered. The original data is stored in the data originator on the first layer, which contains only this one source node.
In the second layer of the tree topology there are a few source nodes, which receive load from the first layer and transmit it onward to the third layer. Finally, the third layer contains many processing nodes that perform the computing tasks in parallel.
This paper focuses mainly on the last two layers of this two-level tree network. Compared with previous work on multi-source systems, the study is separated into two cases: processing nodes equipped with front-end processors, and processing nodes without them. Closed-form solutions are found for both scenarios to achieve the overall minimal finish time of the system. Moreover, a monetary cost model is developed to estimate the cost charged for using the processors' computing power. The trade-off between monetary cost and system finish time (makespan) is given, and suggestions are discussed for users who have a monetary budget for using the system, or who must finish the task within a certain time range.
The rest of this proposal is organized as follows. Section 2 briefly introduces the basics of a classic scheduling problem using Divisible Load Theory. Section 3 studies a multi-source multi-processor network topology, divided into two cases: processing nodes with and without front-end processors. Section 4 contains the numerical tests and simulations for this parallel distribution system. The system speedup and performance analysis using Amdahl's Law are shown in section 5. Section 6 investigates the calculation of the overall monetary cost of the distribution system, with a detailed discussion of the trade-off between the minimal finish time and the monetary cost. The conclusion and future work appear in sections 7 and 8.
The following notation is used in this proposal:
α_{i,j} : The fraction of the divisible load that is assigned from source S_i to processor P_j.
z_i : The inverse communication speed of source S_i.
τ_i : The release time of source S_i.
w_j : The inverse computation speed of processor P_j.
c_j : The cost for processor P_j to work for one unit of time.
T_f : The finish time of processing the entire job.
J : The total job (amount of data) that needs to be distributed and processed.
C : The overall monetary cost for the entire system.
C_budget : The user's budget for the monetary cost.
T_budget : The user's budget for the finish time.
2 Basics for Divisible Load Theory (DLT)
This section mainly studies the basics for Divisible Load Theory. A fundamental model is given and a closed-form solution for the overall minimal finish time is presented.
2.1 Optimality Principle
Divisible Load Theory (DLT) is a methodology involving the linear and continuous modeling of partitionable computation and communication loads for parallel processing. A fundamental assumption underlies most divisible load studies: in order to achieve the minimal finish time, all of the processors should finish processing the load fractions assigned to them at the same time. If they did not, some of the unfinished data could always be shifted to the processors that have already finished. A formal proof of this assumption has been given in the literature.
2.2 Problem Formulation
A basic model using DLT is shown in Figure 1. The topology is a single-level tree network. The top layer is a source node, which stores all the data after receiving a work task. It transmits load partitions of different sizes to the second layer of the tree, where there are M processing nodes. These processors can compute in parallel once each of them receives its fraction of load. Each processor has a separate link connecting it to the source node for communication. Based on the assumption that the source can communicate with only one processor at a time, the communication between source and processors is arranged as follows:
First, the source node performs sequential communication: it communicates with the processors in the arranged order P_1, P_2, …, P_M. Second, to achieve a shorter finish time for the whole task, the processing nodes are sorted in descending order of computing speed, i.e., w_1 ≤ w_2 ≤ … ≤ w_M (note that w_j is the inverse computing speed of P_j). This lets the processors with faster computing speeds start processing earlier than the slower ones. As a result, the time at which all the processors finish processing, called the finish time, can be shorter.
The main problem is to find a load distribution plan that minimizes the overall system finish time T_f, taking into account all the processing nodes as well as the source node.
Figure 2 is the timing diagram for this basic load distribution system. Here, z denotes the inverse communication speed of the source and w_j is the inverse computing speed of processor P_j. The load fraction that the source sends to processor P_j is represented by α_j. The source node is not involved in the computing procedure. Each processor starts computing its fraction of load after finishing receiving it, and all processors stop processing at the same time instant to achieve the minimal finish time of the whole system.
There are several assumptions about the processors. First, a processor can only compute after it has finished its communication, unless it is equipped with a front-end processor. Second, the source can communicate with only one worker processor at a time. Lastly, there is no communication between the worker processors.
The timing diagram indicates that the source has continuous communication with the processors: once it finishes sending load to processor P_j, it immediately continues sending load to P_{j+1}. Since all of the processors finish processing at the same time, the following equations represent the finish time of each processor. For processor P_j, its finish time equals the communication time for the source to serve P_1 through P_j, plus the computing time of P_j:
Also, based on the definition that α_j is the fraction of load that the source sends to processing node P_j, the following equation can be written to normalize the total amount of load processed by this system:
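In one plausible notation (assuming α_j is the fraction sent to P_j, z is the source's inverse communication speed, w_j the inverse computing speed of P_j, T_f the common finish time, and J the total job; the symbols were lost in the source text), the two equations referred to above can be reconstructed as:

```latex
% Equal finish times for all processors, plus normalization.
T_f = z \sum_{k=1}^{j} \alpha_k + \alpha_j w_j , \qquad j = 1, \dots, M,
\qquad\qquad \sum_{j=1}^{M} \alpha_j = J .
```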
Since there are M unknowns and M independent linear equations (M − 1 equal-finish-time equations plus the normalization), the load partitions α_j can be uniquely solved, as can the system finish time T_f.
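As an illustrative sketch (not part of the original paper), the equal-finish-time equations can be solved by a short recursion: equating the finish times of adjacent processors gives each load fraction as a multiple of the first one, and the normalization fixes the scale. The function name and parameter values below are hypothetical.

```python
def dlt_single_source(z, w, J):
    """Solve the equal-finish-time equations for one source (inverse
    link speed z) feeding len(w) processors with inverse computing
    speeds w, for a total divisible job J.  Returns (alphas, T_f)."""
    # Setting T_j = T_{j+1} gives alpha_{j+1} = alpha_j * w[j] / (z + w[j+1]),
    # so every alpha can be expressed as a multiple of alpha_1.
    ratios = [1.0]
    for j in range(len(w) - 1):
        ratios.append(ratios[-1] * w[j] / (z + w[j + 1]))
    alpha_1 = J / sum(ratios)          # normalization: the alphas sum to J
    alphas = [alpha_1 * r for r in ratios]
    # Finish time of P_1: its communication time plus its computing time.
    T_f = alphas[0] * (z + w[0])
    return alphas, T_f
```

For example, z = 0.2, w = (2, 3, 4), and J = 100 give load fractions of roughly 48.3, 30.2, and 21.6, with all three processors finishing simultaneously.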
3 Scheduling for Multi-Source Multi-Processor System
In this section, a multi-source, multi-processor load distribution system is studied. The study is divided into two scenarios: processing nodes with and without front-end processors. Here, a front-end processor is a small sub-processor responsible for data collection and for the communication between a source node and its processing node. If a processing node is equipped with a front-end processor, it can start computing the data as soon as it starts receiving it through the front-end.
There is an assumption that it always takes much longer to compute the data than to transfer it. In this case, if a node continues receiving data, it can achieve continuous processing, which is assumed to be more efficient and more energy-aware.
There is also an assumption that the load which each source needs to send to its child processors has already been received from the job allocator by the time that source starts distributing load.
Meanwhile, since there are multiple sources (S_1, …, S_N) distributing load fractions in parallel, they are sorted in order to achieve a shorter finish time. The system always starts with the sources that have faster communication speeds, so that the processors can get their load fractions earlier. In this section the sources are sorted in descending order of communication speed, i.e., z_1 ≤ z_2 ≤ … ≤ z_N (note that z_i is the inverse communication speed of S_i). In this paper, the link speeds are determined by the communication speeds of the sources.
3.1 Scheduling with Front-end Processor
3.1.1 Network Topology
Figure 3 is the network topology for a multi-source multi-processor network. It is a two-level tree topology. Compared with the single-source single-level tree network shown in Figure 1, this network has one more layer: a job allocator/originator that stores all of the data needed for computing. The job allocator distributes fractions of load to the second layer, where there are N source nodes. The source nodes then divide the load into smaller fractions and allocate them to the M processing nodes on the third layer.
For each source node S_i, the amount of load it obtains from the job allocator equals the total load that S_i sends to all the processing nodes, i.e., the sum of its fractions α_{i,j} over j. By solving the minimization problem for the system finish time, all of the values α_{i,j}, where 1 ≤ i ≤ N and 1 ≤ j ≤ M, can be found. To simplify the problem, this paper therefore focuses mainly on the two lower layers of this network.
3.1.2 Problem Formulation
The timing diagram for the multi-source multi-processor distribution system is shown in Figure 4. Here, α_{i,j} denotes the fraction of load that source S_i sends to processor P_j. Source S_i can start sending load either as soon as the time reaches its release time τ_i, or once the previous source S_{i−1} finishes sending its load, whichever is later.
The order that each source distributes load fractions to processors is the same as the order that the processors are sorted, which means that processors with faster computing speed receive load earlier than the ones with slower computing speed. For the processors, the order that they receive load fractions from different sources matches the order that the sources are sorted.
Inspired by previous work, the following part discusses the constraints of this problem.
A. Constraints Introduced by Release Time
First, a new parameter is introduced for this system, the release time of a source. It is denoted by τ_i and gives the time at which source S_i first becomes available for use.
Figure 5 shows the case in which a processor P_j gets load fractions from adjacent sources S_i and S_{i+1}. The release times τ_i and τ_{i+1} may fall at any point before the first fraction is sent out from source S_i or S_{i+1}, respectively.
In order to achieve continuous computing on P_j, the start time of sending load fraction α_{i+1,j} must be the same as or earlier than the time at which P_j finishes processing the previous load fraction α_{i,j}. Also, the time at which source S_{i+1} starts sending its first load fraction must be no earlier than its release time τ_{i+1}.
From the discussion above, the following criteria for release time can be proved:
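One plausible form of these criteria, assuming s_{i,j} denotes the time at which source S_i starts sending fraction α_{i,j} (this notation is an assumption, since the original symbols were lost):

```latex
% P_j computes continuously through its front-end: the next source's
% fraction must start no later than the previous fraction finishes
% processing, and no source may transmit before its release time.
\tau_{i+1} \;\le\; s_{i+1,j} \;\le\; s_{i,j} + \alpha_{i,j}\, w_j
```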
B. Constraints Introduced by Continuous Processing
The timing diagram is shown as Figure 4. It indicates that there might be some gaps between adjacent load fractions. In order to study them, they can be divided into two categories: gaps on sources, and gaps on processors.
For gaps on sources, consider, for example, load fractions α_{i,j} and α_{i,j+1}, the two fractions that source S_i sends to processors P_j and P_{j+1} in sequence. A gap may appear when the distribution of α_{i,j} has already finished while P_{j+1} is still receiving a load fraction from the previous source.
For gaps on processors, consider, for example, load fractions α_{i,j} and α_{i+1,j}, the two fractions that processor P_j gets from sources S_i and S_{i+1} in sequence. As discussed above, continuous processing on all processors is required in order to be energy-efficient. The following constraints can be written based on Figure 6:
C. Constraints Introduced by Finish Time
As the system timing diagram shows, the finish time of each processor P_j equals the sum of two parts. The first part is the waiting time before P_j gets its first load fraction, i.e., the first source's release time plus the time that source spends distributing load fractions to processors P_1 through P_{j−1}. The second part is the total processing time for the node to finish all its tasks. Since there may be gaps during communication, the criterion is written as an inequality:
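A hedged reconstruction of this inequality, under the assumptions that each processor's first fraction comes from source S_1 and that a front-end node starts computing as soon as it starts receiving:

```latex
T_f \;\ge\; \tau_1 + z_1 \sum_{k=1}^{j-1} \alpha_{1,k}
        + w_j \sum_{i=1}^{N} \alpha_{i,j}, \qquad j = 1, \dots, M
```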
D. Constraints Introduced by Normalization
In order to normalize all the load fractions α_{i,j}, the parameter J, the total job, is used:
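In the α_{i,j} notation assumed throughout this rewrite, the normalization equation reads:

```latex
\sum_{i=1}^{N} \sum_{j=1}^{M} \alpha_{i,j} = J
```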
In conclusion, the optimization problem is defined as follows:
Given the number of sources (N) and the number of processors (M), with each source sending load fractions to all the processors in sequence and all the sources working in parallel, find the load fractions assigned to each processor from each source such that the total system finish time is minimized.
Minimize T_f such that:
In this problem, the variables are the system finish time T_f and the load fractions α_{i,j}, where 1 ≤ i ≤ N and 1 ≤ j ≤ M. So it is a linear programming problem with N · M + 1 variables, and its solution is a point in an (N · M + 1)-dimensional space.
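To make the formulation concrete, the sketch below solves a drastically simplified instance with scipy.optimize.linprog (assumed available): a single source with no release times or gaps, which reduces the constraint set to one finish-time inequality per processor plus the normalization. The variable layout (α_1, …, α_M, T_f) mirrors the (N · M + 1)-variable problem above; all names and values are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def dlt_linprog(z, w, J):
    """Toy LP version of the scheduling problem for a single source
    (inverse link speed z) and M processors (inverse computing speeds w).
    Variables: alpha_1..alpha_M and the finish time T; minimize T."""
    M = len(w)
    c = np.zeros(M + 1)
    c[-1] = 1.0                      # objective: minimize T
    A_ub = np.zeros((M, M + 1))
    for j in range(M):
        A_ub[j, :j + 1] = z          # communication up to and including P_j
        A_ub[j, j] += w[j]           # ...plus P_j's computing time
        A_ub[j, -1] = -1.0           # ...must not exceed T
    b_ub = np.zeros(M)
    A_eq = np.ones((1, M + 1))
    A_eq[0, -1] = 0.0                # normalization involves only the alphas
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[J],
                  bounds=[(0, None)] * (M + 1))
    return list(res.x[:M]), float(res.x[-1])
```

At the optimum all finish-time constraints are tight, reproducing the equal-finish-time principle of section 2.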
3.2 Scheduling without Front-end Processor
3.2.1 Network Topology
This section discusses the case in which the processing nodes are not equipped with front-end processors. In this case, a node can only start processing its data once all of its data has been received. The network topology remains the same as in Figure 3; however, the timing diagram changes, as Figure 7 shows.
3.2.2 Problem Formulation
From the timing diagram it can be found that gaps may appear in the communication phase. In this research, two new parameters mark the starting and ending times of each load-fraction transmission:
s_{i,j} : The time at which source S_i starts distributing load fraction α_{i,j} to processor P_j.
e_{i,j} : The time at which source S_i finishes distributing load fraction α_{i,j} to processor P_j.
A. Constraints Introduced by the Amount of Load for Each Load Fraction
Based on the definitions of s_{i,j} and e_{i,j}, the following equation measures the length of each load transmission between a source and a processor:
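With s_{i,j} and e_{i,j} as defined above, the transmission-length equation is plausibly:

```latex
e_{i,j} - s_{i,j} = \alpha_{i,j}\, z_i
```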
B. Constraints Introduced by s_{i,j} and e_{i,j} on Processors
Figure 8 shows the relationship between two adjacent load transmissions received by processor P_j. Sequential communication implies that S_{i+1} has to wait until S_i finishes distributing its load to P_j:
C. Constraints Introduced by s_{i,j} and e_{i,j} on Sources
Figure 9 shows the relationship between two adjacent loads transmitted by source S_i. Similar to the previous constraint, P_{j+1} has to wait until P_j finishes receiving its load from S_i:
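The two adjacency constraints (B above for processors, C for sources) can plausibly be reconstructed as:

```latex
% On processor P_j, transmissions arrive in source order:
s_{i+1,j} \;\ge\; e_{i,j}
% On source S_i, transmissions depart in processor order:
s_{i,j+1} \;\ge\; e_{i,j}
```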
D. Constraints Introduced by Release Time
First, the release time of the first source equals the starting time of distributing load fraction α_{1,1}:
Second, the start time of the first load-fraction transmission by each source must be equal to or later than the release time of that source:
To make full use of each source, it should keep distributing load at least until the next source first becomes available at its release time:
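In the same notation, the three release-time constraints are plausibly (the keep-busy condition is an assumption about the lost equation):

```latex
s_{1,1} = \tau_1, \qquad
s_{i,1} \ge \tau_i \quad (i = 2, \dots, N), \qquad
e_{i,M} \ge \tau_{i+1} \quad (i = 1, \dots, N-1)
```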
E. Constraints Introduced by Finish Time
Each processing node starts processing right after it finishes receiving all its data from the sources. So the finish time of each processor equals the end time of the last load-fraction transmission it receives plus its computing time. Since this is an optimization problem, the finish time is written as an inequality:
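A hedged reconstruction of the finish-time inequality, with S_N the last source to transmit to each processor:

```latex
T_f \;\ge\; e_{N,j} + w_j \sum_{i=1}^{N} \alpha_{i,j}, \qquad j = 1, \dots, M
```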
F. Constraints Introduced by Normalization
As in the previous case, the parameter J, the total job, is used to normalize all the load fractions α_{i,j}:
Here is the summary of the optimization problem:
Given the number of sources (N) and the number of processors (M), with each source sending load fractions to all the processors in sequence and all the sources working in parallel, find the load fractions assigned to each processor from each source such that the total system finish time is minimized. Minimize T_f such that:
Similar to the case already studied, the variables are the system finish time T_f, the load fractions α_{i,j}, and the starting and ending times s_{i,j} and e_{i,j} of each load-fraction transmission. So it is also a problem that can be solved by linear programming techniques.
4 Simulation and Numerical Tests
This section presents multiple simulation tests to demonstrate the improvement of the multi-source multi-processor distribution system over a regular single-source system. A numerical test is first presented to show a simple case. Further simulations then show how the system finish time changes as the number of sources and processors increases.
4.1 Numerical Test
In this numerical test, two distribution systems are created: one in which all processors are equipped with front-end processors and one in which they are not. The parameters used are listed in Table 1 and Table 2.
z_i = (0.2, 0.4);  τ_i = (10, 50);  w_j = (2, 3, …, 6);  J = 100
Figure 10 summarizes the load fractions that each source sends to each processor. For Figure 11, the loads that the first and second sources send to each processor are added, and the total amount of load that each processor computes is plotted. It is clear that the processors with faster computing speeds do more of the processing work than the slower ones. By using the faster processors more than the slower ones, the system minimizes the finish time.
z_i = (0.2, 0.2);  τ_i = (0, 5);  w_j = (2, 3, 4);  J = 100
4.2 Finish Time Versus Increasing Number of Sources and Processors
z_i = (0.5, 0.6, 0.7);  τ_i = (2, 3, 4);  w_j = (1.1, 1.2, 1.3, …, 3);  J = 100
Figure 12 shows three cases in which the system has one, two, or three sources. Here none of the processors are equipped with front-end processors. The x-axis is the number of processors working in the distribution system, and the y-axis is the system minimal finish time in seconds. All parameters used are the same as in Table 3.
As the figure shows, adding more sources to the system reduces the overall finish time, since the added sources help distribute load to the processors faster. Increasing the number of processors in the system also reduces the finish time, because more processing resources are introduced and the whole system can compute the data faster. As the number of processors grows, however, the influence of adding more becomes smaller, since the newly added processors have slower computing speeds; compared with the faster processors added earlier, they improve the system less significantly. The parameters used in this simulation are listed in Table 3. The simulation result for the system with front-end processors is similar to Figure 12.
4.3 Finish Time Versus Different Job Sizes
Figure 13 demonstrates how the minimal finish time changes as the total job size varies. The distribution system with front-end processors was used in this simulation, with three sources and up to 20 processors. The parameters are the same as in Table 3, except that three different job sizes are used. Naturally, the larger the job size, the longer the system needs to compute it. Based on Figure 13, the multi-source multi-processor system shows a much more significant improvement when the job size is larger. For the case in which the job size equals 500, about 50 percent of the finish time is saved by increasing the number of processors from three to seven. This is evidence that the multi-source multi-processor job distribution system can significantly improve the performance of any large data center, sensor network, cloud network, etc.
5 Speedup and System Performance Analysis
In recent decades, Amdahl's Law has been widely used as a formula for the theoretical speedup of the execution of a task with a fixed workload. Performance levels of different systems can be compared by applying Amdahl's Law to the same workload.
5.1 Introduction of Amdahl’s Law
Amdahl's Law was first formulated by G. M. Amdahl in 1967 to discuss whether it is practical and efficient to use a multiplicity of processors, rather than a single processor, to achieve better performance. In his work, a performance metric called "speedup" was used to predict the theoretical speedup of execution time when using multiple processors. It is the ratio of the solution time on one processor, T(1), to the solution time on multiple processors, T(p):
Since the main goal of this paper is to study the improvement from using multiple sources in the load distribution system compared with traditional single-source systems, a new equation is used to express the speedup of using p sources and n processors over q sources and n processors.
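As a small illustration (with hypothetical values, not the paper's simulation data), the ratio definition of speedup and the classic serial-fraction form of Amdahl's Law can be written as:

```python
def speedup(t_baseline, t_parallel):
    """Speedup as the ratio of the baseline solution time to the
    parallel solution time, e.g. T(q, n) / T(p, n) for p > q sources."""
    return t_baseline / t_parallel

def amdahl(f, p):
    """Amdahl's Law: theoretical speedup on p processors when a
    fraction f of the workload is parallelizable."""
    return 1.0 / ((1.0 - f) + f / p)
```

With a fully parallelizable workload (f = 1) the speedup equals p; any serial fraction caps the speedup at 1/(1 − f) no matter how many processors are added.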
5.2 Speedup Analysis and Simulations
In this problem, both the number of source nodes and the number of processing nodes can be increased. To adapt Amdahl's Law, either the number of sources or the number of processors is fixed, and the system optimal finish time, which corresponds to the solution time in Amdahl's Law, is compared with the finish time obtained using fewer nodes.
The simulation results are plotted in Figure 14 with the data in Table 4. The distribution system without front-end processors was used. In order to highlight the improvement from increasing the number of processors and sources, homogeneous nodes are used throughout this simulation.
z_i = (0.5, 0.5, …, 0.5);  τ_i = (0, 0, …, 0);  w_j = (2, 2, …, 2);  J = 100
Figure 14 shows the system finish time for systems using 1, 2, 3, 5, and 10 sources and 1 to 18 processors. The x-axis indicates the increasing number of processors and the y-axis indicates the minimal system finish time, solved with the method discussed in the previous section.
Figure 15 is the speedup of the system using multiple sources and processors compared with single source and the corresponding number of processors. This plot was drawn with Equation 16 and the simulation values in Figure 14. The x-axis indicates the increasing number of processors and the y-axis indicates the speedup of the system with the corresponding number of sources and processors.
Figure 15 shows that adding more sources to the system increases the speedup. For example, the speedup for the system with 2 sources and 12 processors is around 1.59, compared with 1.90, 2.21, and 2.49 when using 3, 5, and 10 sources (also with 12 processors). In this example, the speedup value for 3 sources is an improvement of 19% over the 2-source case, and the speedup value for 10 sources is an improvement of 57%.
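The percentage improvements quoted above follow directly from the plotted speedup values; a quick check, using the values from the example in the text:

```python
def improvement(s_new, s_old):
    """Relative speedup improvement, rounded to a whole percent."""
    return round(100 * (s_new - s_old) / s_old)
```

For instance, improvement(1.90, 1.59) compares the 3-source and 2-source speedups at 12 processors.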
The relatively low speedup values observed here are due to inefficiencies of the sequential distribution protocol.
Meanwhile, by observing each fitted line of the speedup values, one can see that for a fixed number of sources the speedup also grows gradually as the number of processors increases.
These observations further prove that the multi-source multi-processor system provides improvement for the load distribution system by reducing the system minimal finish time and boosting the system speedup level.
6 Trade-off Analysis for Minimal Finish Time and Monetary Cost
In this section, since the computing of jobs requires a great amount of computing power, the concept of monetary cost is introduced to measure the cost of hiring the processors' computing power; monetary cost has been studied previously. A trade-off analysis is presented, with several suggested plans for users who have a budget on monetary cost, who have to finish processing the total data within a certain finish time, or who have budgets for both money and time. In this section all simulations use the network in which the processing nodes are equipped with front-end processors.
6.1 Definition of Monetary Cost
The term monetary cost refers to the cost of using sources or processors to process the load. This paper focuses mainly on the monetary cost of the processors. The monetary cost of processor P_j is denoted by c_j, with units of cost per unit time. So the total cost for P_j to process load fraction α_{i,j} is c_j · α_{i,j} · w_j. The total monetary cost C for the entire system to finish processing job J is:
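A minimal sketch of this cost model (the α_{i,j} layout and the numeric values used in the usage note are illustrative, not from the paper's tables):

```python
def total_cost(alphas, w, c):
    """Total monetary cost of the system.  alphas[i][j] is the load that
    source i sends to processor j; processor j computes its total load
    in time w[j] * load and is billed at c[j] cost units per unit time."""
    M = len(w)
    return sum(c[j] * w[j] * sum(row[j] for row in alphas)
               for j in range(M))
```

For example, with two sources sending loads [[10, 20], [30, 40]], inverse speeds w = (2, 3), and rates c = (5, 4), processor 1 computes 40 units and processor 2 computes 60 units.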
In this paper, there is an assumption that the faster processors have a higher monetary cost, i.e., c_1 ≥ c_2 ≥ … ≥ c_M.
6.2 Trade-off Analysis with a Cost Budget
This section discusses how many processors should be used given a cost budget C_budget. Although both the number of sources and the number of processors influence the results, this paper mainly discusses the computing cost incurred by the processors, so the number of sources is fixed at two. The parameters used in this section are listed in Table 6.
z_i = (0.5, 0.6);  τ_i = (2, 3);  w_j = (1.1, 1.2, …, 3);  c_j = (29, 28, …, 10);  J = 100
STEP 1. Plot the Number of Processors vs. Total Cost
First, the relationship between the number of processors and the total cost is plotted in Figure 16. The x-axis is the number of processors used in the distribution system, and the y-axis is the total cost of computing, in dollars. Naturally, the total computing cost grows as the number of processors increases, but the growth rate becomes smaller. This is because, when the finish-time optimization problem is solved, the slower and cheaper processors are assigned much smaller amounts of load, so they have less influence on the total cost.
As an example, suppose the budget for the system monetary cost is 3450 dollars. By inspecting the list of total costs, the two closest solutions can be found:
Using 6 processors: the total computing cost is about 3433.77 dollars;
Using 7 processors: the total computing cost is about 3451.67 dollars.
In this case, every solution using six or fewer processors is within the budget of 3450 dollars.
STEP 2. Plot the Number of Processors VS. Finish Time and the Gradient of Finish Time
Second, Figure 17 is plotted to show the relationship between the number of processors and the system minimal finish time.
Figure 18 shows the gradient of the finish time. Letting $T(m)$ denote the minimal finish time with $m$ processors, the gradient is defined as:

$$\mathrm{Grad}(m) = \frac{T(m-1) - T(m)}{T(m-1)} \times 100\%$$

The gradient of the finish time shows by what percentage adding one more processor makes the whole system finish the job faster. In this test result, $\mathrm{Grad}(5)$ is about 8.4%, and $\mathrm{Grad}(6)$ is about 5.3%.
STEP 3. The Trade-off Plan
Now let us discuss a trade-off plan. When the number of processors increases, the finish time decreases but the monetary cost increases, so there is a trade-off between finish time and monetary cost. Suppose that whenever adding one more processor reduces the finish time by less than 6%, the user prefers using fewer processors to reduce the monetary cost rather than further shaving a finish time that is already low. In this way, the user can be advised how many processors to use while staying within the cost budget. In this example, the user should use 5 processors.
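The 6% stopping rule combined with the cost budget amounts to a small search. The function `choose_processors` and the sample curves below are hypothetical illustrations, not the paper's simulation data:

```python
def choose_processors(finish, cost, budget, threshold=0.06):
    """finish[n-1] and cost[n-1] are the minimal finish time and total cost
    when using n processors. Stay within the cost budget, and stop adding
    processors once the next one cuts the finish time by less than
    `threshold` (the 6% rule from the text)."""
    best = 1
    for n in range(2, len(finish) + 1):
        if cost[n - 1] > budget:      # next configuration is too expensive
            break
        grad = (finish[n - 2] - finish[n - 1]) / finish[n - 2]
        if grad < threshold:          # diminishing returns: stop here
            break
        best = n
    return best

# Hypothetical monotone finish-time and cost curves (cf. Figures 16-17):
finish = [100, 90, 80, 73, 67, 63.5, 61]
cost = [10, 20, 30, 40, 50, 60, 70]
n_best = choose_processors(finish, cost, budget=1000)
```

With these sample curves the sixth processor improves the finish time by only about 5.2%, so the search stops at five processors, mirroring the reasoning above.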
6.3 Trade-off Analysis with a Time Budget
This section discusses how many processors should be used given the maximal time within which the total job must be distributed and processed, called the time budget. The simulation results from the last section are used here.
First, the user increases the number of processors from 1 to $m$, where $T(m)$ is within the time budget but $T(m-1)$ is not. Since the finish time decreases as the number of processors increases, and the time budget is the maximum finish time the user allows, every solution with $m$ or more processors meets the requirement. For example, when the time budget is 32 seconds, every solution with 10 or more processors meets the requirement.
Meanwhile, since the total computation monetary cost increases with the number of processors, a user who wants to save money should use as few processors as possible, namely $m$. In the example, the user should pick 10 processors.
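Under a pure time budget the rule is simply the smallest feasible configuration. A minimal sketch, with a hypothetical helper name and sample data:

```python
def min_processors_for_deadline(finish, t_budget):
    """Smallest processor count whose finish time meets the deadline.
    Since cost rises with n, the cheapest feasible choice is the smallest n."""
    for n, t in enumerate(finish, start=1):
        if t <= t_budget:
            return n
    return None  # even the largest configuration misses the deadline

# Hypothetical finish times for 1..4 processors, deadline of 32 seconds:
n_min = min_processors_for_deadline([50, 40, 33, 31], t_budget=32)
```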
6.4 Trade-off Analysis with Both a Time Budget and a Cost Budget
In this section, both the time budget and the cost budget are considered. By combining the two graphs of number of processors versus finish time and versus total cost, the solution area that meets both requirements is highlighted.
CASE 1. The Two Solution Areas Overlapped
In Figure 19, the x-axis is the number of processors involved in the test, the left y-axis is the total cost for the processors, and the right y-axis is the overall system minimal finish time. As Figure 19 shows, the solution area for the cost budget is highlighted in blue, and the solution area for the time budget is highlighted in orange. The two areas overlap where the number of processors varies from 6 to 12, so every system with 6 to 12 processors satisfies both the cost budget and the time budget.
CASE 2. There is No Overlap Between Two Solution Areas
As in the last figure, in Figure 20 the x-axis is the number of processors involved in the test, the left y-axis is the total cost for the processors, and the right y-axis is the overall system minimal finish time. In Figure 20, the two solution areas, one for the cost budget and one for the time budget, are again highlighted. Since there is no overlap between them, no solution satisfies both the cost budget and the time budget. The user has to either increase the amount of money spent on processing the whole job, or wait a longer time for the system to finish processing.
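Both budgets together amount to intersecting the two solution sets. The sketch below uses hypothetical data; an empty result corresponds to Case 2:

```python
def feasible_range(finish, cost, t_budget, c_budget):
    """Processor counts meeting both budgets; an empty list means the user
    must relax one of the budgets (Case 2 in the text)."""
    return [n for n, (t, k) in enumerate(zip(finish, cost), start=1)
            if t <= t_budget and k <= c_budget]

# Hypothetical curves: time budget admits large n, cost budget small n.
finish = [50, 40, 30, 25, 22]
cost = [10, 20, 30, 40, 50]
overlap = feasible_range(finish, cost, t_budget=32, c_budget=45)
```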
7 Conclusion
This paper studies the load distribution and finish time optimization problem for a multi-source, multi-processor network based on a two-level tree network topology. The study was divided into two scenarios: processing nodes equipped with front-end processors and without them. Numerical tests and simulation results showed that the multi-source system improves greatly on the single-source system, since the overall system minimal finish time is reduced significantly. Moreover, increasing either the number of sources or the number of processors reduces the finish time further. A monetary cost model was then proposed to account for the computing power used by the system. Finally, since monetary cost and minimal finish time have a trade-off relationship, three trade-off plans were demonstrated: 1. the user has a cost budget; 2. the user has a time budget; 3. the user has both a cost budget and a time budget.
8 Future Work
In this paper, it is assumed that if a source or processor has to communicate with multiple nodes, it does so sequentially, meaning that it can communicate with only one node at a time. However, with the rapid growth of technology, sources and processors that communicate with several nodes simultaneously, subject to a bandwidth limitation, are now common. In future work, the bandwidth parameters should be modified to see how much further the system can be improved.
On the other hand, a more complicated but realistic scenario may have multiple jobs arriving at the processing nodes during the processing phase, which makes the processing speed time-varying. The sources' communication speed can also be time-varying due to the injection of other job-distribution tasks. Combining the current study with this scenario would be very valuable.
Another interesting topic is the combination of Divisible Load Theory and Amdahl's Law. Amdahl's law is a formula for the maximum improvement achievable by improving a particular part of a system. In parallel computing, it is mainly used to predict the theoretical maximum speedup of a program processed on multiple processors. As a useful tool for predicting speedup, it may help uncover new methods for improving parallel systems while adapting the current study to more complicated network topologies.
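Amdahl's law is compact enough to state directly. Here $p$ is the parallelizable fraction of the work and $n$ the number of processors; the function name is illustrative:

```python
def amdahl_speedup(p, n):
    """Amdahl's law: maximum speedup when a fraction p of the work is
    parallelisable across n processors and (1 - p) remains serial."""
    return 1.0 / ((1.0 - p) + p / n)

# With 95% parallel work, even unlimited processors cap the speedup near
# 1 / (1 - 0.95) = 20x, which bounds what adding processors can achieve.
limit = 1.0 / (1.0 - 0.95)
```

The serial-fraction cap is what makes the law complementary to divisible load analysis, which assumes perfectly divisible (fully parallel) work.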
-  Cheng, Y.C. and Robertazzi, T.G., Distributed Computation with Communication Delays, IEEE Transactions on Aerospace and Electronic Systems, Vol. 24, Issue 6, Nov. 1988, pp. 700-712.
-  Sohn, J. and Robertazzi, T.G., Optimal Load Sharing for a Divisible Job on a Bus Network, IEEE Transactions on Aerospace and Electronic Systems, Vol. 32, Issue 1, Jan. 1996, pp. 34-40.
-  Kim, H.J., Jee, G.-I. and Lee, J.G., Optimal Load Distribution for Tree Network Processors, IEEE Transactions on Aerospace and Electronic Systems, Vol. 32, Issue 2, April 1996, pp. 607-612.
-  Drozdowski, M. and Glazek, W., Scheduling Divisible Loads in a Three-Dimensional Mesh of Processors, Parallel Computing, Vol. 25, Issue 4, April 1999, pp. 381-404.
-  Veeravalli, B., Ghose, D., Mani, V. and Robertazzi, T.G., Scheduling Divisible Loads in Parallel and Distributed Systems, IEEE Computer Society Press, 1996.
-  Ping, L., Veeravalli, B. and Kassim, A.A., Design and Implementation of Parallel Video Encoding Strategies Using Divisible Load Analysis, IEEE Transactions on Circuits and Systems for Video Technology, Vol. 15, Issue 9, Sept. 2005, pp. 1098-1112.
-  Li, X., Liu, X. and Kang, H., Sensing Workload Scheduling in Sensor Networks Using Divisible Load Theory, Global Telecommunications Conference (GLOBECOM '07), IEEE, 2007, pp. 785-789.
-  Li, X., Kang, H. and Cao, J., Coordinated Workload Scheduling in Hierarchical Sensor Networks for Data Fusion Applications, Journal of Computer Science and Technology, Vol. 23, 2008, pp. 355-364.
-  Daniel, D. and Lovesum, S.P.J., A Novel Approach for Scheduling Service Request in Cloud with Trust Monitor, IEEE International Conference on Signal Processing, Communication, Computing and Networking Technologies, 2011, pp. 509-513.
-  Wang, X., Wang, B. and Huang, J., Cloud Computing and Its Key Techniques, Computer Science and Automation Engineering (CSAE), Vol. 3, IEEE, 2011, pp. 404-410.
-  Yang, Y., Choi, J.Y., Choi, K.M., Gannon, P.M. and Kim, D.S., BioVLAB-Microarray: Microarray Data Analysis in Virtual Environment, Proc. IEEE E-Science, 2008, pp. 159-165.
-  Armbrust, M., Fox, A., Griffith, R., Joseph, A.D., Katz, R.H., Konwinski, A., Lee, G., Patterson, D.A., Rabkin, A., Stoica, I. and Zaharia, M., Above the Clouds: A Berkeley View of Cloud Computing, University of California, Berkeley, Tech. Rep. No. UCB/EECS-2009-28, 2009.
-  Mell, P. and Grance, T., The NIST Definition of Cloud Computing, NIST Special Publication 800-145, NIST, US Department of Commerce, 2011.
-  Wong, H.M., Yu, D., Veeravalli, B. and Robertazzi, T.G., Data Intensive Grid Scheduling: Multiple Sources with Capacity Constraints, Proc. 15th Int'l Conf. Parallel and Distributed Computing and Systems, 2003.
-  Yu, D. and Robertazzi, T.G., Multi-Source Grid Scheduling for Divisible Loads, Proc. 40th Annual Conference on Information Sciences and Systems, IEEE, 2006, pp. 188-191.
-  Abdullah, M. and Othman, M., Cost-Based Multi-QoS Job Scheduling Using Divisible Load Theory in Cloud Computing, Procedia Computer Science, Vol. 18, 2013, pp. 928-935.
-  Suresh, S., Huang, H. and Kim, H.J., Scheduling in Compute Cloud with Multiple Data Banks Using Divisible Load Paradigm, IEEE Transactions on Aerospace and Electronic Systems, Vol. 51, No. 2, April 2015, pp. 1288-1297.
-  Shokripour, A. and Othman, M., Survey on Divisible Load Theory and Its Applications, International Conference on Information Management and Engineering, 2009, pp. 300-304.
-  Robertazzi, T.G., Ten Reasons to Use Divisible Load Theory, Computer, Vol. 36, Issue 5, May 2003, pp. 63-68.
-  Amdahl, G.M., Validity of the Single Processor Approach to Achieving Large Scale Computing Capabilities, Proceedings of the AFIPS Conference, 1967, pp. 483-485.
-  Amdahl, G.M., Computer Architecture and Amdahl's Law, Computer, Vol. 46, Issue 12, Dec. 2013, pp. 38-46.
-  Agrawal, R. and Jagadish, H.V., Partitioning Techniques for Large-Grained Parallelism, Proceedings of the Seventh Annual International Phoenix Conference on Computers and Communications, March 1988, pp. 31-38.
-  Sohn, J., Robertazzi, T.G. and Luryi, S., Optimizing Computing Costs Using Divisible Load Analysis, IEEE Transactions on Parallel and Distributed Systems, Vol. 9, No. 3, March 1998, pp. 225-234.
-  Charcranoon, S., Robertazzi, T.G. and Luryi, S., Parallel Processor Configuration Design with Processing/Transmission Costs, IEEE Transactions on Computers, Vol. 49, No. 9, Sept. 2000, pp. 987-991.