
QoS aware Automatic Web Service Composition with Multiple objectives

by Soumi Chattopadhyay et al.

With an increasing number of web services, providing an end-to-end Quality of Service (QoS) guarantee in responding to user queries is becoming an important concern. Multiple QoS parameters (e.g., response time, latency, throughput, reliability, availability, success rate) are associated with a service, making service composition with a large number of candidate services a challenging multi-objective optimization problem. In this paper, we study the multi-constrained multi-objective QoS aware web service composition problem and propose three different approaches to solve it: one optimal, based on Pareto front construction, and two others based on heuristically traversing the solution space. We compare the performance of the heuristics against the optimal approach, and show the effectiveness of our proposals over other classical approaches for the same problem setting, through experiments on the WSC-2009 and ICEBE-2005 datasets.



1 Introduction

In recent times, web services have become ubiquitous with the proliferation of Internet usage. A web service is a software component that takes a set of inputs, performs a specific task and produces a set of outputs. A set of non-functional quality of service (QoS) parameters (e.g., response time, throughput, reliability) is associated with a web service. These QoS parameters determine the performance of a service. Sometimes, a single web service falls short of responding to a user query; therefore, service composition [1, 2] is required. During service composition, multiple services are combined in a specific order, based on their input-output dependencies, to produce a desired set of outputs. While providing a solution in response to a query, it is also necessary to ensure fulfillment of end-to-end QoS requirements [3], which is the main challenge in QoS aware service composition [4, 5, 6].

A large body of literature in service composition deals with optimization of a single QoS parameter [7, 8], especially, response time or throughput. However, a service may have multiple QoS parameters; therefore, the service composition problem turns out to be a multi-objective optimization problem. Though optimality of the end solution is the primary concern in multi-objective service composition, computing the optimal solution is time consuming. This has led to another popular research theme around multi-constrained service composition [9, 10], where a constraint is specified on each QoS parameter and the objective is to satisfy all the QoS constraints in the best possible way.

Two different models have been considered in the service composition literature, namely, the workflow based model (WM) [11, 12] and the input-output dependency based model (IOM) [7]. The salient features of both models are discussed in Table I. Most of the research proposals on multiple-QoS aware service composition have considered WM [13, 14]. In general, the methods proposed for WM cannot solve the problem in IOM, since in IOM, in addition to the QoS values of a service, the input-output dependencies between the services also need to be considered. Approaches [15, 16] that consider IOM typically transform the multiple objectives into a single objective and generate a single optimal solution instead of the Pareto optimal solutions [17]. A weighted sum is used to convert multiple objectives into a single objective. However, finding the weights is a challenging task.

Query specification. WM: a workflow, i.e., a set of tasks to be performed in a specific order. IOM: a set of given query inputs and a set of desired query outputs.

Query objective. WM: to serve the query by selecting a service for each task so that the overall QoS values are optimized, where the service repository contains a set of functionally equivalent services for each task. IOM: to serve the query by identifying a set of services that are activated, directly (by the query inputs) or indirectly (by the outputs of the services that are directly or indirectly activated by the query inputs), and can produce the query outputs.

Search space. WM: determined by the number of tasks and the number of functionally equivalent services for each task, i.e., the total number of services that can participate to serve the query. IOM: exponential (in the worst case) in the total number of services that can be activated by the query inputs.

Complexity. WM: for a single QoS parameter, finding the optimal solution takes polynomial time [18]; for multiple QoS parameters, finding the Pareto optimal solutions is NP-hard [19]. IOM: though response time and throughput, when treated as individual parameters, can each be optimized in polynomial time, some of the other parameters (e.g., reliability, price, availability) require exponential time procedures even for single parameter optimization; simultaneous optimization of multiple parameters is a hard problem [20].

TABLE I: Salient features of WM and IOM

In this paper, we study the multi-objective QoS-aware web service composition problem in IOM. To the best of our knowledge, there is not much work in IOM on multiple QoS aware service composition based on Pareto front construction. Considering the parameters individually, instead of combining them in a weighted sum, has a major significance, since it can deal with users having various QoS preferences. Additionally, we consider multiple local and global constraints on different QoS parameters. Our major contributions are as follows:

•  We first propose an optimal algorithm that constructs the Pareto optimal solution frontier satisfying all QoS constraints for the multi-objective problem in IOM. We theoretically prove the soundness and completeness of our algorithm.

•  Additionally, we propose two heuristics. The first employs a beam search strategy [21], while the other is based on the non-dominated sorting genetic algorithm (NSGA).

•  To demonstrate the time-quality trade-off, we perform extensive experiments on the benchmarks ICEBE-2005 [23] and WSC-2009 [24]. Additionally, we compare our proposed methods with [25], which addresses the composition problem in IOM using a single-objective weight transformation.

This paper is organized as follows. Section 2 compares and contrasts our model and proposed approaches with the existing literature. Section 3 presents background concepts and formulates our problem. Sections 4-7 present our proposals, Section 8 presents experimental results, and Section 9 concludes the work.

2 Related Work

Automatic service composition [26, 27] is a fundamental problem in services computing. A significant body of research has been carried out on QoS-aware service composition considering a single QoS parameter, especially response time or throughput [28, 29, 30]. Multiple-QoS aware service composition has been discussed in [31]. We first discuss related work regarding the models, followed by the solution approaches for multiple QoS aware web service composition.

2.1 Problem Models

The two most popular models considered in the literature are the workflow based model (WM) and the input-output dependency based model (IOM). The salient features of these two models are discussed in Table I. In WM, it is assumed that a task can be accomplished by a single web service. In practice, however, this may not always be the case: sometimes more than one service may be required to perform a particular task. Therefore, the input-output dependency based model has become popular.

It may be noted that existing solution approaches for WM are unable to solve the composition problem in IOM. This is mainly because of the following reasons:

  • In WM, a workflow is provided as an input, whereas, in IOM, no workflow is provided, rather the aim of IOM is to find out a flow of services to serve the query so that the overall QoS values are optimized.

  • In general, while selecting a service in WM, only the QoS values of the service need to be taken care of. In contrast, while selecting a service in IOM, not only the QoS values of a service but also its input-output dependencies on the other services need to be taken into account.

  • In WM, the number of services does not vary across all solutions, while, in IOM, the number of services varies across the solutions to a query.

However, methods that can solve the composition problem in IOM can solve the composition problem in WM. Moreover, the search space of WM is a subset of the search space of IOM. We now discuss different solution models.

2.2 Solution Models and Approaches

We classify below the different solution models and discuss the approaches existing in literature.

Scalarization (SOO): To deal with multiple QoS aware service composition, [12, 32, 33] have resorted to scalarization techniques, which convert multiple objectives into a single objective using the weighted sum method. In [25], the authors proposed a planning graph based approach and an anytime algorithm that attempts to maximize the utility in IOM. A scalarization technique in WM was proposed in [12]. Though scalarization techniques are simple and easy to implement, some information may be lost in the transformation from multiple objectives to a single objective. Moreover, finding the weights of the parameters is difficult. User preferences are required to decide the weights, and these preferences are not always easy to identify. Even when the preferences over the parameters are obtained, it is not easy to quantify them into weights, which has a great impact on finding the optimal solution.

Single-objective multi-constrained optimization (SOMCO): To overcome the shortcomings of scalarization techniques, researchers have looked at another popular approach, namely, single-objective multi-constrained optimization [19, 34, 35]. In this approach, one parameter is selected as the primary parameter to be optimized, while a worst case bound (often termed a constraint) is set for each of the remaining parameters. For example, in [36], the authors analyzed the relation between multi-objective service composition and the Multi-choice, Multi-dimension 0-1 Knapsack Problem (MMKP) in WM and used the weighted sum approach to compute the utility function. The objective of [36] is to maximize the total utility while satisfying different QoS constraints. In [19] and [34], the authors proposed multi-constrained QoS aware service composition approaches in WM, instead of finding the optimal solutions. In [19], an Integer Linear Programming (ILP) based approach was proposed, where ILP is used to divide the global constraints into a set of local constraints, and the local constraints are then used to select a service for each task in WM. In [37, 3], ILP based methods are used to solve multi-constrained service composition in WM. Dynamic binding is the main concern of [34], where the authors proposed to generate the skyline services for each task in WM and cluster the services using the K-means algorithm. In [35], an ILP based multi-constrained service composition approach was proposed in IOM. In this class of methods as well, selecting the primary parameter to optimize (with constraints on the rest) is a challenging problem and often depends on user preferences. Moreover, determining the constraint values is not an easy task, and a poor choice may lead to no solution being generated (i.e., no solution exists that satisfies all the constraints).

Pareto optimal front construction (POFC): To address the above challenges, another research direction based on constructing the Pareto optimal frontier has been proposed. A Pareto front consists of a set of solutions in which each solution is better in at least one QoS value than every other solution in the front, i.e., no solution in the front dominates another. This approach does not require identifying the user preferences over the QoS parameters; therefore, it can easily deal with users having different preferences. To the best of our knowledge, most of the work based on Pareto front construction [37] focuses on WM. For example, in [17], the authors proposed to generate the Pareto optimal solutions in a parallel setting. In [38], the authors proposed a fully polynomial time approximation method to solve the problem. A significant amount of work has been done based on evolutionary algorithms [39, 40], such as Particle Swarm Optimization [41], Ant Colony Optimization [42], Bee Colony Optimization [43], Genetic Algorithms [44, 45], and NSGA2 [46, 47]. In this paper, we consider the Pareto front construction model on IOM.

Table II summarizes the state-of-the-art methods considering different models that have been discussed above.

Models Methods
WM-SOO [12]
WM-SOMCO [19, 34, 36, 3, 44, 45, 43, 41, 42, 37]
WM-POFC [17, 31, 39, 46, 47, 38, 40]
IOM-SOO [25, 32, 33]
TABLE II: State of the Art regarding Models

2.3 Novelty of our work and contributions

In contrast to the above, we consider this problem in IOM-POFC setting. In addition, we have considered local and global constraints on QoS parameters. In Section 3, we formally describe our model.

The search space of the composition problem addressed in this paper is exponential, as discussed earlier. Therefore, we first reduce the search space of our algorithm using clustering, as demonstrated in [48, 32]. On the reduced search space, we propose an optimal algorithm using a graph based method. In the literature, graph based methods [33, 29, 30] are mainly applied either to solve the service composition problem for single parameter optimization or to solve the multiple QoS aware problem using scalarization. In this paper, we apply the graph based approach to construct a Pareto optimal solution frontier. The optimal algorithm is an exponential time procedure and often does not scale for large scale composition. Therefore, we further propose two heuristic algorithms.

Our first heuristic algorithm is based on the beam search technique, which is applied in [25] to solve the multiple QoS aware problem using scalarization. Here, we use beam search to find Pareto optimal solutions. Since our algorithm is a heuristic, it does not guarantee optimal solutions. However, we show that the solution quality monotonically improves with increasing beam width.

Our second heuristic algorithm is based on NSGA. Though multiple evolutionary algorithms exist in the literature [44, 45, 46, 47] to solve multiple QoS aware optimization, all these methods, to the best of our knowledge, solve the problem in WM. We adapt NSGA to find solutions in IOM. Moreover, in each step of the NSGA based algorithm, we ensure that the solutions generated are functionally valid.

3 Background and Problem Formulation

In this section, we discuss some background concepts for our work. We begin with a classification of QoS parameters.

Definition 3.1.

[Positive / Negative QoS parameter:] A QoS parameter is called a positive (negative) QoS parameter, if a higher (lower) value of the parameter implies better performance.

Reliability, availability, throughput are examples of positive QoS parameters, while response time, latency are examples of negative QoS parameters.

Consider two services s1 and s2 being compared with respect to a QoS parameter q, and let q(s1) and q(s2) denote the respective values of q for s1 and s2. We have the following cases:

  • s1 is better than s2 with respect to q implies:

    • If q is a positive QoS parameter, q(s1) > q(s2).

    • If q is a negative QoS parameter, q(s1) < q(s2).

  • s1 is as good as s2 with respect to q implies q(s1) = q(s2), irrespective of whether q is positive or negative.

  • s1 is at least as good as s2 with respect to q implies either s1 is better than s2 or s1 is as good as s2 with respect to q.

The QoS parameters are further classified into four categories based on the aggregate functions used for composition: maximum, minimum, addition, multiplication.
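As an illustrative sketch, the four aggregate categories can be expressed as ordinary functions. The mapping below is our own assumption for the running example's parameters (RT, T, R), not prescribed by the paper:

```python
from math import prod

# Hypothetical mapping from QoS parameter to its aggregation function.
AGGREGATORS = {
    "response_time": sum,    # additive: sequential response times add up
    "throughput":    min,    # minimum: the bottleneck service limits flow
    "reliability":   prod,   # multiplicative: success probabilities multiply
}

def aggregate(param, values):
    """Combine per-service values of one QoS parameter into a composite value."""
    return AGGREGATORS[param](values)

# Two services in sequence with response times 500 ms and 300 ms,
# reliabilities 0.93 and 0.79:
print(aggregate("response_time", [500, 300]))            # 800
print(round(aggregate("reliability", [0.93, 0.79]), 4))  # 0.7347
```

Which function applies also depends on the composition structure; e.g., response times add along a sequential path, while parallel branches would take the maximum.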

A query is specified in terms of a set of input-output parameters. We now present the concept of eventual activation of a web service for a given query. A web service is activated when the set of inputs of the service is available in the system; for example, a service in Table III is activated as soon as its input parameter becomes available. A service s is eventually activated by a set of input parameters I, if s is either directly activated by I itself or indirectly activated by the outputs of the set of services that are eventually activated by I, as shown in Example 3.1. In the next subsection, we formally discuss the model considered in this paper and our objective.
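The eventual-activation closure described above can be sketched as a simple fixed-point computation; the repository and service names below are hypothetical:

```python
def eventually_activated(services, query_inputs):
    """Forward-chain from the query inputs: a service is activated once all of
    its inputs are available; its outputs then become available in the system."""
    available = set(query_inputs)
    activated = set()
    changed = True
    while changed:
        changed = False
        for name, (inputs, outputs) in services.items():
            if name not in activated and set(inputs) <= available:
                activated.add(name)          # directly or indirectly activated
                available |= set(outputs)    # its outputs become available
                changed = True
    return activated

# Hypothetical repository: s2 is directly activated by input "a";
# s1 is indirectly activated by s2's output "b".
repo = {"s1": ({"b"}, {"c"}), "s2": ({"a"}, {"b"}), "s3": ({"z"}, {"y"})}
print(sorted(eventually_activated(repo, {"a"})))  # ['s1', 's2']
```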

3.1 Problem Formulation

The service composition problem considered in this paper can be formally described as below:

  • A set of web services

  • For each service, a set of inputs and a set of outputs

  • A set of QoS parameters

  • For each service, a tuple of QoS values, one value per QoS parameter

  • A set of aggregation functions, one defined for each QoS parameter

  • A query, specified by a set of inputs and a set of requested outputs

  • Optionally, a set of local QoS constraints and a set of global QoS constraints

A constraint denotes a bound on the worst case value of a QoS parameter. While the local constraints are applicable to a single service, the global constraints are applicable to a composition solution (both are illustrated in Example 3.1).

The objective of multi-objective QoS constrained service composition is to serve the query in a way such that the QoS values are optimized, while ensuring that functional dependencies are preserved and all local and global QoS constraints are satisfied. Since multiple (and often disparate) QoS parameters are involved, this calls for classical multi-objective optimization, and we address this challenge in this work. In this paper, we propose an optimal solution construction methodology. Often, a single solution may not be the best with respect to all the QoS parameters. Therefore, instead of producing a single solution, our method generates a set of Pareto optimal solutions, as described in the following sections.

3.2 Running Example

We now present an illustrative example for our problem.

Example 3.1.

Table III shows a brief description of the services in a service repository, their inputs, outputs and values of response time (in ms), throughput (number of service invocations per minute) and reliability (in percentage) in the form of a tuple (RT, T, R).

Services Inputs Outputs (RT, T, R)
(500, 7, 93%)
(600, 13, 69%)
(350, 4, 97%)
(475, 3, 85%)
(1300, 15, 81%)
(700, 19, 90%)
(1100, 9, 80%)
(1100, 6, 73%)
(300, 13, 79%)
(800, 9, 78%)
(1300, 3, 65%)
(900, 7, 83%)
(400, 9, 93%)
(750, 5, 79%)
(700, 17, 91%)
(500, 13, 90%)
(150, 5, 86%)
(400, 2, 73%)
(300, 3, 81%)
(1500, 12, 94%)
(900, 14, 97%)
(1700, 14, 87%)
(1100, 10, 80%)
(1700, 12, 81%)
(1400, 13, 83%)
(1900, 7, 80%)
(1500, 11, 92%)
(1100, 15, 94%)
(500, 17, 72%)
(350, 12, 74%)
TABLE III: Description of example services

Consider a query with a given set of inputs and a set of desired outputs. The objective is to find a solution to the query in such a way that the values of the QoS parameters are optimized (i.e., minimizing response time, maximizing throughput and reliability). It may be noted that a single solution may not be able to optimize all the QoS parameters. Therefore, multiple solutions need to be generated, each optimizing different QoS parameters.

The services that are eventually activated by the query inputs are shown in Figure 1. The services at the first layer of Figure 1 are directly activated by the query inputs, while the services at the subsequent layers are indirectly activated by the query inputs. Each ellipse represents the input parameters available in the system at a particular point of time. Additionally, we have the following set of constraints.

  • Each service participating in the solution must have a reliability value greater than 70%.

  • The reliability of the solution must be more than .

  • The response time of the solution must be less than s.

Fig. 1: Response to the query

In this paper, we demonstrate the Pareto optimal solutions construction method given the above scenario using this example.

4 Solution Architecture

In the following, we first define a few terminologies to build up the foundation of our work.

Definition 4.1.

[Dominating Service:] A service s1 with QoS tuple Q1 dominates another service s2 with QoS tuple Q2, if, for every QoS parameter, s1 is at least as good as s2, and for at least one QoS parameter, s1 is better than s2. s1 is called the dominating service and s2 is dominated.

Example 4.1.

Consider the services in Table III with QoS tuples (700, 19, 90%) and (1100, 9, 80%) respectively. The former dominates the latter, since it has a lower response time, a higher throughput and a higher reliability.

Definition 4.2.

[Mutually Non Dominated Services:] Two services s1 and s2 are said to be mutually non-dominated, if neither dominates the other, i.e., neither is a dominating service.

Example 4.2.

Consider the services in Table III with QoS tuples (700, 19, 90%) and (500, 7, 93%) respectively. The two services are mutually non-dominated: the first has a higher throughput, while the second has a lower response time and a higher reliability.

Definition 4.3.

[Skyline Service Set:] Given a set of services S, the skyline service set S* is the subset of S such that the services in S* are mutually non-dominated and each service in S outside S* is dominated by at least one service in S*.

Example 4.3.

Consider a set of four services from Table III with QoS tuples (500, 7, 93%), (350, 4, 97%), (600, 13, 69%) and (750, 5, 79%) respectively. The first three services constitute the skyline service set: they are mutually non-dominated, while the fourth service is dominated by the first (which has a lower response time, a higher throughput and a higher reliability).

The skyline service set for a given set of services is unique.

Definition 4.4.

[Non Dominated Tuple:] Given a set of QoS tuples T, a tuple t in T is called non dominated, if there does not exist any tuple t' in T such that t' is better than t.

A QoS tuple t' is better than t implies that each QoS parameter value in t' is at least as good as the corresponding value in t, while at least one QoS parameter value in t' is better than in t, where the terms "at least as good as" and "better than" are used with the same meaning as defined earlier in the context of comparing two services.

Example 4.4.

Consider the set of QoS tuples {(500, 7, 93%), (350, 4, 97%), (600, 13, 69%), (750, 5, 79%)}. The first three tuples constitute the set of non dominated tuples, since they are mutually non dominated, while (750, 5, 79%) is dominated by (500, 7, 93%).
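A non dominated tuple set as defined above can be computed by pairwise dominance checks. A minimal sketch for (RT, T, R) tuples, where only RT is a negative parameter:

```python
# Per-position flag for (RT, T, R): True means lower is better (negative QoS).
NEGATIVE = (True, False, False)

def at_least_as_good(a, b):
    """a is at least as good as b in every position."""
    return all((x <= y) if neg else (x >= y)
               for x, y, neg in zip(a, b, NEGATIVE))

def dominates(a, b):
    """a is at least as good everywhere and strictly better somewhere."""
    return at_least_as_good(a, b) and a != b

def non_dominated(tuples):
    return [t for t in tuples
            if not any(dominates(u, t) for u in tuples)]

tuples = [(500, 7, 0.93), (350, 4, 0.97), (600, 13, 0.69), (750, 5, 0.79)]
print(non_dominated(tuples))  # the first three tuples survive
```

This quadratic filter suffices for illustration; the same dominance test would be applied at each construction step of the algorithms described later.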

As discussed earlier, a composition solution with respect to a query is a collection of services that are eventually activated by the query inputs and together produce the query outputs. During this process of activation, all functional dependencies are preserved. We now define different characterizations of a solution.

Definition 4.5.

[Feasible Solution:] A composition solution is feasible if it satisfies all local and global constraints.

Example 4.5.

Consider the service descriptions and the query discussed in Example 3.1. One candidate solution executes some services in parallel and composes them sequentially with others; its aggregate QoS tuple satisfies none of the global constraints and one of its services violates the local constraint as well, so the solution is not feasible. Another candidate solution, whose aggregate QoS tuple satisfies both the local and the global constraints, is feasible.
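Feasibility checking of this kind can be sketched as follows; the 70% local reliability bound follows the running example, while the global bounds and the purely sequential aggregation are our own assumptions:

```python
from math import prod

def is_feasible(services, min_local_rel=0.70,
                max_total_rt=3000, min_total_rel=0.50):
    """services: list of (response_time_ms, reliability) pairs, composed
    sequentially. Local bound per service; global bounds on the aggregate."""
    if any(rel <= min_local_rel for _, rel in services):
        return False                       # a local constraint is violated
    total_rt = sum(rt for rt, _ in services)
    total_rel = prod(rel for _, rel in services)
    return total_rt < max_total_rt and total_rel > min_total_rel

print(is_feasible([(500, 0.93), (300, 0.79)]))   # True
print(is_feasible([(500, 0.93), (600, 0.69)]))   # False: 0.69 fails the local bound
```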

Definition 4.6.

[Non Dominated Solution:] A composition solution S with QoS tuple Q is a non-dominated solution, if and only if there does not exist another solution S' with QoS tuple Q' such that Q' is better than Q for at least one QoS parameter and at least as good as Q for the rest of the parameters.

In other words, S has a better value for at least one QoS parameter than every other solution S'.

Definition 4.7.

[Pareto Front:] The set of non-dominated solutions with respect to a query is called the Pareto front.

In a multi-objective composition problem, we may not find a single solution which is optimal in all respects, rather, we may find a Pareto front consisting of a set of non-dominated solutions. The feasible solutions obtained from the Pareto front constitute the optimal solution space of our problem.

We now present an optimal solution generation technique. Our proposal has two main phases: a preprocessing phase and a run-time computation phase. The aim of the preprocessing phase is to reduce the number of services participating in solution construction, while the main aim of the run-time computation phase is to compute the solution in response to a query. Below, we explain our proposal in detail.

4.1 Preprocessing phase

The motivation behind preprocessing the web services is to reduce the run-time computation. We first define the notion of equivalent services, which serve as the foundation.

Definition 4.8.

[Equivalent Services:] Two services w1 and w2 are equivalent, if the inputs of w1 are the same as the inputs of w2, and the outputs of w1 are the same as the outputs of w2.

Example 4.6.

Consider the first two services of Table III. They have identical sets of inputs and identical sets of outputs and are therefore equivalent.

Here, we apply the clustering technique proposed in [48]. As the first step of preprocessing, we compute the set of equivalent services. Each equivalence class forms a cluster, while the set of equivalence classes of a given set of web services forms a partition of that set. Therefore, the clusters are mutually exclusive and collectively exhaustive. We first divide the services in the service repository into multiple clusters and represent each cluster by a single representative service. Once the services are clustered, we find the skyline service set for each cluster. The set of skyline services is used for our run-time service composition step.

The representative service corresponding to each cluster is associated with multiple QoS tuples, one for each service of the skyline service set. The main aim of this step is to prune the search space. Since the number of clusters must be less than or equal to the number of services in the service repository, the number of services reduces after this preprocessing. After preprocessing, we are left with a set of representative services, each associated with a set of QoS tuples. The equality condition holds only when the service repository does not contain any equivalent services, in which case the preprocessing phase cannot reduce the number of services.
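A minimal sketch of this preprocessing, assuming each service is described by (inputs, outputs, QoS tuple) and using the tuples of the running example's first cluster:

```python
from collections import defaultdict

def cluster(services):
    """Group services into equivalence classes by their (inputs, outputs)
    signature. services: {name: (inputs, outputs, qos_tuple)}."""
    groups = defaultdict(list)
    for name, (ins, outs, qos) in services.items():
        groups[(frozenset(ins), frozenset(outs))].append((name, qos))
    return groups

def dominates(a, b):
    """(RT, T, R) dominance: RT lower is better; T and R higher is better."""
    ok = a[0] <= b[0] and a[1] >= b[1] and a[2] >= b[2]
    return ok and a != b

def skyline(qos_tuples):
    return [t for t in qos_tuples if not any(dominates(u, t) for u in qos_tuples)]

# Hypothetical names; the tuples match the first row of Table IV.
repo = {
    "s1": ({"a"}, {"b"}, (500, 7, 93)),
    "s2": ({"a"}, {"b"}, (350, 4, 97)),
    "s3": ({"a"}, {"b"}, (600, 13, 69)),
}
for sig, members in cluster(repo).items():
    print(len(members), skyline([q for _, q in members]))
```

Here all three tuples are mutually non-dominated, so the representative service keeps all three, matching Table IV's first row.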

Example 4.7.

If we cluster the services shown in Table III, the number of services reduces from 30 to 12. Table IV shows the clustered service set. The first column of Table IV presents the representative service for each cluster, while the second column shows the cluster itself. Finally, the third column indicates the QoS tuples of the skyline services of the cluster. Consider the first cluster, shown in the first row of Table IV. Its skyline service set contains three services; therefore, the representative service is associated with the three corresponding QoS tuples.

Representative Cluster (RT, T, R)
Web Service
(500, 7, 93%), (350, 4, 97%), (600, 13, 69%)
(700, 19, 90%)
(1100, 9, 80%)
(300, 13, 79%)
(400, 9, 93%)
(700, 17, 91%), (500, 13, 90%)
(150, 5, 86%)
(900, 14, 97%)
(1700, 14, 87%), (1400, 13, 83%)
(1100, 10, 80%), (1700, 12, 81%)
(1100, 15, 94%)
(500, 17, 72%), (350, 12, 74%)
TABLE IV: Description of services after preprocessing

Consider the query in Example 3.1. The number of services reduces from 30 to 12. The number of QoS tuples reduces from 30 to 18.

The preprocessing step helps to prune the search space by removing some services. No useful solution in terms of QoS values is lost in preprocessing, as stated formally below.

Lemma 1.

The preprocessing step is Pareto optimal solution preserving in terms of QoS values.

All proofs are compiled in the Appendix.

4.2 Dependency graph construction

The composition solutions are generated at run-time in response to a query. To find a response to a query, a dependency graph G = (V, E) is constructed first. The dependency graph is a directed graph, where V is the set of nodes and E is the set of edges. Each node corresponds to a service that is eventually activated by the query inputs, and each directed edge (v1, v2) represents a direct dependency between two services, i.e., the service corresponding to node v1 produces an output which is an input of the service corresponding to node v2. Each edge is annotated with the input-output parameters of the services. Each solution to a query is either a path or a subgraph of G [28].

The dependency graph is constructed using the algorithm illustrated in [33]. While constructing the dependency graph, here we additionally validate the local and global constraints. While the local constraints are validated once, when a service is selected for the first time, the global constraints are validated in each step of the solution construction. When an activated service is selected for node construction, the service is first validated against the set of local and global constraints. Each representative service corresponds to a set of skyline services. If any service from the skyline services violates any local / global constraint, we disregard that service by removing its corresponding QoS tuple from the tuple set of the representative service. If the tuple set becomes empty, we do not construct any node corresponding to the representative service. It may be noted that if a service violates any of the global constraints, any solution that includes the service also violates the global constraint.

Example 4.8.

Consider Example 3.1. To respond to the query, while constructing the dependency graph, four services are activated from the query inputs at first. It may be noted that one of them is associated with three QoS tuples, (500, 7, 93%), (350, 4, 97%) and (600, 13, 69%), out of which (600, 13, 69%) violates the local constraint, since its reliability is less than 70%. Therefore, while validating this service, the third tuple, i.e., (600, 13, 69%), is removed from its tuple set.

During dependency graph construction, the set of services that can be activated by the query inputs is identified first. With the set of identified services, the dependency graph is constructed. Finally, backward breadth first search (BFS) is used on the dependency graph to identify the set of nodes that are required to produce the set of query outputs. The remaining nodes are removed from the graph.

Fig. 2: Dependency Graph in response to a query (a) generated from query inputs (b) after backward traversal
Example 4.9.

Consider the query in Example 3.1. Figure 2 shows the dependency graph constructed over the services described in Table IV in response to the query. Figure 2(a) shows the dependency graph constructed from the query inputs, while Figure 2(b) shows the one generated after removal of unused nodes. In Figure 2(a), the nodes marked in red represent the services that do not take part in producing the query outputs.
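The construction just illustrated (forward activation from the query inputs, edge creation from input-output overlap, and backward pruning of unused nodes) can be sketched as follows; the three-service repository is hypothetical:

```python
def build_dependency_graph(services, q_in, q_out):
    """services: {name: (inputs, outputs)}. Returns (nodes, edges) of the
    pruned dependency graph for query inputs q_in and outputs q_out."""
    # Forward phase: activate services whose inputs are all available.
    available, active = set(q_in), []
    while True:
        new = [s for s, (i, o) in services.items()
               if s not in active and set(i) <= available]
        if not new:
            break
        for s in new:
            active.append(s)
            available |= set(services[s][1])
    # An edge (u, v) exists when an output of u is an input of v.
    edges = {(u, v) for u in active for v in active
             if set(services[u][1]) & set(services[v][0])}
    # Backward phase: keep only nodes reachable backward from the
    # services that produce the query outputs.
    needed = {s for s in active if set(services[s][1]) & set(q_out)}
    frontier = set(needed)
    while frontier:
        preds = {u for (u, v) in edges if v in frontier} - needed
        needed |= preds
        frontier = preds
    return needed, {(u, v) for (u, v) in edges if u in needed and v in needed}

repo = {"s1": ({"a"}, {"b"}), "s2": ({"b"}, {"c"}), "s3": ({"a"}, {"d"})}
nodes, edges = build_dependency_graph(repo, {"a"}, {"c"})
print(sorted(nodes), sorted(edges))  # s3 is pruned, like the red nodes in Fig. 2(a)
```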

If the dependency graph contains a loop, we identify the loop and break the cycle [7]. Finally, we partition the dependency graph into multiple layers using the approach of [33], where a node v belongs to a layer Li if, for every edge (u, v), the node u belongs to some layer Lj with j < i. The first layer consists of a single start node. Finally, we introduce dummy nodes in each layer if necessary, as demonstrated in [33], to ensure that each node in a layer is connected only to nodes in its immediate predecessor layer or its immediate successor layer. We assume that each dummy node has a QoS tuple with the best value for each QoS parameter. If a solution to a query contains any dummy node, the dummy node is removed from the solution before the solution is returned. The above assumption ensures that after removal of the dummy nodes, the QoS values of the solution remain unchanged. In the next section, we discuss the feasible Pareto optimal solution frontier generation technique.

5 Pareto Front Construction

To find the feasible Pareto optimal solutions, we transform the dependency graph into a layered path generation graph (LPG). The LPG is a directed acyclic graph with a set of nodes and a set of edges, where each LPG node consists of a set of nodes of the dependency graph. A directed edge exists from one LPG node to another if each service corresponding to a node of the latter is activated by the outputs of the services corresponding to the nodes of the former. Similar to the dependency graph, the LPG also contains two dummy nodes: a start node and an end node, corresponding to the start node and the end node of the dependency graph respectively. We assume that each dummy node has a QoS tuple with the best value for each QoS parameter. While constructing the LPG, we simultaneously compute the Pareto optimal solution frontier and validate the global constraints. We first define the notion of a cumulative Pareto optimal tuple.
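As a rough illustration, an LPG node can be represented as a set of dependency-graph services together with the two QoS tuple sets maintained per node; the field names below are our own, not the paper's notation:

```python
from dataclasses import dataclass, field

@dataclass
class LPGNode:
    """A node of the layered path generation graph: a group of
    dependency-graph services activated together, plus the two tuple
    sets maintained per node during front construction."""
    services: frozenset                                    # dependency-graph services
    non_dominated: list = field(default_factory=list)      # this node's QoS tuples
    cumulative_front: list = field(default_factory=list)   # Pareto front up to this node
    successors: list = field(default_factory=list)         # LPG edges out of this node

end_node = LPGNode(services=frozenset({"W3"}))
```

The service name `W3` is hypothetical; each `LPGNode` instance gets fresh (non-shared) lists thanks to `default_factory`.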

Definition 5.1.

[Cumulative Pareto Optimal Tuple:] A set of non dominated QoS tuples, generated due to the composition of a set of services during an intermediate step of the solution construction, is called a cumulative Pareto optimal tuple.

The cumulative Pareto optimal tuples generated at the final step of the solution construction constitute the Pareto front. For each node of the LPG, we maintain two sets of QoS tuples: a set of non dominated tuples and a set of cumulative Pareto optimal tuples. We now discuss the construction of the LPG.
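The non dominated filtering can be sketched for three-objective tuples like those of Example 3.1, assuming response time and latency are minimized and reliability is maximized:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one. The first two components (response time,
    latency) are minimized; the third (reliability) is maximized."""
    no_worse = a[0] <= b[0] and a[1] <= b[1] and a[2] >= b[2]
    strictly = a[0] < b[0] or a[1] < b[1] or a[2] > b[2]
    return no_worse and strictly

def non_dominated(tuples):
    """Return the subset of QoS tuples not dominated by any other."""
    return [t for t in tuples
            if not any(dominates(u, t) for u in tuples if u != t)]

# The three tuples of Example 3.1: (350, 4, 0.97) dominates the others.
front = non_dominated([(500, 7, 0.93), (350, 4, 0.97), (600, 13, 0.69)])
```

This quadratic scan is a sketch; for large tuple sets, sorting-based Pareto filtering would be preferable.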

1: Input: the dependency graph and the query
2: Construct the end node of the LPG corresponding to the end node of the dependency graph
3: Queue.Insert(end node of the LPG)
4: while the queue is not empty do
5:        Remove the first node from the queue
6:        Construct the set of predecessor nodes of the removed node
7:        for each predecessor node do
8:              if the predecessor node has not been constructed earlier then
9:                      Compute the set of non dominated tuples corresponding to the predecessor node
10:                    if a tuple violates any global constraint then
11:                          Remove the tuple from the set
12:                    end if
13:                    if the set becomes empty, then continue
14:                    Queue.Insert(predecessor node)
15:             end if
16:             Construct an edge from the predecessor node to the removed node
17:             Construct the cumulative Pareto front of the predecessor node by combining its non dominated tuples with the cumulative Pareto front of the removed node
18:             if a tuple of the cumulative front violates any global constraint then
19:                    Remove the tuple from the cumulative front
20:             end if
21:       end for
22: end while
Algorithm 1 Graph Conversion and Solution Generation
Fig. 3: Conversion of Dependency Graph to LPG

To construct the LPG, we traverse the dependency graph in a backward direction, starting from its end node. We begin the transformation by constructing the end node of the LPG, which consists of the end node of the dependency graph. During the procedure, we maintain a FIFO (First In, First Out) queue. The following steps convert the dependency graph to the LPG:

Fig. 4: Pareto Optimal Front Construction
  • The first node is removed from the queue.

  • The set of predecessor nodes of is constructed.

  • For each predecessor node, the temporary Pareto optimal front up to that node is constructed, or modified for already existing nodes.

  • Each Pareto optimal QoS tuple up to the predecessor node is validated against the global constraints. If any global constraint is violated, the tuple is removed from the Pareto front.

  • Each predecessor node is inserted in the queue, if it is not already present there.

We briefly elaborate each step below. We first insert the end node of the LPG in the queue and then continue the procedure until the queue becomes empty. In each step, we remove a node from the queue on a FIFO basis and construct its predecessor nodes as described below.

Consider a node of the LPG consisting of a set of nodes of the dependency graph, and consider the set of inputs required to activate the services corresponding to those nodes. For each required input, we compute the set of dependency-graph nodes from which an edge annotated by that input is incident to at least one node in the considered set. We then compute a set of combinations consisting of one node from each of these provider sets. We now define the notion of a redundant service.

Definition 5.2.

[Redundant Service:] A service belonging to a solution in response to a query is redundant if the solution obtained by removing that service is still a solution to the query.

We make the following assumption: a solution containing redundant services cannot be better, in terms of QoS values, than the same solution with those services removed. Two provider sets may not be mutually exclusive, since the service corresponding to one node may produce more than one required input. Therefore, once we consider a combination consisting of one node from each provider set, we do not need to consider any combination that is a superset of it. For each remaining combination, we construct a node and an edge of the LPG.
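The superset-free combination generation can be sketched as follows; the provider sets and service names used are illustrative:

```python
from itertools import product

def minimal_combinations(provider_sets):
    """Pick one provider per required input, collapse each choice to a
    set (providers may cover several inputs), and keep only minimal
    sets: any combination that is a superset of another contains a
    redundant service (Definition 5.2) and is discarded."""
    candidates = {frozenset(choice) for choice in product(*provider_sets)}
    return [c for c in candidates
            if not any(other < c for other in candidates)]  # strict subset

# "A" provides both inputs, so {A, B} is a superset of {A} and is dropped.
combos = minimal_combinations([{"A", "B"}, {"A"}])  # [frozenset({'A'})]
```

This mirrors Example 5.1: two disjoint provider sets of size two yield four minimal combinations, while overlapping sets shrink the result.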

Example 5.1.

Fig. 3(b) shows the LPG generated from the dependency graph in Fig. 3(a). Consider the end node of Fig. 3(b), whose required set of inputs is the set of query outputs. We get 4 combinations of the providing nodes and construct a node for each combination, along with the corresponding edges. Consider another node of the LPG. For its required set of inputs, we get 2 combinations; however, one combination is a superset of the other. Therefore, we disregard the superset combination and construct a node corresponding to the remaining combination, along with the corresponding edge.

We now prove the following lemma.

Lemma 2.

Each path from to in represents a solution to the query in terms of functional dependencies.

Once a node of the LPG is constructed, we construct the set of Pareto optimal tuples corresponding to it. Consider a node of the LPG consisting of a set of nodes of the dependency graph. The QoS tuples corresponding to those nodes are combined into a new set of tuples. Each tuple in this set is then validated against the set of global constraints; if a tuple violates any of them, it is removed from the set. If no tuple satisfies the global constraints, we disregard the node. Otherwise, we compute the set of non dominated tuples from the combined set and associate these with the node. Now suppose a node is removed from the queue and another node is created as its predecessor. If the predecessor already exists in the queue, we do not need to recompute its set of non dominated tuples. Once the set of non dominated tuples corresponding to the predecessor is constructed, we construct the cumulative Pareto optimal solutions up to it.

In order to find the Pareto front up to a predecessor node, we combine the tuples in the Pareto front constructed up to the node removed from the queue with the set of non dominated tuples of the predecessor. The combined tuples are verified against the global constraints, and if a tuple violates any of them, we remove it from the combined set. Finally, we compute the cumulative Pareto optimal solutions up to the predecessor from the set of combined tuples and its already existing cumulative Pareto front. The Pareto front constructed at the start node of the LPG constitutes the feasible Pareto optimal solutions to the query.
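A sketch of combining a cumulative front with a node's non dominated tuples under global constraints; we assume here, purely for illustration, that times and latencies add and reliabilities multiply along a composition, which may differ from the paper's exact aggregation rules:

```python
def combine(front_a, front_b, constraints):
    """Combine a cumulative Pareto front (front_a) with a node's non
    dominated tuples (front_b): response times and latencies add,
    reliabilities multiply (an assumed aggregation).  Tuples violating
    a global constraint (max time, max latency, min reliability) are
    dropped before re-selecting the non dominated subset."""
    max_rt, max_lat, min_rel = constraints
    combined = [(a[0] + b[0], a[1] + b[1], a[2] * b[2])
                for a in front_a for b in front_b]
    feasible = [t for t in combined
                if t[0] <= max_rt and t[1] <= max_lat and t[2] >= min_rel]

    def dominated(t):
        return any(u[0] <= t[0] and u[1] <= t[1] and u[2] >= t[2] and u != t
                   for u in feasible)
    return [t for t in feasible if not dominated(t)]

# Starting from the end node's best tuple, only the dominating tuple survives.
front = combine([(0, 0, 1.0)],
                [(500, 7, 0.93), (350, 4, 0.97)],
                (1000, 10, 0.90))
```

The constraint bounds (1000, 10, 0.90) are hypothetical values chosen for the example.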

Example 5.2.

Fig. 4 shows the feasible Pareto front generation method on an LPG. The set of initial non dominated tuples of the end node consists of a single tuple, initialized with the best value of each QoS parameter. The cumulative Pareto optimal front of the end node also consists of the same tuple.

Now consider a node that is created as a predecessor of the end node. The set of non dominated tuples corresponding to it is constructed first. The cumulative Pareto front up to it is constructed next by combining the cumulative Pareto front up to the end node with its set of non dominated tuples, followed by selecting the Pareto front from the combined set. In the next iteration, when the same node is constructed as a predecessor of another node, its set of non dominated tuples is not recomputed. However, its cumulative front is modified: the cumulative Pareto front up to the other node and its set of non dominated tuples are combined first, and the Pareto front is then selected from the combined set and the already existing cumulative front. It may be noted that the cumulative Pareto front up to one of the nodes violates the global constraints; hence, that node is disregarded from the graph. The final solution path is marked by the bold line.

Algorithm 1 presents the formal algorithm for constructing the feasible Pareto optimal solution in response to a query. We now prove the following lemma.

Lemma 3.

Algorithm 1 is complete.

Fig. 5: Heuristic Solution Construction
Lemma 4.

Algorithm 1 is sound.

The search space of this algorithm is exponential in the number of services required to serve a query, which limits its scalability to large service repositories. In the next section, we propose two scalable heuristics.

6 A Heuristic Approach

We first discuss the limitation of the solution presented in the previous section. It is easy to see that Step 6 of Algorithm 1, where the set of predecessors of a node is constructed, may explode. Consider the following example.

Example 6.1.

Consider a node of the LPG that requires 10 inputs, where each input is provided by 10 nodes of the dependency graph. The number of possible predecessor nodes of this node is then 10^10.

If the number of inputs of a node, or the number of nodes providing an input, increases, the number of predecessor nodes grows exponentially. Our heuristic addresses this issue; its main motivation is to reduce the search space of the original problem. On one hand, we attempt to reduce the number of combinations generated at Step 6 of Algorithm 1; on the other, we restrict the number of nodes generated at a particular level of the LPG. Our approach is based on the notion of anytime algorithms [25] using beam search. Beam search uses breadth-first search to build its search space; however, at each level of the graph, it stores only a fixed number of nodes, called the beam width. The greater the beam width, the fewer the nodes pruned.

While constructing the LPG, all predecessor nodes of the set of nodes at a particular level are computed, as earlier. However, only a subset of these nodes is stored, depending on the beam width of the algorithm: if the number of nodes generated at a level exceeds the beam width, only beam-width many of them are stored. The nodes are selected based on the values of the cumulative Pareto optimal tuples computed up to each of them. At each level, the selected set of nodes is ranked, where a node with a smaller rank has higher priority than a node with a larger rank. The last level of the dependency graph consists of the end node. The selection criteria for choosing nodes from a level are discussed below.
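The beam pruning step can be sketched as follows; since the concrete utility function is defined later in the section, it is left here as a caller-supplied scalarization over a node's cumulative tuples:

```python
def beam_select(level_nodes, beam_width, utility):
    """Keep only the best `beam_width` nodes at a level, ranked by a
    utility computed over each node's cumulative Pareto tuples.
    level_nodes: list of (name, cumulative_front) pairs (illustrative
    representation); utility: callable mapping a front to a score."""
    if len(level_nodes) <= beam_width:
        return list(level_nodes)          # nothing to prune
    ranked = sorted(level_nodes, key=lambda n: utility(n[1]), reverse=True)
    return ranked[:beam_width]

# Hypothetical utility: best reliability found in a node's front.
best_reliability = lambda front: max(t[2] for t in front)
nodes = [("n1", [(350, 4, 0.97)]),
         ("n2", [(500, 7, 0.93)]),
         ("n3", [(600, 13, 0.69)])]
kept = beam_select(nodes, 2, best_reliability)  # keeps n1 and n2
```

A larger beam width keeps more nodes and prunes less of the search space, trading runtime for solution quality.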

  • The feasible non dominated tuples corresponding to the cumulative Pareto optimal tuples computed up to each node at the level are computed first.

  • If the number of nodes at the level does not exceed the beam width, all of them are returned.

  • Otherwise, the following steps are performed:

    • The utility corresponding to each tuple is computed as follows: