Towards a General Framework for Static Cost Analysis of Parallel Logic Programs

07/31/2019 ∙ by Maximiliano Klemen, et al. ∙ IMDEA Networks Institute

The estimation and control of resource usage is now an important challenge in an increasing number of computing systems. In particular, requirements on timing and energy arise in a wide variety of applications such as internet of things, cloud computing, health, transportation, and robots. At the same time, parallel computing, with (heterogeneous) multi-core platforms in particular, has become the dominant paradigm in computer architecture. Predicting resource usage on such platforms poses a difficult challenge. Most work on static resource analysis has focused on sequential programs, and relatively little progress has been made on the analysis of parallel programs, or more specifically on parallel logic programs. We propose a novel, general, and flexible framework for setting up cost equations/relations which can be instantiated for performing resource usage analysis of parallel logic programs for a wide range of resources, platforms and execution models. The analysis estimates both lower and upper bounds on the resource usage of a parallel program (without executing it) as functions on input data sizes. In addition, it also infers other meaningful information to better exploit and assess the potential and actual parallelism of a system. We develop a method for solving cost relations involving the max function that arise in the analysis of parallel programs. Finally, we instantiate our general framework for the analysis of logic programs with Independent And-Parallelism, report on an implementation within the CiaoPP system, and provide some experimental results. To our knowledge, this is the first approach to the cost analysis of parallel logic programs.



1 Introduction

Estimating in advance the resource usage of computations is useful for a number of applications; examples include granularity control in parallel/distributed systems, automatic program optimization, verification of resource-related specifications and detection of performance bugs, as well as helping developers make resource-related design decisions. Besides time and energy, we assume a broad concept of resources as numerical properties of the execution of a program, including the number of execution steps, the number of calls to a procedure, the number of network accesses, number of transactions in a database, and other user-definable resources. The goal of automatic static analysis is to estimate such properties without running the program with concrete data, as a function of input data sizes and possibly other (environmental) parameters.

Due to the heat generation barrier in traditional sequential architectures, parallel computing, with (heterogeneous) multi-core processors in particular, has become the dominant paradigm in current computer architecture. Predicting resource usage on such platforms poses important challenges. Most work on static resource analysis has focused on sequential programs, but relatively little progress has been made on the analysis of parallel programs, or on parallel logic programs in particular. The significant body of work on static analysis of sequential logic programs has already been applied to the analysis of other programming paradigms, including imperative programs. This is achieved via a transformation into Horn clauses [22]. In this paper we concentrate on the analysis of parallel Horn clause programs, which could be the result of such a translation from a parallel imperative program or be themselves the source program. Our starting point is the well-developed technique of setting up recurrence relations representing resource usage functions parameterized by input data sizes [27, 24, 9, 8, 10, 23, 2, 25], which are then solved to obtain (exact or safely approximated) closed forms of such functions (i.e., functions that provide upper or lower bounds on resource usage). We build on this and propose a novel, general, and flexible framework for setting up cost equations/relations which can be instantiated for performing static resource usage analysis of parallel logic programs for a wide range of resources, platforms and execution models. Such an analysis estimates both lower and upper bounds on the resource usage of a parallel program as functions on input data sizes. We have instantiated the framework for dealing with Independent And-Parallelism (IAP) [15, 11], which refers to the parallel execution of conjuncts in a goal. However, the results can be applied to other languages and types of parallelism, by performing suitable transformations into Horn clauses.

The main contributions of this paper can be summarized as follows:

  • We have extended a general static analysis framework for the analysis of sequential Horn clause programs [23, 25], to deal with parallel programs.

  • Our extensions and generalizations support a wide range of resources, platforms and parallel/distributed execution models, and allow the inference of both lower and upper bounds on resource usage. This is the first approach, to our knowledge, to the cost analysis of parallel logic programs that can deal with features such as backtracking, multiple solutions (i.e., non-determinism), and failure.

  • We have instantiated the developed framework to infer useful information for assessing and exploiting the potential and actual parallelism of a system.

  • We have developed a method for finding closed-form functions of cost relations involving the max function that arise in the analysis of parallel programs.

  • We have developed a prototype implementation that instantiates the framework for the analysis of logic programs with Independent And-Parallelism within the CiaoPP system [14, 23, 25], and provided some experimental results.

2 Overview of the Approach

Prior to explaining our approach, we provide some preliminary concepts. Independent And-Parallelism arises between two goals when their corresponding executions do not affect each other. For pure goals (i.e., without side effects) a sufficient condition for the correctness of IAP is the absence of variable sharing at run-time among such goals. IAP has traditionally been expressed using the &/2 meta-predicate as the constructor to represent the parallel execution of goals. In this way, the conjunction of goals (i.e., literals) p & q in the body of a clause will trigger the execution of goals p and q in parallel, finishing when both executions finish.

Given a program P, a predicate p of arity k, and a set D of k-tuples of calling data to p, we refer to the (standard) cost of a call p(d̄) (i.e., a call to p with actual data d̄ ∈ D), as the resource usage (under a given cost metric) of the complete execution of p(d̄). The standard cost is formalized as a function C_p : D → R∞, where R∞ is the set of real numbers augmented with the special symbol ∞ (which is used to represent non-termination). We extend the function C_p to the powerset of D, i.e., Ĉ_p : 2^D → 2^R∞, where Ĉ_p(D′) = {C_p(d̄) | d̄ ∈ D′}. Our goal is to abstract (safely approximate, as accurately as possible) Ĉ_p (note that C_p(d̄) ∈ Ĉ_p({d̄})). Intuitively, this abstraction is the composition of two abstractions: a size abstraction and a cost abstraction. The goal of the analysis is to infer two functions C_p↓, C_p↑ : (N⊥)^k → R∞ that give lower and upper bounds respectively on the cost function C_p, where N⊥ is the set of natural numbers augmented with the special symbol ⊥, meaning that the size of a given term under a given size metric is undefined. Such bounds are given as functions of tuples of data sizes (representing the concrete tuples of data of the concrete function C_p). Typical size metrics are the actual value of a number, the length of a list, the size (number of constant and function symbols) of a term, etc. [23, 25].

We now enumerate different metrics used to evaluate the performance of parallel versions of a logic program, compared against its corresponding sequential version. When possible, we define these metrics parameterized with respect to the resource (e.g., number of resolution steps, execution time, or energy consumption) in which the cost is expressed.

  • Sequential cost (Work): It is the standard cost of executing a program, assuming no parallelism.

  • Parallel cost (Depth): It is the cost of executing a program in parallel, considering an unbounded number of processors.

  • Maximum number of processes running in parallel: The maximum number of processes that can run simultaneously in a program. This is useful, for example, to determine the minimum number of processors that are required to actually run all the processes in parallel.
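
To make the three metrics concrete, here is a small Python sketch (ours, not part of the framework) that computes work, depth, and the maximum number of parallel processes over a toy fork-join task tree, where `par` nodes model `&/2`-style parallel conjunctions and `seq` nodes model sequential conjunctions:

```python
# Toy fork-join cost model (illustration only; names are ours).
# A node is ('seq'|'par', cost, children): 'par' children run in parallel,
# 'seq' children run one after another.

def work(node):
    # total sequential cost: everything is added up
    _, cost, children = node
    return cost + sum(work(c) for c in children)

def depth(node):
    # parallel cost with unbounded processors: max over parallel branches
    kind, cost, children = node
    if not children:
        return cost
    agg = max if kind == 'par' else sum
    return cost + agg(depth(c) for c in children)

def max_procs(node):
    # maximum number of extra processes alive at once
    kind, _, children = node
    if not children:
        return 0
    if kind == 'par':
        # one extra process per parallel spawn, plus the children's processes
        return len(children) - 1 + sum(max_procs(c) for c in children)
    # sequential composition: parallel sections do not overlap
    return max(max_procs(c) for c in children)
```

For a tree `('seq', 1, [t, t])` with `t = ('par', 1, [leaf, leaf])` and unit-cost leaves, `work` adds every node, `depth` takes the max over the parallel pair, and `max_procs` stays at 1 because the two parallel sections never overlap.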

The following example illustrates our approach.

Example 1

Consider the predicate scalar/3 below, and a calling mode to it with the first argument bound to an integer n and the second one bound to a list of integers [x1, …, xk]. Upon success, the third argument is bound to the list of products [n · x1, …, n · xk]. Each product is recursively computed by predicate mult/3. The calling modes are automatically inferred by CiaoPP (see [14] and its references): the first two arguments of both predicates are input, and their last arguments are output.

scalar(_,[],[]).
scalar(N,[X|Xs],[Y|Ys]) :-
   mult(N,X,Y) & scalar(N,Xs,Ys).

mult(0,_,0).
mult(N,X,Y) :-
   N > 0,
   N1 is N - 1,
   mult(N1,X,Y0),
   Y is Y0 + X.

The call to the parallel &/2 operator in the body of the second clause of scalar/3 causes the calls to mult/3 and scalar/3 to be executed in parallel.

We want to infer the cost of such a call to scalar/3, in terms of the number of resolution steps, as a function of its input data sizes. We use the CiaoPP system to infer size relations for the different arguments in the clauses, as well as dealing with a rich set of size metrics (see [23, 25] for details). Assume that the size metrics used in this example are the actual value of N (denoted int(N)), for the first argument, and the list-length for the second and third arguments (denoted length(X) and length(Y)). Since size relations are obvious in this example, we focus only on the setting up of cost relations for the sake of brevity. Regarding the number of solutions, in this example all the predicates generate at most one solution. For simplicity we assume that all builtin predicates, such as is/2 and the comparison operators, have zero cost (in practice they have a “trust” assertion that specifies their cost as if it had been inferred by the system). As the program contains parallel calls, we are interested in inferring both the total resolution steps, i.e., considering a sequential execution (represented by the seq identifier), and the number of parallel steps, considering a parallel execution with an unbounded number of processors (represented by par). In the latter case, the definition of this resource establishes that the aggregator of the costs of the parallel calls that are arguments of the &/2 meta-predicate is the max/2 function. Thus, the number of parallel resolution steps for p & q is the maximum between the parallel steps performed by p and the ones performed by q. However, for computing the total resolution steps, the aggregation operator we use is the addition, both for parallel and sequential calls. For brevity, in this example we only infer upper bounds on resource usages.

We now set up the cost relations for scalar/3 and mult/3. Note that the cost functions have two arguments, corresponding to the sizes of the input arguments.¹ In the equations below, note the operation applied as cost aggregator for &/2.

¹ For the sake of clarity, we abuse notation in the examples when representing the cost functions that depend on data sizes.

For the sequential execution (seq), we obtain the following cost relations (n and l denote the sizes int(N) and length(X), respectively):

   Cmult_seq(0) = 1
   Cmult_seq(n) = 1 + Cmult_seq(n − 1)                               if n > 0
   Cscalar_seq(n, 0) = 1
   Cscalar_seq(n, l) = 1 + (Cmult_seq(n) + Cscalar_seq(n, l − 1))    if l > 0

After solving these equations and composing the closed-form solutions, we obtain the following closed-form functions:

   Cmult_seq(n) = n + 1
   Cscalar_seq(n, l) = (n + 2) · l + 1

For the parallel execution (par), we obtain the following cost relations:

   Cmult_par(0) = 1
   Cmult_par(n) = 1 + Cmult_par(n − 1)                                 if n > 0
   Cscalar_par(n, 0) = 1
   Cscalar_par(n, l) = 1 + max(Cmult_par(n), Cscalar_par(n, l − 1))    if l > 0

After solving these equations and composing the closed forms, we obtain the following closed-form functions:

   Cmult_par(n) = n + 1
   Cscalar_par(n, l) = n + l + 1   (for l ≥ 1)

By comparing the complexity order (in terms of resolution steps) of the sequential execution of scalar/3, O(n · l), with the complexity order of its parallel execution (assuming an ideal parallel model with an unbounded number of processors), O(n + l), we can get a hint about the maximum achievable parallelization of the program.
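
As an informal cross-check (ours, not part of the paper), the seq and par cost relations for scalar/3 and mult/3 can be evaluated directly, counting one resolution step per head unification:

```python
# Direct evaluation of the cost recurrences for scalar/3 and mult/3
# (a sketch for small inputs, not CiaoPP output).

def mult_steps(n):
    # mult/3 is a single sequential chain, so seq and par costs coincide
    return 1 if n == 0 else 1 + mult_steps(n - 1)

def scalar_seq(n, l):
    # sequential model: the aggregator for &/2 is addition
    return 1 if l == 0 else 1 + mult_steps(n) + scalar_seq(n, l - 1)

def scalar_par(n, l):
    # parallel model: the aggregator for &/2 is max
    return 1 if l == 0 else 1 + max(mult_steps(n), scalar_par(n, l - 1))
```

Evaluating both versions for a range of inputs shows the O(n · l) versus O(n + l) gap between the sequential and parallel step counts.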

Another useful piece of information about scalar/3 that we want to infer is the maximum number of processes that may run in parallel, considering all possible executions. For this purpose, we define a resource in our framework named sthreads. For the sthreads resource, the operation that aggregates the cost of both arguments of the meta-predicate &/2, count_process/3, adds the maximum number of processes for each argument plus one additional process, corresponding to the one created by the call to &/2. The sequential cost aggregator is now the maximum operator, in order to keep track of the maximum number of processes created along the different instructions of the program executed sequentially. Note that if the instruction p executes at most n processes in parallel, and the instruction q executes at most m processes, then the program p, q will execute at most max(n, m) processes in parallel, because all the parallel processes created by p will finish before the execution of q. Note also that for the sequential execution of both p and q, the cost in terms of the sthreads resource is always zero, because no additional process is created.

The analysis sets up the following recurrences for the sthreads resource and the predicates scalar/3 and mult/3 of our example:

   Smult(n) = 0
   Sscalar(n, 0) = 0
   Sscalar(n, l) = Smult(n) + Sscalar(n, l − 1) + 1   if l > 0

After solving these equations and composing the closed forms, we obtain the following closed-form functions:

   Smult(n) = 0
   Sscalar(n, l) = l

As we can see, this predicate will execute, in the worst case, as many processes as there are elements in the input list.
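
The sthreads recurrence can also be sketched directly (our illustration): mult/3 spawns no processes, and each `&/2` call in scalar/3 contributes the processes of both arms plus one:

```python
def mult_sthreads(n):
    # mult/3 contains no &/2 call, so it spawns no processes
    return 0

def scalar_sthreads(n, l):
    # count_process aggregation for &/2: processes of both arms, plus one
    if l == 0:
        return 0
    return mult_sthreads(n) + scalar_sthreads(n, l - 1) + 1
```

Evaluating `scalar_sthreads` confirms the observation above: one process is created per element of the input list.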

3 The Parametric Cost Relations Framework for Sequential Programs

The starting point of our work is the standard general framework described in [23] for setting up parametric relations representing the resource usage (and size relations) of programs and predicates.²

² We give equivalent but simpler descriptions than in [23], which are allowed by assuming that programs are the result of a normalization process that makes all unifications explicit in the clause body, so that the arguments of the clause head and the body literals are all unique variables. We also change some notation for readability and illustrative purposes.

The framework is doubly parametric: first, the costs inferred are functions of input data sizes, and second, the framework itself is parametric with respect to the type of approximation made (upper or lower bounds), and to the resource analyzed. Each concrete resource to be tracked is defined by two sets of (user-provided) functions, which can be constants, or general expressions of input data sizes:

  1. Head cost head_cost(ap, r): a function that returns the amount of resource r used by the unification of the calling literal (subgoal) p and the head of a clause matching p, plus any preparation for entering a clause (i.e., call and parameter passing cost).

  2. Predicate cost Cost_p(ap, r, x̄): it is also possible to define the full cost for a particular predicate p, for resource r and approximation ap, i.e., the function Cost_p (with the sizes x̄ of p’s input data as parameters) that returns the usage of resource r made by a call to this predicate. This is especially useful for built-in or external predicates, i.e., predicates for which the source code is not available and thus cannot be analyzed, or for providing a more accurate function than the analysis can infer. In the implementation, this information is provided to the analyzer through trust assertions.

For simplicity we only show the equations related to our standard definition of cost. However, our framework has also been extended to allow the inference of a more general definition of cost, called accumulated cost, which is useful for performing static profiling, obtaining more detailed information regarding how the cost is distributed among a set of user-defined cost centers. See [12, 21] for more details.

Consider a predicate p defined by clauses C1, …, Cm. Assume x̄ is a tuple with the sizes of p’s input parameters. Then, the resource usage (expressed in units of resource r with approximation ap) of a call to p, for an input of size x̄, denoted as Cost_p(ap, r, x̄), can be expressed as:

   Cost_p(ap, r, x̄) = ClauseAggregator(ap, r)(Cost_C1(ap, r, x̄), …, Cost_Cm(ap, r, x̄))   (1)

where ClauseAggregator(ap, r) is a function that takes an approximation identifier ap and returns a function which is applied over the costs of all the clauses, Cost_Ci(ap, r, x̄) for 1 ≤ i ≤ m, in order to obtain the cost of a call to the predicate p. For example, if ap is the identifier for the approximation “upper bound” (ub), then a possible conservative definition for ClauseAggregator(ub, r) is the addition function. In this case, and since the number of solutions generated by a predicate that will be demanded is generally not known in advance, a conservative upper bound on the computational cost of a predicate is obtained by assuming that all solutions are needed, and that all clauses are executed (thus the cost of the predicate is assumed to be the sum of the costs of all of its clauses). However, it is straightforward to take mutual exclusion into account to obtain a more precise estimate of the cost of a predicate, using the maximum of the costs of mutually exclusive groups of clauses, as done in [25].
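
A minimal sketch of this clause-level aggregation (our illustration; the `lb` choice is one plausible safe option, not spelled out in the text):

```python
# Clause-level cost aggregation, parametric on the approximation identifier.

def clause_aggregator(ap):
    if ap == 'ub':
        # conservative upper bound: assume all clauses are executed
        return lambda costs: sum(costs)
    if ap == 'lb':
        # a safe lower bound: at least the cheapest clause is executed
        return lambda costs: min(costs)
    raise ValueError(ap)

def ub_with_mutex(groups):
    # with mutual-exclusion information: within each group of mutually
    # exclusive clauses at most one is taken, so use max inside groups
    # and add across groups
    return sum(max(g) for g in groups)
```

For two mutually exclusive clauses with costs 3 and 5, the plain `ub` aggregator yields 8, while the mutual-exclusion-aware bound yields the tighter 5.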

Let us see now how to compute the resource usage of a clause. Consider a clause C of predicate p of the form p(x̄) :- L1, …, Lk, where each Li, 1 ≤ i ≤ k, is a literal (either a predicate call, or an external or builtin predicate), and p(x̄) is the clause head. Assume that ψi(x̄) is a tuple with the sizes of all the input arguments to literal Li, given as functions of the sizes of the input arguments to the clause head. Note that these size relations have previously been computed during size analysis for all input arguments to literals in the bodies of all clauses. Then, the cost relation for clause C and a single call to p (obtaining all solutions), is:

   Cost_C(ap, r, x̄) = head_cost(ap, r) + Σ_{i=1..lim(ap,C)} solsprod_i(x̄) × Cost_Li(ap, r, ψi(x̄))   (2)

where lim(ap, C) gives the index of the last body literal that is called in the execution of clause C, ψi(x̄) are the sizes of the input parameters of literal Li, and solsprod_i(x̄) represents the product of the number of solutions produced by the predecessor literals of Li in the clause body:

   solsprod_i(x̄) = Π_{j=1..i−1} sols_Lj(ψj(x̄))   (3)

where sols_Lj gives the number of solutions produced by Lj, with arguments of size ψj(x̄).

Finally, Cost_Li(ap, r, ψi(x̄)) is replaced by one of the following expressions, depending on Li:

  • If Li is a call to a predicate q which is in the same strongly connected component as p (the predicate under analysis), then Cost_Li(ap, r, ψi(x̄)) is replaced by the symbolic call Cost_q(ap, r, ψi(x̄)), giving rise to a recurrence relation that needs to be bounded with a closed-form expression by the solver afterwards.

  • If Li is a call to a predicate q which is in a different strongly connected component than p, then Cost_Li(ap, r, ψi(x̄)) is replaced by the closed-form expression that bounds Cost_q(ap, r, ψi(x̄)). The analysis guarantees that this expression has been inferred beforehand, due to the fact that the analysis is performed for each strongly connected component, in a reverse topological order.

  • If Li is a call to a predicate q whose cost is specified (with a trust assertion) as Cost_q(ap, r, x̄) = Φ(x̄), then Cost_Li(ap, r, ψi(x̄)) is replaced by the expression Φ(ψi(x̄)).

4 Our Extended Resource Analysis Framework for Parallel Programs

In this section, we describe how we extend the resource analysis framework detailed above, in order to handle logic programs with Independent And-Parallelism, using the binary parallel &/2 operator. First, we introduce a new general parameter that indicates the execution model the analysis has to consider. For our current prototype, we have defined two different execution models: standard sequential execution, represented by seq, and an abstract parallel execution model, represented by par(n), where n ∈ N ∪ {∞}. The abstract execution model par(∞) is similar to the work and depth model, presented in [6] and used extensively in previous work such as [17]. Basically, this model is based on considering an unbounded number of available processors to infer bounds on the depth of the computation tree. The work measure is the amount of work to be performed considering a sequential execution. These two measures together give an idea on the impact of the parallelization of a particular program. The abstract execution model par(n), where n ∈ N, assumes a finite number of processors.

In order to obtain the cost of a predicate, equation (1) remains almost identical, the only difference being the addition of the new parameter em that indicates the execution model.

Now we address how to set up the cost for clauses. In this case, equation (2) is extended with the execution model em, and the default sequential cost aggregation (addition) is replaced by a parametric associative operator ⊕, which depends on the resource being defined, the approximation, and the execution model. For em = seq or em = par(∞), the following equation is set up:

   Cost_C(em, ap, r, x̄) = head_cost(ap, r) ⊕ ⨁_{i=1..lim(ap,C)} solsprod_i(x̄) × Cost_Li(em, ap, r, ψi(x̄))   (4)

Note that the cost aggregator operators must depend on the resource (besides the other parameters). For example, if r is execution time, then the cost of executing two tasks in parallel must be aggregated by taking the maximum of the execution times of the two tasks. In contrast, if r is energy consumption, then the aggregation is the addition of the energy consumed by the two tasks.
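
This resource dependence can be sketched as a table of parallel aggregators for `p & q` (our illustration; the set of resources shown is illustrative, and `sthreads` mirrors the count_process aggregation of Section 2):

```python
# Parallel cost aggregation for p & q, indexed by the resource r.
PAR_AGG = {
    'time':     lambda a, b: max(a, b),   # the two tasks overlap in time
    'energy':   lambda a, b: a + b,       # both tasks consume energy
    'sthreads': lambda a, b: a + b + 1,   # processes of both arms, plus the
                                          # one created by the &/2 call
}
```

A resource definition in the framework would select the appropriate entry, so the same cost relations can be reused for each resource by swapping only the aggregator.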

Finally, we extend how the cost of a literal Li, expressed as Cost_Li(em, ap, r, ψi(x̄)), is set up. The previous definition is extended to consider the new case where the literal is a call to the meta-predicate &/2. In this case, we add a new parallel aggregation associative operator, denoted by ⊗. Concretely, if Li = B1 & B2, where B1 and B2 are two sequences of goals, then:

   Cost_Li(par(n), ap, r, ψi(x̄)) = Cost_B1(par(n), ap, r, ψi(x̄)) ⊗ Cost_B2(par(n), ap, r, ψi(x̄))   (5)
   Cost_Li(seq, ap, r, ψi(x̄)) = Cost_B1(seq, ap, r, ψi(x̄)) ⊕ Cost_B2(seq, ap, r, ψi(x̄))   (6)

where ⊗ depends on the resource, the approximation, and the execution model.

Consider now the execution model par(n), where n ∈ N (i.e., assuming a finite number n of processors), and a recursive parallel predicate p that creates a parallel task in each recursion. Assume that we are interested in obtaining an upper bound on the cost of a call to p, for an input of size x̄. We first infer an upper bound m on the number of parallel tasks created by p, as a function of x̄. This can be easily done by using our cost analysis framework and providing the suitable assertions for a resource that counts the parallel tasks created: intuitively, the “counter” associated to such a resource must be incremented by one by each (symbolic) execution of the &/2 parallel operator. At this point, an upper bound on the number of tasks executed by any of the processors is given by ⌈m/n⌉. Then, an upper bound on the cost (in terms of resolution steps, i.e., the steps resource) of a call to p, for an input of size x̄, can be given by:

   Cost_p(par(n), ub, steps, x̄) = S(x̄)   (7)

where S(x̄) can be computed in two possible ways: S(x̄) = ⌈m/n⌉ × Ĉ1; or S(x̄) = Σ_{i=1..⌈m/n⌉} Ĉi, where Ĉi denotes an upper bound on the cost of parallel task ti, and the Ĉi are ordered in descending order of cost. Each Ĉi can be considered as the sum of two components: Ĉi = Schedi + Ti, where Schedi denotes the cost from the point in which the parallel subtask ti is created until its execution is started by a processor (possibly the same processor that created the subtask), i.e., the cost of task preparation, scheduling, communication overheads, etc. Ti denotes the cost of the execution of ti disregarding all the overheads mentioned before, i.e., Ti = Cost_q(seq, ub, steps, ψ(x̄)), where ψ(x̄) is a tuple with the sizes of all the input arguments to predicate q in the body of p. Schedi denotes an upper bound on the cost of creating the parallel task ti; it is dependent on the particular system in which p is going to be executed, and it can be a constant, or a function of several parameters (such as input data size, number of input arguments, or number of tasks), and can be experimentally determined.
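
The two ways of bounding the aggregated cost of m parallel tasks on n processors described above can be sketched as follows (names are ours; `c` is a list of per-task cost upper bounds):

```python
import math

def bound_uniform(c, n):
    # ceil(m/n) rounds, each bounded by the single most expensive task
    m = len(c)
    return math.ceil(m / n) * max(c)

def bound_sorted(c, n):
    # sum of the ceil(m/n) most expensive tasks: in the worst case some
    # processor runs that many tasks, and these are the costliest ones
    m = len(c)
    top = sorted(c, reverse=True)[:math.ceil(m / n)]
    return sum(top)
```

The second bound is never worse than the first, since each of the ⌈m/n⌉ selected tasks costs at most as much as the single most expensive one.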

In addition, we propose a method for finding closed-form functions for cost relations that arise in the analysis of parallel programs, where the max function usually plays a role both as parallel and sequential cost aggregation operation, i.e., as ⊗ and ⊕, respectively. In the following subsection, we detail this method.

4.1 Solving Cost Recurrence Relations Involving the max Operation

Automatically finding closed-form upper and lower bounds for recurrence relations is an uncomputable problem. For some special classes of recurrences, exact solutions are known, for example for linear recurrences with one variable. For some other classes, it is possible to apply transformations to fit a class of recurrences with known solutions, even if the transformation yields a safe approximation rather than an equivalent expression.

In particular, when analyzing independent and-parallel logic programs, nonlinear recurrences involving the max operator are quite common. For example, if we are analyzing the elapsed time of a parallel logic program, a proper parallel aggregation operator is the maximum between the times elapsed for each literal running in parallel. To the best of our knowledge, no general solution exists for recurrences of this particular type. However, in this paper we identify some common cases of this type of recurrence, for which we obtain closed forms that are proven to be correct. In this section, we present these different classes, together with the corresponding method to obtain a correct bound.

Consider the following function f, defined as a general form of a first-order recurrence equation with a max operator:

   f(0) = k
   f(n) = max(a(n), b(n) + f(n − 1))   if n > 0   (8)

where k ≥ 0, and a(n) and b(n) are arbitrary non-negative expressions possibly depending on n. If a(n) and b(n) do not depend on n, then they do not change through the different recursive instances of f, and we write them simply as a and b. In this case, a closed-form upper bound is defined by the following theorem (whose proof is included in 0.A):

Theorem 1

Given f as defined in (8), where a and b do not depend on n. Then, ∀n ≥ 0:

   f(n) ≤ max(a, k) + n · b

For the case where a(n) and b(n) are functions non-decreasing on n, the upper bound is given by the following closed form:

Theorem 2

Given f as defined in (8), where a(n) and b(n) are functions of n, non-decreasing on n. Then, ∀n ≥ 0:

   f(n) ≤ max(a(n), k) + n · b(n)

The proof of this Theorem is included in 0.B.
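
As a quick numerical sanity check (ours), one can evaluate the recurrence directly and compare it against the closed-form bound f(n) ≤ max(a(n), k) + n · b(n), under our reading of (8) as f(0) = k, f(n) = max(a(n), b(n) + f(n − 1)):

```python
def f(n, a, b, k):
    # direct evaluation of the recurrence with a max operator
    return k if n == 0 else max(a(n), b(n) + f(n - 1, a, b, k))

def closed_ub(n, a, b, k):
    # candidate closed-form upper bound from Theorems 1 and 2
    return max(a(n), k) + n * b(n)

# constant coefficients (Theorem 1) and non-decreasing ones (Theorem 2)
cases = [(lambda n: 7, lambda n: 2), (lambda n: n + 3, lambda n: n)]
for a, b in cases:
    assert all(f(n, a, b, 1) <= closed_ub(n, a, b, 1) for n in range(30))
```

This only exercises the bound on sample inputs, of course; the appendices provide the inductive proofs.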

For the remaining cases, where a max(e1, e2) remains, we try to eliminate the max operator by proving either e1 ≤ e2 or e2 ≤ e1 for any input. In order to do that, we use the function comparison capabilities of CiaoPP, presented in [19, 20]. In cases where e1 and/or e2 contain non-closed recurrence functions, we use the Z3 SMT solver [7] to find, if possible, a proof of either e1 ≤ e2 or e2 ≤ e1, treating the non-closed functions as uninterpreted functions and assuming that they are positive and non-decreasing. As the algorithm used by SMT solvers in this case is not guaranteed to terminate, we set a timeout. In the worst case, when no proof is found, we replace the max operator with an addition, losing precision but still obtaining safe upper bounds.

5 Implementation and Experimental Results

We have implemented a prototype of our approach, leveraging the existing resource usage analysis framework of CiaoPP. The implementation basically consists of the parameterization of the operators used for sequential and parallel cost aggregation, i.e., for the aggregation of the costs corresponding to the arguments of ,/2 and &/2, respectively. This allows the user to define resources in a general way, taking into account the underlying execution model.

  map_add1/2    Parallel increment by one of each element of a list.
  fib/2         Parallel computation of the nth Fibonacci number.
  mmatrix/3     Parallel matrix multiplication.
  blur/2        Generic parallel image filter.
  add_mat/3     Matrix addition.
  intersect/3   Set intersection.
  union/3       Set union.
  diff/3        Set difference.
  dyade/3       Dyadic product of two vectors.
  dyade_map/3   Dyadic product applied on a set of vectors.
  append_all/3  Appends a prefix to each list of a list of lists.

Table 1: Description of the benchmarks.
Bench          Res        Bound Inferred   BigO   Time (ms)
map_add1(x)    SCost                              35.57
               PCost
               SThreads
fib(x)         SCost                              52.66
               PCost
               SThreads
mmatrix()      SCost                              220.9
               PCost
               SThreads
blur(m,n)      SCost                              123.321
               PCost
               SThreads
add_mat(m,n)   SCost                              62.72
               PCost
               SThreads
intersect()    SCost                              191.16
               PCost
               SThreads
union()        SCost                              193.37
               PCost
               SThreads
diff()         SCost                              191.16
               PCost
               SThreads
dyade()        SCost                              71.08
               PCost
               SThreads
dyade_map()    SCost                              248.39
               PCost
               SThreads
append_all()   SCost                              108.4
               PCost
               SThreads

fib(n) represents the nth element of the Fibonacci sequence; luc(n) represents the nth Lucas number; length(x) and int(x) represent the size of x in terms of the metrics length and int, respectively.

Table 2: Resource usage inferred for Independent And-Parallel Programs.
Bench          Bound Inferred   BigO   Time (ms)
map_add1(x)                            54.36
blur(m,n)                              205.97
add_mat(m,n)                           185.89
intersect()                            330.47
union()                                311.3
diff()                                 339.01
dyade()                                120.93
append_all()                           117.8

p is defined as the minimum between the number of processors and SThreads.

Table 3: Resource usage inferred for a bounded number of processors.

We selected a set of benchmarks that exhibit different common parallel patterns, briefly described in Table 1, together with the definition of a set of resources that help understand the overall behavior of the parallelization.³ Table 2 shows some results of the experiments that we have performed with our prototype implementation. Column Bench shows the main predicates analyzed for each benchmark. Set operations (intersect, union and diff), as well as the programs append_all, dyade and add_mat, are Prolog versions of the benchmarks analyzed in [17], which is the closest related work we are aware of.

³ We will be able to extend the experiments to a bigger set of benchmarks for the talk at the conference and the post-proceedings submission if the paper is accepted.

Column Res indicates the name of each of the resources inferred for each benchmark: sequential resolution steps (SCost), parallel resolution steps assuming an unbounded number of processors (PCost), and maximum number of processes executing in parallel (SThreads). The latter gives an indication of the maximum parallelism that can potentially be exploited. Column Bound Inferred shows the upper bounds obtained for each of the resources indicated in Column Res. While in the experiments both upper and lower bounds were inferred, for the sake of brevity, we only show upper bound functions. Column BigO shows the complexity order, in big O notation, corresponding to each resource. For all the benchmarks in Table 2 we obtain the exact complexity orders. We also obtain the same complexity order as in [17] for the Prolog versions of the benchmarks taken from that work. Finally, Column Time (ms) shows the analysis times in milliseconds, which are quite reasonable. The results show that most of the benchmarks have different asymptotic behavior in the sequential and parallel execution models. In particular, for fib(x), the analysis infers an exponential upper bound for sequential execution steps, and a linear upper bound for parallel execution steps. As mentioned before, this is an upper bound for an ideal case, assuming an unbounded number of processors. Nevertheless, such upper-bound information is useful for understanding how the cost behavior evolves in architectures with different levels of parallelism. In addition, this dual cost measure can be combined together with a bound on the number of processors in order to obtain a general asymptotic upper bound (see for example Brent’s Theorem [13], which is also mentioned in [17]). The program map_add1(l) exhibits a different behavior: both sequential and parallel upper bounds are linear. 
This happens because we are considering resolution steps, i.e., we are counting each head unification produced from an initial call map_add1(l). Even under the parallel execution model, we have a chain of head unifications whose length depends linearly on the length of the input list. It follows from the results of this particular case that the parallelization will not be useful for improving the number of resolution steps performed in parallel.

Another useful piece of information inferred in our experiments is the maximum number of processes that can be executed in parallel, represented by the resource named SThreads. We can see that for most of our examples the analysis obtains a linear upper bound for this resource, in terms of the size of some of the inputs. For example, the execution of intersect(a,b) (parallel set intersection) will create at most l_a processes, where l_a represents the length of the list a. For other examples, the analysis shows a quadratic upper bound (as in mmatrix), or even exponential bounds (as in fib). The information about upper bounds on the maximum level of parallelism required by a program is useful for understanding its scalability in different parallel architectures, or for optimizing the number of processors that a particular call will use, depending on the size of the input data.

Finally, the results of our experiments considering a bounded number of processors are shown in Table 3.
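As mentioned above, the dual work/depth measures can be combined with a processor bound via Brent’s Theorem [13], which bounds the running time on p processors by roughly W/p + D, for work W and depth D. A minimal sketch (the function name and the use of a ceiling are our own formulation, not part of the implementation):

```python
import math

def brent_upper_bound(work, depth, p):
    """Brent-style combination of the dual cost measures: with p
    processors, parallel time is at most ceil(work / p) + depth."""
    return math.ceil(work / p) + depth

# E.g., a computation with work 21891 and depth 20 on 8 processors:
bound = brent_upper_bound(21891, 20, 8)  # 2737 + 20 = 2757
```

With p = 1 the bound degenerates to work + depth, a safe over-approximation of the sequential cost; as p grows, the bound approaches the depth.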

6 Related Work

Our approach is an extension of an existing cost analysis framework for sequential logic programs [10, 12, 20], which extends the classical cost analysis techniques based on setting up and solving recurrence relations, pioneered by [27], with solutions for relations involving max and min functions. The framework handles characteristics such as backtracking, multiple solutions (i.e., non-determinism), failure, and the inference of both upper and lower bounds, including non-polynomial bounds. These features are inherited by our approach, and are absent from other approaches to parallel cost analysis in the literature.
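The kind of recurrences with max and min that such a solver must handle can be illustrated with a small numeric check of candidate closed forms (the recurrences below are hypothetical instances of ours; the check is empirical, not a proof):

```python
from functools import lru_cache

# Upper-bound recurrence with max, of the shape produced by a parallel
# conjunction of two recursive calls of different depths:
#   U(0) = U(1) = 1,   U(n) = 1 + max(U(n - 1), U(n - 2))
@lru_cache(maxsize=None)
def upper(n):
    return 1 if n <= 1 else 1 + max(upper(n - 1), upper(n - 2))

# Dual lower-bound recurrence with min:
#   L(0) = L(1) = 1,   L(n) = 1 + min(L(n - 1), L(n - 2))
@lru_cache(maxsize=None)
def lower(n):
    return 1 if n <= 1 else 1 + min(lower(n - 1), lower(n - 2))

# Candidate closed forms: U(n) = n for n >= 1, and L(n) = n // 2 + 1.
assert all(upper(n) == n for n in range(1, 200))
assert all(lower(n) == n // 2 + 1 for n in range(200))
```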

The most closely related work to our approach is [17], which describes an automatic analysis for deriving bounds on the worst-case evaluation cost of first-order functional programs. The analysis derives bounds under an abstract dual cost model based on two measures, work and depth, which over-approximate the sequential and parallel evaluation cost of programs, respectively, assuming an unlimited number of processors. Such an abstract cost model was introduced by [6] to formally analyze parallel programs. The work is based on type judgments annotated with a cost metric, which generate a set of inequalities that are then solved by linear programming techniques. Their analysis is only able to infer multivariate resource polynomial bounds, while non-polynomial bounds are left as future work.

In [16] the authors propose an automatic analysis, also based on the work and depth model, for a simple imperative language with explicit parallel loops.

There are other approaches to cost analysis of parallel and distributed systems, based on different models of computation than the independent and-parallel model in our work. In [3] the authors present a static analysis that infers upper bounds on the maximum number of active (i.e., neither finished nor suspended) processes running in parallel, as well as on the total number of processes created, for imperative async-finish parallel programs. The approach described in [1] uses recurrence (cost) relations to derive upper bounds on the cost of concurrent object-oriented programs, with shared-memory communication and future variables. It addresses concurrent execution of loops with a semi-controlled scheduling, i.e., with no arbitrary interleavings. In [4] the authors address the cost of parallel execution of object-oriented distributed programs. Their approach is to identify the synchronization points in the program, apply serial cost analysis to the blocks between these points, and then construct a graph structure that captures the possible parallel executions of the program; the path of maximal cost is then computed. In these works the allocation of tasks to processors (called “locations”) is part of the program, so although independent and-parallel programs could be modelled in this computation style, it is not directly comparable to our more abstract model of parallelism.
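The graph-based technique of [4] can be sketched roughly as follows (an illustrative reconstruction of ours, not their implementation): given serial cost bounds for the blocks between synchronization points, the parallel cost is bounded by the heaviest path through the DAG of blocks.

```python
from functools import lru_cache

def max_cost_path(cost, succs, entry):
    """Heaviest-path cost in a DAG of sequential code blocks.

    cost:  dict mapping each block to its serial cost bound
    succs: dict mapping each block to the blocks that may follow it
    """
    @lru_cache(maxsize=None)
    def longest(block):
        nexts = succs.get(block, [])
        return cost[block] + (max(longest(b) for b in nexts) if nexts else 0)
    return longest(entry)

# Toy program: "entry" forks into two parallel blocks which then join.
cost = {"entry": 1, "left": 10, "right": 3, "join": 2}
succs = {"entry": ["left", "right"], "left": ["join"], "right": ["join"]}
bound = max_cost_path(cost, succs, "entry")  # 1 + (10 + 2) = 13
```

The bound is the cost of the heavier of the two parallel branches plus the entry and join blocks, mirroring the "path of maximal cost" computation described above.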

Solving, or safely bounding, recurrence relations with max and min functions has been addressed mainly for recurrences derived from divide-and-conquer algorithms [5, 26, 18]. In our experience, our method is able to obtain more accurate bounds for the recurrences that arise in our analysis.
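A typical divide-and-conquer maximin recurrence of the kind studied in [5, 26, 18] can be checked numerically against a candidate logarithmic closed form (a hypothetical instance of ours, checked over a finite range rather than proved):

```python
from functools import lru_cache

# A divide-and-conquer recurrence with max:
#   T(1) = 1,   T(n) = 1 + max(T(ceil(n/2)), T(floor(n/2)))
@lru_cache(maxsize=None)
def t(n):
    return 1 if n <= 1 else 1 + max(t((n + 1) // 2), t(n // 2))

# Candidate closed form: T(n) = 1 + ceil(log2(n)), computed exactly
# with integer bit operations to avoid floating-point error.
assert all(t(n) == 1 + (n - 1).bit_length() for n in range(1, 1000))
```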

7 Conclusions

We have presented a novel, general, and flexible analysis framework that can be instantiated for estimating the resource usage of parallel logic programs, for a wide range of resources, platforms, and execution models. To the best of our knowledge, this is the first approach to the cost analysis of parallel logic programs. The estimations include both lower and upper bounds, given as functions on input data sizes. In addition, our analysis infers other information that is useful for better exploiting and assessing the potential and actual parallelism of a program. We have also developed a method for solving the cost relations that arise in this particular type of analysis, which involve the max function. Finally, we have developed a prototype implementation of our general framework, instantiated it for the analysis of logic programs with Independent And-Parallelism, and performed an experimental evaluation, obtaining very encouraging results w.r.t. accuracy and efficiency.

Acknowledgements: Research partially funded by EU FP7 agreement no 318337 ENTRA, Spanish MINECO TIN2015-67522-C3-1-R TRACES project, the Madrid M141047003 N-GREENS program and BLOQUES-CM project, and the TEZOS Foundation TEZOS project.

References

  • [1] E. Albert, P. Arenas, S. Genaim, M. Gómez-Zamalloa, and G. Puebla. Cost Analysis of Concurrent OO programs. In Proc. of APLAS’11, volume 7078 of LNCS, pages 238–254. Springer, 2011.
  • [2] E. Albert, P. Arenas, S. Genaim, and G. Puebla. Closed-Form Upper Bounds in Static Cost Analysis. Journal of Automated Reasoning, 46(2):161–203, 2011.
  • [3] E. Albert, P. Arenas, S. Genaim, and D. Zanardini. Task-Level Analysis for a Language with Async-Finish parallelism. In Proc. of LCTES’11, pages 21–30. ACM Press, 2011.
  • [4] E. Albert, J. Correas, E. B. Johnsen, K. I. Pun, and G. Román-Díez. Parallel Cost Analysis. ACM Trans. Comput. Logic, 19(4), November 2018.
  • [5] L. Alonso, E.M. Reingold, and R. Schott. Multidimensional divide-and-conquer maximin recurrences. SIAM J. Discret. Math., 8(3):428–447, August 1995.
  • [6] Guy E. Blelloch and John Greiner. A provable time and space efficient implementation of NESL. In ACM Int’l. Conf. on Functional Programming, pages 213–225, May 1996.
  • [7] L. Mendonça de Moura and N. Bjørner. Z3: An Efficient SMT Solver. In TACAS, volume 4963 of LNCS, pages 337–340. Springer, 2008.
  • [8] S. K. Debray and N. W. Lin. Cost analysis of logic programs. ACM TOPLAS, 15(5):826–875, November 1993.
  • [9] S. K. Debray, N.-W. Lin, and M. V. Hermenegildo. Task Granularity Analysis in Logic Programs. In Proc. PLDI’90, pages 174–188. ACM, June 1990.
  • [10] S. K. Debray, P. Lopez-Garcia, M. V. Hermenegildo, and N.-W. Lin. Lower Bound Cost Estimation for Logic Programs. In ILPS’97, pages 291–305. MIT Press, 1997.
  • [11] G. Gupta, E. Pontelli, K. Ali, M. Carlsson, and M. V. Hermenegildo. Parallel Execution of Prolog Programs: a Survey. ACM TOPLAS, 23(4):472–602, July 2001.
  • [12] R. Haemmerlé, P. Lopez-Garcia, U. Liqat, M. Klemen, J. P. Gallagher, and M. V. Hermenegildo. A Transformational Approach to Parametric Accumulated-cost Static Profiling. In FLOPS’16, volume 9613 of LNCS, pages 163–180. Springer, 2016.
  • [13] Robert Harper. Practical Foundations for Programming Languages. Cambridge University Press, second edition, 2016.
  • [14] M. Hermenegildo, G. Puebla, F. Bueno, and P. Lopez Garcia. Integrated Program Debugging, Verification, and Optimization Using Abstract Interpretation (and The Ciao System Preprocessor). Science of Computer Programming, 58(1–2):115–140, 2005.
  • [15] M. Hermenegildo and F. Rossi. Strict and Non-Strict Independent And-Parallelism in Logic Programs: Correctness, Efficiency, and Compile-Time Conditions. Journal of Logic Programming, 22(1):1–45, 1995.
  • [16] T. Hoefler and G. Kwasniewski. Automatic complexity analysis of explicitly parallel programs. In 26th ACM Symp. on Parallelism in Algorithms and Architectures, SPAA ’14, pages 226–235, 2014.
  • [17] J. Hoffmann and Z. Shao. Automatic static cost analysis for parallel programs. In Jan Vitek, editor, Programming Languages and Systems, pages 132–157, Berlin, Heidelberg, 2015. Springer Berlin Heidelberg.
  • [18] H. Hwang and T.-H. Tsai. An asymptotic theory for recurrence relations based on minimization and maximization. Theoretical Computer Science, 290(3):1475–1501, 2003.
  • [19] P. Lopez-Garcia, L. Darmawan, and F. Bueno. A Framework for Verification and Debugging of Resource Usage Properties. In Technical Communications of ICLP, volume 7 of LIPIcs, pages 104–113. Schloss Dagstuhl, July 2010.
  • [20] P. Lopez-Garcia, L. Darmawan, M. Klemen, U. Liqat, F. Bueno, and M. V. Hermenegildo. Interval-based Resource Usage Verification by Translation into Horn Clauses and an Application to Energy Consumption. TPLP, 18:167–223, March 2018.
  • [21] P. Lopez-Garcia, M. Klemen, U. Liqat, and M. V. Hermenegildo. A General Framework for Static Profiling of Parametric Resource Usage. TPLP, 16(5-6):849–865, 2016.
  • [22] M. Méndez-Lojo, J. Navas, and M. Hermenegildo. A Flexible (C)LP-Based Approach to the Analysis of Object-Oriented Programs. In LOPSTR, volume 4915 of LNCS, pages 154–168. Springer-Verlag, August 2007.
  • [23] J. Navas, E. Mera, P. Lopez-Garcia, and M. Hermenegildo. User-Definable Resource Bounds Analysis for Logic Programs. In Proc. of ICLP’07, volume 4670 of LNCS, pages 348–363. Springer, 2007.
  • [24] M. Rosendahl. Automatic Complexity Analysis. In Proc. of FPCA’89, pages 144–156. ACM Press, 1989.
  • [25] A. Serrano, P. Lopez-Garcia, and M. V. Hermenegildo. Resource Usage Analysis of Logic Programs via Abstract Interpretation Using Sized Types. TPLP, ICLP’14 Special Issue, 14(4-5):739–754, 2014.
  • [26] B.-F. Wang. Tight bounds on the solutions of multidimensional divide-and-conquer maximin recurrences. Theoretical Computer Science, 242(1):377–401, 2000.
  • [27] B. Wegbreit. Mechanical Program Analysis. Comm. of the ACM, 18(9):528–539, 1975.

Appendix 0.A Proof for Theorem 1

Theorem

Given as defined in (8), where and are non-decreasing functions of . Then, :

Proof

The proof for the case is trivial.

In the following, we prove the theorem for , or equivalently, for . The proof is by induction on this subset.

Base Case. We have to prove that . Using the definition of and we have that

General Case. Assuming
, we need to prove that . By induction hypothesis we have that:

Appendix 0.B Proof of Theorem 2

For all , the following properties hold:

  • Commutative:

  • Associative:

  • Idempotent:

Lemma 1

Lemma 2

Theorem

Given as defined in (8), where and are functions of , non-decreasing on . Then, :

Proof

The proof for the case is trivial.

In the following, we prove the theorem for , or equivalently, for . The proof is by induction on this subset. For brevity, we only show the argument corresponding to the position of in . However, the proof is still valid considering all of the arguments.

Base Case. We have to prove that . Using the definition of and we have that

General Case. Assuming , we need to prove that . By induction hypothesis and Lemma 2 we have that:

By Lemma 1 we have that: