
An Approach to Incremental and Modular Context-sensitive Analysis of Logic Programs

Context-sensitive global analysis of large code bases can be expensive, which can be especially problematic in interactive uses of analyzers. However, in practice each development iteration implies small modifications, often isolated within a few modules, and analysis cost can be reduced by reusing the results of previous analyses. This has been achieved to date, on the one hand, through modular analysis, which reduces memory consumption and often localizes the computation during reanalysis mainly to the modules affected by changes. In parallel, context-sensitive incremental fixpoints have been proposed that achieve cost reductions at finer levels of granularity, such as changes in program lines. However, these fine-grained techniques are not directly applicable to modular programs. This work describes, implements, and evaluates a context-sensitive fixpoint analysis algorithm for (Constraint) Logic Programs aimed at achieving both inter-modular (coarse-grain) and intra-modular (fine-grain) incrementality, solving the problems related to the propagation of fine-grain change information and effects across module boundaries, for additions and deletions in multiple modules. The implementation and evaluation of our algorithm show encouraging results: the expected advantages of fine-grain incremental analysis carry over to the modular analysis context. Furthermore, the fine-grained propagation of analysis information in our algorithm improves performance with respect to traditional modular analysis even when analyzing from scratch.



1. Introduction and motivation

Dynamic programming languages are a popular programming tool for many applications, due to their flexibility. They are often the first choice for web programming, prototyping, and scripting. However, this large degree of flexibility often comes at the cost of having to perform a number of additional tasks at run time when compared to static languages. These include performing dynamic specialization and other optimizations, e.g., for dynamic method selection, or performing run-time checks for detecting property violations in calls to built-ins and libraries, as well as in user code (tasks which are covered, at least in part, by full static typing or other forms of full static verification in static languages).

Static analysis can infer information that can help reduce the amount and cost of these dynamic tasks by specializing, eliminating dynamic decisions, or discharging assertion checks as much as possible at compile time, but at a cost: large, real-life programs typically have a complex structure combining a number of modules with other modules coming from system libraries, and context-sensitive global analysis of such large code bases can be expensive. This cost can be especially problematic in interactive uses of analyzers during agile program development and prototyping, one of the contexts where dynamic languages are particularly popular. A practical example is detecting and reporting bugs, such as assertion violations, back to the programmer as the program is being edited, by running the analysis in the background. This can be done at small intervals, each time a set of changes is made, when a file is saved, when a commit is made on the version control system, etc. In large programs, triggering a complete reanalysis for each such change set is often too costly. Another typical scenario is updating the analysis information after some source-to-source transformations and/or optimizations. This usually involves an analyze, transform, reanalyze cycle for which it is clearly inefficient to start the analysis from scratch.

A key observation is that in practice, each development or transformation iteration normally implies relatively small modifications which are isolated inside a small number of modules. This property can be taken advantage of to reduce the cost of re-analysis by reusing as much information as possible from previous analyses. Such cost reductions have been achieved to date at two levels, using relatively different techniques:

  • Modular analyses have been proposed which obtain a global fixpoint by computing local fixpoints on one module at a time. Such modular techniques are aimed at reducing the memory consumption (working set size) but can also localize analysis recomputation to the modules affected by changes, even in the context-sensitive setting, thus achieving a certain level of coarse-grained incrementality (Bueno et al., 2001; Cousot and Cousot, 2002; Puebla et al., 2004; Correas et al., 2006; Cousot et al., 2009; Fähndrich and Logozzo, 2011).

  • In parallel, context-sensitive (non-modular) incremental fixpoint analyses identify, invalidate, and recompute only those parts of the analysis results that are affected by these fine-grain program changes. They achieve incrementality at finer levels of granularity and for more localized changes (such as at the program line level) (Puebla and Hermenegildo, 1996; Kelly et al., 1997; Hermenegildo et al., 2000; Albert et al., 2012; Arzt and Bodden, 2014; Szabó et al., 2016).

The main problem that we address is that the context-sensitive, fine-grained incremental analysis techniques presented to date are not directly applicable to modular programs, since these algorithms are not aware of the module boundaries, while at the same time the flow of analysis information through the module interfaces is complex and requires iterations, since the analysis of a module depends on the interface and analysis of other modules in complex ways, through several paths to different versions of the exported procedures. In order to bridge this gap, we propose a generic framework that performs context-sensitive fixpoint analysis while achieving both inter-modular (coarse-grain) and intra-modular (fine-grain) incrementality. Our analysis algorithm is based on the generic (i.e., abstract domain-independent), context-sensitive PLAI algorithm (Muthukumar and Hermenegildo, 1990, 1992) that has been the basis of many analyses for both declarative and imperative programs, and also the subject of many extensions; some recent examples are (Courant and Urban, 2017; Frielinghaus et al., 2016; Albert et al., 2012). In particular, we build on the previously mentioned extensions that make it incremental (Hermenegildo et al., 2000; Puebla and Hermenegildo, 1996) or modular (Bueno et al., 2001; Puebla et al., 2004; Correas et al., 2006). Addressing the issues mentioned before, we solve the problems related to delaying the propagation of the fine-grain change information across module boundaries, devising the additional bookkeeping required. We also describe the actions needed to recompute the analysis fixpoint incrementally after multiple additions and deletions across different modules. The new setting modifies the cost-performance tradeoffs, and this requires experimentation to determine if and when the new algorithm offers clear advantages.
To this end we have implemented the proposed approach within the Ciao/CiaoPP system (Hermenegildo et al., 2012, 2005) and provide experimental results.

For generality, we formulate our algorithm to work on a block-level intermediate representation of the program, encoded using (constrained) Horn clauses (Méndez-Lojo et al., 2007; Gómez-Zamalloa et al., 2008), i.e., we assume that programs are converted to this representation, on a modular basis. While the conversion itself is beyond the scope of the paper (and dependent on the source language), the process is of course trivial in the case of (C)LP programs or (eager) functional programs: mostly eliminating all syntactic sugar – conditionals, loops, macros, negation, grammars/DCGs, etc.– as done normally by the compiler. For imperative programs we refer the reader to (Henriksen and Gallagher, 2006; Méndez-Lojo et al., 2007; Gómez-Zamalloa et al., 2008) and the references below, and recall some characteristics (and advantages) of this representation: all iterations are represented uniformly as (tail or last call) recursions, non-deterministic or unknown choices are coded through multiple definitions (multiple clauses for the same procedure), all conditional forms are represented uniformly through clause guards, multiple input and output arguments are represented directly as clause head arguments, and clause literals represent either calls to other clauses/blocks or constraints. Such constraints encode primitive operations such as assignment (for source representations), or bytecodes or machine instructions (in lower-level representations), and correspond to transfer functions in the abstract domain. 
Horn Clauses have been used successfully as intermediate representations for many different programming languages and compilation levels (e.g., bytecode, llvm-IR, or ISA), in a good number of analysis and verification tools (Albert et al., 2007; Banda and Gallagher, 2009; Navas et al., 2009; Grebenshchikov et al., 2012; Hojjat et al., 2012; Jaffar et al., 2012; Liqat et al., 2014; Bjørner et al., 2013; De Angelis et al., 2014; Gurfinkel et al., 2015; Bjørner et al., 2015; Liqat et al., 2016; Madsen et al., 2016; de Moura and Bjørner, 2008; Kafle et al., 2016) (see Section 5 for related work).

2. Preliminaries and notation

Abstract Interpretation. The formalism that our analyses are based on is Abstract Interpretation (Cousot and Cousot, 1977), a technique for static program analysis in which execution of the program is simulated on a description (or abstract) domain (Dα) which is simpler than the actual (or concrete) domain (D). Values in the description domain and sets of values in the actual domain are related via a pair of monotonic mappings ⟨α, γ⟩: abstraction α : 2^D → Dα and concretization γ : Dα → 2^D, which form a Galois connection. A description (or abstract value) d ∈ Dα approximates a set of actual (or concrete) values c ⊆ D if α(c) ⊑ d, where ⊑ is the partial ordering on Dα. The correctness of abstract interpretation guarantees that the descriptions computed (by calculating a fixpoint through a Kleene sequence) approximate all of the actual values or traces which occur during any possible execution of the program, and that this fixpoint calculation process will terminate given some conditions on the description domains (such as being finite, or of finite height, or without infinite ascending chains) or by the use of a widening operator (Cousot and Cousot, 1977).
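As a minimal, self-contained illustration of these notions (a toy sign domain of our own, unrelated to the domains used later in the paper), the following sketch shows an abstraction function, the partial order, and the α(c) ⊑ d approximation check:

```python
# Toy sign domain: 'bot' <= {'neg', 'zero', 'pos'} <= 'top'.
# This is an illustrative sketch of a Galois-connection-style abstraction,
# NOT the sharing/freeness or def domains used in the paper's experiments.
ORDER = {('bot', s) for s in ('bot', 'neg', 'zero', 'pos', 'top')}
ORDER |= {(s, 'top') for s in ('neg', 'zero', 'pos', 'top')}
ORDER |= {(s, s) for s in ('neg', 'zero', 'pos')}

def leq(d1, d2):
    """Partial order on the abstract domain."""
    return (d1, d2) in ORDER

def alpha_one(n):
    """Abstraction of a single concrete integer."""
    return 'neg' if n < 0 else ('zero' if n == 0 else 'pos')

def lub(d1, d2):
    """Abstract disjunction (least upper bound)."""
    if leq(d1, d2):
        return d2
    if leq(d2, d1):
        return d1
    return 'top'

def alpha(values):
    """Abstraction of a set of concrete values: lub of the abstractions."""
    d = 'bot'
    for n in values:
        d = lub(d, alpha_one(n))
    return d
```

For instance, alpha({1, 2, 3}) yields 'pos', while a mixed set such as {-1, 1} is approximated by 'top', the least description above both 'neg' and 'pos'.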

Intermediate Representation. A Constrained Horn Clause (CHC) program is a set of rules of the form A :- L1, ..., Ln, where L1, ..., Ln are literals and A is an atom said to be the head of the rule. A literal is an atom or a primitive constraint. We assume that each atom is normalized, i.e., it is of the form p(x1, ..., xn) where p is an n-ary predicate symbol and x1, ..., xn are distinct variables. A set of rules with the same head is called a predicate (procedure). A primitive constraint is defined by the underlying abstract domain(s) and is of the form c(e1, ..., en) where c is an n-ary predicate symbol and the e1, ..., en are expressions. Also for simplicity, and without loss of generality, we assume that each rule defining a predicate p has an identical sequence of variables in the head atom. We call this the base form of p. Rules in the program are written with a unique subscript attached to the head atom (the rule number), and a dual subscript (rule number, body position) attached to each body literal, e.g., Ak :- Bk,1, ..., Bk,nk, where each Bk,i is a subscripted atom or constraint. The rule may also be referred to as rule k, the subscript of the head atom.

Example 2.1.

Factorial program in imperative C-style implementation (left) and its translation to CHC (right):

    int fact(int N) {
        int R = 1;
        while (N > 0) {
            R *= N;
            N--;
        }
        return R;
    }

    fact(N, F) :-
        while(N, 1, F).

    while(N, R, R) :- N =< 0.
    while(N, A, R) :- N > 0,
        A1 is A * N,
        N1 is N - 1,
        while(N1, A1, R).

Modular partitions of programs. A partition of a program is said to be modular when its source code is distributed in several source units (modules), each defining its interface with other modules of the program. The interface of a module contains the names of the exported predicates and the names of the imported modules. Modular partitions of programs may be synthesized, or specified by the programmer, for example, via a strict module system, i.e., a system in which modules can only communicate via their interface. We will use m and m′ to denote modules. Given a module m, we will use:

  • exports(m) to express the set of predicate names exported by module m.

  • imports(m) to denote the set of modules which m imports.

  • depends(m) to refer to the set generated by the transitive closure of imports(m).

We also define mod(A) to denote the module in which the predicate corresponding to atom A is defined.
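The transitive closure underlying the depends relation can be sketched as follows (a minimal illustration; the function and dictionary names are ours, not from the paper):

```python
def depends(imports, m):
    """Transitive closure of the imports relation, starting at module m.

    `imports` maps each module name to the list of modules it imports
    directly; the result is every module m depends on, directly or not.
    """
    seen, stack = set(), [m]
    while stack:
        for dep in imports.get(stack.pop(), ()):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen
```

For example, with imports = {"main": ["lists", "parity"], "parity": ["bits"], "lists": [], "bits": []}, depends(imports, "main") yields {"lists", "parity", "bits"}.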

3. The Algorithm for Modular and Incremental Context-sensitive Analysis

We assume a setting in which we analyze successive “snapshots” of modular programs, i.e., at each analysis iteration, before the analyzer is called, a snapshot of the sources is taken and used to perform the next analysis. We also assume that we can have information on the changes in this snapshot with respect to the previous one (e.g., by comparing the two sources), in the form of a set of added or deleted clauses.
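The change sets assumed above can be obtained, for instance, by a simple set difference over the clauses of the two snapshots. A sketch (clauses are represented here as opaque strings purely for illustration; the actual system works on clause terms):

```python
def clause_diff(old_clauses, new_clauses):
    """Compute (added, deleted) clause sets between two module snapshots.

    Clauses are compared as whole terms; any richer representation with a
    meaningful equality would work the same way.
    """
    old_s, new_s = set(old_clauses), set(new_clauses)
    return new_s - old_s, old_s - new_s
```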

The algorithm is aimed at analyzing the modules of a modular program separately, using context-sensitive fixpoint analysis, while achieving both inter-modular (coarse-grain) and intra-modular (fine-grain) incrementality. Each time an analysis is started, the modules will be analyzed independently (possibly several times) until a global fixpoint is reached. Although in practice we basically use the module partition defined by the programmer, our algorithm works with any partition of the sources.

Program analysis graph.

We use analysis graphs, similarly to the PLAI algorithm (Muthukumar and Hermenegildo, 1990), to represent the analysis results. They represent the (possibly infinite) set of (possibly infinite) and-or trees explored by a top-down (SLDT) exploration of the CHC program on the concrete domain for all possible input values –a representation of all the possible executions and states of the original program. Given an analysis graph it is straightforward to reconstruct any program point annotation.

An analysis graph is represented in the algorithm via a pair of data structures ⟨AT, DT⟩. The answer table (AT) contains entries of the form A : CP ↦ AP, where A is always a base form, representing a node in the analysis graph. It represents that the answer pattern for calls to A with calling pattern CP is AP. Note that for a given base form A, the AT can contain a number of different entries for different call patterns. As usual, ⊥ denotes the abstract substitution such that γ(⊥) = ∅. A tuple A : CP ↦ ⊥ indicates that all calls to predicate A with a substitution in γ(CP) either fail or loop, i.e., they do not produce any success substitutions. The dependency table (DT) contains the arcs of the program analysis graph that go from atoms in a rule body to the corresponding analysis node (AT entry). An arc relates a rule head and calling pattern with a body literal and the calling pattern it produces: it represents that calling rule k with a given calling pattern causes literal Bk,i to be called with another calling pattern. The remaining part of the arc is the abstract state at the program point just before Bk,i and contains information about all variables in rule k. This field is not really necessary, but is included for efficiency. (Its value for a literal is the combination of the descriptions of the previous literals in the body of rule k. In fact, the DT itself is not strictly necessary; it is used to speed up convergence.)
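A minimal sketch of these two tables (the class and field names are ours; the real implementation stores richer information per entry):

```python
class AnalysisGraph:
    """Answer table (AT) and dependency table (DT) of an analysis graph."""

    def __init__(self):
        # AT: (predicate, call_pattern) -> answer_pattern
        self.at = {}
        # DT: arcs (rule_id, call_pattern, literal_index)
        #       -> (state_before_literal, callee, callee_call_pattern)
        self.dt = {}

    def set_answer(self, pred, cp, ap):
        self.at[(pred, cp)] = ap

    def get_answer(self, pred, cp):
        # None plays the role of "no entry yet" (distinct from bottom)
        return self.at.get((pred, cp))

    def add_arc(self, rule_id, cp, lit_idx, state, callee, callee_cp):
        self.dt[(rule_id, cp, lit_idx)] = (state, callee, callee_cp)

# usage sketch
g = AnalysisGraph()
g.set_answer("par/2", "list_of_bits", "bit")
g.add_arc("par/2-clause2", "list_of_bits", 1, "state", "xor/3", "bits")
```

Note that the same predicate may appear under several call patterns, each with its own AT entry, which is what gives the representation its context sensitivity.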

Example 3.1.

Analyzing the following program that calculates the parity of a message with an abstract domain that captures whether variables take values of 0 or 1:

:- module(parity, [par/2]).   |
par([], 0).                   | xor(1,1,0).
par([M|Ms], X) :-             | xor(1,0,1).
    xor(M,X0,X),              | xor(0,1,1).
    par(Ms, X0).              | xor(0,0,0).

This will produce an analysis graph of this shape (figure omitted): the arrows in the graph represent the dependencies (DT arcs) and the nodes the AT entries.

We use several instances of the analysis graph. The global analysis graph, G = ⟨GAT, GDT⟩, represents intermodular (global) information. The global answer table (GAT) is used to store the results at the boundaries of the modules (exported predicates), and the global dependency table (GDT) is used to store the relations between modules with predicate and call pattern precision. GDT entries relate an exported predicate of a module to the imported predicates it may reach: such an entry means that in module m a call to exported predicate A with a given description may produce a call to imported predicate B with some other description. Local analysis graphs, L = ⟨LAT, LDT⟩, represent intra-modular information. The local answer table (LAT) keeps the results of a module during its analysis, i.e., of the predicates defined in that module and the (possibly temporary) results of its imported predicates, and the local dependency table (LDT) contains the arcs between rules defined in the module being analyzed. We use Lm to denote the local analysis graph of partition m.

We define getans(A : CP) = AP if there exists a renaming σ such that σ(A : CP ↦ AP) is an entry in the table. This partial function will obtain, if it exists, a renamed answer pattern for A : CP from some answer table (any LAT or the GAT).

Domain operations.

The algorithm is parametric on the abstract domain, which is defined by providing four abstract operations, required to be monotonic and to approximate the corresponding concrete operations:

  • Aproject(CP, L), which performs the abstract restriction of a calling pattern CP to the variables in the literal L;

  • Aadd(C, CP), which performs the abstract operation of conjoining the primitive constraint C with the description CP;

  • Acombine(CP1, CP2), which performs the abstract conjunction of two descriptions;

  • Alub(CP1, CP2), which performs the abstract disjunction of two descriptions.
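For illustration only, here is how these four operations might look for a toy domain of our own that tracks, per variable, whether it is known to be 0, known to be 1, or unknown (this is not one of the domains used in the paper's experiments, and the operation names merely mirror the ones above):

```python
# Descriptions map variable names to 'zero', 'one', or 'top' (unknown).
# Missing variables are implicitly 'top'; 'bottom' marks inconsistency.
TOP = 'top'

def a_project(cp, lit_vars):
    """Abstract restriction of description cp to the variables of a literal."""
    return {v: cp.get(v, TOP) for v in lit_vars}

def a_add(constraint, cp):
    """Conjoin a primitive constraint (var, value) with description cp."""
    var, value = constraint
    out = dict(cp)
    out[var] = value if cp.get(var, TOP) in (TOP, value) else 'bottom'
    return out

def a_combine(cp1, cp2):
    """Abstract conjunction: keep the more precise binding per variable."""
    out = dict(cp1)
    for v, d in cp2.items():
        if out.get(v, TOP) == TOP:
            out[v] = d
        elif d != TOP and d != out[v]:
            out[v] = 'bottom'   # incompatible facts
    return out

def a_lub(cp1, cp2):
    """Abstract disjunction: keep only facts that hold in both branches."""
    return {v: d for v, d in cp1.items() if cp2.get(v) == d}
```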


The algorithm is centered around processing tasks triggered by events. They are separated in two priority queues, one for “coarse-grain” global tasks and the other for “fine-grain” local tasks. Global events will trigger actions that involve the global analysis graph, and similarly, local events will be used to update the local analysis graph. There is one kind of global event, updmod(m), which indicates that module m has to be reanalyzed. There are three kinds of local events:

  • newcall(A : CP) indicates that a new call for atom A with calling pattern CP has been encountered.

  • arc(d) means that recomputation needs to be performed starting at the program point (literal) indicated by dependency d.

  • updated(A : CP) indicates that the answer pattern to A : CP has changed.

To add events to any of the queues we use add_event, which inserts the event in the corresponding queue (global or local) depending on its type.
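The two-queue scheme can be sketched as follows (a minimal sketch with illustrative event-kind names; the actual system applies configurable scheduling policies rather than plain FIFO queues):

```python
from collections import deque

global_queue = deque()   # coarse-grain: module-reanalysis events
local_queue = deque()    # fine-grain: new-call / arc / answer-update events

def add_event(event):
    """Route an event to the global or local queue based on its kind."""
    kind = event[0]
    (global_queue if kind == 'updmod' else local_queue).append(event)

def next_event(queue):
    """Pop the next pending event, or None when the queue is empty."""
    return queue.popleft() if queue else None

# usage sketch
add_event(('updmod', 'parity'))
add_event(('newcall', 'par/2', 'list_of_bits'))
```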

3.1. Operation of the algorithm

The pseudocode of the algorithm is detailed in Fig. 1. (A worked example illustrating the algorithm(s) is provided in App. B.) The algorithm takes as input a (partitioned) program, a set of program edits in the form of additions and deletions (typically computed w.r.t. the previous snapshot for each module), a set of initial entries A : CP, and, implicitly, the result of the analysis of the program before the changes. We assume that the first time a program is analyzed all data structures are empty. Each module with changes is scheduled to be reanalyzed. If there are recursive dependencies between modules, the modules in each clique will be grouped and analyzed as a whole module (after doing the corresponding renamings). Finally, the analysis loop for the global queue begins. (Event ordering in the global queue can affect fixpoint convergence speed. Some scheduling policies were studied in (Correas et al., 2006); in our experiments we use the “top-down” strategy defined there.)

Processing global events. This task is performed by process(updmod(m)) and consists in analyzing module m for some previously annotated entries and/or updating a previous analysis with new information on the imported rules. load_local_graph ensures that the set of clauses of the module and its local analysis graph (initially empty) are loaded and available. Then, the local graph is updated (update_local_graph) with possibly new results for the imported predicates. Since the algorithm works with one local analysis graph at a time, we refer to it simply as the current one.

If there were pending source changes, they are processed. Then, for all the entries of the module that were not yet in the local answer table a newcall event is added and the local analysis loop is performed, i.e., the module is analyzed.

Next, the global analysis graph is updated. In the global dependency table, for all the newly analyzed entries we add (or update) tuples recording, for each entry A : CP, the imported predicates reachable from it and the call patterns with which they are reached. Note that this can be determined using the local dependency table. To update the global answer table, we add or replace tuples whose answer pattern has changed, and add events to (re)analyze the affected modules. Finally, the set of pending entries is emptied and the current local analysis graph is committed by store_local_graph.

Figure 1. The generic context-sensitive, modular, incremental fixpoint algorithm (pseudocode defining analysis_loop and the procedures process, add_clauses, delete_clauses, remove_invalid_info, insert_answer_info, lookup_answer, and update_local_graph).

Processing local events. The process :  procedure initiates the processing of the rules in the definition of atom . If is defined in the module an event is added for the first literal of each of the rules. The initial_guess function returns a guess of to  : . If possible, it reuses the results in the ATs, otherwise returns . For imported rules, if the exact  :  was not present in the , an event is created (adding the corresponding entry).

The procedure that processes updated events propagates the information of newly computed answers across the analysis graph by creating events for the program points from which the analysis has to be restarted.

The procedure that processes arc events performs the core of the module analysis. It performs a single step of the left-to-right traversal of a rule body. If the literal is a primitive constraint, it is conjoined with the abstract description; otherwise, if it is an atom, an arc is added to the local dependency table and its answer is looked up (a process that includes creating a newcall event if the answer is not already in the answer table). The obtained answer is combined with the description from the program point immediately before the literal to obtain the description for the program point after it. This is either used to generate an event to process the next literal (if there is one), or otherwise to update the answer of the rule in insert_answer_info. This procedure combines the new answer with the previous one, and creates events to propagate the new answer if needed.

Updating the local analysis graph. The assumptions made for imported predicates (in initial_guess) may change during analysis. Before reusing a previous local analysis graph, it is necessary to detect which assumptions changed and how these changes affect the analysis graph, and either propagate the changes or delete the information which may no longer be precise (remove_invalid_info).

Adding clauses to a module. If clauses are added to a module, the answer patterns for those rules have to be computed and this information used to update the analysis graph. Then these changes need to be propagated. The computation and propagation of the added rules is done simply by adding arc events before starting the processing of the local queue. The propagation then happens by adding and later processing the corresponding answer-update events.

Removing clauses from a module. Unlike with incremental addition, we do not strictly need to change the analysis results at all. Since the inferred information is an over-approximation, it is trivially guaranteed to remain correct if a clause is deleted. However, this approach would obviously be very inaccurate. We have adapted one of the strategies proposed in (Hermenegildo et al., 2000) for getting the most precise analysis in the modular setting. The delete_clauses function selects which information can be kept in order to obtain the most precise semantics of the module, by removing all information in the local analysis graph which is potentially inaccurate, i.e., the information related to the calls that depend on the deleted rules (remove_invalid_info). Function depends_call gathers transitively all the callers of the obsolete entries, as well as the entries generated from literals that appear in a clause body after a call to any of them, because they were affected by the corresponding answers.
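The transitive collection of potentially affected entries can be sketched as a reverse-reachability traversal over a caller graph (our own minimal rendering of the idea; the real depends_call also tracks the literals that follow affected calls in clause bodies):

```python
def invalidated(callers, obsolete):
    """Entries whose analysis may be stale after deleting clauses.

    `callers` maps an entry to the set of entries that call it; the result
    is every entry from which an obsolete entry is reachable, plus the
    obsolete entries themselves.
    """
    stale, stack = set(obsolete), list(obsolete)
    while stack:
        for caller in callers.get(stack.pop(), ()):
            if caller not in stale:
                stale.add(caller)
                stack.append(caller)
    return stale
```

Everything in the returned set would be removed from the local answer and dependency tables and recomputed on demand.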

When a module is edited and the program is reanalyzed, the source update procedures will be performed only the first time that the module is analyzed in the intermodular fixpoint, because the sources will not change between iterations.

SCC-guided deletion strategy. The proposed deletion strategy is quite pessimistic. Deleting a single rule most of the time means reusing only a few dependency arcs and answers. However, it may occur that the analysis does not change after removing a clause, or some answers/arcs may still be correct and precise. We would like to partially reanalyze the program without removing these potentially useful results, for example, using information about the strongly connected components. Our proposed algorithm allows performing such a partial reanalysis, by running it (within the algorithm) on the desired module split into smaller partitions. Concretely, this can be achieved by replacing the call to remove_invalid_info (line 40) in delete_clauses by a run of the algorithm, taking as input the partition of the current module into “submodules” corresponding to its SCCs, setting the entries of this inner modular analysis accordingly, and initializing its global tables with the local tables of the current module.
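The SCC partition that guides this strategy can be computed with any standard algorithm over the predicate call graph; a compact sketch using Tarjan's algorithm (our own minimal rendering, recursive for brevity):

```python
def sccs(graph):
    """Strongly connected components of a call graph (Tarjan's algorithm).

    `graph` maps each node to the list of nodes it calls. Returns a list
    of components; each component can then be treated as a 'submodule'.
    """
    index, low, on_stack, stack, out = {}, {}, set(), [], []

    def visit(v):
        index[v] = low[v] = len(index)
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of a component
            comp = set()
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.add(w)
                if w == v:
                    break
            out.append(comp)

    for v in list(graph):
        if v not in index:
            visit(v)
    return out
```

Mutually recursive predicates end up in the same component, which matches the requirement that recursive dependencies be analyzed jointly.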

3.2. Fundamental results for the algorithm

We now state the fundamental results for our algorithm. The proofs of the following theorems and propositions are provided in App. A. We assume that the abstract domain is finite, that initial_guess returns a value below the least fixed point (lfp), that modules which have recursive dependencies in exported and imported predicates are analyzed jointly, and that we analyze snapshots of programs, i.e., sources cannot be modified during the analysis.

Analysis functions. We represent executing the modular incremental analysis algorithm with a function where P is the (partitioned) program, G is the analysis result of the algorithm for P, and ΔP is a pair of (additions, deletions) to P for which we want to incrementally update G to get G′, the analysis graph of the modified program. Note that while G is not an explicit input parameter in the pseudocode, we use it and G′ to model the update.

Similarly, we represent the analysis of a module within the algorithm (lines 10 to 19 in the pseudocode) with the function LocIncAnalyze, where m is a module, L is the analysis result of the algorithm for m, Δm is a pair of (additions, deletions) with which we want to incrementally update L to get L′, the analysis graph of the modified module, and G contains the (possibly temporary) information for the predicates imported by m.

Proposition 3.2 (Analyzing a module from scratch).

If module m is analyzed for entries E within the incremental modular analysis algorithm from scratch (i.e., with no previous information available), the result will represent the least module analysis graph of m for E, given the information assumed for its imported predicates.

Proposition 3.3 (Adding clauses to a module).

If module m is analyzed for entries E, obtaining analysis result L with the local incremental analysis algorithm, and L is then incrementally updated by adding a set of clauses, the result will be the same as when analyzing the extended module from scratch. Note that the analysis of a module for E from scratch can be seen also as incrementally adding all the clauses of the module to the analysis of an empty program.

Proposition 3.4 (Removing clauses from a module).

If module m is analyzed for entries E, obtaining analysis result L with the local incremental analysis algorithm, and L is then incrementally updated by removing a set of clauses, the analysis result will be the same as when analyzing the reduced module from scratch.

Adding and deleting clauses at the same time.

The results above also hold when combining additions and deletions of clauses, since the actions performed when adding and deleting clauses are compatible: when adding clauses the local analysis graph is reused as is, while deleting clauses erases potentially inaccurate information, which will only imply, in the worst case, some unnecessary recomputation.

Proposition 3.5 (Updating the assumptions on imported predicates).

If module m is analyzed for entries E assuming some information G for its imported predicates, obtaining analysis result L, and the assumptions then change to G′, incrementally updating these assumptions in L will produce the same result as analyzing m with assumptions G′ from scratch.

Computing the intermodular lfp.

So far we have seen that LocIncAnalyze calculates the lfp of the analysis of each module. This guarantees:

Proposition 3.6 (Analyzing modular programs from scratch).

If program is analyzed for entries by the incremental modular analysis algorithm from scratch (with no previous information available):

The result will represent the least modular program analysis graph of P restricted to its exported predicates.

Theorem 3.7 (Modular incremental analysis).

Given modular programs P and P′ such that P′ is the result of applying a set of edits ΔP to P, and entries E: if P is changed to P′ and reanalyzed incrementally, the algorithm will return a global analysis graph that encodes the same result as analyzing P′ from scratch.

Finally, note that these results also hold for the SCC-guided deletion strategy (this follows from Theorem 3.7).

4. Experiments

Figure 2. Analysis time from scratch for the different settings (ms), for the benchmarks aiakl, ann, bid, boyer, hanoi, peephole, progeom, qsort, rdtok, warplan, read, witt, and cleandirs. The order inside each set of bars is: |mon|mon_inc|mod|mod_inc|.

We have implemented the proposed approach within the Ciao/CiaoPP system (Hermenegildo et al., 2012, 2005). We have selected some well-known benchmarks that have been used in previous studies of incremental analysis and show different characteristics (some relevant data on the benchmarks is included in App. C). E.g., ann (a parallelizer) and boyer (a theorem prover kernel) are programs with a relatively large number of clauses located in a small number of modules. In contrast, e.g., bid is a more modularized program.

The tests performed consist of analyzing the benchmarks with the different approaches. For all our experiments we have used a top-down scheduling policy, as defined in (Correas et al., 2006). The exported predicates of the main module of each benchmark were used as the starting point, with the initial call patterns when specified. We use the well-known sharing and freeness abstract domain (Muthukumar and Hermenegildo, 1991) (pointer sharing and uninitialized pointers). App. D also provides results for the def domain (dependency tracking via propositional clauses (Dumortier et al., 1993)).

As the baseline for our comparisons we use the non-modular incremental and modular algorithms of (Hermenegildo et al., 2000) and (Puebla et al., 2004). We would like to be able to compare directly with the non-modular framework, but it cannot be applied directly to the modular benchmarks. Instead, we use the monolithic approach of (Puebla et al., 2004), which is equivalent: given a modular program P, it builds a single equivalent module by renaming apart identifiers in the different modules of P to avoid name clashes. In summary, we perform the experiments for four approaches: the monolithic approach, the incremental approach described in (Hermenegildo et al., 2000) (after transforming the benchmarks to monolithic form), the modular approach described in (Puebla et al., 2004), and the proposed modular incremental approach.

To perform the tests we have developed a front end that computes the differences between two states of the source files at the level of transformed clauses, using Myers' algorithm (Myers, 1986). Note however that this step is independent of the algorithm, and the differences could also be obtained, for instance, from the IDE. We also factor out the time introduced by the current implementation of load_local_graph and store_local_graph, since this really depends on factors external to the algorithm, such as the representation, disk speeds, etc. In any case, the measured cost of loading and storing in our setup is negligible (a few milliseconds) and the overall module load cost is low (40-110 ms per module). The experiments were run on a MacBook Pro with an Intel Core i5 2.7 GHz processor, 8GB of RAM, and an SSD disk.
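The clause-level diff front end can be sketched as follows. This is a minimal illustration in Python: the paper's implementation uses Myers' algorithm on transformed clauses, while here the standard difflib module (which implements a different but comparable diff algorithm) stands in; clause_diff and the example clause strings are hypothetical.

```python
import difflib

def clause_diff(old_clauses, new_clauses):
    """Compute the sets of added and deleted clauses between two
    versions of a module, comparing clauses as opaque strings."""
    sm = difflib.SequenceMatcher(a=old_clauses, b=new_clauses, autojunk=False)
    added, deleted = [], []
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op in ("delete", "replace"):
            deleted.extend(old_clauses[i1:i2])
        if op in ("insert", "replace"):
            added.extend(new_clauses[j1:j2])
    return added, deleted

old = ["app([],Ys,Ys).", "app([X|Xs],Ys,[X|Zs]) :- app(Xs,Ys,Zs)."]
new = ["app([],Ys,Ys).", "app([X|Xs],Ys,[X|Zs]) :- app(Xs,Ys,Zs).", "rev([],[])."]
add, dele = clause_diff(old, new)   # add == ["rev([],[])."], dele == []
```

The two resulting sets would then be passed to the add_clauses and delete_clauses procedures of the incremental algorithm.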


We first study the analysis from scratch of all the benchmarks for all approaches, in order to observe the overhead introduced by the bookkeeping in the algorithm. The results are shown in Fig. 2. For each benchmark four columns are shown, corresponding to the four analysis algorithms mentioned earlier: monolithic (mon), monolithic incremental (mon_inc), modular (mod), and modular incremental (mod_inc). The bars are split to show how much time each operation takes: analyze is the time spent processing local events, incAct is the time spent updating the local analysis results (procedure update_local_graph in the algorithm), preProc is the time spent processing clause relations (e.g., calculating the SCCs), and updG is the time spent updating the global analysis information. In Fig. 2 warplan, witt, and cleandirs use the scale on the right-hand side of the graph. In the monolithic setting, the overhead introduced is negligible. Interestingly, incremental modular performs better overall than simply modular even in analysis from scratch. This is due to the reuse of local information, especially in complex benchmarks such as ann, peephole, warplan, or witt. In the best cases (e.g., witt) performance competes with the monolithic setting thanks to the incremental updates, with analysis time dropping substantially from modular non-incremental to modular incremental.

Clause addition experiment. For each benchmark and approach, we measured the cost of analyzing the program adding one rule at a time. That is, the analysis was first run for the first rule only. Then the next rule was added and the resulting program (re)analyzed. This process was repeated until all the rules in all the modules were added.
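As an illustration, the driver loop for this experiment might look as follows. This is a sketch: addition_experiment and the analyze callback are hypothetical stand-ins for the actual CiaoPP incremental analysis entry point.

```python
import time

def addition_experiment(clauses, analyze):
    """Add one clause at a time and (re)analyze after each addition,
    recording the per-step reanalysis time."""
    program, times = [], []
    for cl in clauses:
        program.append(cl)
        t0 = time.perf_counter()
        analyze(list(program))   # incremental reanalysis of the grown program
        times.append(time.perf_counter() - t0)
    return times

# A stub analyzer standing in for the real incremental fixpoint computation.
analyzed = []
times = addition_experiment(["c1.", "c2.", "c3."], analyzed.append)
```

In the real experiment, analyze would invoke the incremental fixpoint with the diff (here, the single added clause) rather than reprocessing the whole program.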

Clause deletion experiment. We timed the case where the program rules are deleted one by one. Starting from an already analyzed program, the last rule was deleted and the resulting program (re)analyzed. This process was repeated until no rules were left. The experiment was performed for all the approaches using the initial top-down deletion strategy (_td) and the SCC-guided deletion strategy of Section 3.1 (_scc).
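The graph-theoretic idea behind the SCC-guided deletion strategy can be sketched as follows: when a clause of a predicate is deleted, only the strongly connected component of that predicate in the call graph, plus the predicates that (transitively) call into it, may need their analysis information reconsidered. This is a simplified Python sketch (sccs, deletion_scope, and the toy call graph are illustrative; the actual strategy of Section 3.1 operates on the analysis graph):

```python
def sccs(graph):
    """Tarjan's algorithm: return the strongly connected components of
    a call graph given as an adjacency dict."""
    index, low, stack, on, out, c = {}, {}, [], set(), [], [0]
    def dfs(v):
        index[v] = low[v] = c[0]; c[0] += 1
        stack.append(v); on.add(v)
        for w in graph.get(v, ()):
            if w not in index:
                dfs(w); low[v] = min(low[v], low[w])
            elif w in on:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of an SCC
            comp = set()
            while True:
                w = stack.pop(); on.discard(w); comp.add(w)
                if w == v: break
            out.append(comp)
    for v in graph:
        if v not in index: dfs(v)
    return out

def deletion_scope(graph, changed):
    """Predicates whose analysis may be affected by deleting a clause of
    `changed`: its SCC plus every predicate that calls into it."""
    affected = set(next(c for c in sccs(graph) if changed in c))
    grew = True
    while grew:                         # close under "calls into affected"
        grew = False
        for v, ws in graph.items():
            if v not in affected and any(w in affected for w in ws):
                affected.add(v); grew = True
    return affected

g = {"main": ["p", "q"], "p": ["q"], "q": ["p", "r"], "r": []}
# Deleting a clause of q affects the SCC {p,q} and its caller main; r is untouched.
scope = deletion_scope(g, "q")          # {"p", "q", "main"}
```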

Figure 3. Analysis time of addition and deletion experiments (warplan).

Fig. 3 shows these two experiments for warplan. Each point represents the time taken to reanalyze the program after adding/deleting one clause. We observe that the proposed incremental algorithm outperforms the non-incremental settings when the time needed to reanalyze is large. The analysis time of warplan grows as more clauses are added to the sources, but much more slowly in the incremental settings. The results for the other benchmarks can be found in App. E. These results are encouraging both in terms of response times and scalability.

Figure 4. Addition experiments for the benchmarks aiakl, ann, bid, boyer, cleandirs, hanoi, peephole, progeom, read, qsort, rdtok, warplan, and witt. The order inside each set of bars is: |mon|mon_inc|mod|mod_inc|.

Fig. 4 shows the accumulated analysis time of the whole addition experiment. The bar sections represent the same analysis actions as in Fig. 2, plus procDiff, the time spent applying the changes to the analysis (procedures add_clauses and delete_clauses). Times are normalized with respect to the monolithic non-incremental algorithm: e.g., if analyzing ann in the monolithic non-incremental setting is taken as 1, the modular incremental setting takes approx. 0.3, i.e., it is approx. 3.33 times faster.

The incremental settings (mon_inc, mod_inc) are always faster than the corresponding non-incremental settings (mon, mod), except in aiakl, where mod_inc is essentially the same as mod. Furthermore, while the traditional modular analysis is sometimes slower than the monolithic one (for the small benchmarks: hanoi, qsort, and witt), our modular incremental algorithm always outperforms both, obtaining speed-ups over monolithic in the best cases (boyer and cleandirs). Moreover, in the larger benchmarks modular incremental outperforms even the monolithic incremental algorithm.

Figure 5. Deletion experiments for the benchmarks aiakl, ann, bid, boyer, cleandirs, hanoi, peephole, progeom, read, qsort, rdtok, warplan, and witt. The order inside each set of bars is: |mon|mon_td|mon_scc|mod|mod_td|mod_scc|.

Fig. 5 shows the results of the deletion experiment. The analysis performance of the incremental approaches is in general better than that of the non-incremental approaches, except for small programs. Again, our proposed algorithm shows very good performance: in the best cases (ann, peephole, and read) we obtain a speed-up of , competing with monolithic incremental scc and in general outperforming monolithic incremental td. The SCC-guided deletion strategy seems to be more efficient than the top-down deletion strategy. This confirms that the top-down deletion strategy tends to be quite pessimistic when deleting information, and that modular partitions limit the scope of deletion.

5. Related work

Modular analysis (Cousot and Cousot, 2002) is based on splitting large programs into smaller parts (e.g., based on the source code structure). Exploiting modularity has proved essential in industrial-scale analyzers (Cousot et al., 2009; Fähndrich and Logozzo, 2011). Despite the fact that separate analysis provides only coarse-grained incrementality, there have been surprisingly few results studying its combination with fine-grained incremental analysis.

Classical data-flow analysis: Since the first algorithm for incremental analysis was proposed in (Rosen, 1981), there has been considerable research on this topic (see the bibliography of (Ramalingam and Reps, 1993)). Depending on how the data flow equations are solved, these algorithms can be separated into those based on variable elimination, which include (Burke, 1990), (Carroll and Ryder, 1988), and (Ryder, 1988); and those based on iteration methods, which include (Cooper and Kennedy, 1984) and (Pollock and Soffa, 1989). A hybrid approach is described in (Marlowe and Ryder, 1990). Our algorithms are most closely related to those using iteration. Early incremental approaches such as (Cooper and Kennedy, 1984) were based on restarting iteration. That is, the fixpoint of the new program's data flow equations is found by starting iteration from the fixpoint of the old program's data flow equations. This is always safe, but may lead to unnecessary imprecision if the old fixpoint is not below the lfp of the new equations (Ryder et al., 1988). Reinitialization approaches such as (Pollock and Soffa, 1989) improve the accuracy of this technique by reinitializing nodes in the data flow graph to bottom if they are potentially affected by the program change. Thus, they are as precise as if the new equations had been analyzed from scratch. These algorithms are generally not based on abstract interpretation. Reviser (Arzt and Bodden, 2014) extends the more generic IFDS (Reps et al., 1995) framework to support incremental program changes. However, IFDS is limited to distributive flow functions (related to condensing domains), while our approach does not impose any restriction on the domains.

Constraint Logic Programs: Apart from the work that we extend (Hermenegildo et al., 2000; Puebla and Hermenegildo, 1996), incremental analysis was proposed (just for incremental addition) in the Vienna abstract machine model (Krall and Berger, 1995a, b). It was also studied in the compositional analysis of modules in (constraint) logic programs (Codish et al., 1993; Bossi et al., 1994), but these works did not consider incremental analysis at the level of rules.

Horn clause-based representations: The frameworks defined in this line are based on abstract interpretation of Constraint Logic Programs (CLP), which by definition are sets of Horn clauses built around some constraint theories. The CLP representation has been successfully used in many analysis frameworks, both as a kernel target language for compiling logic-based languages (Hermenegildo et al., 2012) and as an intermediate representation for imperative programs. As mentioned in the introduction, our approach within this framework is based on mapping the program to be analyzed (either in source or binary form) to predicates, preserving as much as possible the original program structure. This internal representation may be a straightforward desugared version or include more complex transformations. It may include effectful computations, which may require special treatment from the domains or additional program transformations, such as static single assignment (SSA). Other frameworks implementing a similar top-down abstract interpretation approach support incremental updates (Albert et al., 2012), based on the same principles (Hermenegildo et al., 2000), but not modular analysis. Other Horn clause-based approaches restrict themselves to a pure subset, more recently denoted as Constrained Horn Clauses (CHC). Verifiers using CHCs (Gallagher and Kafle, 2014; Bjørner et al., 2015; Gurfinkel et al., 2015; De Angelis et al., 2014; Jaffar et al., 2012; Hojjat et al., 2012) are based on the generation of specific encodings for some properties of interest. These encodings may not be easy to map back to the original program. To the best of our knowledge, CHC-based solvers focus on verification problems and none of them deal with modularity and incremental updates.

Datalog and tabled logic programming: In a line related to the previous one, other approaches are based on Datalog and tabled logic programming. FLIX (Madsen et al., 2016) uses a bottom-up semi-naïve strategy to solve Datalog programs extended with lattices and monotone transfer functions. This approach is similar to CLP analysis via bottom-up abstract interpretation; however, it has not been extended to support incremental updates. Incremental tabling (Swift, 2014) offers a straightforward method for designing incremental analyses (Eichberg et al., 2007), when they can be expressed as tabled logic programs. While these methods are much closer to our incremental algorithm, they may suffer from problems similar to those of generic incremental computation, as they may be difficult to control.

Generic incremental computation frameworks: Obviously, the possibility exists of using a general incrementalized execution algorithm. Incremental algorithms compute an updated output from a previous output and a difference on the input data, with the hope that the process is (computationally) cheaper than computing a new output for the new input from scratch. The approach of (Szabó et al., 2016) takes advantage of an underlying incremental evaluator, IncQuery, and implements modules via the monolithic approach. There exist other frameworks, such as self-adjusting computation (Acar, 2009), which greatly simplify writing incremental algorithms, but in return make it difficult to control the costs of the additional data structures.

6. Conclusions

Dynamic languages offer great flexibility, but it can come at the price of run-time cost. Static analysis, coupled with some dynamic techniques, can contribute to reducing this cost, but it in turn can take excessive time, especially in interactive or program transformation scenarios. To address this we have described, implemented, and evaluated a context-sensitive fixpoint analysis algorithm aimed at achieving both inter-modular (coarse-grain) and intra-modular (fine-grain) incrementality. Our algorithm takes care of the propagation of fine-grain change information across module boundaries and implements all the actions required to recompute the analysis fixpoint incrementally after additions and deletions in the program. We have shown that the algorithm is correct and computes the most precise analysis. We have also implemented and benchmarked the proposed approach within the Ciao/CiaoPP system. Our preliminary results from this implementation show promising speedups for programs of medium and larger size. The added finer granularity of the proposed modular incremental fixpoint algorithm significantly reduces the cost with respect to modular analysis alone (which only preserved analysis results at the module boundaries) and produces better results even when analyzing the whole program from scratch. The advantages of fine-grain incremental analysis (ideally making the cost proportional to the size of the changes) thus seem to carry over with our algorithm to the modular analysis case.


  • Acar (2009) Umut A. Acar. 2009. Self-adjusting computation: (an overview). In PEPM, ACM. 1–6.
  • Albert et al. (2007) E. Albert, P. Arenas, S. Genaim, G. Puebla, and D. Zanardini. 2007. Cost Analysis of Java Bytecode. In Proc. of ESOP’07 (LNCS), Vol. 4421. Springer.
  • Albert et al. (2012) Elvira Albert, Jesús Correas, Germán Puebla, and Guillermo Román-Díez. 2012. Incremental Resource Usage Analysis. In PEPM. ACM Press, 25–34.
  • Arzt and Bodden (2014) Steven Arzt and Eric Bodden. 2014. Reviser: Efficiently Updating IDE-/IFDS-based Data-flow Analyses in Response to Incremental Program Changes. In ICSE. 288–298.
  • Banda and Gallagher (2009) Gourinath Banda and John P. Gallagher. 2009. Analysis of Linear Hybrid Systems in CLP. In LOPSTR (LNCS), Michael Hanus (Ed.), Vol. 5438. Springer, 55–70.
  • Bjørner et al. (2015) Nikolaj Bjørner, Arie Gurfinkel, Kenneth L. McMillan, and Andrey Rybalchenko. 2015. Horn Clause Solvers for Program Verification. In Fields of Logic and Computation II - Essays Dedicated to Yuri Gurevich on the Occasion of His 75th Birthday. 24–51.
  • Bjørner et al. (2013) Nikolaj Bjørner, Kenneth L. McMillan, and Andrey Rybalchenko. 2013. On Solving Universally Quantified Horn Clauses. In SAS. 105–125.
  • Bossi et al. (1994) A. Bossi, M. Gabbrieli, G. Levi, and M.C. Meo. 1994. A Compositional Semantics for Logic Programs. Theoretical Computer Science 122, 1,2 (1994), 3–47.
  • Bueno et al. (2001) F. Bueno, M. García de la Banda, M. V. Hermenegildo, K. Marriott, G. Puebla, and P. Stuckey. 2001. A Model for Inter-module Analysis and Optimizing Compilation. In LOPSTR (LNCS). Springer-Verlag, 86–102.
  • Burke (1990) M. Burke. 1990. An Interval-Based Approach to Exhaustive and Incremental Interprocedural Data-Flow Analysis. ACM TOPLAS 12, 3 (1990), 341–395.
  • Carroll and Ryder (1988) M.D. Carroll and B. Ryder. 1988. Incremental Data Flow Analysis via Dominator and Attribute Updates. In POPL, ACM. ACM Press, 274–284.
  • Codish et al. (1993) M. Codish, S. Debray, and R. Giacobazzi. 1993. Compositional Analysis of Modular Logic Programs. In POPL. ACM, 451–464.
  • Cooper and Kennedy (1984) K. Cooper and K. Kennedy. 1984. Efficient Computation of Flow Insensitive Interprocedural Summary Information. In CC. ACM Press, 247–258.
  • Correas et al. (2006) J. Correas, G. Puebla, M. V. Hermenegildo, and F. Bueno. 2006. Experiments in Context-Sensitive Analysis of Modular Programs. In LOPSTR (LNCS). Springer-Verlag, 163–178.
  • Courant and Urban (2017) Nathanaël Courant and Caterina Urban. 2017. Precise Widening Operators for Proving Termination by Abstract Interpretation. In TACAS, ETAPS. 136–152.
  • Cousot and Cousot (1977) P. Cousot and R. Cousot. 1977. Abstract Interpretation: a Unified Lattice Model for Static Analysis of Programs by Construction or Approximation of Fixpoints. In Proc. of POPL’77. ACM Press, 238–252.
  • Cousot and Cousot (2002) P. Cousot and R. Cousot. 2002. Modular Static Program Analysis, invited paper. In Compiler Construction.
  • Cousot et al. (2009) Patrick Cousot, Radhia Cousot, Jérôme Feret, Antoine Miné, Laurent Mauborgne, and Xavier Rival. 2009. Why does Astrée scale up? Formal Methods in System Design (FMSD) 35, 3 (December 2009), 229–264.
  • De Angelis et al. (2014) Emanuele De Angelis, Fabio Fioravanti, Alberto Pettorossi, and Maurizio Proietti. 2014. VeriMAP: A Tool for Verifying Programs through Transformations. In TACAS, ETAPS. 568–574.
  • de Moura and Bjørner (2008) Leonardo Mendonça de Moura and Nikolaj Bjørner. 2008. Z3: An Efficient SMT Solver. In TACAS (LNCS), Vol. 4963. Springer, 337–340.
  • Dumortier et al. (1993) V. Dumortier, G. Janssens, W. Simoens, and M. García de la Banda. 1993. Combining a Definiteness and a Freeness Abstraction for CLP Languages. In Workshop on Logic Program Synthesis and Transformation.
  • Eichberg et al. (2007) Michael Eichberg, Matthias Kahl, Diptikalyan Saha, Mira Mezini, and Klaus Ostermann. 2007. Automatic Incrementalization of Prolog Based Static Analyses. Springer Berlin Heidelberg, 109–123.
  • Fähndrich and Logozzo (2011) M. Fähndrich and F. Logozzo. 2011. Static Contract Checking with Abstract Interpretation. In FoVeOOS’10 (LNCS), Vol. 6528. Springer, 10–30.
  • Frielinghaus et al. (2016) Stefan Schulze Frielinghaus, Helmut Seidl, and Ralf Vogler. 2016. Enforcing Termination of Interprocedural Analysis. In SAS. 447–468.
  • Gallagher and Kafle (2014) John P. Gallagher and Bishoksan Kafle. 2014. Analysis and Transformation Tools for Constrained Horn Clause Verification. CoRR abs/1405.3883 (2014).
  • Gómez-Zamalloa et al. (2008) M. Gómez-Zamalloa, E. Albert, and G. Puebla. 2008. Modular Decompilation of Low-Level Code by Partial Evaluation. In SCAM. IEEE Computer Society, 239–248.
  • Grebenshchikov et al. (2012) S. Grebenshchikov, A. Gupta, N. P. Lopes, Co. Popeea, and A. Rybalchenko. 2012. HSF(C): A Software Verifier Based on Horn Clauses - (Competition Contribution). In TACAS. 549–551.
  • Gurfinkel et al. (2015) Arie Gurfinkel, Temesghen Kahsai, Anvesh Komuravelli, and Jorge A. Navas. 2015. The SeaHorn Verification Framework. In CAV. 343–361.
  • Henriksen and Gallagher (2006) Kim S. Henriksen and John P. Gallagher. 2006. Abstract Interpretation of PIC Programs through Logic Programming. In Proc. of SCAM’06. IEEE Computer Society, 184–196.
  • Hermenegildo et al. (2012) M.V. Hermenegildo, F. Bueno, M. Carro, P. López, E. Mera, J.F. Morales, and G. Puebla. 2012. An Overview of Ciao and its Design Philosophy. TPLP 12, 1–2 (2012), 219–252.
  • Hermenegildo et al. (2005) M. Hermenegildo, G. Puebla, F. Bueno, and P. López García. 2005. Integrated Program Debugging, Verification, and Optimization Using Abstract Interpretation (and The Ciao System Preprocessor). Science of Comp. Progr. 58, 1–2 (2005).
  • Hermenegildo et al. (2000) M. V. Hermenegildo, G. Puebla, K. Marriott, and P. Stuckey. 2000. Incremental Analysis of Constraint Logic Programs. ACM TOPLAS 22, 2 (March 2000), 187–223.
  • Hojjat et al. (2012) Hossein Hojjat, Filip Konecný, Florent Garnier, Radu Iosif, Viktor Kuncak, and Philipp Rümmer. 2012. A Verification Toolkit for Numerical Transition Systems - Tool Paper. In Proc. of FM 2012 (LNCS), Vol. 7436. Springer, 247–251.
  • Jaffar et al. (2012) Joxan Jaffar, Vijayaraghavan Murali, Jorge A. Navas, and Andrew E. Santosa. 2012. TRACER: A Symbolic Execution Tool for Verification. In CAV. 758–766.
  • Kafle et al. (2016) B. Kafle, J. P. Gallagher, and J. F. Morales. 2016. RAHFT: A Tool for Verifying Horn Clauses Using Abstract Interpretation and Finite Tree Automata. In CAV. 261–268.
  • Kelly et al. (1997) A. Kelly, K. Marriott, H. Søndergaard, and P.J. Stuckey. 1997. A Generic Object Oriented Incremental Analyser for Constraint Logic Programs. In ACSC. 92–101.
  • Krall and Berger (1995a) A. Krall and T. Berger. 1995a. Incremental Global Compilation of Prolog with the Vienna Abstract Machine. In International Conference on Logic Programming. MIT Press.
  • Krall and Berger (1995b) Andreas Krall and Thomas Berger. 1995b. The VAM - an Abstract machine for Incremental Global Dataflow Analysis of Prolog. In ICLP’95 Post-Conference Workshop on Abstract Interpretation of Logic Languages, Maria Garcia de la Banda, Gerda Janssens, and Peter Stuckey (Eds.). Science University of Tokyo, Tokyo, 80–91.
  • Liqat et al. (2016) U. Liqat, K. Georgiou, S. Kerrison, P. Lopez-Garcia, M. V. Hermenegildo, J. P. Gallagher, and K. Eder. 2016. Inferring Parametric Energy Consumption Functions at Different Software Levels: ISA vs. LLVM IR. In Proc. of FOPARA (LNCS), Vol. 9964. Springer, 81–100.
  • Liqat et al. (2014) U. Liqat, S. Kerrison, A. Serrano, K. Georgiou, P. Lopez-Garcia, N. Grech, M. V. Hermenegildo, and K. Eder. 2014. Energy Consumption Analysis of Programs based on XMOS ISA-Level Models. In Proceedings of LOPSTR’13 (LNCS), Vol. 8901. Springer, 72–90.
  • Madsen et al. (2016) Magnus Madsen, Ming-Ho Yee, and Ondrej Lhoták. 2016. From Datalog to FLIX: a Declarative Language for Fixed Points on Lattices. In PLDI, ACM. 194–208.
  • Marlowe and Ryder (1990) T. Marlowe and B. Ryder. 1990. An Efficient Hybrid Algorithm for Incremental Data Flow Analysis. In 17th ACM Symposium on Principles of Programming Languages (POPL). ACM Press, 184–196.
  • Méndez-Lojo et al. (2007) M. Méndez-Lojo, J. Navas, and M. Hermenegildo. 2007. A Flexible (C)LP-Based Approach to the Analysis of Object-Oriented Programs. In LOPSTR (LNCS), Vol. 4915. Springer-Verlag, 154–168.
  • Muthukumar and Hermenegildo (1990) K. Muthukumar and M. Hermenegildo. 1990. Deriving A Fixpoint Computation Algorithm for Top-down Abstract Interpretation of Logic Programs. Technical Report ACT-DC-153-90. Microelectronics and Computer Technology Corporation (MCC), Austin, TX 78759.
  • Muthukumar and Hermenegildo (1991) K. Muthukumar and M. Hermenegildo. 1991. Combined Determination of Sharing and Freeness of Program Variables Through Abstract Interpretation. In ICLP’91. MIT Press, 49–63.
  • Muthukumar and Hermenegildo (1992) K. Muthukumar and M. Hermenegildo. 1992. Compile-time Derivation of Variable Dependency Using Abstract Interpretation. JLP 13, 2/3 (July 1992), 315–347.
  • Myers (1986) Eugene W Myers. 1986. An O(ND) difference algorithm and its variations. Algorithmica 1, 1-4 (1986), 251–266.
  • Navas et al. (2009) J. Navas, M. Méndez-Lojo, and M. V. Hermenegildo. 2009. User-Definable Resource Usage Bounds Analysis for Java Bytecode. In BYTECODE’09 (ENTCS), Vol. 253. Elsevier, 6–86.
  • Pollock and Soffa (1989) L. Pollock and M.L. Soffa. 1989. An Incremental Version of Iterative Data Flow Analysis. IEEE Transactions on Software Engineering 15, 12 (1989), 1537–1549.
  • Puebla et al. (2004) G. Puebla, J. Correas, M. V. Hermenegildo, F. Bueno, M. García de la Banda, K. Marriott, and P. J. Stuckey. 2004. A Generic Framework for Context-Sensitive Analysis of Modular Programs. In Program Development in Computational Logic. Number 3049 in LNCS. Springer-Verlag, 234–261.
  • Puebla and Hermenegildo (1996) G. Puebla and M. V. Hermenegildo. 1996. Optimized Algorithms for the Incremental Analysis of Logic Programs. In SAS’96. Springer LNCS 1145, 270–284.
  • Ramalingam and Reps (1993) G. Ramalingam and T. Reps. 1993. A Categorized Bibliography on Incremental Computation. In POPL, ACM. ACM, Charleston, South Carolina.
  • Reps et al. (1995) Thomas W. Reps, Susan Horwitz, and Shmuel Sagiv. 1995. Precise Interprocedural Dataflow Analysis via Graph Reachability. In POPL. 49–61.
  • Rosen (1981) B. Rosen. 1981. Linear Cost is Sometimes Quadratic. In POPL, ACM. ACM Press, 117–124.
  • Ryder (1988) B. Ryder. 1988. Incremental Data-Flow Analysis Algorithms. ACM Transactions on Programming Languages and Systems 10, 1 (1988), 1–50.
  • Ryder et al. (1988) B. Ryder, T. Marlowe, and M. Paull. 1988. Conditions for Incremental Iteration: Examples and Counterexamples. Science of Computer Programming 11, 1 (1988), 1–15.
  • Swift (2014) Terrance Swift. 2014. Incremental Tabling in Support of Knowledge Representation and Reasoning. TPLP 14, 4-5 (2014), 553–567.
  • Szabó et al. (2016) Tamás Szabó, Sebastian Erdweg, and Markus Voelter. 2016. IncA: a DSL for the definition of incremental program analyses. In Proc. Int. Conf. on Automated Software Engineering. 320–331.

Appendix A Proofs

We introduce some additional notation that will be instrumental in the proofs. Given a (finite) abstract domain , we express the abstract semantics of a program clause with . The abstract semantics of a computation step of program , given the set of clauses , is a function collecting the meaning of all clauses: . As a means to express assumptions on the semantics of the program (i.e., to express the semantics of builtins, or to specify properties of the entries of the program or of the imported predicates, which in CiaoPP is done by means of assertions (Hermenegildo et al., 2012, 2005)), we add a constant part to : . The most precise semantics of and some assumptions is the lfp . Note that the lfp exists because is monotonic and is a constant. Also note that the lfp operation of a fixed program , parametric on the assumptions, is monotonic. Also, we have that , since the lfp is a composition of monotonic functions.
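The construction of this lfp can be illustrated concretely. Below is a toy Python sketch over a finite powerset domain: step plays the role of the clause semantics, theta is the constant part encoding the assumptions, and lfp computes the least fixpoint by Kleene iteration from bottom. The particular domain and transfer function are made up for illustration.

```python
def lfp(f, bottom):
    """Kleene iteration: iterate f from bottom until a fixpoint is reached.
    Terminates because the domain is finite and f is monotonic."""
    x = bottom
    while True:
        y = f(x)
        if y == x:
            return x
        x = y

# Toy finite domain: subsets of {"a", "b", "c"}.
theta = {"a"}                      # constant part: the assumptions

def step(x):
    # toy clause semantics: "b" is derivable from "a", "c" from "b"
    out = set(x)
    if "a" in x: out.add("b")
    if "b" in x: out.add("c")
    return out

def f(x):
    return step(x) | theta         # F(x) = F_P(x) joined with theta

result = lfp(f, set())             # -> {"a", "b", "c"}
```

Monotonicity of f (inherited from step, since theta is constant) is what guarantees the iteration climbs to the least fixpoint rather than oscillating.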

Domain of analysis graphs.

We build the domain of analysis graphs (parametric on ) as sets of tuples of the form (pred_name, call pattern, success pattern). This domain is finite, because it is the combination of finite domains. The set of predicate names may be infinite in general, but in each program it is finite. We do not represent the dependencies (DT) in this domain because they are redundant, being only needed for efficiency. We define the partial order in this domain as:

For the sake of simplicity,  in the following represents this analysis graph domain.

We recall the function definitions used in the theorems of Section 3.2.

In the following we show the proofs of the propositions and theorems of Section 3.2. We assume that initial_guess returns a value below the lfp. Let be the function that represents the semantics of , projected from .

See 3.2

Assuming that all the modular structures are properly initialized, LocIncAnalyze of a module in our algorithm encodes the monolithic algorithm of (Hermenegildo et al., 2000). We recall the basic result for that algorithm:

Theorem A.1 ().

For a program and initial call patterns , the PLAI algorithm returns an AT and a DT which represent the least program analysis graph of and .

This theorem is directly applicable to LocIncAnalyze using the techniques for expressing properties of built-ins in the monolithic algorithm to incorporate the semantics of external predicates.

Our algorithm, when analyzing a module (starting from an empty local analysis graph), obtains the lfp by computing the supremum of the Kleene sequence of :

Each composition of involves applying each of the . However, since the operator is commutative, the order in which the are computed to obtain each of the will not affect the final result. Furthermore, we can reorder the computation of the supremum so that we compose one of them several times and then apply another, allowing us not to compute each of the intermediate steps exactly, as long as all of them are applied fairly. This is equivalent to obtaining the least fixed point by chaotic iteration (Cousot and Cousot, 1977).
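This order-independence can be checked on a toy example: applying a set of monotonic per-clause functions fairly, in any order, reaches the same least fixpoint. The functions below are illustrative stand-ins for the per-clause semantic functions.

```python
import itertools

# Monotonic per-clause transfer functions on subsets of {"a", "b", "c"}.
fs = [
    lambda x: x | ({"b"} if "a" in x else set()),   # "b" from "a"
    lambda x: x | ({"c"} if "b" in x else set()),   # "c" from "b"
    lambda x: x | {"a"},                            # assumption / entry
]

def chaotic_lfp(order):
    """Apply the f_i repeatedly in the given cyclic order (which is fair,
    since every function runs in every sweep) until nothing changes."""
    x = set()
    changed = True
    while changed:
        changed = False
        for i in order:
            y = fs[i](x)
            if y != x:
                x, changed = y, True
    return x

# Every fair application order converges to the same least fixpoint.
results = {frozenset(chaotic_lfp(p)) for p in itertools.permutations(range(3))}
```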

Adding clauses to a module.

Let and be two modules s.t., we add clauses to module to get and let be some initial assumptions.

See 3.3

Proof of Proposition 3.3.

The will be computed by fairly processing all of its clauses. Let us call applying one random and applying one random . There exists a valid sequence of computation of the Kleene sequence of that consists of applying first all the :

Therefore it is safe to start the analysis of and initial assumptions with