1 Introduction
How can one reconcile static and dynamic program analysis? These two forms of analysis complement each other: static analysis summarizes all possible runs of a program and thus provides soundness guarantees, while dynamic analysis provides information about the particular runs of a program which actually happen in practice and can therefore provide more relevant information. Being able to combine these two paradigms has applications in many forms of analysis, such as alias analysis [16, 20] and dependence analysis [18].
Compilers use program analysis frameworks to prove legality as well as to determine the benefit of transformations. Specifications for legality are composed of safety and liveness assertions (i.e., universally and existentially quantified properties), while specifications for benefit use assertions that hold in the common case. The reason for adopting the common case is that few transformations improve performance in general (i.e., for every input and environment). Similarly, most transformations could potentially improve performance in at least one case. As such, compiler optimizations are instead motivated based on (an approximation of) the majority case, i.e., the (weighted) mean. While determining legality has improved due to advances in the verification community, progress in establishing benefit has been slow.
In this paper we introduce fuzzy dataflow analysis, a framework for static program analysis based on fuzzy logic. The salient feature of our framework is that it can naturally incorporate dynamic information while still being a static analysis. This ability comes thanks to a shift from “crisp” sets where membership is binary, as employed in conventional static analysis, to fuzzy sets where membership is gradual.
We make the following contributions:


Section 3 introduces our main contribution, the fuzzy dataflow framework.

Section 4 demonstrates the benefit of our framework by presenting a generalization of a well-known code motion algorithm and shows how this generalization provides new opportunities for optimization that previous approaches would not discover.

Section 4 also shows how fuzzy logic can benefit program analysis by (1) using second-order fuzzy sets to separate uncertainty in dataflow and control flow, and hence improve an interprocedural analysis, and (2) using fuzzy regulators to refine the results of our static analysis, hence improving the precision dynamically.
2 Preliminaries
We introduce and define fuzzy sets and the operators that form fuzzy logic. These concepts will be used in Section 3 to define the transfer functions of our dataflow analysis.
2.1 Fuzzy set
Elements of a crisp set (in the context of fuzzy logic, a crisp or Boolean set refers to a classical set, to avoid confusion with fuzzy sets) are either members or non-members w.r.t. a universe of discourse. A fuzzy set (FS) instead allows partial membership, denoted by a number from the unit interval $[0,1]$. The membership degree typically denotes vagueness. The process of converting crisp membership to fuzzy grades is called fuzzification and the inverse is called defuzzification. Following Dubois et al. [9, 8], let $X$ be a crisp set and $\mu : X \to [0,1]$ a membership function (MF); then $S = (X, \mu)$ is a fuzzy set. As a convention, if $X$ is understood from context we sometimes refer to $\mu$ itself as the fuzzy set. The membership function formalizes the fuzzification. Fuzzy sets are ordered pointwise, i.e., $S \le T \iff \forall x \in X : \mu_S(x) \le \mu_T(x)$.
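As a small illustrative sketch (the universe and membership grades below are hypothetical, not taken from the paper), a fuzzy set over a finite universe can be represented as an array of membership grades, with the pointwise order checked element by element:

```c
#include <assert.h>

#define UNIVERSE 4

/* Membership grades of two fuzzy sets S and T over the same
 * four-element universe of discourse (hypothetical values). */
static const double mu_S[UNIVERSE] = {0.1, 0.4, 0.7, 0.0};
static const double mu_T[UNIVERSE] = {0.2, 0.5, 0.9, 0.3};

/* S <= T iff mu_S(x) <= mu_T(x) for every x: the pointwise order. */
static int fuzzy_leq(const double s[], const double t[], int n) {
    for (int i = 0; i < n; i++)
        if (s[i] > t[i]) return 0;
    return 1;
}
```

Fuzzification replaces a crisp indicator (0 or 1) per element by such a grade in $[0,1]$; a crisp set is recovered as the special case where every grade is 0 or 1.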
We can accommodate some notion of uncertainty about vagueness by considering a type-2 fuzzy set, where the membership degree is itself a fuzzy set. Type-2 FS (T2FS) membership functions are composed of a primary membership $J_x \subseteq [0,1]$ and a secondary membership $\mu(x, u)$ for $u \in J_x$. Here uncertainty is represented by the secondary membership, which defines the possibility of the primary membership. When $\mu(x, u) = 1$ for each $x$ and $u \in J_x$, the T2FS is called an interval T2FS. Gehrke et al. [10] showed that this can equivalently be described as an interval-valued fuzzy set (IVFS), where each membership degree is a subinterval $[l, r] \subseteq [0,1]$. IVFSs are a special case of lattice-valued fuzzy sets ($L$-fuzzy sets), where the membership domain forms a lattice over $[0,1]$. Defuzzification of a T2FS often proceeds in two phases: the first phase applies type reduction to transform the T2FS to a type-1 FS (T1FS), and the second phase then applies a type-1 defuzzification.
2.2 Fuzzy logic
Fuzzy logic defines many-valued formal systems to reason about truth in the presence of vagueness. Contrary to classical logic, the law of excluded middle ($x \lor \neg x = 1$) and the law of non-contradiction ($x \land \neg x = 0$) do not, in general, hold in these systems. Fuzzy logic uses T-, S- and C-norms to generalize the logical operators $\land$, $\lor$ and $\neg$. We compactly represent a fuzzy logic by a triple $\langle T, S, C \rangle$ (although one might expect the definition of a fuzzy logic to also include a "fuzzy implication" operator, in this work we do not consider it), which is sometimes called a De Morgan system [9] because it satisfies a generalization of De Morgan's laws: $C(S(x, y)) = T(C(x), C(y))$ and $C(T(x, y)) = S(C(x), C(y))$.
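For instance, in the Min-Max logic with $C(x) = 1 - x$, the generalized De Morgan laws can be checked numerically; the probe values below are arbitrary and the operator definitions follow the standard Min-Max example of Table 1:

```c
#include <assert.h>
#include <math.h>

static double T(double x, double y) { return x < y ? x : y; }  /* T-norm: min   */
static double S(double x, double y) { return x > y ? x : y; }  /* S-norm: max   */
static double C(double x)           { return 1.0 - x; }        /* C-norm: 1 - x */

/* Check C(S(x,y)) == T(C(x),C(y)) and C(T(x,y)) == S(C(x),C(y)). */
static int de_morgan_holds(double x, double y) {
    return fabs(C(S(x, y)) - T(C(x), C(y))) < 1e-12
        && fabs(C(T(x, y)) - S(C(x), C(y))) < 1e-12;
}
```

Note how excluded middle fails here: $S(0.3, C(0.3)) = \max(0.3, 0.7) = 0.7 \ne 1$.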
Definition 1.
Let $\star : [0,1]^2 \to [0,1]$ be a binary function that is commutative, associative and increasing, and has an identity element $e \in [0,1]$. If $e = 1$ then $\star$ is a triangular norm (T-norm) and if $e = 0$ then $\star$ is a triangular conorm (S-norm). (The general concept, allowing any $e \in [0,1]$, is called a uninorm [9] and is either or-like or and-like; our work does not require the full generality.)
Definition 2.
A C-norm is a unary function $C : [0,1] \to [0,1]$ that is decreasing and involutory (i.e., $C(C(x)) = x$) with boundary conditions $C(0) = 1$ and $C(1) = 0$.
Standard examples of fuzzy logics are shown in Table 1 [9, 8]. Examples 1–3 are special cases (and limits) of the Frank family of fuzzy logics, which is central to our work and formally defined in Definition 3.
   Fuzzy logic            T-norm                          S-norm                          C-norm
1  Min-Max                min(x, y)                       max(x, y)                       1 - x
2  Algebraic Sum-product  x·y                             x + y - x·y                     1 - x
3  Lukasiewicz            max(0, x + y - 1)               min(1, x + y)                   1 - x
4  Nilpotent              min(x, y) if x + y > 1, else 0  max(x, y) if x + y < 1, else 1  1 - x
Definition 3.
Let $s \in (0, \infty) \setminus \{1\}$; then the Frank family of T-norms is defined by:
$$T_s(x, y) = \log_s\left(1 + \frac{(s^x - 1)(s^y - 1)}{s - 1}\right)$$
with the limits $s \to 0$, $s \to 1$ and $s \to \infty$ yielding the Min-Max, Algebraic Sum-product and Lukasiewicz T-norms, respectively.
The set of intervals in $[0,1]$ forms a bounded partial order with least element $[0, 0]$ and greatest element $[1, 1]$ (this should not be confused with the partial order used in the interval abstraction). As per Gehrke et al. [10] we can pointwise lift a T1FS fuzzy logic to an IVFS fuzzy logic, i.e., $T([l_1, r_1], [l_2, r_2]) = [T(l_1, l_2), T(r_1, r_2)]$ and $S([l_1, r_1], [l_2, r_2]) = [S(l_1, l_2), S(r_1, r_2)]$.
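The lifting can be sketched component-wise on the interval bounds; the representation of an interval as a pair of doubles is our own choice, and the C-norm is lifted by negating and swapping the bounds so the result is again a valid interval:

```c
#include <assert.h>
#include <math.h>

typedef struct { double l, r; } ival;  /* interval [l, r] within [0,1] */

static double dmin(double a, double b) { return a < b ? a : b; }
static double dmax(double a, double b) { return a > b ? a : b; }

/* Pointwise-lifted Min-Max operators: T = min, S = max, C(x) = 1 - x. */
static ival iT(ival a, ival b) { return (ival){ dmin(a.l, b.l), dmin(a.r, b.r) }; }
static ival iS(ival a, ival b) { return (ival){ dmax(a.l, b.l), dmax(a.r, b.r) }; }
static ival iC(ival a)         { return (ival){ 1.0 - a.r, 1.0 - a.l }; }
```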
3 Fuzzy dataflow analysis
Static dataflow analyses deduce values of semantic properties that are satisfied by the dynamics of the application. The dynamics are formalized as a system of monotone transfer functions and collector functions. Transfer functions describe how blocks alter the semantic properties. Collector functions merge results from different, possibly mutually exclusive, branches of the application. The solution of the analysis is obtained through Kleene iteration and is a unique fixed point of the system of equations. In a classical framework the domain of the values is binary, i.e., either true (1) or false (0). The interpretation of these values depends on the type of analysis. The value true means that the property can possibly hold in a may-analysis (i.e., it is impossible that the value is always false) while it means that the property always holds in a must-analysis. The value false could mean either the opposite of true or that the result is inconclusive.
Our fuzzy dataflow analysis instead computes the partial truth of the property, i.e., values are elements of $[0,1]$. A value closer to $0$ means that the property is biased towards false, and vice versa. Furthermore, the transfer functions are logical formulas from a Frank family fuzzy logic, and the collector functions are weighted-average functions whose weights are determined prior to performing the analysis. In contrast to the classical framework, Kleene iteration proceeds until consecutive results differ by less than a constant $\epsilon$, which is the maximal error allowed in the solution. The error can be made arbitrarily small.
This section introduces the fuzzy dataflow framework, and we prove termination using continuity properties and Banach's fixed-point theorem. Section 4 then presents an example analysis to demonstrate the benefit of the framework. The analysis is performed on a weighted flowgraph $\langle V, E, \omega \rangle$ where $V$ is a set of logical formulas (denoting the transfer function of each block), $E$ is a set of edges (denoting control transfers) and $\omega(e)$ denotes the normalized contribution of each edge $e \in E$. As a running example we will use Figure 1 (left), which shows a flowgraph with four nodes and their corresponding logical formulas. The flowgraph has four control edges denoting contributions between nodes. For instance, Block 1 (B1) receives 0.1 of its contribution from B0 and 0.9 from B2, i.e., $\omega(\langle B0, B1 \rangle) = 0.1$ and $\omega(\langle B2, B1 \rangle) = 0.9$.
Definition 4.
Let $P$ be a finite set of properties and $V : P \to [0,1]$ a valuation of the properties. We use $\llbracket f \rrbracket V$ to denote the interpretation of the fuzzy formula $f$ given a valuation $V$. Given a flowgraph $\langle V, E, \omega \rangle$ with a unique start node, the map $A$ describes the value of each property at each node, and a fuzzy dataflow analysis is a Kleene iteration of the operator $F$ that, at each node, combines the interpretations of the transfer formulas of its predecessors by the weighted average of their contributions:
$$F(A)(n) = \sum_{\langle m, n \rangle \in E} \omega(\langle m, n \rangle) \cdot \llbracket f_m \rrbracket A(m).$$
Figure 1 (middle) shows the equation system, as implied by Definition 4, interpreted in a Min-Max fuzzy logic for the example flowgraph. The red text corresponds to the collector function, i.e., the weighted average, and the normal text is the interpretation of the logical formula. In order to prove termination of a fuzzy analysis we need to introduce a continuity property.
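The iteration scheme can be sketched on a tiny system shaped like the running example; the transfer formulas and the constant start-node value below are illustrative stand-ins (the exact formulas of Figure 1 are not reproduced here), while the 0.1/0.9 edge weights match the example:

```c
#include <assert.h>
#include <math.h>

static double dmin(double a, double b) { return a < b ? a : b; }
static double dmax(double a, double b) { return a > b ? a : b; }

/* Kleene-iterate a two-equation fuzzy system until consecutive
 * valuations differ by less than eps:
 *   B1 = 0.1 * B0 + 0.9 * min(B2, 0.6)   (weighted-average collector)
 *   B2 = max(B1, 0.2)                    (transfer formula of B1)
 * with the constant start-node valuation B0 = 0.8.  Returns B1. */
static double solve_b1(double eps) {
    double b1 = 0.0, b2 = 0.0;
    for (;;) {
        double n1 = 0.1 * 0.8 + 0.9 * dmin(b2, 0.6);
        double n2 = dmax(b1, 0.2);
        double err = dmax(fabs(n1 - b1), fabs(n2 - b2));
        b1 = n1; b2 = n2;
        if (err < eps) return b1;
    }
}
```

Because the constant 0.1-weighted contribution caps the feedback through B2 at factor 0.9, the error shrinks geometrically and the iteration reaches any finite $\epsilon$ in a bounded number of steps; here it settles at $B1 = 0.62$.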
Definition 5.
A function $f : [0,1]^n \to [0,1]$ is $K$-Lipschitz continuous iff $|f(\bar{x}) - f(\bar{y})| \le K \, \|\bar{x} - \bar{y}\|$, where the metric on the codomain is the absolute value. (Our definition restricts the domain and metric of both metric spaces, i.e., the domain and codomain of $f$, compared to the more general, and common, definition of a Lipschitz continuous function; other norms can be used, but only if we restrict the logic to the Min-Max fuzzy logic [15].) If $K < 1$ then $f$ is called a contraction mapping, and if $K = 1$ then $f$ is called a non-expansive mapping.
In a sequence of applications of a contraction mapping the difference between two consecutive applications decreases, and in the limit reaches zero. By imposing a bounded error $\epsilon$ we guarantee that this sequence terminates in a bounded amount of time. The analysis error and the result of B2 as functions of the iteration count for the example are shown in Figure 1 (right). Note that the error (red line) is decreasing and the value of B2 (blue line) tends towards a final value. We next prove that any fuzzy dataflow analysis iteratively computes more precise results and terminates in a bounded amount of time for any finite maximum error $\epsilon > 0$, from any initial valuation. We let $[0,1]_\epsilon$ denote the maximal congruence set of elements from $[0,1]$ that are at least $\epsilon$ apart; the set of intervals on $[0,1]_\epsilon$ is defined analogously. For this we prove the non-expansive property of fuzzy formulas.
Theorem 1.
Let $f, g : [0,1]^n \to [0,1]$, for some $n$, be $K_f$- and $K_g$-Lipschitz, respectively.


The functions min, max, $T_s$, $S_s$ and $C$ are 1-Lipschitz. Constants are 0-Lipschitz.

If the weights satisfy $\sum_i w_i \le 1$ then the weighted average $\sum_i w_i x_i$ is 1-Lipschitz.

The composition $f \circ g$ is $K_f K_g$-Lipschitz.
Finally,


Formulas defined in a Frank family fuzzy logic are 1-Lipschitz.

If satisfies then F is Lipschitz.
In summary, as per Theorem 1:


Transfer functions in a Frank family fuzzy logic are nonexpansive mappings.

The valuation at the start node is constant and hence a contraction mapping.

The composition of (1) two non-expansive functions is a non-expansive function, and (2) a non-expansive and a contraction function is a contraction function.
As the analysis is performed on the unit interval, which together with the absolute-value norm forms a complete metric space, we can guarantee termination by Banach's fixed-point theorem.
Theorem 2 (Banach fixedpoint theorem).
Let $(X, d)$ be a complete metric space and $f : X \to X$ a contraction. Then $f$ has a unique fixed point in $X$.
This concludes our development of fuzzy dataflow analysis.
4 Lazy code motion
Improving performance often means removing redundant computations. A computation is fully redundant, but not dead, if its operands remain the same at all points where it occurs. For two such computations it is enough to keep one and store away the result for later use. We can eliminate this redundancy using (global) common subexpression elimination (GCSE). Furthermore, a computation that does not change on some paths is said to be partially redundant. Loop-invariant code motion (LICM) finds partially redundant computations inside loops and moves these to the entry block of the loop. Lazy code motion (LCM) is a compiler optimization that eliminates both fully and partially redundant computations, and hence subsumes both GCSE and LICM. The Knoop–Rüthing–Steffen (KRS) algorithm [13, 7] performs LCM in production compilers such as GCC when optimizing for speed.
It consists of a series of dataflow analyses and can be summarized in these four steps:


Solve the very busy expression and available expression dataflow problems (described below).

Introduce an Earliest set that describes the earliest block where an expression must be evaluated.

Determine the latest control flow edge where the expression must be computed.

Introduce Insert and Delete sets which describe where expressions should be evaluated.
The target domain of the analysis is the set of static expressions in a program. Input to the analysis is three predicates determining properties of the expressions in different blocks:


An expression "e" is downward exposed if it produces the same result when evaluated at the end of the block where it occurs. We use DEE(b, e) to denote that "e" is downward exposed in block "b".

An expression "e" is upward exposed if it produces the same result when evaluated at the start of the block where it occurs. We use UEE(b, e) to denote this.

An expression "e" is killed in block "b" if any variable appearing in "e" is updated in "b". We use Kill(b, e) to denote this.
Very busy expression analysis is a backward must dataflow analysis that depends on UEE and Kill and computes the set of expressions that are guaranteed to be computed at some time in the future. Similarly, available expression analysis is a forward must dataflow analysis that depends on DEE and Kill and deduces the set of previously computed expressions that may be reused. The fixed-point systems of these two analyses are shown in Figure 3. It is beyond the scope of this paper to further elaborate on the details of these analyses; the interested reader should consult Nielson et al. [17]. Here the LCM algorithm, and the dataflow analyses it depends on, are applications we use to demonstrate the benefit of our framework. As such, a rudimentary understanding is sufficient.
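To make the classical formulation concrete, here is a minimal bit-vector sketch of the forward must analysis over a straight-line three-block chain; the expression numbering and the gen/kill masks are invented for illustration:

```c
#include <assert.h>

#define NBLOCKS 3

/* Bit i of a mask represents expression i.  DEE generates an
 * expression at block exit; Kill removes every expression whose
 * operands the block redefines.  On a chain,
 *   In(b) = Out(b-1),  Out(b) = DEE(b) | (In(b) & ~Kill(b)).
 * Returns the available-expression set at the exit of the chain. */
static unsigned avail_out_last(const unsigned dee[], const unsigned kill[]) {
    unsigned in = 0u;                   /* nothing available on entry */
    for (int b = 0; b < NBLOCKS; b++)
        in = dee[b] | (in & ~kill[b]);  /* out of b feeds the next block */
    return in;
}
```

On a general graph the intersection of predecessor Out sets replaces the simple chaining, and the system is iterated to a fixed point; the fuzzy framework replaces the bit operations by Frank-family formulas and the intersection by a weighted average.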
void diffPCM() {
  b = 0, A = 0, B = 0;
  for (i = 0; i < N; i++) {
    /* Update = ANFIS decision of updating */
    /* Leave  = ANFIS decision of leaving  */
    if (in[i] != b) {
      b = abs(in[i] - b);
      /* decision error if the ANFIS predicted Leave  */
    } else {
      /* decision error if the ANFIS predicted Update */
    }
    B = Transform(b);
    A = IncRate(i);
    out[i] = A * B;
  }
}
Index       6             5             4           3    2    1           0
Expression  abs(in[i]-b)  Transform(b)  IncRate(i)  A*B  i+1  in[i] != b
Consider the simplified differential pulse-code modulation routine diffPCM in Figure 2 (left). We assume that $N$ and the relative number of times each block executes are statically known (in this demonstration we fix these frequencies, but our conclusions hold as $N$ increases). In each iteration diffPCM invokes the pure functions Transform, to encode the differential output, and IncRate, to get a quantification rate. We use the KRS-LCM algorithm to determine if these invocations can be made prior to entering the loop and contrast this with a situation where the dataflow analyses are performed in the fuzzy framework. As we will show, the "fuzzy KRS-LCM" allows us to uncover opportunities the classical KRS-LCM would miss.
4.1 Type1 static analysis
The dataflow problems of the KRS algorithm use expressions as their domain. The mapping between expressions of diffPCM and indexes is listed in Figure 3 (bottom) together with the values of DEE, UEE and Kill for each block (top right). The classical KRS algorithm concludes that both calls must be evaluated in B4 (bottom light gray box, "Delete" matrix, Columns 4 and 5).
For the fuzzy dataflow analyses we use the type-1 Min-Max fuzzy logic. The corresponding fuzzy sets of DEE, UEE and Kill are given in Figure 3 (top dark gray box). Step (1) of the fuzzy KRS-LCM is hence the fixed point of the system of equations below:
Steps (2) and (4) introduce (constant) predicates and are performed outside the analysis framework. Step (3) is done similarly to step (1). Figure 3 (bottom dark gray box) shows the result from step (4). In contrast to the classical LCM, the result implies that it is very plausible (0.998) that we can delete the invocation of Transform ("Delete" matrix, Column 5) from block B4 and instead add it at the end of B0 and B3 (or the start of B1 and B4). However, the result for the invocation of IncRate remains, because the invocation depends on the value of i, which is updated at the end of B4.
4.2 Type2 static analysis
To increase dataflow analysis precision, a function call is sometimes inlined at the call site. The improvement can however be reduced if the control-flow analysis is inaccurate and multiple targets are considered for a particular call site. We show how the uncertainty in control flow and dataflow can be quantified in two different dimensions using type-2 interval fuzzy sets. As per Section 2 we can lift an arbitrary fuzzy predicate to intervals. Here we assume no knowledge about the relative number of calls to each target and treat the different calls nondeterministically.
(Figure 4: the two IncRate candidates inlined in block B4 (left); DEE, UEE and Kill vectors of block B4 and the Delete/Insert analysis result for expression IncRate(i) (right).)

We assume two different IncRate functions, as in Figure 4 (left), have been determined as targets. Their respective DEE and UEE entries are the same, but since i is updated at the end of block B4 their Kill entries will differ: the result of IncRate_1 depends on the variable i and is therefore killed in B4, in contrast to the entry for IncRate_2. The new Kill entry for block B4 is given by the interval spanning the two candidates. The new DEE, UEE and Kill sets are given in Figure 4 (right).
Applying the fuzzy KRS-LCM, but with the type-1 Min-Max fuzzy logic lifted to the interval type-2 Min-Max fuzzy logic, gives the values of Delete and Insert for expression IncRate(i) in Figure 4 (right). The result for invoking IncRate prior to the loop is an interval, as opposed to the point value 0.001 from the type-1 analysis in Section 4.1. The added dimension in the result of the type-2 fuzzy analysis allows us to differentiate uncertain results from pessimistic results. In the given example we showed that the result of Section 4.1 is a pessimistic overgeneralization and that the two paths need to be considered separately to increase precision.
4.3 Hybrid analysis
The result from a fuzzy dataflow analysis is a set of fuzzy membership degrees. This section shows how the result can be automatically improved after the static analysis using a fuzzy regulator/classifier, if more specific information is provided at a later point. The classifier, a Takagi–Sugeno Adaptive-Network-based Fuzzy Inference System (TS-ANFIS) [11, 12] shown in Figure 5, is composed of five layers:


Look up the fuzzy membership degree of the input value.

Compute the firing strength $w_i$ of each rule, i.e., the conjunction of all membership degrees of the rule's antecedents.

Normalize the firing strengths, i.e., $\bar{w}_i = w_i / \sum_j w_j$.

Weight the normalized firing strength by the consequent output $f_i$ of the rule.

Combine all rule classifiers, i.e., $\hat{y} = \sum_i \bar{w}_i f_i$.
This classifier uses a polynomial (i.e., the consequent part of the adaptive IF-THEN rules) to decide the output membership. The order of the TS-ANFIS is the order of the polynomial. The classification accuracy of the TS-ANFIS can be improved online/offline by fitting the polynomial to the input data. For a first-order TS-ANFIS this can be implemented as follows:

(Offline) (Affine) least-squares (LS) optimization [11] is a convex optimization problem that finds an affine function $f(x) = a^\top x + c$ which minimizes $\sum_k (y_k - f(x_k))^2$, where the $x_k$ and $y_k$ are the input and output vectors of the training set.

(Online) Least mean squares (LMS) [11] is an adaptive filter that gradually (in steps of a given constant $\eta$) minimizes $(y_k - f(x_k))^2$, where $(x_k, y_k)$ is an input/output sample.
To exemplify the functionality of the TS-ANFIS we consider the classification of an input using the two-rule TS-ANFIS from Figure 5 (left), with membership functions as given in Figure 5 (right). The membership degrees for the first and second rule are marked in the figure. From these degrees we obtain the weight $w_1$ of the first rule and $w_2$ of the second rule, and then the normalized weights $\bar{w}_1$ and $\bar{w}_2$. As the consequence functions output $f_1$ and $f_2$, we produce the prediction $\hat{y} = \bar{w}_1 f_1 + \bar{w}_2 f_2$.
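A first-order, two-rule, single-input TS-ANFIS evaluation and one LMS update can be sketched as follows; the triangular membership functions, consequent parameters and step size are hypothetical, not those of Figure 5:

```c
#include <assert.h>
#include <math.h>

/* Triangular membership centered at c with half-width w. */
static double tri(double x, double c, double w) {
    double d = 1.0 - fabs(x - c) / w;
    return d > 0.0 ? d : 0.0;
}

static double slope[2]  = {0.2, 0.5};   /* consequents f_i = slope_i*x + offset_i */
static double offset[2] = {0.1, 0.0};

/* Layers 1-5: memberships, firing strengths, normalization,
 * weighted consequents, and the combined prediction. */
static double anfis_predict(double x) {
    double w1 = tri(x, 0.2, 0.5), w2 = tri(x, 0.8, 0.5);
    double nw1 = w1 / (w1 + w2), nw2 = w2 / (w1 + w2);
    return nw1 * (slope[0] * x + offset[0]) + nw2 * (slope[1] * x + offset[1]);
}

/* One LMS step: gradient descent on (y_hat - y)^2 w.r.t. the
 * consequent parameters, in steps scaled by eta. */
static void anfis_lms(double x, double y, double eta) {
    double w1 = tri(x, 0.2, 0.5), w2 = tri(x, 0.8, 0.5);
    double nw1 = w1 / (w1 + w2), nw2 = w2 / (w1 + w2);
    double err = anfis_predict(x) - y;
    slope[0]  -= eta * 2.0 * err * nw1 * x;
    offset[0] -= eta * 2.0 * err * nw1;
    slope[1]  -= eta * 2.0 * err * nw2 * x;
    offset[1] -= eta * 2.0 * err * nw2;
}
```

With these (invented) parameters, at x = 0.5 both rules fire with degree 0.4, the normalized weights are 0.5 each, and the prediction is 0.5·0.2 + 0.5·0.25 = 0.225; an LMS step towards a target output then reduces the prediction error.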
We return to the diffPCM function and again consider if we can invoke Transform(b) prior to entering the loop. We saw in Section 4.1 that the fuzzy membership degree was 0.998. To improve classification accuracy we let the TS-ANFIS also use the variable b and the first input value in[0]. These variables were not part of the analysis, and so we conservatively assume the fuzzy membership degree to be the same for any value of these variables. As shown in Figure 2 (right), we inserted calls to compute the ANFIS decision of updating and of keeping the variable b constant in the diffPCM function. If the incorrect decision was made, the error was noted and an error rate computed after handling all input samples.
We consider invoking the diffPCM function on four different input sets, each defined as 10 periods with 25 input values per period. The input sets (i.e., in[...]) are given in Figure 6 (left). We use the LMS algorithm after each incorrect classification (the constant $\eta$ for the four different runs was set to 0.001, 0.05, 0.15 and 0.1, respectively) and the LS algorithm if the error rate of a period was larger than or equal to a threshold. Note that the values of a period are not always perfectly representable by a linear classifier and sometimes vary between different periods, although periods are "similar". Hence we do not expect the classifier to improve monotonically with increasing period. As shown in the results in Figure 6 (right), the classification error decreases quickly with both period and input sample. In two cases a small residual error remains after the final period. This shows that the TS-ANFIS can improve the analysis result dynamically and hence increase the accuracy of deciding when Transform can be invoked prior to entering the loop.
5 Related work
Most systems include elements (e.g., input values, environment state) where information is limited but probabilistic and/or nondeterministic uncertainty can be formulated. For these systems a most-likely or even quantitative analysis of properties is possible. Often this analysis relies on probability theory for logical soundness. Cousot and Monerau [4] introduced a unifying framework for probabilistic abstract interpretation, and much work has since, although perhaps implicitly, relied on their formulation. Often probabilistic descriptions are known with imprecision that manifests as nondeterministic uncertainty [3]. Adje et al. [2] introduced an abstraction based on the zonotope abstraction for Dempster–Shafer structures and P-boxes (lower and upper bounds on a cumulative probability distribution function). Di Pierro et al. [6] developed a probabilistic abstract interpretation framework and demonstrated an alias analysis algorithm that could guide the compiler in its decisions. They later formulated dataflow problems (e.g., liveness analysis) in the same framework [5]. An important distinction between their (or similar probabilistic) frameworks and classical frameworks is the definition of the confluence operator: in contrast to a classical may or must framework they use the weighted average. This is similar to the work by Ramalingam [19], who showed that the meet-over-paths (MOP) solution exists for such a confluence operator with transfer functions defined in terms of min, max and negation (i.e., the Min-Max fuzzy logic). Our work extends this to allow other transfer functions and integrates the static dataflow analysis with a dynamic refinement mechanism through fuzzy control theory.
6 Conclusion
A major problem for static program analysis is the limited input information and hence the conservative results. To alleviate the situation, dynamic program analysis is sometimes used. Here accurate information is available, but in contrast to its static counterpart the results only cover a single or a few runs. To bridge the gap, and find a promising middle ground, probabilistic/speculative program analysis frameworks have been proposed. These frameworks intersect both paradigms by being static program analyses that use dynamic information. We have introduced a dataflow framework based on fuzzy sets that supports such analyses. We solved dataflow problems of use for speculative compilation and showed how our analysis unveils opportunities that previous approaches could not express and reason about. We furthermore showed that our dataflow framework based on fuzzy sets admits mechanisms from fuzzy control theory to enhance the analysis result dynamically, allowing for a hybrid analysis framework.
References
 [2] Assale Adje, Olivier Bouissou, Jean GoubaultLarrecq, Eric Goubault & Sylvie Putot (2014): Static Analysis of Programs with Imprecise Probabilistic Inputs. In: Verified Software: Theories, Tools, Experiments, Lecture Notes in Computer Science 8164, Springer Berlin Heidelberg, pp. 22–47, doi:10.1007/97836425410872.
 [3] Patrick Cousot & Radhia Cousot (2014): Abstract Interpretation: Past, Present and Future. In: Proceedings of the Joint Meeting of the TwentyThird EACSL Annual Conference on Computer Science Logic (CSL) and the TwentyNinth Annual ACM/IEEE Symposium on Logic in Computer Science (LICS), CSLLICS ’14, ACM, pp. 2:1–2:10, doi:10.1145/2603088.2603165.
 [4] Patrick Cousot & Michaël Monerau (2012): Probabilistic Abstract Interpretation. In: 22nd European Symposium on Programming (ESOP 2012), Lecture Notes in Computer Science 7211, SpringerVerlag, pp. 166–190, doi:10.1007/97836422889133.
 [5] A. Di Pierro & H. Wiklicky (2013): Probabilistic data flow analysis: a linear equational approach. In: Proceedings of the Fourth International Symposium on Games, Automata, Logics and Formal Verification, pp. 150–165, doi:10.4204/EPTCS.119.14.
 [6] Alessandra Di Pierro, Chris Hankin & Herbert Wiklicky (2007): A Systematic Approach to Probabilistic Pointer Analysis. In: Programming Languages and Systems, Lecture Notes in Computer Science 4807, Springer Berlin Heidelberg, pp. 335–350, doi:10.1007/978354076637723.
 [7] KarlHeinz Drechsler & Manfred P. Stadel (1993): A Variation of Knoop, Rüthing, and Steffen’s Lazy Code Motion. SIGPLAN Not. 28(5), pp. 29–38, doi:10.1145/152819.152823.
 [8] D. Dubois & H. Prade (1980): Fuzzy sets and systems  Theory and applications. Academic press, New York.
 [9] D. Dubois, H.M. Prade & H. Prade (2000): Fundamentals of Fuzzy Sets. The Handbooks of Fuzzy Sets, Springer US, doi:10.1007/9781461544296.
 [10] Mai Gehrke, Carol Walker & Elbert Walker (1996): Some comments on interval valued fuzzy sets. International Journal of Intelligent Systems 11(10), pp. 751–759, doi:10.1002/(SICI)1098-111X(199610)11:10<751::AID-INT3>3.0.CO;2-Y.
 [11] J.S.R. Jang (1993): ANFIS: adaptivenetworkbased fuzzy inference system. Systems, Man and Cybernetics, IEEE Transactions on 23(3), pp. 665–685, doi:10.1109/21.256541.
 [12] JyhShing Roger Jang & ChuenTsai Sun (1997): Neurofuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence. PrenticeHall, Inc., Upper Saddle River, NJ, USA.
 [13] Jens Knoop, Oliver Rüthing & Bernhard Steffen (1992): Lazy Code Motion. In: Proceedings of the ACM SIGPLAN 1992 Conference on Programming Language Design and Implementation, PLDI ’92, ACM, pp. 224–234, doi:10.1145/143095.143136.
 [14] S. Maleki, Yaoqing Gao, M.J. Garzaran, T. Wong & D.A. Padua (2011): An Evaluation of Vectorizing Compilers. In: Parallel Architectures and Compilation Techniques (PACT), 2011 International Conference on, pp. 372–382, doi:10.1109/PACT.2011.68.
 [15] A. Mesiarová (2007): klpLipschitz tnorms. International Journal of Approximate Reasoning 46(3), pp. 596 – 604, doi:10.1016/j.ijar.2007.02.002. Special Section: Aggregation Operators.
 [16] Markus Mock, Manuvir Das, Craig Chambers & Susan J. Eggers (2001): Dynamic Pointsto Sets: A Comparison with Static Analyses and Potential Applications in Program Understanding and Optimization. In: Proceedings of the 2001 ACM SIGPLANSIGSOFT Workshop on Program Analysis for Software Tools and Engineering, PASTE ’01, ACM, pp. 66–72, doi:10.1145/379605.379671.
 [17] Flemming Nielson, Hanne R. Nielson & Chris Hankin (1999): Principles of Program Analysis. SpringerVerlag New York, Inc., doi:10.1007/9783662038116.
 [18] P.M. Petersen & D.A. Padua (1996): Static and dynamic evaluation of data dependence analysis techniques. Parallel and Distributed Systems, IEEE Transactions on 7(11), pp. 1121–1132, doi:10.1109/71.544354.
 [19] G. Ramalingam (1996): Data Flow Frequency Analysis. In: Proceedings of the ACM SIGPLAN 1996 Conference on Programming Language Design and Implementation, PLDI ’96, ACM, pp. 267–277, doi:10.1145/231379.231433.
 [20] ConstantinoG. Ribeiro & Marcelo Cintra (2007): Quantifying Uncertainty in PointsTo Relations. In: Languages and Compilers for Parallel Computing, Lecture Notes in Computer Science 4382, Springer Berlin Heidelberg, pp. 190–204, doi:10.1007/978354072521315.
Appendix A: Omitted proofs
Proof of Theorem 1.
Let $f, g : [0,1]^n \to [0,1]$, for some $n$, be $K_f$- and $K_g$-Lipschitz, respectively.

The functions min, max, $T_s$, $S_s$ and $C$ are 1-Lipschitz; constants are 0-Lipschitz. Let $\bar{x}, \bar{y} \in [0,1]^n$:

:
By definition 
:
By definition Triangle inequality Distributivity 
:
By definition
