Bridging Static and Dynamic Program Analysis using Fuzzy Logic

by   Jacob Lidman, et al.
Chalmers University of Technology

Static program analysis is used to summarize properties over all dynamic executions. In a unifying approach based on 3-valued logic, properties are either assigned a definite value or unknown. But in summarizing a set of executions, a property is more accurately represented as being biased towards true, or towards false. Compilers use program analysis to determine the benefit of an optimization. Since benefit (e.g., performance) is justified based on the common case, understanding bias is essential in guiding the compiler. Furthermore, successful optimization also relies on understanding the quality of the information, i.e. the plausibility of the bias. If the quality of the static information is too low to form a decision, we would like a mechanism that improves it dynamically. We consider the problem of building such a reasoning framework and present fuzzy data-flow analysis. Our approach generalizes previous work based on 3-valued logic. We derive fuzzy extensions of the data-flow analyses used by the lazy code motion optimization and unveil opportunities previous work would not detect due to limited expressiveness. Furthermore, we show how the results of our analysis can be used in an adaptive classifier that improves as the application executes.



1 Introduction

How can one reconcile static and dynamic program analysis? These two forms of analysis complement each other: static analysis summarizes all possible runs of a program and thus provides soundness guarantees, while dynamic analysis provides information about the particular runs of a program which actually happen in practice and can therefore provide more relevant information. Being able to combine these two paradigms has applications to many forms of analysis, such as alias analysis [16, 20] and dependence analysis [18].

Compilers use program analysis frameworks to prove the legality as well as to determine the benefit of transformations. Specifications for legality are composed of safety and liveness assertions (i.e. universally and existentially quantified properties), while specifications for benefit use assertions that hold in the common case. The reason for adopting the common case is that few transformations improve performance in general (i.e., for every input and environment); similarly, most transformations could potentially improve performance in at least one case. As such, compiler optimizations are instead motivated based on (an approximation of) the majority case, i.e. the (weighted) mean. While determining legality has improved due to advances in the verification community, progress in establishing benefit has been slow.

In this paper we introduce fuzzy data-flow analysis, a framework for static program analysis based on fuzzy logic. The salient feature of our framework is that it can naturally incorporate dynamic information while still being a static analysis. This ability comes thanks to a shift from “crisp” sets where membership is binary, as employed in conventional static analysis, to fuzzy sets where membership is gradual.

We make the following contributions:


  • Section 3 introduces our main contribution, the fuzzy data-flow framework.

  • Section 4 demonstrates the benefit of our framework by presenting a generalization of a well-known code motion algorithm and we show how this generalization provides new opportunities for optimizations previous approaches would not discover.

  • Section 4 shows how fuzzy logic can benefit program analysis by (1) using second-order fuzzy sets to separate uncertainty in data-flow and control-flow and hence improve an inter-procedural analysis and (2) using fuzzy regulators to refine the results of our static analysis, hence improving the precision dynamically.

2 Preliminaries

We introduce and define fuzzy sets and the operators that form fuzzy logic. These concepts will be used in Section 3 to define the transfer functions of our data-flow analysis.

2.1 Fuzzy set

Elements of a crisp set (in the context of fuzzy logic, a crisp or Boolean set refers to a classical set, to avoid confusion with fuzzy sets) are either members or non-members w.r.t. a universe of discourse. A fuzzy set (FS) instead allows partial membership, denoted by a number from the unit interval [0, 1]. The membership degree typically denotes vagueness. The process of converting crisp membership to fuzzy grades is called fuzzification and the inverse is called defuzzification. Following Dubois et al. [9, 8], let S be a crisp set and m : S → [0, 1] a membership function (MF); then (S, m) is a fuzzy set. As a convention, if S is understood from context we sometimes refer to m as a fuzzy set. The membership function formalizes the fuzzification. Fuzzy sets are ordered point-wise, i.e. m ≤ m' iff m(x) ≤ m'(x) for all x in S.
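For illustration, the definitions above can be sketched as follows; the universe of temperatures and the membership function for "hot" are invented examples, not taken from the paper:

```python
# Sketch: a fuzzy set as a membership function over a crisp universe,
# with fuzzification and the point-wise order.

def fuzzify(universe, membership):
    """Return a fuzzy set as a dict mapping element -> grade in [0, 1]."""
    return {x: membership(x) for x in universe}

def leq(A, B):
    """Point-wise order: A <= B iff A(x) <= B(x) for every x."""
    return all(A[x] <= B[x] for x in A)

# Example: "temperature is hot" over a small crisp universe of temperatures.
universe = [10, 20, 25, 30, 40]
hot = fuzzify(universe, lambda t: min(1.0, max(0.0, (t - 20) / 15)))
very_hot = {x: g ** 2 for x, g in hot.items()}  # squaring sharpens grades

assert leq(very_hot, hot)  # "very hot" is a fuzzy subset of "hot"
```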

We can accommodate some notion of uncertainty about vagueness by considering a type-2 fuzzy set, where the membership degree itself is a fuzzy set. Type-2 FS (T2FS) membership functions are composed of a primary and a secondary membership; here uncertainty is represented by the secondary membership, which defines the possibility of the primary membership. When the secondary membership degree equals 1 at every point, the T2FS is called an interval T2FS. Gehrke et al. [10] showed that this can equivalently be described as an interval-valued fuzzy set (IVFS) whose membership degrees are subintervals of [0, 1]. IVFSs are a special case of lattice-valued fuzzy sets (L-fuzzy sets) where the membership domain forms a lattice over [0, 1]. Defuzzification of a T2FS often proceeds in two phases: the first phase applies type reduction to transform the T2FS to a type-1 FS (T1FS), and the second phase then applies a type-1 defuzzification.

2.2 Fuzzy logic

Fuzzy logic defines many-valued formal systems to reason about truth in the presence of vagueness. Contrary to classical logic, the law of excluded middle (x ∨ ¬x = 1) and the law of non-contradiction (x ∧ ¬x = 0) do not, in general, hold in these systems. Fuzzy logic uses T-, S- and C-norms to generalize the logical operators ∧, ∨ and ¬. We compactly represent a fuzzy logic by a triple ⟨T, S, C⟩ (although one would expect the definition of a fuzzy logic to also include a "fuzzy implication" operator, in this work we do not consider it), which is sometimes called a De Morgan system [9] because it satisfies a generalization of De Morgan's laws: C(T(x, y)) = S(C(x), C(y)) and C(S(x, y)) = T(C(x), C(y)).

Definition 1.

Let U : [0, 1] × [0, 1] → [0, 1] be a binary function that is commutative, associative and increasing, and has an identity element e. If e = 1 then U is a Triangular norm (T-norm) and if e = 0 then U is a Triangular conorm (S-norm). (The general concept, allowing any e in [0, 1], is called a uninorm [9] and is either orlike or andlike; our work does not require the full generality.)

Definition 2.

A C-norm is a unary function C : [0, 1] → [0, 1] that is decreasing and involutory (i.e., C(C(x)) = x) with boundary conditions C(0) = 1 and C(1) = 0.

Standard examples of fuzzy logics are shown in Table 1 [9, 8]. Examples 1–3 are special cases (and limits) of the Frank family of fuzzy logics, which is central to our work and formally defined in Definition 3.

Fuzzy logic              T-norm                            S-norm                            C-norm
1 Min-Max                min(x, y)                         max(x, y)                         1 − x
2 Algebraic Sum-Product  x · y                             x + y − x · y                     1 − x
3 Lukasiewicz            max(0, x + y − 1)                 min(1, x + y)                     1 − x
4 Nilpotent              min(x, y) if x + y > 1, else 0    max(x, y) if x + y < 1, else 1    1 − x
Table 1: Common instantiations of fuzzy logics
Definition 3.

Let s ∈ (0, 1) ∪ (1, ∞); then the Frank family of T-norms is defined by:

T_s(x, y) = log_s(1 + (s^x − 1)(s^y − 1) / (s − 1))

The set of intervals in [0, 1] forms a bounded partial order (this should not be confused with the partial order used in the interval abstraction) with least element [0, 0] and greatest element [1, 1]. As per Gehrke et al. [10] we can point-wise lift a T1FS fuzzy logic to an IVFS fuzzy logic, i.e., T([a, b], [c, d]) = [T(a, c), T(b, d)], S([a, b], [c, d]) = [S(a, c), S(b, d)] and C([a, b]) = [C(b), C(a)].
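Definition 3 and the limiting cases noted for Table 1 can be checked numerically. The sketch below uses the standard Frank T-norm formula; the finite parameter values stand in for the limits s → 0, s → 1 and s → ∞ and were chosen for numerical stability:

```python
import math

def frank_t(x, y, s):
    # Frank-family T-norm T_s(x, y) = log_s(1 + (s^x - 1)(s^y - 1) / (s - 1)),
    # defined for s in (0, 1) or (1, inf).
    assert 0 < s != 1
    return math.log(1 + (s**x - 1) * (s**y - 1) / (s - 1), s)

x, y = 0.7, 0.4
# Limiting cases of the family, checked at finite s:
# s -> 0 gives min, s -> 1 gives the product, s -> inf gives Lukasiewicz.
assert abs(frank_t(x, y, 1e-9) - min(x, y)) < 1e-3
assert abs(frank_t(x, y, 1 + 1e-9) - x * y) < 1e-3
assert abs(frank_t(x, y, 1e20) - max(0.0, x + y - 1)) < 1e-3
```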

3 Fuzzy data-flow analysis

Static data-flow analyses deduce values of semantic properties that are satisfied by the dynamics of the application. The dynamics are formalized as a system of monotone transfer functions and collector functions. Transfer functions describe how blocks alter the semantic properties. Collector functions merge results from different, possibly mutually exclusive, branches of the application. The solution of the analysis is obtained through Kleene iteration and is a unique fixed-point of the system of equations. In a classical framework the domain of the values is binary, i.e. either true (1) or false (0). The interpretation of these values depends on the type of analysis. The value true means that the property can possibly hold in a may-analysis (i.e., it is impossible that the value is always false), while it means that the property always holds in a must-analysis. The value false could mean either the opposite of true or that the result is inconclusive.

Our fuzzy data-flow analysis instead computes the partial truth of a property, i.e. values are elements of [0, 1]. A value closer to 0 means that the property is biased towards false, and vice versa. Furthermore, the transfer functions are logical formulas from a Frank family fuzzy logic and the collector functions are weighted average functions whose weights are determined prior to performing the analysis. In contrast to the classical framework, Kleene iteration proceeds until consecutive results differ by at most a constant ε, which is the maximal error allowed in the solution. The error can be made arbitrarily small.

This section introduces the fuzzy data-flow framework and we prove termination using continuity properties and Banach's fixed-point theorem. Section 4 then presents an example analysis to demonstrate the benefit of the framework. The analysis is performed on a weighted flow-graph (V, E, w), where V is a set of logical formulas (denoting the transfer function of each block), E is a set of edges (denoting control transfers) and w : E → [0, 1] denotes the normalized contribution of each edge. As a running example we will use Figure 1 (left), which shows a flow-graph with four nodes and their corresponding logical formulas. The flow graph has four control edges denoting contributions between nodes. For instance, Block 1 (B1) receives 0.1 of its contribution from B0 and 0.9 from B2, i.e. w(B0, B1) = 0.1 and w(B2, B1) = 0.9.
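The iteration scheme can be sketched as follows. The weights w(B0, B1) = 0.1 and w(B2, B1) = 0.9 are taken from the running example, but the transfer formulas below are hypothetical, since Figure 1's formulas are not reproduced in this text:

```python
# Hedged sketch of a fuzzy data-flow Kleene iteration in the min-max logic.
AND, OR, NOT = min, max, lambda v: 1.0 - v

preds = {                      # node -> [(predecessor, normalized weight)]
    "B1": [("B0", 0.1), ("B2", 0.9)],
    "B2": [("B1", 1.0)],
}
transfer = {
    "B0": lambda v: 0.8,                # constant fact at the start node
    "B1": lambda v: AND(v, NOT(0.3)),   # hypothetical formula: v AND (NOT 0.3)
    "B2": lambda v: OR(v, 0.2),         # hypothetical formula: v OR 0.2
}

def analyse(eps=1e-6):
    """Kleene iteration: stop when no value changes by more than eps."""
    val = {n: 0.0 for n in transfer}
    while True:
        new = {}
        for n in transfer:
            avg = sum(w * val[p] for p, w in preds.get(n, []))  # collector
            new[n] = transfer[n](avg)                           # transfer
        if max(abs(new[n] - val[n]) for n in transfer) < eps:
            return new
        val = new

result = analyse()  # converges towards B0 = 0.8, B1 = B2 = 0.7
```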






Figure 1: Example flow-graph (left) and its corresponding equation system (middle) and the analysis result and error as a function of iteration (right)
Definition 4.

Let P be a finite set of properties and v : P → [0, 1] a valuation for each property. We use ⟦f⟧(v) to denote the interpretation of the fuzzy formula f given the valuation v. Given a flow-graph with a unique start node, a map assigns a value to each property at each node, and a fuzzy data-flow analysis is a Kleene iteration of the map F that updates each node by interpreting its formula over the weighted average of the values of its predecessors.

Figure 1 (middle) shows the equation system, as implied by Definition 4, interpreted in a min-max fuzzy logic for the example flow-graph. The red colored text corresponds to the collector function, i.e. the weighted average, and the normal text is the interpretation of the logical formula. In order to prove termination of a fuzzy analysis we need to introduce a continuity property.

Definition 5.

A function f : [0, 1]^n → [0, 1] is K-Lipschitz continuous (our definition restricts the domain and metric of both metric spaces, compared to the more general, and common, definition of a Lipschitz continuous function) iff |f(x) − f(y)| ≤ K‖x − y‖, where ‖·‖ is the 1-norm (i.e., on scalars, the absolute value; other norms can be used, but only if we restrict the logic to the min-max fuzzy logic [15]). If K < 1 then f is called a contraction mapping and if K = 1 then f is called a non-expansive mapping.

In a sequence of applications of a contraction mapping the difference between two consecutive applications will decrease and in the limit reach zero. By imposing a bounded error we guarantee that this sequence terminates in a bounded amount of time. The analysis error and the result of B2 as a function of iteration for the example are shown in Figure 1 (right). Note that the error (red line) is decreasing and the value of B2 (blue line) tends towards a final value. We next proceed to prove that any fuzzy data-flow analysis iteratively computes more precise results and terminates in a bounded amount of time for a finite maximum error ε > 0. We let the maximal congruence set denote the set of elements from [0, 1] that are pairwise at least ε apart; the set of intervals over it is defined analogously. For this we prove the non-expansive property of fuzzy formulas.
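The non-expansive and contraction properties used below can be sanity-checked numerically; this is an illustration on a grid, not a proof:

```python
# Grid check of the Lipschitz facts used for termination: min, max and
# negation are non-expansive (K = 1) on [0, 1], while a weighted average
# that mixes in a constant with weight w is (1 - w)-Lipschitz, a contraction.
import itertools

grid = [i / 20 for i in range(21)]
pairs = list(itertools.product(grid, grid))

def non_expansive(f):
    return all(abs(f(x) - f(y)) <= abs(x - y) + 1e-12 for x, y in pairs)

assert non_expansive(lambda x: min(x, 0.3))   # T-norm with one argument fixed
assert non_expansive(lambda x: max(x, 0.3))   # S-norm with one argument fixed
assert non_expansive(lambda x: 1.0 - x)       # C-norm (negation)

w, c = 0.1, 0.8
g = lambda x: w * c + (1 - w) * x             # collector mixing in a constant
assert all(abs(g(x) - g(y)) <= (1 - w) * abs(x - y) + 1e-12 for x, y in pairs)
```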

Theorem 1.

Let f, for some K_f, be K_f-Lipschitz and g be K_g-Lipschitz.

  • The functions min, max, the product and the probabilistic sum are 1-Lipschitz. Constant functions are 0-Lipschitz.

  • A weighted average of K-Lipschitz functions, with weights summing to one, is K-Lipschitz.

  • The composition f ∘ g is (K_f · K_g)-Lipschitz.

From this it follows that:

  • Formulas defined in a Frank family fuzzy logic are 1-Lipschitz.

  • If every node receives a (direct or transitive) contribution from the constant-valued start node, then F is K-Lipschitz with K < 1.

In summary, as per Theorem 1:

  • Transfer functions in a Frank family fuzzy logic are non-expansive mappings.

  • The start node's value is constant and hence a contraction mapping.

  • The composition of 1) two non-expansive functions is a non-expansive function and 2) a non-expansive and a contraction function is a contraction function.

As the analysis is performed on the unit interval [0, 1], which together with the 1-norm forms a complete metric space, we can guarantee termination by Banach's fixed-point theorem.

Theorem 2 (Banach fixed-point theorem).

Let (X, d) be a non-empty complete metric space and f : X → X a contraction. Then f has a unique fixed-point in X.

This concludes our development of fuzzy data-flow analysis.

4 Lazy code motion

Improving performance often means removing redundant computations. A computation is fully redundant, but not dead, if its operands remain the same at all evaluation points. For two such computations it is enough to keep one and store away the result for later. We can eliminate this redundancy using (global) common sub-expression elimination (GCSE). Furthermore, a computation whose operands do not change on some paths is said to be partially redundant. Loop-invariant code motion (LICM) finds partially redundant computations inside loops and moves them to the entry block of the loop. Lazy code motion (LCM) is a compiler optimization that eliminates both fully and partially redundant computations, and hence subsumes both GCSE and LICM. The Knoop–Rüthing–Steffen (KRS) algorithm [13, 7] performs LCM and is used in production compilers such as GCC when optimizing for speed.

It consists of a series of data-flow analyses and can be summarized in these four steps:

  1. Solve a very busy expression (Knoop et al. [13] refer to this as the anticipatable expression data-flow problem) and an available expression data-flow problem [17].

  2. Introduce a set that describes the earliest block where an expression must be evaluated.

  3. Determine the latest control flow edge where the expression must be computed.

  4. Introduce Insert and Delete sets, which describe where expressions should be evaluated.

The target domain of the analysis is the set of static expressions in a program. Inputs to the analysis are three predicates determining properties of the expressions in different blocks:

  • An expression "e" is downward exposed if it produces the same result when evaluated at the end of the block where it appears. We use DEE(b, e) to denote that "e" is downward exposed in block "b".

  • An expression "e" is upward exposed if it produces the same result when evaluated at the start of the block where it appears. We use UEE(b, e) to denote this.

  • An expression "e" is killed in block "b" if any variable appearing in "e" is updated in "b". We use Kill(b, e) to denote this.

Very busy expression analysis is a backward must-analysis that depends on UEE and Kill and computes the set of expressions that are guaranteed to be computed at some time in the future. Similarly, available expression analysis is a forward must-analysis that depends on DEE and Kill and deduces the set of previously computed expressions that may be reused. The fixed-point system of these two analyses is shown in Figure 3. It is beyond the scope of this paper to further elaborate on the details of these analyses; the interested reader should consult Nielson et al. [17]. Here the LCM algorithm, and the data-flow analyses it depends on, are applications we use to demonstrate the benefit of our framework; as such a rudimentary understanding is sufficient.
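To make the fuzzy reading of these equations concrete, the following sketch solves the available-expression problem for a single expression in the min-max fuzzy logic with weighted-average collectors. The two-block loop and its DEE/Kill degrees are hypothetical, and the equation form AvailOut(b) = DEE(b) OR (AvailIn(b) AND NOT Kill(b)) is the textbook one [17], not necessarily the paper's exact formulation:

```python
def fuzzy_available(dee, kill, preds, eps=1e-6):
    """dee/kill: block -> degree in [0,1]; preds: block -> [(pred, weight)]."""
    out = {b: 0.0 for b in dee}
    while True:
        new = {}
        for b in dee:
            avail_in = sum(w * out[p] for p, w in preds.get(b, []))  # collector
            new[b] = max(dee[b], min(avail_in, 1.0 - kill[b]))       # transfer
        if max(abs(new[b] - out[b]) for b in dee) < eps:
            return new
        out = new

# Entry block B0 computes the expression; the loop body B1 rarely kills it.
dee  = {"B0": 1.0, "B1": 0.0}
kill = {"B0": 0.0, "B1": 0.1}
preds = {"B1": [("B0", 0.2), ("B1", 0.8)]}
res = fuzzy_available(dee, kill, preds)
# B1 reaches the fixed point of min(0.2*1.0 + 0.8*x, 0.9), i.e. 0.9:
assert abs(res["B1"] - 0.9) < 1e-3
```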

void diffPCM() {
   b = 0, A = 0, B = 0;
   for(i = 0; i < N; i++) {
      if(in[i] != b)
         b = abs(in[i]-b);
      B = Transform(b);
      A = IncRate(i);
      out[i] = A*B;
   }
}


(Flow-chart nodes: i < N; in[i] != b; b = abs(a[i]-b); B = Transform(b); A = IncRate(i); out[i] = A*B; i = i + 1)

void diffPCM() {
   b = 0, A = 0, B = 0;
   for(i = 0; i < N; i++) {
      Update = ANFIS decision of updating;
      Leave = ANFIS decision of leaving;
      if(in[i] != b) {
         b = abs(in[i]-b);
         If Update Leave: Decision error!
      } else {
         If Update Leave: Decision error!
      }
      B = Transform(b);
      A = IncRate(i);
      out[i] = A*B;
   }
}

Figure 2: diffPCM function (left), the corresponding flow-chart (middle) and the version used in Section 4.3 which is annotated with ANFIS classifier invocations (right)

Knoop-Rüthing-Steffen LCM





6543210 6543210 6543210
B0 0000000 0000000 1111111
B1 0000001 0000001 0000000
B2 0000010 0000010 0000000
B3 0000000 1000000 1100010
B4 0101000 0110100 1011111
B5 0000000 0000000 0000000

Edge Insert Block Delete
6543210 6543210
B0B1 0000000 B0 0000000
B1B5 0000000 B1 0000000
B1B3 0000000 B2 0000000
B2B3 0000000 B3 0000000
B2B4 0000000 B4 0000000
B3B4 0000000 B5 0000000
B4B1 0000000

6 5 4 3 2 1 0
B0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
B1 0.0 0.0 0.0 0.0 0.0 0.0 1.0
B2 0.0 0.0 0.0 0.0 0.0 1.0 0.0
B3 0.0 0.0 0.0 0.0 0.0 0.0 0.0
B4 0.0 1.0 0.0 1.0 0.0 0.0 0.0
B5 0.0 0.0 0.0 0.0 0.0 0.0 0.0
6 5 4 3 2 1 0
B0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
B1 0.0 0.0 0.0 0.0 0.0 0.0 1.0
B2 0.0 0.0 0.0 0.0 0.0 1.0 0.0
B3 1.0 0.0 0.0 0.0 0.0 0.0 0.0
B4 0.0 1.0 1.0 0.0 1.0 0.0 0.0
B5 0.0 0.0 0.0 0.0 0.0 0.0 0.0
6 5 4 3 2 1 0
B0 1.0 1.0 1.0 1.0 1.0 1.0 1.0
B1 0.0 0.0 0.0 0.0 0.0 0.0 0.0
B2 0.0 0.0 0.0 0.0 0.0 0.0 0.0
B3 1.0 1.0 0.0 0.0 0.0 1.0 0.0
B4 1.0 0.0 1.0 1.0 1.0 1.0 1.0
B5 0.0 0.0 0.0 0.0 0.0 0.0 0.0

Edge Insert
       6     5     4     3     2     1     0
B0B1   0.001 0.998 0.001 0.000 0.001 0.001 0.000
B1B5   0.001 0.001 0.001 0.000 0.001 0.001 0.000
B1B2   0.001 0.001 0.001 0.000 0.001 0.001 0.000
B2B3   0.001 0.001 0.001 0.000 0.001 0.000 0.000
B2B4   0.001 0.001 0.001 0.000 0.001 0.000 0.000
B3B4   0.000 0.998 0.001 0.000 0.001 0.000 0.000
B4B1   0.001 0.000 0.001 0.000 0.001 0.001 0.000

Block Delete
       6     5     4     3     2     1     0
B0     0.000 0.000 0.000 0.000 0.000 0.000 0.000
B1     0.000 0.000 0.000 0.000 0.000 0.000 0.000
B2     0.000 0.000 0.000 0.000 0.000 0.001 0.000
B3     0.001 0.000 0.000 0.000 0.000 0.000 0.000
B4     0.000 0.998 0.001 0.000 0.001 0.000 0.000
B5     0.000 0.000 0.000 0.000 0.000 0.000 0.000

Expression abs(a[i]-b) Transform(b) IncRate(i) A*B i+1 in[i] != b
Index 6 5 4 3 2 1 0
Figure 3: Knoop-Rüthing-Steffen LCM formulation (middle) using classical (left) and fuzzy (right/bottom) data-flow analysis

Consider the simplified differential pulse-code modulation routine diffPCM in Figure 2 (left). We assume that N and the relative number of times each block executes are statically known. (In this demonstration we fix these values; our conclusions hold as N increases.) In each iteration diffPCM invokes the pure functions Transform, to encode the differential output, and IncRate, to get a quantification rate. We use the KRS-LCM algorithm to determine if these invocations can be made prior to entering the loop, and contrast this with a situation where the data-flow analyses are performed in the fuzzy framework. As we will show, the "fuzzy KRS-LCM" allows us to uncover opportunities the classical KRS-LCM would miss.

4.1 Type-1 static analysis

The data-flow problems of the KRS algorithm use expressions as their domain. The mapping between expressions of diffPCM and indexes is listed in Figure 3 (bottom), together with the values of Kill, DEE and UEE for each block (top right). The classical KRS algorithm concludes that both calls must be evaluated in B4 (bottom light gray box, "Delete" matrix, columns 4 and 5).

For the fuzzy data-flow analyses we use the type-1 min-max fuzzy logic. The corresponding fuzzy sets of Kill, DEE and UEE are given in Figure 3 (top dark gray box). Step (1) of the fuzzy KRS-LCM hence computes the fixed point of the corresponding system of equations.

Steps (2) and (4) introduce (constant) predicates and are performed outside the analysis framework. Step (3) is done similarly to step (1). Figure 3 (bottom dark gray box) shows the result of step (4). In contrast to the classical LCM, the result implies that it is very plausible (0.998) that we can delete the invocation of Transform ("Delete" matrix, column 5) from block B4 and instead add it at the end of B0 and B3 (or the start of B1 and B4). However, the result for the invocation of IncRate remains. This is because the invocation depends on the value of i, which is updated at the end of B4.

4.2 Type-2 static analysis

To increase data-flow analysis precision, a function call is sometimes inlined at the call site. The improvement can however be reduced if the control-flow analysis is inaccurate and multiple targets are considered for a particular call site. We show how the uncertainty in control-flow and data-flow can be quantified in two different dimensions using interval type-2 fuzzy sets. As per Section 2 we can lift an arbitrary fuzzy predicate to intervals. Here we assume no knowledge about the relative number of calls to each target and treat the different calls non-deterministically.

B = Transform(b);
… = IncRate(i);
A = …
out[i] = A*B
i = i + 1

int IncRate_1(int i) {
   return 2*i;
}

int IncRate_2(int i) {
   return 1;
}
Figure 4: Implementations of IncRate inlined in block B4 (left); DEE, UEE and Kill vectors of block B4 and the Delete/Insert analysis result for expression IncRate(i) (right)

We assume two different IncRate functions, as in Figure 4 (left), have been determined as targets. Their respective DEE and UEE entries are the same, but since i is updated at the end of block B4 their Kill entries will differ: the result of IncRate_1 depends on the variable i, whereas the result of IncRate_2 does not. Treating the two targets non-deterministically, the new Kill entry for block B4 is the interval spanning both cases. The new Kill, DEE and UEE sets are given in Figure 4 (right).

Applying the fuzzy KRS-LCM, but with the type-1 min-max fuzzy logic lifted to the interval type-2 min-max fuzzy logic, gives the values of Delete and Insert for expression IncRate(i) in Figure 4 (right). The result for invoking IncRate prior to the loop is now an interval, as opposed to the single value 0.001 from the type-1 analysis in Section 4.1. The added dimension in the result of the type-2 fuzzy analysis allows us to differentiate uncertain results from pessimistic results. In the given example we showed that the result of Section 4.1 is a pessimistic over-generalization and that the two paths need to be considered separately to increase precision.

4.3 Hybrid analysis

The result of a fuzzy data-flow analysis is a set of fuzzy membership degrees. This section shows how the result can be automatically improved after the static analysis by a fuzzy regulator/classifier, if more specific information becomes available at a later point. The classifier, a Takagi-Sugeno Adaptive-Network-based fuzzy inference system (TS-ANFIS) [11, 12] shown in Figure 5, is composed of five layers:

  1. Look up the fuzzy membership degrees of the input values.

  2. Compute the firing strength of each rule, i.e. the conjunction of all membership degrees from the rule.

  3. Normalize the firing strengths.

  4. Weight the normalized firing strength by the consequent output of the rule.

  5. Combine all rule classifiers into the final output.

IF x is A1 and y is B1 THEN f1 = p1*x + q1*y + r1
IF x is A2 and y is B2 THEN f2 = p2*x + q2*y + r2






Figure 5: First-order Takagi-Sugeno ANFIS with two rules and two variables (left) and four example fuzzy sets (right)

This classifier uses a polynomial (i.e., the consequent part of the adaptive IF-THEN rules) to decide the output membership. The order of the TS-ANFIS is the order of this polynomial. The classification accuracy of the TS-ANFIS can be improved online or offline by fitting the polynomial to the input data. For a first-order TS-ANFIS this can be implemented as follows:

  • (Offline) (Affine) least squares (LS) optimization [11] is a convex optimization problem that finds an affine function f minimizing the sum of squared errors over the training set, where the inputs and outputs of the training set are given.

  • (Online) Least mean squares (LMS) [11] is an adaptive filter that gradually (in steps of a given constant μ) minimizes the squared prediction error, where each update uses one input/output sample.
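The LMS variant can be sketched as below, under the assumption of a scalar input and an affine consequent f(x) = a*x + b; the update is the standard gradient step and the sample data is made up:

```python
# Minimal LMS sketch: step the coefficients (a, b) of f(x) = a*x + b
# towards one input/output sample at a time, with step size mu.
def lms_step(a, b, x, y, mu=0.05):
    err = y - (a * x + b)                    # prediction error on the sample
    return a + mu * err * x, b + mu * err    # gradient step on (a, b)

a, b = 0.0, 0.0
for _ in range(2000):
    for x, y in [(0.0, 1.0), (0.5, 2.0), (1.0, 3.0)]:  # samples of y = 2x + 1
        a, b = lms_step(a, b, x, y)

assert abs(a - 2.0) < 0.05 and abs(b - 1.0) < 0.05
```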

To exemplify the functionality of the TS-ANFIS, we consider classifying an input with the two-rule TS-ANFIS from Figure 5 (left), with membership functions as given in Figure 5 (right). The membership degrees of the two rules are marked in the figure. The weight of each rule is the conjunction of its membership degrees; the weights are then normalized, and the prediction is the sum of the consequence-function outputs weighted by the normalized weights.
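The five layers and the worked example above can be sketched as a forward pass; the triangular membership functions and consequent coefficients below are invented for illustration, not those of Figure 5:

```python
# Hedged sketch of a first-order Takagi-Sugeno ANFIS forward pass with
# two rules over two inputs (x, y).

def triangle(a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)
    return mu

# Rule i: IF x is A_i AND y is B_i THEN f_i = p_i*x + q_i*y + r_i
rules = [
    {"A": triangle(0.0, 0.2, 0.6), "B": triangle(0.0, 0.3, 0.7), "pqr": (1.0, 0.5, 0.0)},
    {"A": triangle(0.4, 0.8, 1.0), "B": triangle(0.3, 0.7, 1.0), "pqr": (-0.5, 1.0, 0.2)},
]

def anfis(x, y):
    # Layers 1-2: membership degrees and rule firing strengths (AND = min).
    w = [min(rule["A"](x), rule["B"](y)) for rule in rules]
    total = sum(w)
    if total == 0.0:
        return 0.0  # no rule fires
    # Layers 3-5: normalize, weight each consequent output, and sum.
    out = 0.0
    for wi, rule in zip(w, rules):
        p, q, r = rule["pqr"]
        out += (wi / total) * (p * x + q * y + r)
    return out

pred = anfis(0.3, 0.4)  # only rule 1 fires here, so pred = 1.0*0.3 + 0.5*0.4
```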

We return to the diffPCM function and again consider if we can invoke Transform(b) prior to entering the loop. We saw in Section 4.1 that the fuzzy membership degree was 0.998. To improve classification accuracy we let the TS-ANFIS also use the variable b and the first input value. These variables were not part of the analysis, so we conservatively assume the fuzzy membership degree to be the same for any value of these variables. As shown in Figure 2 (right), we inserted calls to compute the ANFIS decisions of updating and of keeping the variable constant in the diffPCM function. If the incorrect decision was made, the error was noted and an error rate was computed after handling all input samples.

We consider invoking the diffPCM function on four different input sets, each defined as 10 periods with 25 input values per period. The input sets (i.e., in[...]) are given in Figure 6 (left). We applied the LMS algorithm (the constant μ for the four runs was set to 0.001, 0.05, 0.15 and 0.1, respectively) after each incorrect classification, and the LS algorithm whenever the error rate of a period reached a given threshold. Note that the values of a period are not always perfectly representable by a linear classifier and sometimes vary between periods, although periods are "similar". Hence we do not expect the classifier to improve monotonically with increasing period. As shown in the results in Figure 6 (right), the classification error decreases quickly with both period and input sample. In two cases a small residual error remains after the final period. This shows that the TS-ANFIS can improve the analysis result dynamically and hence increase the accuracy of deciding when Transform can be invoked prior to entering the loop.

Figure 6: Input values (left) and the corresponding classification error rate (right)

5 Related work

Most systems include elements (e.g., input values, environment state) where information is limited but probabilistic and/or non-deterministic uncertainty can be formulated. For these systems a most-likely or even quantitative analysis of properties is possible. Often this analysis relies on probability theory for logical soundness. Cousot and Monerau [4] introduced a unifying framework for probabilistic abstract interpretation, and much work has since, although perhaps implicitly, relied on their formulation. Often probabilistic descriptions are known with imprecision that manifests as non-deterministic uncertainty [3]. Adje et al. [2] introduced an abstraction based on the zonotope abstraction for Dempster-Shafer structures and P-boxes (lower and upper bounds on a cumulative probability distribution function).

Di Pierro et al. [6] developed a probabilistic abstract interpretation framework and demonstrated an alias analysis algorithm that could guide the compiler in this decision. They later formulated data-flow problems (e.g., liveness analysis) in the same framework [5]. An important distinction between their (or similar probabilistic) frameworks and classical frameworks is the definition of the confluence operator: in contrast to a classical may- or must-framework, they use the weighted average. This is similar to the work by Ramalingam [19], who showed that the meet-over-paths (MOP) solution exists for such a confluence operator with transfer functions defined in terms of min, max and negation (i.e., the min-max fuzzy logic). Our work extends this to allow other transfer functions, and integrates the static data-flow analysis with a dynamic refinement mechanism through fuzzy control theory.

6 Conclusion

A major problem for static program analysis is the limited input information and hence the conservative results. To alleviate the situation, dynamic program analysis is sometimes used; here accurate information is available, but in contrast to its static counterpart the results only cover a single run or a few runs. To bridge the gap, and find a promising middle ground, probabilistic/speculative program analysis frameworks have been proposed. These frameworks intersect both paradigms by being static program analyses that use dynamic information. We have introduced a data-flow framework based on fuzzy sets that supports such analyses. We solved data-flow problems of use for speculative compilation and showed how our analysis unveils opportunities that previous approaches could not express and reason about. We furthermore showed that our data-flow framework based on fuzzy sets admits mechanisms from fuzzy control theory to enhance the analysis result dynamically, allowing for a hybrid analysis framework.


  • [2] Assale Adje, Olivier Bouissou, Jean Goubault-Larrecq, Eric Goubault & Sylvie Putot (2014): Static Analysis of Programs with Imprecise Probabilistic Inputs. In: Verified Software: Theories, Tools, Experiments, Lecture Notes in Computer Science 8164, Springer Berlin Heidelberg, pp. 22–47, doi:10.1007/978-3-642-54108-72.
  • [3] Patrick Cousot & Radhia Cousot (2014): Abstract Interpretation: Past, Present and Future. In: Proceedings of the Joint Meeting of the Twenty-Third EACSL Annual Conference on Computer Science Logic (CSL) and the Twenty-Ninth Annual ACM/IEEE Symposium on Logic in Computer Science (LICS), CSL-LICS ’14, ACM, pp. 2:1–2:10, doi:10.1145/2603088.2603165.
  • [4] Patrick Cousot & Michaël Monerau (2012): Probabilistic Abstract Interpretation. In: 22nd European Symposium on Programming (ESOP 2012), Lecture Notes in Computer Science 7211, Springer-Verlag, pp. 166–190, doi:10.1007/978-3-642-28891-33.
  • [5] A. Di Pierro & H. Wiklicky (2013): Probabilistic data flow analysis: a linear equational approach. In: Proceedings of the Fourth International Symposium on Games, Automata, Logics and Formal Verification, pp. 150–165, doi:10.4204/EPTCS.119.14.
  • [6] Alessandra Di Pierro, Chris Hankin & Herbert Wiklicky (2007): A Systematic Approach to Probabilistic Pointer Analysis. In: Programming Languages and Systems, Lecture Notes in Computer Science 4807, Springer Berlin Heidelberg, pp. 335–350, doi:10.1007/978-3-540-76637-723.
  • [7] Karl-Heinz Drechsler & Manfred P. Stadel (1993): A Variation of Knoop, Rüthing, and Steffen’s Lazy Code Motion. SIGPLAN Not. 28(5), pp. 29–38, doi:10.1145/152819.152823.
  • [8] D. Dubois & H. Prade (1980): Fuzzy sets and systems - Theory and applications. Academic press, New York.
  • [9] D. Dubois, H.M. Prade & H. Prade (2000): Fundamentals of Fuzzy Sets. The Handbooks of Fuzzy Sets, Springer US, doi:10.1007/978-1-4615-4429-6.
  • [10] Mai Gehrke, Carol Walker & Elbert Walker (1996): Some comments on interval valued fuzzy sets. International Journal of Intelligent Systems 11(10), pp. 751–759, doi:10.1002/(SICI)1098-111X(199610)11:10<751::AID-INT3>3.0.CO;2-Y.
  • [11] J.-S.R. Jang (1993): ANFIS: adaptive-network-based fuzzy inference system. Systems, Man and Cybernetics, IEEE Transactions on 23(3), pp. 665–685, doi:10.1109/21.256541.
  • [12] Jyh-Shing Roger Jang & Chuen-Tsai Sun (1997): Neuro-fuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence. Prentice-Hall, Inc., Upper Saddle River, NJ, USA.
  • [13] Jens Knoop, Oliver Rüthing & Bernhard Steffen (1992): Lazy Code Motion. In: Proceedings of the ACM SIGPLAN 1992 Conference on Programming Language Design and Implementation, PLDI ’92, ACM, pp. 224–234, doi:10.1145/143095.143136.
  • [14] S. Maleki, Yaoqing Gao, M.J. Garzaran, T. Wong & D.A. Padua (2011): An Evaluation of Vectorizing Compilers. In: Parallel Architectures and Compilation Techniques (PACT), 2011 International Conference on, pp. 372–382, doi:10.1109/PACT.2011.68.
  • [15] A. Mesiarová (2007): k-lp-Lipschitz t-norms. International Journal of Approximate Reasoning 46(3), pp. 596 – 604, doi:10.1016/j.ijar.2007.02.002. Special Section: Aggregation Operators.
  • [16] Markus Mock, Manuvir Das, Craig Chambers & Susan J. Eggers (2001): Dynamic Points-to Sets: A Comparison with Static Analyses and Potential Applications in Program Understanding and Optimization. In: Proceedings of the 2001 ACM SIGPLAN-SIGSOFT Workshop on Program Analysis for Software Tools and Engineering, PASTE ’01, ACM, pp. 66–72, doi:10.1145/379605.379671.
  • [17] Flemming Nielson, Hanne R. Nielson & Chris Hankin (1999): Principles of Program Analysis. Springer-Verlag New York, Inc., doi:10.1007/978-3-662-03811-6.
  • [18] P.M. Petersen & D.A. Padua (1996): Static and dynamic evaluation of data dependence analysis techniques. Parallel and Distributed Systems, IEEE Transactions on 7(11), pp. 1121–1132, doi:10.1109/71.544354.
  • [19] G. Ramalingam (1996): Data Flow Frequency Analysis. In: Proceedings of the ACM SIGPLAN 1996 Conference on Programming Language Design and Implementation, PLDI ’96, ACM, pp. 267–277, doi:10.1145/231379.231433.
  • [20] Constantino G. Ribeiro & Marcelo Cintra (2007): Quantifying Uncertainty in Points-To Relations. In: Languages and Compilers for Parallel Computing, Lecture Notes in Computer Science 4382, Springer Berlin Heidelberg, pp. 190–204, doi:10.1007/978-3-540-72521-3_15.

Appendix A: Omitted proofs

Proof of Theorem 1.

Let f, for some K_f, be K_f-Lipschitz and g be K_g-Lipschitz.

  1. The functions min, max, the product and the probabilistic sum are 1-Lipschitz; constant functions are 0-Lipschitz. Let x, y ∈ [0, 1]:

    1. :

      By definition
    2. :

      By definition
      Triangle inequality
    3. :

      By definition